[jira] [Created] (GEODE-3030) The possibleDuplicate boolean may not be set to true in previously processed AEQ events

2017-06-05 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-3030:


 Summary: The possibleDuplicate boolean may not be set to true in 
previously processed AEQ events
 Key: GEODE-3030
 URL: https://issues.apache.org/jira/browse/GEODE-3030
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Barry Oglesby


When a secondary bucket becomes primary, it sets possibleDuplicate=true for 
batchSize events in AbstractBucketRegionQueue.markEventsAsDuplicate:
{noformat}
java.lang.Exception: Stack trace
at java.lang.Thread.dumpStack(Thread.java:1329)
at 
com.gemstone.gemfire.internal.cache.AbstractBucketRegionQueue.markEventsAsDuplicate(AbstractBucketRegionQueue.java:329)
at 
com.gemstone.gemfire.internal.cache.BucketRegionQueue.beforeAcquiringPrimaryState(BucketRegionQueue.java:203)
at 
com.gemstone.gemfire.internal.cache.BucketAdvisor.acquiredPrimaryLock(BucketAdvisor.java:1257)
at 
com.gemstone.gemfire.internal.cache.BucketAdvisor.acquirePrimaryRecursivelyForColocated(BucketAdvisor.java:1397)
at 
com.gemstone.gemfire.internal.cache.BucketAdvisor$VolunteeringDelegate.doVolunteerForPrimary(BucketAdvisor.java:2695)
at 
com.gemstone.gemfire.internal.cache.BucketAdvisor$VolunteeringDelegate$1.run(BucketAdvisor.java:2575)
at 
com.gemstone.gemfire.internal.cache.BucketAdvisor$VolunteeringDelegate$2.run(BucketAdvisor.java:2908)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
com.gemstone.gemfire.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:692)
at 
com.gemstone.gemfire.distributed.internal.DistributionManager$6$1.run(DistributionManager.java:1029)
at java.lang.Thread.run(Thread.java:745)
{noformat}
In my test case, the batch size is 1, so possibleDuplicate is set to true for 
only one event in each bucket. It is not set for the remaining events in the 
bucket.

The ParallelQueueRemovalMessage is sent asynchronously from remote members, so 
more than one batch of events could have been processed between message sends. 
Therefore, possibleDuplicate should be set to true for more than batchSize 
events (possibly all of them).
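
For illustration, here is a sketch of a listener that tolerates redelivery the 
way processing is supposed to work (this is not the actual test listener; it 
uses the current org.apache.geode package names, and the JDBC insert and 
connection handling are assumptions):
{noformat}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;
import java.util.List;
import org.apache.geode.cache.asyncqueue.AsyncEvent;
import org.apache.geode.cache.asyncqueue.AsyncEventListener;

public class DuplicateTolerantListener implements AsyncEventListener {

  private final Connection connection; // assumed to be supplied by the test

  public DuplicateTolerantListener(Connection connection) {
    this.connection = connection;
  }

  @Override
  public boolean processEvents(List<AsyncEvent> events) {
    for (AsyncEvent event : events) {
      try {
        // Hypothetical insert into the TRADES table from the test output
        try (PreparedStatement ps =
            connection.prepareStatement("INSERT INTO TRADES (ID) VALUES (?)")) {
          ps.setObject(1, event.getKey());
          ps.executeUpdate();
        }
      } catch (SQLIntegrityConstraintViolationException e) {
        // Only safe to swallow when the event is flagged as a possible
        // duplicate. With this bug, redelivered events beyond the first
        // batch arrive with possibleDuplicate=false, fall through here and
        // look like real failures.
        if (!event.getPossibleDuplicate()) {
          return false; // keep the batch queued for retry
        }
      } catch (Exception e) {
        return false; // any other failure: leave the batch queued
      }
    }
    return true; // batch can be removed from the queue
  }

  @Override
  public void close() {}
}
{noformat}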

Here is an example from my test. 

Server 1 is primary for bucket 5 and is stopped. Server 2 takes over primary 
for bucket 5.

{noformat}
Server 1

Server 1 processed 3 events from bucket 5 right before it was stopped:

TestGatewayEventListener processed 51427367-8d36-4425-aa02-e44c54774543
TestGatewayEventListener processed a1af3501-9030-460d-86cc-fe5b88bd5b0a
TestGatewayEventListener processed c1db3ec2-4dad--9ea7-11bd34e492ec

No ParallelQueueRemovalMessage was sent to the remote nodes before the member 
was stopped.

Server 2

Server 2 took over primary for bucket 5 and processed those same 3 events - one 
with possibleDuplicate=true, the others with possibleDuplicate=false. In all 
three cases a SQLIntegrityConstraintViolationException was thrown since the 
event had already been processed by the previous primary server.

TestGatewayEventListener caught EXPECTED exception 
eventKey=51427367-8d36-4425-aa02-e44c54774543; operation=CREATE; 
possibleDuplicate=true; 
exception=java.sql.SQLIntegrityConstraintViolationException: The statement was 
aborted because it would have caused a duplicate key value in a unique or 
primary key constraint or unique index identified by 'SQL170601145521130' 
defined on 'TRADES'.
TestGatewayEventListener caught UNEXPECTED exception 
eventKey=a1af3501-9030-460d-86cc-fe5b88bd5b0a; operation=CREATE; 
possibleDuplicate=false; 
exception=java.sql.SQLIntegrityConstraintViolationException: The statement was 
aborted because it would have caused a duplicate key value in a unique or 
primary key constraint or unique index identified by 'SQL170601145521130' 
defined on 'TRADES'.
TestGatewayEventListener caught UNEXPECTED exception 
eventKey=c1db3ec2-4dad--9ea7-11bd34e492ec; operation=CREATE; 
possibleDuplicate=false; 
exception=java.sql.SQLIntegrityConstraintViolationException: The statement was 
aborted because it would have caused a duplicate key value in a unique or 
primary key constraint or unique index identified by 'SQL170601145521130' 
defined on 'TRADES'.

AbstractBucketRegionQueue.markEventsAsDuplicate set possibleDuplicate=true for 
51427367-8d36-4425-aa02-e44c54774543, but not for the other events:

AbstractBucketRegionQueue.markEventsAsDuplicate marking posDup 
eventKey=51427367-8d36-4425-aa02-e44c54774543
AbstractBucketRegionQueue.markEventsAsDuplicate not marking posDup 
eventKey=a1af3501-9030-460d-86cc-fe5b88bd5b0a
AbstractBucketRegionQueue.markEventsAsDuplicate not marking posDup 
eventKey=c1db3ec2-4dad--9ea7-11bd34e492ec
{noformat}




[jira] [Created] (GEODE-3026) If a region defining lucene indexes cannot be created, it leaves an AEQ behind

2017-06-02 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-3026:


 Summary: If a region defining lucene indexes cannot be created, it 
leaves an AEQ behind
 Key: GEODE-3026
 URL: https://issues.apache.org/jira/browse/GEODE-3026
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


This is ok if the member is started with xml, because the member will simply 
fail to start. But if the region creation is attempted using the Java API or 
gfsh, the server is left in an inconsistent state.

It will have defined the AEQ like:
{noformat}
[info 2017/06/02 13:02:15.047 PDT   tid=0x1] Started  
ParallelGatewaySender{id=AsyncEventQueue_full_index#_data,remoteDsId=-1,isRunning=true}
{noformat}
But it will fail to create the region (in this case, I created the region with 
a different number of buckets):
{noformat}
[warning 2017/06/02 13:02:15.126 PDT   tid=0x1] Initialization failed for 
Region /data
java.lang.IllegalStateException: The total number of buckets found in 
PartitionAttributes ( 16 ) is incompatible with the total number of buckets 
used by other distributed members. Set the number of buckets to  66
at 
org.apache.geode.internal.cache.PartitionRegionConfigValidator.validatePartitionAttrsFromPRConfig(PartitionRegionConfigValidator.java:102)
at 
org.apache.geode.internal.cache.PartitionedRegion.registerPartitionedRegion(PartitionedRegion.java:1337)
at 
org.apache.geode.internal.cache.PartitionedRegion.initPRInternals(PartitionedRegion.java:987)
at 
org.apache.geode.internal.cache.PartitionedRegion.initialize(PartitionedRegion.java:1157)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3104)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3004)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:2992)
at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:758)
at TestServer.createIndexAndRegionUsingAPI(TestServer.java:104)
at TestServer.main(TestServer.java:47)
{noformat}
So, at the end of the GemFireCacheImpl.createVMRegion call, the AEQ exists but 
the region doesn't.
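
The inconsistent state is easy to observe from a test (a sketch using only the 
public Cache API; the region name matches the example above):
{noformat}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.asyncqueue.AsyncEventQueue;

public class OrphanedAeqCheck {
  // Call after the failed region creation: the queue id
  // (AsyncEventQueue_full_index#_data above) is still registered,
  // while getRegion returns null.
  public static void report(Cache cache) {
    for (AsyncEventQueue queue : cache.getAsyncEventQueues()) {
      System.out.println("AEQ still defined: " + queue.getId());
    }
    System.out.println("Region /data: " + cache.getRegion("data"));
  }
}
{noformat}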






[jira] [Commented] (GEODE-2947) Improve error message (seen in gfsh) when attempting to destroy a region before destroying lucene indexes

2017-06-01 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16033883#comment-16033883
 ] 

Barry Oglesby commented on GEODE-2947:
--

The message displayed is now:
{noformat}
Region {0} cannot be destroyed because it defines Lucene index(es) [{1}]. 
Destroy all Lucene indexes before destroying the region.
{noformat}
Where \{0\} is the region name and \{1\} is the list of defined indexes.

> Improve error message (seen in gfsh) when attempting to destroy a region 
> before destroying lucene indexes
> -
>
> Key: GEODE-2947
> URL: https://issues.apache.org/jira/browse/GEODE-2947
> Project: Geode
>  Issue Type: Bug
>  Components: docs, lucene
>Affects Versions: 1.2.0
>Reporter: Shelley Lynn Hughes-Godfrey
>Assignee: Karen Smoler Miller
> Fix For: 1.2.0
>
>
> If a user attempts to destroy a region before destroying the lucene index (via 
> gfsh), the error message returned is not clear. It should state that the 
> lucene index should be destroyed prior to destroying the region.
> Instead it states this:
> {noformat}
> Error occurred while destroying region "testRegion". Reason: The parent 
> region [/testRegion] in colocation chain cannot be destroyed, unless all its 
> children [[/testIndex#_testRegion.files]] are destroyed
> {noformat}





[jira] [Resolved] (GEODE-2947) Improve error message (seen in gfsh) when attempting to destroy a region before destroying lucene indexes

2017-05-31 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2947.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Improve error message (seen in gfsh) when attempting to destroy a region 
> before destroying lucene indexes
> -
>
> Key: GEODE-2947
> URL: https://issues.apache.org/jira/browse/GEODE-2947
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>Reporter: Shelley Lynn Hughes-Godfrey
> Fix For: 1.2.0
>
>
> If a user attempts to destroy a region before destroying the lucene index (via 
> gfsh), the error message returned is not clear. It should state that the 
> lucene index should be destroyed prior to destroying the region.
> Instead it states this:
> {noformat}
> Error occurred while destroying region "testRegion". Reason: The parent 
> region [/testRegion] in colocation chain cannot be destroyed, unless all its 
> children [[/testIndex#_testRegion.files]] are destroyed
> {noformat}





[jira] [Created] (GEODE-3014) Document gfsh create lucene index and region failure sequence

2017-05-31 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-3014:


 Summary: Document gfsh create lucene index and region failure 
sequence
 Key: GEODE-3014
 URL: https://issues.apache.org/jira/browse/GEODE-3014
 Project: Geode
  Issue Type: Bug
  Components: docs
Reporter: Barry Oglesby


When creating a lucene index and region using gfsh, there is a specific command 
sequence that causes the region to not be created successfully.

The sequence that fails is:

- start server(s)
- create lucene index
- start additional server(s)
- create region

What fails about this sequence is that the lucene index is not saved in the 
cluster configuration until after the region is created. Before the region is 
created, the index is only saved locally in the existing servers. Since new 
servers don't have the index when the region is created, the index definitions 
aren't consistent across servers. This causes the region to be created in only 
some of the servers (either all the original ones with the index or all the new 
ones without the index).

An alternate sequence that succeeds is:

- start server(s)
- create lucene index
- create region
- start additional server(s)

Once the region has been created, then both the lucene index and region are 
saved in cluster configuration, so new servers will create both the region and 
index.
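
For reference, here is the working sequence as concrete gfsh commands (server 
names, ports, and the locator address are illustrative, borrowed from commands 
shown elsewhere in these reports):
{noformat}
gfsh>start server --name=server1 --server-port=50505 --locators=localhost[12345]
gfsh>create lucene index --name=testIndex --region=testRegion --field=__REGION_VALUE_FIELD
gfsh>create region --name=testRegion --type=PARTITION
gfsh>start server --name=server2 --server-port=50506 --locators=localhost[12345]
{noformat}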






[jira] [Comment Edited] (GEODE-2979) Adding server after defining Lucene index results in unusable cluster

2017-05-31 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030461#comment-16030461
 ] 

Barry Oglesby edited comment on GEODE-2979 at 5/31/17 7:01 PM:
---

Creating an index currently just creates an entry in each existing member's 
LuceneServiceImpl definedIndexMap. This is an in-memory map. There is no entry 
in cluster config. The lucene index element has to be created in the context of 
a region element. Currently, there isn't any mechanism for creating an 
unattached element which is basically what an index without a region is.

If the lucene indexes are listed, the existing members show the Status as 
Defined. That means the region hasn't been created yet.
{noformat}
gfsh>list lucene indexes
Index Name | Region Path | Server Name | Indexed Fields         | Field Analyzer                          | Status
---------- | ----------- | ----------- | ---------------------- | --------------------------------------- | -------
testIndex  | /testRegion | server1     | [__REGION_VALUE_FIELD] | {__REGION_VALUE_FIELD=StandardAnalyzer} | Defined
{noformat}
In this case, if another member is started, it will not get the index 
definition since it is not saved in cluster config nor is there a message 
between the members exchanging defined indexes.

If the region is created in this scenario, then only one of the servers will 
successfully create it.
{noformat}
gfsh>create region --name=testRegion --type=PARTITION_REDUNDANT_PERSISTENT
Member  | Status
------- | --------------------------------------------------------------------
server2 | ERROR: Must create Lucene index testIndex on region /testRegion because it is defined in another member.
server1 | Region "/testRegion" created on "server1"
{noformat}
When the region is created, the defined indexes for it are then created and an 
element like this is added to cluster config:
{noformat}
<region name="testRegion">
  ...
  <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene"
      name="testIndex">
    <lucene:field name="__REGION_VALUE_FIELD" .../>
  </lucene:index>
  ...
</region>
{noformat}
Any member started after the region element is in cluster config will get the 
region and index. The Initialized Status means that the region has been created.
{noformat}
gfsh>list lucene indexes
Index Name | Region Path | Server Name | Indexed Fields         | Field Analyzer                          | Status
---------- | ----------- | ----------- | ---------------------- | --------------------------------------- | -----------
testIndex  | /testRegion | server1     | [__REGION_VALUE_FIELD] | {__REGION_VALUE_FIELD=StandardAnalyzer} | Initialized
testIndex  | /testRegion | server2     | [__REGION_VALUE_FIELD] | {__REGION_VALUE_FIELD=StandardAnalyzer} | Initialized
{noformat}
So, this sequence fails:

- start server
- create index
- start other server
- create region

This sequence succeeds:

- start server
- create index
- create region
- start other server

In order to make the first scenario successful, we would have to either:

- persist the index definition in the cluster config so that when other members 
start, they get all the defined indexes and create them when the region is 
created
- pass LuceneService defined indexes between members at startup. Maybe the 
CacheServiceProfile should be exchanged in its own message instead of as part 
of the CacheProfile (which is for a region) as it is now.



was (Author: barry.oglesby):
Creating an index currently just creates an entry in each existing member's 
LuceneServiceImpl definedIndexMap. This is an in-memory map. There is no entry 
in cluster config. The lucene index element has to be created in the context of 
a region element. Currently, there isn't any mechanism for creating an 
unattached element which is basically what an index without a region is.

If the lucene indexes are listed, the existing members show the Status as 
Defined. That means the region hasn't been created yet.
{noformat}
gfsh>list lucene indexes
Index Name | Region Path | Server Name | Indexed Fields         | Field Analyzer                          | Status
---------- | ----------- | ----------- | ---------------------- | --------------------------------------- | -------
testIndex  | /testRegion | server1     | [__REGION_VALUE_FIELD] | {__REGION_VALUE_FIELD=StandardAnalyzer} | Defined
{noformat}
In this case, if another member is started, it will not get the index 
definition since it is not saved in cluster config nor is there a message 
between the members exchanging defined indexes.

If the region is created in this scenario, then only one of the servers will 
successfully create it.
{noformat}
gfsh>create region --name=testRegion --type=PARTITION_REDUNDANT_PERSISTENT
Member  | Status
------- | --------------------------------------------------------------------
server2 | ERROR: Must create Lucene index 

[jira] [Resolved] (GEODE-2973) A lucene index element defined outside the context of a region element in xml throws a ClassCastException

2017-05-31 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2973.
--
Resolution: Fixed

> A lucene index element defined outside the context of a region element in xml 
> throws a ClassCastException
> -
>
> Key: GEODE-2973
> URL: https://issues.apache.org/jira/browse/GEODE-2973
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> If a {{lucene:index}} element is defined directly below the {{cache}} element 
> like this:
> {noformat}
> <cache ...>
>   <lucene:index .../>
> </cache>
> {noformat}
> Then a ClassCastException like below is thrown rather than a 
> {{SAXParseException}} (or maybe a more specific exception):
> {noformat}
> Exception in thread "main" org.apache.geode.cache.CacheXmlException: While 
> reading Cache XML 
> file:/Users/boglesby/Dev/Tests/client-server/lucene/nyc-311/geode-lucene/config/gemfire-server.xml.
>  While parsing XML, caused by java.lang.ClassCastException: 
> org.apache.geode.internal.cache.xmlcache.CacheCreation cannot be cast to 
> org.apache.geode.internal.cache.xmlcache.RegionCreation
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.parse(CacheXmlParser.java:267)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4282)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1390)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1195)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:758)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:745)
>   at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:173)
>   at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:212)
>   at TestBase.initializeServerCache(TestBase.java:22)
>   at TestServer.main(TestServer.java:12)
> Caused by: java.lang.ClassCastException: 
> org.apache.geode.internal.cache.xmlcache.CacheCreation cannot be cast to 
> org.apache.geode.internal.cache.xmlcache.RegionCreation
>   at 
> org.apache.geode.cache.lucene.internal.xml.LuceneXmlParser.startIndex(LuceneXmlParser.java:71)
>   at 
> org.apache.geode.cache.lucene.internal.xml.LuceneXmlParser.startElement(LuceneXmlParser.java:47)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.startElement(CacheXmlParser.java:2748)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser$DefaultHandlerDelegate.startElement(CacheXmlParser.java:3369)
>   at 
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509)
>   at 
> com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.startElement(XMLSchemaValidator.java:749)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:379)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2786)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
>   at 
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
>   at 
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649)
>   at 
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:333)
>   at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.parse(CacheXmlParser.java:224)
>   ... 9 more
> {noformat}





[jira] [Resolved] (GEODE-2972) An incorrectly named lucene element in xml throws a ClassCastException

2017-05-31 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2972.
--
Resolution: Fixed

> An incorrectly named lucene element in xml throws a ClassCastException
> --
>
> Key: GEODE-2972
> URL: https://issues.apache.org/jira/browse/GEODE-2972
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> If a region is defined like this:
> {noformat}
> <region ...>
>   <lucene:indexxx ...>
>     <lucene:field .../>
>   </lucene:indexxx>
> </region>
> {noformat}
> Then the {{lucene:indexxx}}  element is ignored, and a {{ClassCastException}} 
> like below is thrown when the {{lucene:field}} element is processed rather 
> than a {{SAXParseException}} (or maybe a more specific exception).
> {noformat}
> Exception in thread "main" org.apache.geode.cache.CacheXmlException: While 
> reading Cache XML 
> file:/Users/boglesby/Dev/Tests/client-server/lucene/nyc-311/geode-lucene/config/gemfire-server.xml.
>  While parsing XML, caused by java.lang.ClassCastException: 
> org.apache.geode.internal.cache.xmlcache.RegionCreation cannot be cast to 
> org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.parse(CacheXmlParser.java:267)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4282)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1390)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1195)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:758)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:745)
>   at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:173)
>   at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:212)
>   at TestBase.initializeServerCache(TestBase.java:22)
> Caused by: java.lang.ClassCastException: 
> org.apache.geode.internal.cache.xmlcache.RegionCreation cannot be cast to 
> org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation
>   at 
> org.apache.geode.cache.lucene.internal.xml.LuceneXmlParser.startField(LuceneXmlParser.java:59)
>   at 
> org.apache.geode.cache.lucene.internal.xml.LuceneXmlParser.startElement(LuceneXmlParser.java:50)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.startElement(CacheXmlParser.java:2748)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser$DefaultHandlerDelegate.startElement(CacheXmlParser.java:3369)
>   at 
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509)
>   at 
> com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:182)
>   at 
> com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.emptyElement(XMLSchemaValidator.java:780)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:356)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2786)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
>   at 
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
>   at 
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649)
>   at 
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:333)
>   at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.parse(CacheXmlParser.java:224)
>   ... 9 more
> {noformat}





[jira] [Commented] (GEODE-2979) Adding server after defining Lucene index results in unusable cluster

2017-05-30 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030461#comment-16030461
 ] 

Barry Oglesby commented on GEODE-2979:
--

Creating an index currently just creates an entry in each existing member's 
LuceneServiceImpl definedIndexMap. This is an in-memory map. There is no entry 
in cluster config. The lucene index element has to be created in the context of 
a region element. Currently, there isn't any mechanism for creating an 
unattached element which is basically what an index without a region is.

If the lucene indexes are listed, the existing members show the Status as 
Defined. That means the region hasn't been created yet.
{noformat}
gfsh>list lucene indexes
Index Name | Region Path | Server Name | Indexed Fields         | Field Analyzer                          | Status
---------- | ----------- | ----------- | ---------------------- | --------------------------------------- | -------
testIndex  | /testRegion | server1     | [__REGION_VALUE_FIELD] | {__REGION_VALUE_FIELD=StandardAnalyzer} | Defined
{noformat}
In this case, if another member is started, it will not get the index 
definition since it is not saved in cluster config nor is there a message 
between the members exchanging defined indexes.

If the region is created in this scenario, then only one of the servers will 
successfully create it.
{noformat}
gfsh>create region --name=testRegion --type=PARTITION_REDUNDANT_PERSISTENT
Member  | Status
------- | --------------------------------------------------------------------
server2 | ERROR: Must create Lucene index testIndex on region /testRegion because it is defined in another member.
server1 | Region "/testRegion" created on "server1"
{noformat}
When the region is created, the defined indexes for it are then created and an 
element like this is added to cluster config:
{noformat}
<region name="testRegion">
  ...
  <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene"
      name="testIndex">
    <lucene:field name="__REGION_VALUE_FIELD" .../>
  </lucene:index>
  ...
</region>
{noformat}
Any member started after the region element is in cluster config will get the 
region and index. The Initialized Status means that the region has been created.
{noformat}
gfsh>list lucene indexes
Index Name | Region Path | Server Name | Indexed Fields         | Field Analyzer                          | Status
---------- | ----------- | ----------- | ---------------------- | --------------------------------------- | -----------
testIndex  | /testRegion | server1     | [__REGION_VALUE_FIELD] | {__REGION_VALUE_FIELD=StandardAnalyzer} | Initialized
testIndex  | /testRegion | server2     | [__REGION_VALUE_FIELD] | {__REGION_VALUE_FIELD=StandardAnalyzer} | Initialized
{noformat}
So, this sequence fails:

- start server
- create index
- start other server
- create region

This sequence succeeds:

- start server
- create index
- create region
- start other server

In order to make the first scenario successful, we would have to either:

- persist the index definition in the cluster config so that when other members 
start, they get all the defined indexes and create them when the region is 
created
- pass LuceneService defined indexes between members at startup (maybe the 
StartupMessage can be used for this?)


> Adding server after defining Lucene index results in unusable cluster
> -
>
> Key: GEODE-2979
> URL: https://issues.apache.org/jira/browse/GEODE-2979
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Diane Hardman
> Fix For: 1.2.0
>
>
> Here are the gfsh commands I used:
> {noformat}
> ## start locator
> start locator --name=locator1 --port=12345
> ## start first server
> start server --name=server50505 --server-port=50505 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> ## create lucene index on region testRegion
> create lucene index --name=testIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> ## start second server
> start server --name=server50506 --server-port=50506 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> ## list indexes - NOTE lucene index only listed on first server
> gfsh>list members
>Name | Id
> --- | -
> locator1| 192.168.1.57(locator1:60525:locator):1024
> server50505 | 192.168.1.57(server50505:60533):1025
> server50506 | 192.168.1.57(server50506:60587):1026
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Server Name | Inde.. | Field Anal.. | Status  | 
> Query Executions | Updates | Commits | Documents
> -- | --- | --- | -- |  | --- | 
>  | 

[jira] [Resolved] (GEODE-2958) create replicate region with lucene index may restore destroyed defined lucene index

2017-05-24 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2958.
--
Resolution: Fixed

> create replicate region with lucene index may restore destroyed defined 
> lucene index 
> -
>
> Key: GEODE-2958
> URL: https://issues.apache.org/jira/browse/GEODE-2958
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
> Fix For: 1.2.0
>
>
> Executing the below commands in gfsh will result in the destroyed index being 
> created.  It appears that the combination of destroying the lucene index 
> while specifying the region name and index name, along with attempting to 
> create a replicate region can cause the destroyed index to be restored and 
> created when a partition region with the same name finally is created.
> create lucene index --name="GHOST_INDEX" --region="test" --field=name
> list lucene indexes
> destroy lucene index --region=test --name="GHOST_INDEX"
> create lucene index --name="LUCENE_INDEX" --region="test" --field=name
> create region --name=test --type=REPLICATE
> create region --name=test --type=PARTITION
> list lucene indexes
> If the --name parameter of the index was not supplied on the destroy, then 
> things work fine.





[jira] [Resolved] (GEODE-2943) Invalid queryStrings cause lucene searches to hang in PR with multiple nodes

2017-05-23 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2943.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Invalid queryStrings cause lucene searches to hang in PR with multiple 
> nodes
> ---
>
> Key: GEODE-2943
> URL: https://issues.apache.org/jira/browse/GEODE-2943
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>Reporter: Shelley Lynn Hughes-Godfrey
> Fix For: 1.2.0
>
>
> Some invalid query strings might be "*" or " ".
> When used with a single node dataStore, we see the correct Exception returned:
> {noformat}
> gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings="*" 
> --defaultField=__REGION_VALUE_FIELD
> Could not process command due to GemFire error. An error occurred while 
> searching lucene index across the Geode cluster: Leading wildcard is not 
> allowed: __REGION_VALUE_FIELD:*
> {noformat}
> However, with multiple nodes, the query hangs. 
> Jason debugged this a bit and found:
> {noformat}
> org.apache.geode.InternalGemFireException: java.io.NotSerializableException: 
> org.apache.lucene.queryparser.flexible.messages.MessageImpl
> at 
> org.apache.geode.distributed.internal.DistributionManager.putOutgoing(DistributionManager.java:1838)
> at 
> org.apache.geode.distributed.internal.ReplyMessage.send(ReplyMessage.java:111)
> at 
> org.apache.geode.internal.cache.partitioned.PartitionMessage.sendReply(PartitionMessage.java:441)
> at 
> org.apache.geode.internal.cache.partitioned.PartitionMessage.process(PartitionMessage.java:421)
> at 
> org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
> at 
> org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:625)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1071)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.NotSerializableException: 
> org.apache.lucene.queryparser.flexible.messages.MessageImpl
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:441)
> at java.lang.Throwable.writeObject(Throwable.java:985)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:988)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
> {noformat}
> The executing node fails with 
> {noformat}
> [info 2017/05/18 13:50:34.115 PDT server1  
> tid=0x120] Unexpected exception during function execution on local node 
> Partitioned Region
> org.apache.geode.cache.execute.FunctionException: 
> org.apache.geode.cache.lucene.LuceneQueryException: Malformed lucene query: 
> *asdf*
> at 
> org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.getQuery(LuceneQueryFunction.java:163)
> at 
> org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction.execute(LuceneQueryFunction.java:87)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:332)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$1.run(AbstractExecution.java:274)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> 

[jira] [Created] (GEODE-2975) Attributes are not validated in lucene xml configuration

2017-05-22 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2975:


 Summary: Attributes are not validated in lucene xml configuration
 Key: GEODE-2975
 URL: https://issues.apache.org/jira/browse/GEODE-2975
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


No exception is thrown for a lucene xml configuration missing a required 
attribute.

No exception is thrown for a lucene xml configuration including an unknown 
attribute.

If a {{lucene:field}} element is defined like below, no exception is thrown for 
the invalid attribute called {{namexx}}, and no exception is thrown because the 
required attribute called {{name}} is not included.
{noformat}
<lucene:field namexx="..."/>
{noformat}





[jira] [Created] (GEODE-2973) A lucene index element defined outside the context of a region element in xml throws a ClassCastException

2017-05-22 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2973:


 Summary: A lucene index element defined outside the context of a 
region element in xml throws a ClassCastException
 Key: GEODE-2973
 URL: https://issues.apache.org/jira/browse/GEODE-2973
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


If a {{lucene:index}} element is defined directly below the {{cache}} element 
like this:
{noformat}
<cache ...>
  <lucene:index .../>
</cache>
{noformat}
Then a ClassCastException like below is thrown rather than a 
{{SAXParseException}} (or maybe a more specific exception):
{noformat}
Exception in thread "main" org.apache.geode.cache.CacheXmlException: While 
reading Cache XML 
file:/Users/boglesby/Dev/Tests/client-server/lucene/nyc-311/geode-lucene/config/gemfire-server.xml.
 While parsing XML, caused by java.lang.ClassCastException: 
org.apache.geode.internal.cache.xmlcache.CacheCreation cannot be cast to 
org.apache.geode.internal.cache.xmlcache.RegionCreation
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser.parse(CacheXmlParser.java:267)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4282)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1390)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1195)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:758)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:745)
at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:173)
at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:212)
at TestBase.initializeServerCache(TestBase.java:22)
at TestServer.main(TestServer.java:12)
Caused by: java.lang.ClassCastException: 
org.apache.geode.internal.cache.xmlcache.CacheCreation cannot be cast to 
org.apache.geode.internal.cache.xmlcache.RegionCreation
at 
org.apache.geode.cache.lucene.internal.xml.LuceneXmlParser.startIndex(LuceneXmlParser.java:71)
at 
org.apache.geode.cache.lucene.internal.xml.LuceneXmlParser.startElement(LuceneXmlParser.java:47)
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser.startElement(CacheXmlParser.java:2748)
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser$DefaultHandlerDelegate.startElement(CacheXmlParser.java:3369)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509)
at 
com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.startElement(XMLSchemaValidator.java:749)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:379)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2786)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
at 
com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649)
at 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:333)
at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser.parse(CacheXmlParser.java:224)
... 9 more
{noformat}





[jira] [Created] (GEODE-2972) An incorrectly named lucene element in xml throws a ClassCastException

2017-05-22 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2972:


 Summary: An incorrectly named lucene element in xml throws a 
ClassCastException
 Key: GEODE-2972
 URL: https://issues.apache.org/jira/browse/GEODE-2972
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


If a region is defined like this:
{noformat}
<region ...>
  <lucene:indexxx ...>
    <lucene:field .../>
  </lucene:indexxx>
</region>
{noformat}
Then the {{lucene:indexxx}}  element is ignored, and a {{ClassCastException}} 
like below is thrown when the {{lucene:field}} element is processed rather than 
a {{SAXParseException}} (or maybe a more specific exception).
{noformat}

Exception in thread "main" org.apache.geode.cache.CacheXmlException: While 
reading Cache XML 
file:/Users/boglesby/Dev/Tests/client-server/lucene/nyc-311/geode-lucene/config/gemfire-server.xml.
 While parsing XML, caused by java.lang.ClassCastException: 
org.apache.geode.internal.cache.xmlcache.RegionCreation cannot be cast to 
org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser.parse(CacheXmlParser.java:267)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4282)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1390)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1195)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:758)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:745)
at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:173)
at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:212)
at TestBase.initializeServerCache(TestBase.java:22)
Caused by: java.lang.ClassCastException: 
org.apache.geode.internal.cache.xmlcache.RegionCreation cannot be cast to 
org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation
at 
org.apache.geode.cache.lucene.internal.xml.LuceneXmlParser.startField(LuceneXmlParser.java:59)
at 
org.apache.geode.cache.lucene.internal.xml.LuceneXmlParser.startElement(LuceneXmlParser.java:50)
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser.startElement(CacheXmlParser.java:2748)
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser$DefaultHandlerDelegate.startElement(CacheXmlParser.java:3369)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:182)
at 
com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.emptyElement(XMLSchemaValidator.java:780)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:356)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2786)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
at 
com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649)
at 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:333)
at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser.parse(CacheXmlParser.java:224)
... 9 more
{noformat}






[jira] [Created] (GEODE-2968) Provide an API to set identity field(s) on JSON objects

2017-05-22 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2968:


 Summary: Provide an API to set identity field(s) on JSON objects
 Key: GEODE-2968
 URL: https://issues.apache.org/jira/browse/GEODE-2968
 Project: Geode
  Issue Type: Improvement
  Components: serialization
Reporter: Barry Oglesby


I have a JSON object with 53 fields. The identity of that object is one 
specific field (the {{Unique_Key}} field in this case), but I can't specify 
that when loading the object. This causes {{PdxInstanceImpl equals}} and 
{{hashCode}} to use all 53 fields in their determinations and is especially bad 
for OQL queries.

I hacked {{PdxInstanceHelper addIntField}} to set an identity field like:
{noformat}
if (fieldName.equals("Unique_Key")) {
  m_pdxInstanceFactory.markIdentityField(fieldName);
}
{noformat}
Here are some queries before and after this change:

Before:
{noformat}
Totals query=SELECT * FROM /data WHERE Agency = 'NYPD'; resultSize=1890; 
iterations=1000; totalTime=30529 ms; averagePerQuery=30.529 ms
Totals query=SELECT * FROM /data WHERE Incident_Address LIKE '%AVENUE%'; 
resultSize=2930; iterations=1000; totalTime=62723 ms; averagePerQuery=62.723 ms
Totals query=SELECT * FROM /data; resultSize=1; iterations=1000; 
totalTime=87673 ms; averagePerQuery=87.673 ms
{noformat}
After:
{noformat}
Totals query=SELECT * FROM /data WHERE Agency = 'NYPD'; resultSize=1890; 
iterations=1000; totalTime=12417 ms; averagePerQuery=12.417 ms
Totals query=SELECT * FROM /data WHERE Incident_Address LIKE '%AVENUE%'; 
resultSize=2930; iterations=1000; totalTime=29517 ms; averagePerQuery=29.517 ms
Totals query=SELECT * FROM /data; resultSize=1; iterations=1000; 
totalTime=44127 ms; averagePerQuery=44.127 ms
{noformat}
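
For comparison, when an instance is built by hand, the public 
PdxInstanceFactory API can already mark the identity field (a sketch; the class 
name and field values are illustrative). The missing piece this issue asks for 
is a way to do the same for objects loaded from JSON:
{noformat}
import org.apache.geode.cache.Cache;
import org.apache.geode.pdx.PdxInstance;
import org.apache.geode.pdx.PdxInstanceFactory;

public class IdentityFieldExample {
  public static PdxInstance create(Cache cache) {
    PdxInstanceFactory factory = cache.createPdxInstanceFactory("ServiceRequest");
    factory.writeInt("Unique_Key", 25419013);
    factory.writeString("Agency", "HPD");
    // equals and hashCode now consider only Unique_Key instead of all 53 fields
    factory.markIdentityField("Unique_Key");
    return factory.create();
  }
}
{noformat}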
Here is an example of the JSON object:
{noformat}
 {
   "Unique_Key": 25419013,
   "Created_Date": "04/24/2013 12:00:00 AM",
   "Closed_Date": "04/25/2013 12:00:00 AM",
   "Agency": "HPD",
   "Agency_Name": "Department of Housing Preservation and Development",
   "Complaint_Type": "PLUMBING",
   "Descriptor": "WATER-SUPPLY",
   "Location_Type": "RESIDENTIAL BUILDING",
   "Incident_Zip": "11372",
   "Incident_Address": "37-37 88 STREET",
   "Street_Name": "88 STREET",
   "Cross_Street_1": "37 AVENUE",
   "Cross_Street_2": "ROOSEVELT AVENUE",
   "Intersection_Street_1": "",
   "Intersection_Street_2": "",
   "Address_Type": "ADDRESS",
   "City": "Jackson Heights",
   "Landmark": "",
   "Facility_Type": "N/A",
   "Status": "Closed",
   "Due_Date": "",
   "Resolution_Description": "The Department of Housing Preservation and 
Development inspected the following conditions. No violations were issued. The 
complaint has been closed.",
   "Resolution_Action_Updated_Date": "04/25/2013 12:00:00 AM",
   "Community_Board": "03 QUEENS",
   "Borough": "QUEENS",
   "X_Coordinate_State_Plane": 1017897,
   "Y_Coordinate_State_Plane": 212354,
   "Park_Facility_Name": "Unspecified",
   "Park_Borough": "QUEENS",
   "School_Name": "Unspecified",
   "School_Number": "Unspecified",
   "School_Region": "Unspecified",
   "School_Code": "Unspecified",
   "School_Phone_Number": "Unspecified",
   "School_Address": "Unspecified",
   "School_City": "Unspecified",
   "School_State": "Unspecified",
   "School_Zip": "Unspecified",
   "School_Not_Found": "",
   "School_or_Citywide_Complaint": "",
   "Vehicle_Type": "",
   "Taxi_Company_Borough": "",
   "Taxi_Pick_Up_Location": "",
   "Bridge_Highway_Name": "",
   "Bridge_Highway_Direction": "",
   "Road_Ramp": "",
   "Bridge_Highway_Segment": "",
   "Garage_Lot_Name": "",
   "Ferry_Direction": "",
   "Ferry_Terminal_Name": "",
   "Latitude": 40.74947521870806,
   "Longitude": -73.87856355000383,
   "Location": "(40.74947521870806, -73.87856355000383)"
 }
 {noformat}






[jira] [Commented] (GEODE-2943) Invalid queryStrings cause lucene searches to hang in PR with multiple nodes

2017-05-19 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16017944#comment-16017944
 ] 

Barry Oglesby commented on GEODE-2943:
--

These queries work fine:
{noformat}
gfsh>search lucene --name=index --region=data --defaultField=* --queryStrings=*
gfsh>search lucene --name=index --region=data --defaultField=Agency 
--queryStrings=*:*
gfsh>search lucene --name=index --region=data --defaultField=XXX 
--queryStrings=*:*
{noformat}
This one fails:
{noformat}
gfsh>search lucene --name=index --region=data --defaultField=Agency 
--queryStrings=*
{noformat}
By default, lucene queries don't allow leading wildcards, so this one fails 
with a LEADING_WILDCARD_NOT_ALLOWED error.
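
The behavior can be reproduced standalone with Lucene's StandardQueryParser (a 
sketch, independent of Geode's StringQueryProvider):
{noformat}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;

public class WildcardParseCheck {
  public static void main(String[] args) throws Exception {
    StandardQueryParser parser = new StandardQueryParser(new StandardAnalyzer());
    try {
      // Rejected: leading wildcards are disallowed by default
      parser.parse("*", "Agency");
    } catch (Exception e) {
      System.out.println("rejected: " + e.getMessage());
    }
    // Allowing leading wildcards lets the same string parse
    parser.setAllowLeadingWildcard(true);
    System.out.println("parsed: " + parser.parse("*", "Agency"));
  }
}
{noformat}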

The ones that succeed are parsed a bit differently than the one that fails.

The StandardQueryParser converts the input query to a QueryNode and sends it 
through a StandardQueryNodeProcessorPipeline to manipulate the query. The 
StandardQueryNodeProcessorPipeline is an ordered list of QueryNodeProcessors. 
The QueryNode is sent through all of these, any one of which may change the 
original QueryNode and return a different one. 

The StandardQueryNodeProcessorPipeline defines these QueryNodeProcessors:

- WildcardQueryNodeProcessor
- MultiFieldQueryNodeProcessor
- FuzzyQueryNodeProcessor
- MatchAllDocsQueryNodeProcessor
- OpenRangeQueryNodeProcessor
- LegacyNumericQueryNodeProcessor
- LegacyNumericRangeQueryNodeProcessor
- PointQueryNodeProcessor
- PointRangeQueryNodeProcessor
- LowercaseExpandedTermsQueryNodeProcessor
- TermRangeQueryNodeProcessor
- AllowLeadingWildcardProcessor
- AnalyzerQueryNodeProcessor
- PhraseSlopQueryNodeProcessor
- BooleanQuery2ModifierNodeProcessor
- NoChildOptimizationQueryNodeProcessor
- RemoveDeletedQueryNodesProcessor
- RemoveEmptyNonLeafQueryNodeProcessor
- BooleanSingleChildOptimizationQueryNodeProcessor
- DefaultPhraseSlopQueryNodeProcessor
- BoostQueryNodeProcessor
- MultiTermRewriteMethodProcessor

In all cases, the QueryNode starts out as a FieldQueryNode:
{noformat}
StringQueryProvider.getQuery initialQueryTree= 
class=org.apache.lucene.queryparser.flexible.core.nodes.FieldQueryNode
{noformat}
The query that fails gets converted to a WildcardQueryNode by the 
WildcardQueryNodeProcessor and remains that way until it gets to the 
AllowLeadingWildcardProcessor. At that point it fails with the exception:
{noformat}
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.WildcardQueryNodeProcessor@3af718f0
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.MultiFieldQueryNodeProcessor@b4c44dc
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.FuzzyQueryNodeProcessor@97eeddd
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.MatchAllDocsQueryNodeProcessor@2a7914c6
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.OpenRangeQueryNodeProcessor@6358136c
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.LegacyNumericQueryNodeProcessor@a5bc17f
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.LegacyNumericRangeQueryNodeProcessor@46fca899
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.PointQueryNodeProcessor@17746c87
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 
processor=org.apache.lucene.queryparser.flexible.standard.processors.PointRangeQueryNodeProcessor@2d71f37f
StringQueryProvider.getQuery intermediateQueryTree= 
class=org.apache.lucene.queryparser.flexible.standard.nodes.WildcardQueryNode
StringQueryProvider.getQuery 

[jira] [Created] (GEODE-2952) gfsh doesn't support exact match lucene queries

2017-05-19 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2952:


 Summary: gfsh doesn't support exact match lucene queries
 Key: GEODE-2952
 URL: https://issues.apache.org/jira/browse/GEODE-2952
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


This command:
{noformat}
gfsh>search lucene --name=index --region=data 
--defaultField=Resolution_Description --queryStrings='Police Dept'
{noformat}
Runs this lucene query:
{noformat}
Resolution_Description:police Resolution_Description:dept
{noformat}
I also tried this command which ran the same lucene query as above:
{noformat}
gfsh>search lucene --name=index --region=data 
--defaultField=Resolution_Description --queryStrings='\"Police Dept\"'
{noformat}
The Java API supports exact match queries with double quotes around the 
queryString. Doing this causes this lucene query to be run:
{noformat}
Resolution_Description:"police dept"
{noformat}
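
For reference, a sketch of that Java API call (the index and region names 
follow the gfsh commands above; the embedded quotes are what produce the 
phrase query):
{noformat}
import java.util.List;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.lucene.LuceneQuery;
import org.apache.geode.cache.lucene.LuceneResultStruct;
import org.apache.geode.cache.lucene.LuceneService;
import org.apache.geode.cache.lucene.LuceneServiceProvider;

public class ExactMatchQuery {
  public static List<LuceneResultStruct<Object, Object>> run(Cache cache)
      throws Exception {
    LuceneService service = LuceneServiceProvider.get(cache);
    // Runs the phrase query Resolution_Description:"police dept"
    LuceneQuery<Object, Object> query = service.createLuceneQueryFactory()
        .create("index", "data", "\"Police Dept\"", "Resolution_Description");
    return query.findResults();
  }
}
{noformat}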






[jira] [Created] (GEODE-2951) A gfsh lucene query specifying --pageSize fails with a NullPointerException

2017-05-19 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2951:


 Summary: A gfsh lucene query specifying --pageSize fails with a 
NullPointerException
 Key: GEODE-2951
 URL: https://issues.apache.org/jira/browse/GEODE-2951
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


A gfsh lucene query specifying {{--pageSize}} fails with a NullPointerException:

{noformat}
gfsh>search lucene --name=index --region=data --queryStrings=NYPD 
--defaultField=Agency --pageSize=10
Could not process command due to GemFire error. An error occurred while 
searching lucene index across the Geode cluster: null
{noformat}
This exception is logged in the locator.log:
{noformat}
[info 2017/05/18 12:42:22.317 PDT locator  
tid=0x7f] null
java.lang.NullPointerException
at 
org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommands.displayResults(LuceneIndexCommands.java:476)
at 
org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommands.searchIndex(LuceneIndexCommands.java:299)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{noformat}
The same query without the {{--pageSize=10}} setting works fine.
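
For reference, a sketch of the corresponding Java API paging calls that the 
gfsh option presumably mirrors (index and region names follow the command 
above):
{noformat}
import java.util.List;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.lucene.LuceneQuery;
import org.apache.geode.cache.lucene.LuceneResultStruct;
import org.apache.geode.cache.lucene.LuceneService;
import org.apache.geode.cache.lucene.LuceneServiceProvider;
import org.apache.geode.cache.lucene.PageableLuceneQueryResults;

public class PagedQuery {
  public static void run(Cache cache) throws Exception {
    LuceneService service = LuceneServiceProvider.get(cache);
    LuceneQuery<Object, Object> query = service.createLuceneQueryFactory()
        .setPageSize(10)
        .create("index", "data", "NYPD", "Agency");
    // Results come back one page (10 results here) at a time
    PageableLuceneQueryResults<Object, Object> pages = query.findPages();
    while (pages.hasNext()) {
      List<LuceneResultStruct<Object, Object>> page = pages.next();
      System.out.println("page of " + page.size() + " results");
    }
  }
}
{noformat}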





[jira] [Created] (GEODE-2950) Lucene index names should be restricted to valid region names since the index name becomes part of a region

2017-05-19 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2950:


 Summary: Lucene index names should be restricted to valid region 
names since the index name becomes part of a region
 Key: GEODE-2950
 URL: https://issues.apache.org/jira/browse/GEODE-2950
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


Currently, invalid region name characters can be used as index names. The index 
name becomes part of the async event queue id which becomes part of the 
colocated region name, so invalid characters shouldn't be allowed as index 
names. LocalRegion has a validateRegionName method that restricts the names to 
{{\[aA-zZ0-9-_.\]+}}. This method should be called to validate index names.
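
A sketch of the suggested validation, assuming the pattern quoted above is 
applied to index names as-is:
{noformat}
import java.util.regex.Pattern;

public class IndexNameValidator {
  // Same pattern quoted above from LocalRegion.validateRegionName
  private static final Pattern NAME_PATTERN = Pattern.compile("[aA-zZ0-9-_.]+");

  public static void validate(String indexName) {
    if (indexName == null || !NAME_PATTERN.matcher(indexName).matches()) {
      throw new IllegalArgumentException(
          "Index name " + indexName + " contains invalid characters");
    }
  }
}
{noformat}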

Here is an example (option-j creates the ∆):
{noformat}
gfsh>create lucene index --name=∆∆∆ --region=data --field=text
Member                          | Status
------------------------------- | ----------------------------------
192.168.2.4(server2:53308):1025 | Successfully created lucene index
192.168.2.4(server1:53315):1026 | Successfully created lucene index
{noformat}
{noformat}
gfsh>create region --name=data --type=PARTITION
Member  | Status
------- | ------------------------------------
server2 | Region "/data" created on "server2"
server1 | Region "/data" created on "server1"
{noformat}
{noformat}
gfsh>put --key=0 --value=0 --region=data
Result  : true
Key Class   : java.lang.String
Key : 0
Value Class : java.lang.String
Old Value   : 
{noformat}
{noformat}
gfsh>describe lucene index --name=∆∆∆ --region=/data
Index Name | Region Path | Server Name | Indexed Fields | Field Analyzer          | Status      | Query Executions | Updates | Commits | Documents
---------- | ----------- | ----------- | -------------- | ----------------------- | ----------- | ---------------- | ------- | ------- | ---------
∆∆∆        | /data       | server1     | [text]         | {text=StandardAnalyzer} | Initialized | 0                | 0       | 0       | 0
∆∆∆        | /data       | server2     | [text]         | {text=StandardAnalyzer} | Initialized | 0                | 1       | 1       | 1
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2949) Creating a lucene index containing a slash and then the region using gfsh causes an inconsistent state

2017-05-19 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby updated GEODE-2949:
-
Summary: Creating a lucene index containing a slash and then the region 
using gfsh causes an inconsistent state  (was: Using gfsh to create an index 
containing a slash and then the region causes an inconsistent state)

> Creating a lucene index containing a slash and then the region using gfsh 
> causes an inconsistent state
> --
>
> Key: GEODE-2949
> URL: https://issues.apache.org/jira/browse/GEODE-2949
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
>
> Creating an index containing a slash with gfsh is successful:
> {noformat}
> gfsh>create lucene index --name='/slashed with spaces' --region=sam 
> --field=text
> Member                          | Status
> ------------------------------- | ----------------------------------
> 192.168.2.4(server2:52699):1026 | Successfully created lucene index
> 192.168.2.4(server1:52692):1025 | Successfully created lucene index
> {noformat}
> And creating the region with gfsh fails:
> {noformat}
> gfsh>create region --name=sam --type=PARTITION
> Member  | Status
> --- | --
> server2 | ERROR: name cannot contain the separator ' / '
> server1 | ERROR: name cannot contain the separator ' / '
> {noformat}
> But the logs show the async event queue and region have been created:
> {noformat}
> [info 2017/05/18 11:25:53.089 PDT server2  
> tid=0x41] Started  ParallelGatewaySender{id=AsyncEventQueue_/slashed with 
> spaces#_sam,remoteDsId=-1,isRunning =true}
> [info 2017/05/18 11:25:53.094 PDT server2  
> tid=0x41] Partitioned Region /sam is born with prId=11 ident:#sam
> {noformat}
> And destroying the index says no indexes were found:
> {noformat}
>  gfsh>destroy lucene index --region=/data
> Member                          | Status
> ------------------------------- | ---------------------------------------------
> 192.168.2.4(server1:52692):1025 | No Lucene indexes were found in region /data
> 192.168.2.4(server2:52699):1026 | No Lucene indexes were found in region /data
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2949) Using gfsh to create an index containing a slash and then the region causes an inconsistent state

2017-05-19 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2949:


 Summary: Using gfsh to create an index containing a slash and then 
the region causes an inconsistent state
 Key: GEODE-2949
 URL: https://issues.apache.org/jira/browse/GEODE-2949
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


Creating an index containing a slash with gfsh is successful:
{noformat}
gfsh>create lucene index --name='/slashed with spaces' --region=sam --field=text
Member                          | Status
------------------------------- | ----------------------------------
192.168.2.4(server2:52699):1026 | Successfully created lucene index
192.168.2.4(server1:52692):1025 | Successfully created lucene index
{noformat}
And creating the region with gfsh fails:
{noformat}
gfsh>create region --name=sam --type=PARTITION
Member  | Status
--- | --
server2 | ERROR: name cannot contain the separator ' / '
server1 | ERROR: name cannot contain the separator ' / '
{noformat}
But the logs show the async event queue and region have been created:
{noformat}
[info 2017/05/18 11:25:53.089 PDT server2  
tid=0x41] Started  ParallelGatewaySender{id=AsyncEventQueue_/slashed with 
spaces#_sam,remoteDsId=-1,isRunning =true}
[info 2017/05/18 11:25:53.094 PDT server2  
tid=0x41] Partitioned Region /sam is born with prId=11 ident:#sam
{noformat}
And destroying the index says no indexes were found:
{noformat}
 gfsh>destroy lucene index --region=/data
Member                          | Status
------------------------------- | ---------------------------------------------
192.168.2.4(server1:52692):1025 | No Lucene indexes were found in region /data
192.168.2.4(server2:52699):1026 | No Lucene indexes were found in region /data
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-1130) Session state modules DeltaEvent logs extraneous 'attribute is already a byte[]' message

2017-05-11 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-1130.
--
Resolution: Fixed

> Session state modules DeltaEvent logs extraneous 'attribute is already a 
> byte[]' message
> 
>
> Key: GEODE-1130
> URL: https://issues.apache.org/jira/browse/GEODE-1130
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> The following message is logged by {{DeltaEvent.blobifyValue}}:
> {noformat}
> [warning 2016/03/22 15:00:06.867 GMT+09:00   tid=0x60] Session 
> attribute is already a byte[] - problems may occur transmitting this delta.
> {noformat}
> Here:
> {noformat}
> if (value instanceof byte[]) {
>   LOG.warn("Session attribute is already a byte[] - problems may "
>       + "occur transmitting this delta.");
> }
> {noformat}
> It can safely be removed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-1130) Session state modules DeltaEvent logs extraneous 'attribute is already a byte[]' message

2017-05-11 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby updated GEODE-1130:
-
Description: 
The following message is logged by {{DeltaEvent.blobifyValue}}:
{noformat}
[warning 2016/03/22 15:00:06.867 GMT+09:00   tid=0x60] Session 
attribute is already a byte[] - problems may occur transmitting this delta.
{noformat}
Here:
{noformat}
if (value instanceof byte[]) {
  LOG.warn("Session attribute is already a byte[] - problems may "
      + "occur transmitting this delta.");
}
{noformat}
It can safely be removed.

  was:
The following message is logged by {{DeltaEvent.blobifyValue}}:
{noformat}
[warning 2016/03/22 15:00:06.867 GMT+09:00   tid=0x60] Session 
attribute is already a byte[] - problems may occur transmitting this delta.
{noformat}
Here:
{noformat}
if (value instanceof byte[]) {
  LOG.warn("Session attribute is already a byte[] - problems may "
      + "occur transmitting this delta.");
}
{noformat}
I can safely be removed.


> Session state modules DeltaEvent logs extraneous 'attribute is already a 
> byte[]' message
> 
>
> Key: GEODE-1130
> URL: https://issues.apache.org/jira/browse/GEODE-1130
> Project: Geode
>  Issue Type: Bug
>  Components: http session
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> The following message is logged by {{DeltaEvent.blobifyValue}}:
> {noformat}
> [warning 2016/03/22 15:00:06.867 GMT+09:00   tid=0x60] Session 
> attribute is already a byte[] - problems may occur transmitting this delta.
> {noformat}
> Here:
> {noformat}
> if (value instanceof byte[]) {
>   LOG.warn("Session attribute is already a byte[] - problems may "
>       + "occur transmitting this delta.");
> }
> {noformat}
> It can safely be removed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-387) GatewayReceivers configured using cache xml should be started after the regions are created

2017-05-11 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-387.
-
Resolution: Duplicate

This is the same as GEODE-1814.

> GatewayReceivers configured using cache xml should be started after the 
> regions are created
> ---
>
> Key: GEODE-387
> URL: https://issues.apache.org/jira/browse/GEODE-387
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> Currently, {{GatewayReceivers}} configured using cache xml are started before 
> the {{Regions}} in that xml. This can cause data loss since a remote 
> {{GatewaySender}} could connect to that {{GatewayReceiver}} and send batches 
> before the {{Regions}} are created.
> First, the {{GatewayReceiver}} is started:
> {noformat}
> [info 2015/04/29 16:30:38.644 PDT host1  tid=0x1] The GatewayReceiver 
> started on port : 1,575
> {noformat}
> Then, events start being received and dropped:
> {noformat}
> [warning 2015/04/29 16:30:38.978 PDT host1  Thread 1> tid=0x7b] Server connection from 
> [identity(xx.x.xx.xx(host1:6372):27373,connection=1; port=36493]: Wrote 
> batch exception: 
> com.gemstone.gemfire.internal.cache.wan.BatchException70: Exception occurred 
> while processing a batch on the receiver running on DistributedSystem with 
> Id: 2, DistributedMember on which the receiver is running: 
> xx.x.xx.xxx(host1:25784):3298
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.command.GatewayReceiverCommand.cmdExecute(GatewayReceiverCommand.java:648)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:182)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:789)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:920)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1165)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:577)
>   at java.lang.Thread.run(Thread.java:724)
> Caused by: com.gemstone.gemfire.cache.RegionDestroyedException: Region /trade 
> was not found during batch create request 0
>   at 
> com.gemstone.gemfire.internal.cache.tier.sockets.command.GatewayReceiverCommand.cmdExecute(GatewayReceiverCommand.java:288)
>   ... 8 more
> {noformat}
> Finally, the region is created:
> {noformat}
> [info 2015/04/29 16:30:39.203 PDT host1  tid=0x1] Partitioned Region 
> /trade is born with prId=21 ident:#trade
> {noformat}
> In {{CacheCreation.create}}, the {{GatewayReceiver}} is created and started 
> before the regions are initialized. For comparison, the {{GatewayHub}} (which 
> is the predecessor to the {{GatewayReceiver}}) is started after the regions 
> are created. This is the way it should be.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2886) The WaitUntilFlushedFunction throws an IllegalArgumentException instead of an IllegalStateException

2017-05-05 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2886:


 Summary: The WaitUntilFlushedFunction throws an 
IllegalArgumentException instead of an IllegalStateException
 Key: GEODE-2886
 URL: https://issues.apache.org/jira/browse/GEODE-2886
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


When the AEQ doesn't exist, the WaitUntilFlushedFunction throws an 
IllegalArgumentException like:
{noformat}
Caused by: java.lang.IllegalArgumentException: The AEQ does not exist for the 
index xxx region /yyy
at 
org.apache.geode.cache.lucene.internal.distributed.WaitUntilFlushedFunction.execute(WaitUntilFlushedFunction.java:89)
at 
org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
{noformat}
The arguments are actually fine, so should it instead throw an 
IllegalStateException?
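A minimal sketch of the suggested change (the aeqId, indexName, and regionPath 
variables are assumptions, not the actual function internals):
{noformat}
AsyncEventQueue queue = cache.getAsyncEventQueue(aeqId); // aeqId is hypothetical
if (queue == null) {
  // The caller's arguments were valid; the missing AEQ is a state problem,
  // so IllegalStateException is the better fit.
  throw new IllegalStateException(
      "The AEQ does not exist for the index " + indexName + " region " + regionPath);
}
{noformat}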





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2883) gfsh gc reports incorrect heap sizes

2017-05-05 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2883:


 Summary: gfsh gc reports incorrect heap sizes
 Key: GEODE-2883
 URL: https://issues.apache.org/jira/browse/GEODE-2883
 Project: Geode
  Issue Type: Bug
  Components: gfsh
Reporter: Barry Oglesby


I have 3 servers each with -Xms1g and -Xmx1g, and I load some data.

If I run a gfsh gc, it reports:
{noformat}
GC Summary

Member ID/Name          | HeapSize (MB) Before GC | HeapSize (MB) After GC | Time Taken for GC in ms
----------------------- | ----------------------- | ---------------------- | -----------------------
192.168.2.7(98588):1025 | 2078                    | 1098                   | 516
192.168.2.7(98602):1027 | 1942                    | 1110                   | 530
192.168.2.7(98595):1026 | 2019                    | 1109                   | 531
{noformat}
The heap sizes before and after are higher than they should be.

I added some debugging in the GarbageCollectionFunction, and it shows:
{noformat}
GarbageCollectionFunction.execute freeMemoryBeforeGC=765581248; 
totalMemoryBeforeGC=1037959168
GarbageCollectionFunction.execute freeMemoryAfterGC=893946464; 
totalMemoryAfterGC=1037959168
GarbageCollectionFunction.execute HeapSizeBeforeGC=2078
GarbageCollectionFunction.execute HeapSizeAfterGC=1098

GarbageCollectionFunction.execute freeMemoryBeforeGC=773307528; 
totalMemoryBeforeGC=1037959168
GarbageCollectionFunction.execute freeMemoryAfterGC=892508088; 
totalMemoryAfterGC=1037959168
GarbageCollectionFunction.execute HeapSizeBeforeGC=2019
GarbageCollectionFunction.execute HeapSizeAfterGC=1109

GarbageCollectionFunction.execute freeMemoryBeforeGC=783373464; 
totalMemoryBeforeGC=1037959168
GarbageCollectionFunction.execute freeMemoryAfterGC=892349664; 
totalMemoryAfterGC=1037959168
GarbageCollectionFunction.execute HeapSizeBeforeGC=1942
GarbageCollectionFunction.execute HeapSizeAfterGC=1110
{noformat}
The free and total memory are fine, but the heap sizes before and after are 
incorrect.

The function is using 131072 as its divisor:
{noformat}
1037959168-765581248=272377920 / 131072 = 2078
1037959168-893946464=144012704 / 131072 = 1098
{noformat}
It should be using 1024*1024:
{noformat}
1037959168-765581248=272377920 / (1024*1024) = 259
1037959168-893946464=144012704 / (1024*1024) = 137
{noformat}
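Since 131072 is 128 * 1024, the reported values are 8x too large. A minimal 
sketch of the corrected conversion (the helper name is hypothetical):
{noformat}
// Convert a used-heap delta in bytes to megabytes: divide by 1024 * 1024, not 131072.
static long usedHeapMB(long totalMemory, long freeMemory) {
  return (totalMemory - freeMemory) / (1024 * 1024);
}

// usedHeapMB(1037959168L, 765581248L) == 259  (before GC)
// usedHeapMB(1037959168L, 893946464L) == 137  (after GC)
{noformat}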




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-1337) Define the API for lucene per-field analyzers

2017-05-05 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-1337.
--
Resolution: Not A Bug

> Define the API for lucene per-field analyzers
> -
>
> Key: GEODE-1337
> URL: https://issues.apache.org/jira/browse/GEODE-1337
> Project: Geode
>  Issue Type: Sub-task
>  Components: lucene
>Reporter: Barry Oglesby
>
> The current API ({{Map<String, Analyzer>}}) is used by LuceneService:
> {noformat}
> public void createIndex(String indexName, String regionPath, Map<String, 
> Analyzer> analyzerPerField);
> {noformat}
> It is also used by LuceneIndex:
> {noformat}
> public Map<String, Analyzer> getFieldAnalyzers();
> {noformat}
> Three options are:
> - keep it as it is
> - change it to {{Map}}
> - change it to {{Map}}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2878) If an exception occurs after retrieving an XAConnection from the ConnectionProvider but before returning it to the application, the GemFireTransactionDataSource doesn't return it to the pool

2017-05-04 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2878:


 Summary: If an exception occurs after retrieving an XAConnection 
from the ConnectionProvider but before returning it to the application, the 
GemFireTransactionDataSource doesn't return it to the pool
 Key: GEODE-2878
 URL: https://issues.apache.org/jira/browse/GEODE-2878
 Project: Geode
  Issue Type: Bug
  Components: transactions
Reporter: Barry Oglesby


In my test, I have 5 threads inserting rows into a derby database.

At first, as connections are being used and returned, the {{activeConnections}} 
is updated correctly:
{noformat}
Thread-16: AbstractPoolCache.getPooledConnectionFromPool activeConnections=1
Thread-15: AbstractPoolCache.getPooledConnectionFromPool activeConnections=2
Thread-17: AbstractPoolCache.getPooledConnectionFromPool activeConnections=3
Thread-14: AbstractPoolCache.getPooledConnectionFromPool activeConnections=4
Thread-18: AbstractPoolCache.getPooledConnectionFromPool activeConnections=5
Thread-16: AbstractPoolCache.returnPooledConnectionToPool activeConnections=4
Thread-14: AbstractPoolCache.returnPooledConnectionToPool activeConnections=3
Thread-18: AbstractPoolCache.returnPooledConnectionToPool activeConnections=2
Thread-17: AbstractPoolCache.returnPooledConnectionToPool activeConnections=1
Thread-15: AbstractPoolCache.returnPooledConnectionToPool activeConnections=0
{noformat}
But, then if an exception occurs after retrieving the {{XAConnection}}, it is 
not returned to the {{ConnectionProvider}}.

In my test, the exception occurs in 
{{GemFireTransactionDataSource.registerTranxConnection}}:
{noformat}
java.lang.Exception: GemFireTransactionDataSource-registerTranxConnection(). 
Exception in registering the XAResource with the Transaction.Exception 
occurred= javax.transaction.SystemException: 
GlobalTransaction::enlistResource::error while enlisting XAResource 
org.apache.derby.client.am.XaException: XAER_RMFAIL : An error occurred during 
a deferred connect reset and the connection has been terminated.
at 
org.apache.geode.internal.datasource.GemFireTransactionDataSource.registerTranxConnection(GemFireTransactionDataSource.java:218)
at 
org.apache.geode.internal.datasource.GemFireTransactionDataSource.getConnection(GemFireTransactionDataSource.java:127)
at TestServer.saveToDB(TestServer.java:177)
at TestServer.save(TestServer.java:154)
at TestServer.loadEntriesIntoDerby(TestServer.java:127)
at TestServer$1.run(TestServer.java:112)
at java.lang.Thread.run(Thread.java:745)
{noformat}
This is after the {{XAConnection}} has been retrieved from the 
{{ConnectionProvider}} and the {{activeConnections}} incremented, but before it 
has been returned to the application. Neither the {{registerTranxConnection}} 
method nor its caller ({{getConnection}}) does anything other than to throw the 
exception. The {{XAConnection}} is not returned to the pool nor is the 
{{activeConnections}} decremented.

Finally, if enough of these exceptions occur, the test stops because all 30 
(default max) connections are in use. They aren't really in use; it's just that 
the activeConnections counter hasn't been properly maintained.
{noformat}
Thread-14: AbstractPoolCache.returnPooledConnectionToPool activeConnections=28
Thread-15: AbstractPoolCache.getPooledConnectionFromPool activeConnections=29
Thread-14: AbstractPoolCache.getPooledConnectionFromPool activeConnections=30
Thread-16: AbstractPoolCache.returnPooledConnectionToPool activeConnections=29
Thread-18: AbstractPoolCache.returnPooledConnectionToPool activeConnections=28
Thread-15: AbstractPoolCache.getPooledConnectionFromPool activeConnections=29
Thread-17: AbstractPoolCache.getPooledConnectionFromPool activeConnections=30
Thread-14: AbstractPoolCache.returnPooledConnectionToPool activeConnections=29
Thread-18: AbstractPoolCache.getPooledConnectionFromPool activeConnections=30
Thread-17: AbstractPoolCache.returnPooledConnectionToPool activeConnections=29
Thread-14: AbstractPoolCache.getPooledConnectionFromPool activeConnections=30
{noformat}
It doesn't really matter what the exception is. If one occurs after retrieving 
the {{XAConnection}}, it needs to be returned to the {{ConnectionProvider}}, or 
at the very least the {{activeConnections}} counter must be decremented.
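A minimal sketch of the suggested handling in {{getConnection}} (the 
borrowConnection/returnConnection method names are assumptions about the 
ConnectionProvider interface):
{noformat}
XAConnection xaConn = (XAConnection) provider.borrowConnection();
try {
  registerTranxConnection(xaConn);
  return xaConn.getConnection();
} catch (Exception e) {
  // Hand the connection back so the activeConnections counter stays accurate.
  provider.returnConnection(xaConn);
  throw new SQLException(e.getMessage(), e);
}
{noformat}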




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-1189) Should the LuceneQueryFactory create API throw a Lucene ParseException?

2017-05-04 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-1189.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Should the LuceneQueryFactory create API throw a Lucene ParseException?
> ---
>
> Key: GEODE-1189
> URL: https://issues.apache.org/jira/browse/GEODE-1189
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> Or should {{Geode}} be wrapping {{Lucene}} exceptions with {{Geode}} 
> exceptions?
> Also, related to that, the {{LuceneQueryFactory create}} is defined like:
> {noformat}
> public <K, V> LuceneQuery<K, V> create(String indexName, String regionName, 
> String queryString) throws ParseException;
> {noformat}
> But, the {{LuceneQueryFactoryImpl create}} method is defined without the 
> throws clause like:
> {noformat}
> public <K, V> LuceneQuery<K, V> create(String indexName, String regionName, 
> String queryString)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-969) Various WAN dunit tests sometimes cause OOME during combineReports task

2017-05-04 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-969.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

> Various WAN dunit tests sometimes cause OOME during combineReports task
> ---
>
> Key: GEODE-969
> URL: https://issues.apache.org/jira/browse/GEODE-969
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
>
> The combineReports task sometimes throws a 'java.lang.OutOfMemoryError: Java 
> heap space' error when creating the JUnit test report like:
> {noformat}
> :combineReports
> All test reports at 
> /japan1/users/build/jenkins/blds/workspace/GemFire_develop_closed_CC/gemfire/closed/build/reports/combined
> FAILURE: Build failed with an exception.
> Where:
> Build file 
> '/japan1/users/build/jenkins/blds/workspace/GemFire_develop_closed_CC/gemfire/closed/pivotalgf-assembly/build.gradle'
>  line: 156
> What went wrong:
> Execution failed for task ':pivotalgf-assembly:legacyDunit'.
> > There were failing tests.
> Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 2 hrs 8 mins 51.811 secs
> Recording test results
> ERROR: Publisher 'Publish JUnit test result report' aborted due to exception:
> java.io.IOException: remote file operation failed: 
> /japan1/users/build/jenkins/blds/workspace/GemFire_develop_closed_CC at 
> hudson.remoting.Channel@a3f9b9e:Japan: java.io.IOException: Remote call on 
> Japan failed
> at hudson.FilePath.act(FilePath.java:987)
> at hudson.FilePath.act(FilePath.java:969)
> at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:90)
> at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:120)
> at 
> hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:137)
> at 
> hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:75)
> at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
> at 
> hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
> at 
> hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:726)
> at hudson.model.Build$BuildExecution.post2(Build.java:185)
> at 
> hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:671)
> at hudson.model.Run.execute(Run.java:1766)
> at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
> at hudson.model.ResourceController.execute(ResourceController.java:98)
> at hudson.model.Executor.run(Executor.java:408)
> Caused by: java.io.IOException: Remote call on Japan failed
> at hudson.remoting.Channel.call(Channel.java:786)
> at hudson.FilePath.act(FilePath.java:980)
> ... 14 more
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1378)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanCDATASection(XMLDocumentFragmentScannerImpl.java:1654)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3020)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
> at 
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
> at 
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
> at 
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649)
> at org.dom4j.io.SAXReader.read(SAXReader.java:465)
> at org.dom4j.io.SAXReader.read(SAXReader.java:264)
> at hudson.tasks.junit.SuiteResult.parse(SuiteResult.java:123)
> at hudson.tasks.junit.TestResult.parse(TestResult.java:282)
> at hudson.tasks.junit.TestResult.parsePossiblyEmpty(TestResult.java:228)
> at hudson.tasks.junit.TestResult.parse(TestResult.java:163)
> at hudson.tasks.junit.TestResult.parse(TestResult.java:146)
> at hudson.tasks.junit.TestResult.<init>(TestResult.java:122)
> at 
> 

[jira] [Resolved] (GEODE-2612) Add option to invoke callbacks while loading a snapshot

2017-05-04 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2612.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Add option to invoke callbacks while loading a snapshot
> ---
>
> Key: GEODE-2612
> URL: https://issues.apache.org/jira/browse/GEODE-2612
> Project: Geode
>  Issue Type: New Feature
>  Components: lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
>
> As a work-around to recreating a lucene index (which is not currently 
> supported), the recommendation is to:
> - export user region
> - destroy indexes and user region
> - recreate index
> - recreate user region
> - import user region
> The lucene index is populated using a hidden AsyncEventQueue, but currently 
> import doesn't invoke callbacks. This feature request is to add an option to 
> SnapshotOptions to cause callbacks to be invoked, so that the index is 
> repopulated during import.
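> Since this is marked fixed for 1.2.0, usage would look roughly like the 
> following sketch (the invokeCallbacks option name and the snapshot file name 
> are assumptions):
> {noformat}
> Region<String, Object> region = cache.getRegion("data");
> RegionSnapshotService<String, Object> service = region.getSnapshotService();
> SnapshotOptions<String, Object> options =
>     service.createOptions().invokeCallbacks(true); // repopulates the lucene index
> service.load(new File("data.gfd"), SnapshotFormat.GEMFIRE, options);
> {noformat}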



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2553) After deleting and recreating my Lucene index and region, my Lucene query hung.

2017-05-04 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2553.
--
Resolution: Fixed

> After deleting and recreating my Lucene index and region, my Lucene query 
> hung.
> ---
>
> Key: GEODE-2553
> URL: https://issues.apache.org/jira/browse/GEODE-2553
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Diane Hardman
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: server50505.log, stack.log
>
>
> While manually testing in gfsh the process of deleting Lucene indexes, 
> deleting the region, creating new indexes and a new empty region, I was able 
> to hang gfsh while doing a Lucene search on the new region with no data.
> Here are the steps I used:
> [gfsh ASCII-art startup banner] 1.2.0-SNAPSHOT
> Monitor and Manage Apache Geode
> gfsh>start locator --name=locator1 --port=12345
> gfsh>start server --name=server50505 --server-port=50505 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> gfsh>create lucene index --name=testIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>put --key=1 --value=value1 --region=testRegion
> gfsh>put --key=2 --value=value2 --region=testRegion
> gfsh>put --key=3 --value=value3 --region=testRegion
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>destroy lucene index --region=/testRegion --name=testIndex
> gfsh>list lucene indexes --with-stats
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> 
> gfsh>destroy lucene index --region=/testRegion
> gfsh>list lucene indexes --with-stats
> gfsh>destroy region --name=/testRegion
> gfsh>create lucene index --name=testIndex --region=testRegion 
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> The gfsh process hangs at this point.
> I'll attach the stacktrace for the server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2230) LuceneIndex.waitUntilFlushed should not have to wait for the queue to be completely empty

2017-05-04 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2230.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> LuceneIndex.waitUntilFlushed should not have to wait for the queue to be 
> completely empty
> -
>
> Key: GEODE-2230
> URL: https://issues.apache.org/jira/browse/GEODE-2230
> Project: Geode
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Dan Smith
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
>
> We added a function to LuceneIndex to wait until updates are flushed to the 
> index with GEODE-1351.
> Unfortunately, the current approach has a few problems. It just waits in a 
> loop polling the size of the queue until it reaches zero. If someone uses 
> this method while the system is constantly receiving updates, the queue may 
> never reach zero.
> It would be better if this method could wait until any data at the time the 
> method was called was completely flushed.
> One way to accomplish this might be to send a function or message to all of 
> the members holding the async event queue for the lucene index. The function 
> could capture the current tail of the queue and wait until that event is 
> dispatched.
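> A sketch of the desired API shape (the exact signature is an assumption):
> {noformat}
> // Wait for events enqueued before this call to reach the index, bounded by a
> // timeout, instead of polling for an empty queue.
> boolean flushed = luceneService.waitUntilFlushed("index", "/data", 60, TimeUnit.SECONDS);
> {noformat}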



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2823) The LuceneEventListener causes deserialized values to be stored in the entry when the region contains DataSerializable or Serializable values

2017-05-01 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2823.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> The LuceneEventListener causes deserialized values to be stored in the entry 
> when the region contains DataSerializable or Serializable values
> 
>
> Key: GEODE-2823
> URL: https://issues.apache.org/jira/browse/GEODE-2823
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> If the {{Region}} contains {{DataSerializable}} or {{Serializable}} values, 
> then each {{RegionEntry}} contains a {{VMCachedDeserializable}}. When 
> {{LuceneEventListener.process}} calls {{entry.getValue()}}, the value is 
> deserialized and left in that state in the {{VMCachedDeserializable}}.
> Below is a live histogram for the no index test.
> {noformat}
>  num #instances #bytes  class name
> --
>1:1019016088544  [B
>2:115600056  
> org.apache.geode.internal.cache.VersionedThinRegionEntryHeapStringKey1
>3: 363463236272  [C
>4:10240  
> org.apache.geode.internal.cache.VMCachedDeserializable
>5:  3792 905488  
> [Lorg.apache.geode.internal.util.concurrent.CustomEntryConcurrentHashMap$HashEntry;
>6: 36161 867864  java.lang.String
>7:  6546 750464  java.lang.Class
>8:  8051 523264  [Ljava.lang.Object;
>9:  5151 453288  java.lang.reflect.Method
>   10:   704 405280  [J
>   11:  8390 402720  
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync
>   12: 11796 377472  
> java.util.concurrent.ConcurrentHashMap$Node
>   13: 11379 364128  java.util.HashMap$Node
>   14:   597 357552  
> [Ljava.util.concurrent.ConcurrentHashMap$Node;
>   15:  3409 319888  [Ljava.util.HashMap$Node;
>   16:  7754 310160  java.util.LinkedHashMap$Entry
>   17:  5817 279216  java.util.TreeMap
>   18:  4031 257984  java.util.concurrent.ConcurrentHashMap
>   19:  6385 255400  java.util.TreeMap$Entry
>   20: 13587 217392  java.lang.Object
> Total611397   28902304
> {noformat}
> Below is a live histogram for the index test. The main thing to notice 
> regarding this bug is the 10 Trade instances.
> {noformat}
>  num #instances #bytes  class name
> --
>1:338275   16181384  [C
>2:322931   15500688  
> org.apache.geode.internal.cache.TombstoneService$Tombstone
>3:220717   12360152  
> org.apache.geode.internal.cache.VersionedThinRegionEntryHeapObjectKey
>4:197837   11078872  
> org.apache.geode.internal.cache.VersionedThinRegionEntryHeapStringKey1
>5:3380368112864  java.lang.String
>6:3231287755072  
> java.util.concurrent.ConcurrentLinkedQueue$Node
>7: 205015658048  [B
>8:1626495204768  java.util.UUID
>9:1592753822600  
> org.apache.geode.cache.lucene.internal.filesystem.ChunkKey
>   10:  56003787016  
> [Lorg.apache.geode.internal.util.concurrent.CustomEntryConcurrentHashMap$HashEntry;
>   11:10320  Trade
>   12:1034872483688  
> org.apache.geode.internal.cache.VMCachedDeserializable
>   13: 634942031808  java.util.HashMap$Node
>   14: 139741241616  [Ljava.util.HashMap$Node;
>   15: 254561221888  java.util.HashMap
>   16:  7396 843664  java.lang.Class
>   17: 10948 726856  [Ljava.lang.Object;
>   18: 11357 726848  
> org.apache.geode.internal.cache.VersionedThinRegionEntryHeapStringKey2
>   19: 15856 634240  java.util.TreeMap$Entry
>   20:  1067 614992  
> [Ljava.util.concurrent.ConcurrentHashMap$Node;
> Total   2856366  118323456
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2848) While destroying a LuceneIndex, the AsyncEventQueue region is destroyed in remote members before stopping the AsyncEventQueue

2017-04-28 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2848:


 Summary: While destroying a LuceneIndex, the AsyncEventQueue 
region is destroyed in remote members before stopping the AsyncEventQueue
 Key: GEODE-2848
 URL: https://issues.apache.org/jira/browse/GEODE-2848
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


This causes a NullPointerException in BatchRemovalThread.getAllRecipients like:
{noformat}
[fine 2017/04/24 14:27:29.163 PDT gemfire4_r02-s28_3222  
tid=0x6b] BatchRemovalThread: ignoring exception
java.lang.NullPointerException
  at 
org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue$BatchRemovalThread.getAllRecipients(ParallelGatewaySenderQueue.java:1776)
  at 
org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue$BatchRemovalThread.run(ParallelGatewaySenderQueue.java:1722)
{noformat}
This message is currently only logged at fine level and doesn't cause any real 
issues.

The simple fix is to check for null in getAllRecipients like:
{noformat}
PartitionedRegion pReg = ((PartitionedRegion) (cache.getRegion((String) pr)));
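// The underlying PR can be null here while a remote destroyIndex is in
// progress (between steps 1 and 3 below), so it must be checked.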
if (pReg != null) {
  recipients.addAll(pReg.getRegionAdvisor().adviseDataStore());
}
{noformat}
Another more complex fix is to change the destroyIndex sequence.

The current destroyIndex sequence is:

# stops and destroys the AEQ in the initiator (including the underlying PR)
# closes the repository manager in the initiator
# stops and destroys the AEQ in remote members (not including the underlying PR)
# closes the repository manager in the remote members
# destroys the fileAndChunk region in the initiator

Between steps 1 and 3, the region will be null in the remote members, so the 
NPE can occur.

A better sequence would be:

# stops the AEQ in the initiator
# stops the AEQ in remote members
# closes the repository manager in the initiator
# closes the repository manager in the remote members
# destroys the AEQ in the initiator (including the underlying PR) 
# destroys the AEQ in the remote members (not including the underlying PR)
# destroys the fileAndChunk region in the initiator

That would be 3 messages between the members.

I think that can be combined into one remote message like:

# stops the AEQ in the initiator
# closes the repository manager in the initiator
# stops the AEQ in remote members
# closes the repository manager in the remote members
# destroys the AEQ in the remote members (not including the underlying PR)
# destroys the AEQ in the initiator (including the underlying PR) 
# destroys the fileAndChunk region in the initiator




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2689) If a region containing a Lucene index is created in one group and altered in another, a member in the other group will fail to start

2017-04-26 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2689.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> If a region containing a Lucene index is created in one group and altered in 
> another, a member in the other group will fail to start
> 
>
> Key: GEODE-2689
> URL: https://issues.apache.org/jira/browse/GEODE-2689
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> Steps to reproduce:
> - create lucene index --name=full_index --region=data --field=field1
> - create region --name=data --type=PARTITION_REDUNDANT
> - alter region --name=data --cache-listener=TestCacheListener --group=group1
> At this point, the cluster config xml looks like:
> {noformat}
> [info 2017/03/15 17:04:17.375 PDT server3  tid=0x1] 
>   ***
>   Configuration for  'cluster'
>   
>   Jar files to deployed
>   
> <cache xmlns="http://geode.apache.org/schema/cache"
>     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false"
>     is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
>     version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache
>     http://geode.apache.org/schema/cache/cache-1.0.xsd">
>   <region name="data">
>     <region-attributes data-policy="partition">
>     </region-attributes>
>     <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene" name="full_index">
>       <lucene:field analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer" name="field1"/>
>     </lucene:index>
>   </region>
> </cache>
>   
>   ***
>   Configuration for  'group1'
>   
>   Jar files to deployed
>   
> <cache xmlns="http://geode.apache.org/schema/cache"
>     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false"
>     is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
>     version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache
>     http://geode.apache.org/schema/cache/cache-1.0.xsd">
>   <region name="data">
>     <region-attributes data-policy="partition">
>       <cache-listener>
>         <class-name>TestCacheListener</class-name>
>       </cache-listener>
>     </region-attributes>
>     <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene" name="full_index">
>       <lucene:field analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer" name="field1"/>
>     </lucene:index>
>   </region>
> </cache>
> {noformat}
> If a member is started in the group (group1 in this case), it will fail to 
> start with the following error:
> {noformat}
> [error 2017/03/15 17:04:19.715 PDT  tid=0x1] Lucene index already 
> exists in region
> Exception in thread "main" java.lang.IllegalArgumentException: Lucene index 
> already exists in region
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl.registerDefinedIndex(LuceneServiceImpl.java:201)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl.createIndex(LuceneServiceImpl.java:154)
>   at 
> org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation.beforeCreate(LuceneIndexCreation.java:85)
>   at 
> org.apache.geode.internal.cache.extension.SimpleExtensionPoint.beforeCreate(SimpleExtensionPoint.java:77)
>   at 
> org.apache.geode.internal.cache.xmlcache.RegionCreation.createRoot(RegionCreation.java:252)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheCreation.initializeRegions(CacheCreation.java:544)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheCreation.create(CacheCreation.java:495)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.create(CacheXmlParser.java:343)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4479)
>   at 
> org.apache.geode.internal.cache.ClusterConfigurationLoader.applyClusterXmlConfiguration(ClusterConfigurationLoader.java:129)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1243)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:798)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:783)
>   at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:178)
>   at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:218)
>   at TestBase.initializeServerCache(TestBase.java:22)
>   at TestServer.main(TestServer.java:7)
> {noformat}
> I made a quick change in {{LuceneIndexCreation beforeCreate}} to just log the 
> {{IllegalArgumentException}}. I'm not sure if this is good enough or not.
> {noformat}
> public void beforeCreate(Extensible<Region<?, ?>> source, Cache cache) {
>   LuceneServiceImpl service = (LuceneServiceImpl) 
> LuceneServiceProvider.get(cache);
>   Analyzer analyzer = 

[jira] [Created] (GEODE-2823) The LuceneEventListener causes deserialized values to be stored in the entry when the region contains DataSerializable or Serializable values

2017-04-24 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2823:


 Summary: The LuceneEventListener causes deserialized values to be 
stored in the entry when the region contains DataSerializable or Serializable 
values
 Key: GEODE-2823
 URL: https://issues.apache.org/jira/browse/GEODE-2823
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


If the {{Region}} contains {{DataSerializable}} or {{Serializable}} values, 
then each {{RegionEntry}} contains a {{VMCachedDeserializable}}. When 
{{LuceneEventListener.process}} calls {{entry.getValue()}}, the value is 
deserialized and left in that state in the {{VMCachedDeserializable}}.

Below is a live histogram for the no index test.
{noformat}
 num #instances #bytes  class name
--
   1:1019016088544  [B
   2:115600056  
org.apache.geode.internal.cache.VersionedThinRegionEntryHeapStringKey1
   3: 363463236272  [C
   4:10240  
org.apache.geode.internal.cache.VMCachedDeserializable
   5:  3792 905488  
[Lorg.apache.geode.internal.util.concurrent.CustomEntryConcurrentHashMap$HashEntry;
   6: 36161 867864  java.lang.String
   7:  6546 750464  java.lang.Class
   8:  8051 523264  [Ljava.lang.Object;
   9:  5151 453288  java.lang.reflect.Method
  10:   704 405280  [J
  11:  8390 402720  
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync
  12: 11796 377472  java.util.concurrent.ConcurrentHashMap$Node
  13: 11379 364128  java.util.HashMap$Node
  14:   597 357552  
[Ljava.util.concurrent.ConcurrentHashMap$Node;
  15:  3409 319888  [Ljava.util.HashMap$Node;
  16:  7754 310160  java.util.LinkedHashMap$Entry
  17:  5817 279216  java.util.TreeMap
  18:  4031 257984  java.util.concurrent.ConcurrentHashMap
  19:  6385 255400  java.util.TreeMap$Entry
  20: 13587 217392  java.lang.Object
Total611397   28902304
{noformat}
Below is a live histogram for the index test. The main thing to notice 
regarding this bug is the 10 Trade instances.
{noformat}
 num #instances #bytes  class name
--
   1:338275   16181384  [C
   2:322931   15500688  
org.apache.geode.internal.cache.TombstoneService$Tombstone
   3:220717   12360152  
org.apache.geode.internal.cache.VersionedThinRegionEntryHeapObjectKey
   4:197837   11078872  
org.apache.geode.internal.cache.VersionedThinRegionEntryHeapStringKey1
   5:3380368112864  java.lang.String
   6:3231287755072  
java.util.concurrent.ConcurrentLinkedQueue$Node
   7: 205015658048  [B
   8:1626495204768  java.util.UUID
   9:1592753822600  
org.apache.geode.cache.lucene.internal.filesystem.ChunkKey
  10:  56003787016  
[Lorg.apache.geode.internal.util.concurrent.CustomEntryConcurrentHashMap$HashEntry;
  11:10320  Trade
  12:1034872483688  
org.apache.geode.internal.cache.VMCachedDeserializable
  13: 634942031808  java.util.HashMap$Node
  14: 139741241616  [Ljava.util.HashMap$Node;
  15: 254561221888  java.util.HashMap
  16:  7396 843664  java.lang.Class
  17: 10948 726856  [Ljava.lang.Object;
  18: 11357 726848  
org.apache.geode.internal.cache.VersionedThinRegionEntryHeapStringKey2
  19: 15856 634240  java.util.TreeMap$Entry
  20:  1067 614992  
[Ljava.util.concurrent.ConcurrentHashMap$Node;
Total   2856366  118323456
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-04-19 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby reassigned GEODE-2605:


Assignee: Barry Oglesby

> Unable to do a Lucene query without CLUSTER:READ privilege
> --
>
> Key: GEODE-2605
> URL: https://issues.apache.org/jira/browse/GEODE-2605
> Project: Geode
>  Issue Type: Bug
>  Components: docs, lucene, security
>Reporter: Diane Hardman
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: security.json
>
>
> I have configured a small cluster with security and am testing the privileges 
> I need for creating a Lucene index and then executing a query/search using 
> Lucene. 
> I have confirmed that DATA:MANAGE privilege allows me to create a lucene 
> index (similar to creating OQL indexes).
> I assumed I needed DATA:WRITE privilege to execute 'search lucene' because 
> the implementation uses a function. Instead, I am getting an error that I 
> need CLUSTER:READ privilege. I don't know why.
> As an aside, we may want to document that all DATA privileges automatically 
> include CLUSTER:READ as I found I could create indexes with DATA:WRITE, but 
> could not list the indexes I created without CLUSTER:READ... go figure.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-04-19 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2605.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Unable to do a Lucene query without CLUSTER:READ privilege
> --
>
> Key: GEODE-2605
> URL: https://issues.apache.org/jira/browse/GEODE-2605
> Project: Geode
>  Issue Type: Bug
>  Components: docs, lucene, security
>Reporter: Diane Hardman
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: security.json
>
>
> I have configured a small cluster with security and am testing the privileges 
> I need for creating a Lucene index and then executing a query/search using 
> Lucene. 
> I have confirmed that DATA:MANAGE privilege allows me to create a lucene 
> index (similar to creating OQL indexes).
> I assumed I needed DATA:WRITE privilege to execute 'search lucene' because 
> the implementation uses a function. Instead, I am getting an error that I 
> need CLUSTER:READ privilege. I don't know why.
> As an aside, we may want to document that all DATA privileges automatically 
> include CLUSTER:READ as I found I could create indexes with DATA:WRITE, but 
> could not list the indexes I created without CLUSTER:READ... go figure.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2793) Look into reducing the amount of PDX deserializations in OQL query intermediate result sets for indexed OR queries containing PdxInstanceImpls

2017-04-18 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2793:


 Summary: Look into reducing the amount of PDX deserializations in 
OQL query intermediate result sets for indexed OR queries containing 
PdxInstanceImpls
 Key: GEODE-2793
 URL: https://issues.apache.org/jira/browse/GEODE-2793
 Project: Geode
  Issue Type: Bug
  Components: querying
Reporter: Barry Oglesby


Intermediate result sets for each of the indexed OR clauses are represented by 
ResultsBags. Each index is sorted and iterated in AbstractGroupOrRangeJunction 
auxFilterEvaluate. When an entry in the index is added to a ResultsBag, hashCode 
is invoked on it. In the case of a PdxInstanceImpl, this causes all of its 
identity fields to be deserialized so that hashCode can be invoked on them.

Then, when each ResultsBag is sorted during QueryUtils union and 
sizeSortedUnion by invoking occurrences on each entry, equals is invoked on 
each entry. In the case of a PdxInstanceImpl, this again causes all of its 
identity fields to be deserialized so that equals can be invoked on them.

Here is an example query that shows the PDX deserializations:
{noformat}
select * from /region this where ((map['entry1']='value1' OR 
map['entry2']='value2' OR map['entry3']='value3' OR map['entry4']='value4' OR 
map['entry5']='value5' OR map['entry6']='value6' OR map['entry7']='value7' OR 
map['entry8']='value8' OR map['entry9']='value9' OR map['entry10']='value10')) 
...
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2689) If a region containing a Lucene index is created in one group and altered in another, a member in the other group will fail to start

2017-04-14 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969667#comment-15969667
 ] 

Barry Oglesby commented on GEODE-2689:
--

If I run the same test with OQL, the member starts with the index defined in 
the 'cluster' config. The one defined in the 'group1' config is ignored. In 
this case, only the name is checked. It might be possible to create an index in 
one group, create an index with the same name in another group, and get 
unexpected results if the wrong index is kept. I haven't tried this test.

In lucene, we should at least mimic this behavior by handling the case where 
the index is already there. A further step we could take is to actually compare 
the indexes to verify that more than the name matches.

> If a region containing a Lucene index is created in one group and altered in 
> another, a member in the other group will fail to start
> 
>
> Key: GEODE-2689
> URL: https://issues.apache.org/jira/browse/GEODE-2689
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
>
> Steps to reproduce:
> - create lucene index --name=full_index --region=data --field=field1
> - create region --name=data --type=PARTITION_REDUNDANT
> - alter region --name=data --cache-listener=TestCacheListener --group=group1
> At this point, the cluster config xml looks like:
> {noformat}
> [info 2017/03/15 17:04:17.375 PDT server3  tid=0x1] 
>   ***
>   Configuration for  'cluster'
>   
>   Jar files to deployed
>   
> <cache xmlns="http://geode.apache.org/schema/cache"
>     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false"
>     is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
>     version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache
>     http://geode.apache.org/schema/cache/cache-1.0.xsd">
>   <region name="data">
>     <region-attributes data-policy="partition">
>     </region-attributes>
>     <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene" name="full_index">
>       <lucene:field analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer" name="field1"/>
>     </lucene:index>
>   </region>
> </cache>
>   
>   ***
>   Configuration for  'group1'
>   
>   Jar files to deployed
>   
> <cache xmlns="http://geode.apache.org/schema/cache"
>     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false"
>     is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
>     version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache
>     http://geode.apache.org/schema/cache/cache-1.0.xsd">
>   <region name="data">
>     <region-attributes data-policy="partition">
>       <cache-listener>
>         <class-name>TestCacheListener</class-name>
>       </cache-listener>
>     </region-attributes>
>     <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene" name="full_index">
>       <lucene:field analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer" name="field1"/>
>     </lucene:index>
>   </region>
> </cache>
> {noformat}
> If a member is started in the group (group1 in this case), it will fail to 
> start with the following error:
> {noformat}
> [error 2017/03/15 17:04:19.715 PDT  tid=0x1] Lucene index already 
> exists in region
> Exception in thread "main" java.lang.IllegalArgumentException: Lucene index 
> already exists in region
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl.registerDefinedIndex(LuceneServiceImpl.java:201)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl.createIndex(LuceneServiceImpl.java:154)
>   at 
> org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation.beforeCreate(LuceneIndexCreation.java:85)
>   at 
> org.apache.geode.internal.cache.extension.SimpleExtensionPoint.beforeCreate(SimpleExtensionPoint.java:77)
>   at 
> org.apache.geode.internal.cache.xmlcache.RegionCreation.createRoot(RegionCreation.java:252)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheCreation.initializeRegions(CacheCreation.java:544)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheCreation.create(CacheCreation.java:495)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.create(CacheXmlParser.java:343)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4479)
>   at 
> org.apache.geode.internal.cache.ClusterConfigurationLoader.applyClusterXmlConfiguration(ClusterConfigurationLoader.java:129)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1243)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:798)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:783)
>   at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:178)
>   at 

[jira] [Commented] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-04-14 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969659#comment-15969659
 ] 

Barry Oglesby commented on GEODE-2605:
--

I went through the 4 gfsh commands and compared them to the equivalent client 
commands.

*Search Index*
To search an index, a client requires DATA:WRITE because of the 
ExecuteRegionFunction66 command:
{noformat}
Exception in thread "main" 
org.apache.geode.cache.client.ServerOperationException: 
org.apache.geode.security.NotAuthorizedException: 
TestPrincipal[username=locator] not authorized for DATA:WRITE
at 
org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:678)
at 
org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:754)
at 
org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:623)
at 
org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:167)
at 
org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:751)
at 
org.apache.geode.cache.client.internal.ExecuteRegionFunctionOp.execute(ExecuteRegionFunctionOp.java:98)
at 
org.apache.geode.cache.client.internal.ServerRegionProxy.executeFunction(ServerRegionProxy.java:689)
at 
org.apache.geode.internal.cache.execute.ServerRegionFunctionExecutor.executeOnServer(ServerRegionFunctionExecutor.java:210)
at 
org.apache.geode.internal.cache.execute.ServerRegionFunctionExecutor.executeFunction(ServerRegionFunctionExecutor.java:164)
at 
org.apache.geode.internal.cache.execute.ServerRegionFunctionExecutor.execute(ServerRegionFunctionExecutor.java:378)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findTopEntries(LuceneQueryImpl.java:115)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findPages(LuceneQueryImpl.java:95)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findPages(LuceneQueryImpl.java:91)
at QueryHelper.executeQuery(QueryHelper.java:35)
at QueryHelper.executeQuery(QueryHelper.java:31)
at TestClient.executeQuery(TestClient.java:47)
at TestClient.main(TestClient.java:30)
Caused by: org.apache.geode.security.NotAuthorizedException: 
TestPrincipal[username=locator] not authorized for DATA:WRITE
at 
org.apache.geode.internal.security.IntegratedSecurityService.authorize(IntegratedSecurityService.java:279)
at 
org.apache.geode.internal.security.IntegratedSecurityService.authorize(IntegratedSecurityService.java:257)
at 
org.apache.geode.internal.security.IntegratedSecurityService.authorize(IntegratedSecurityService.java:252)
at 
org.apache.geode.internal.security.IntegratedSecurityService.authorize(IntegratedSecurityService.java:248)
at 
org.apache.geode.internal.security.IntegratedSecurityService.authorizeDataWrite(IntegratedSecurityService.java:216)
at 
org.apache.geode.internal.cache.tier.sockets.command.ExecuteRegionFunction66.cmdExecute(ExecuteRegionFunction66.java:210)
at 
org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:141)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:914)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1171)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:519)
at java.lang.Thread.run(Thread.java:745)
{noformat}
So, now gfsh matches that permission requirement:
{noformat}
./runlucenequery.sh 
(2) Executing - search lucene --name=cusip_index --region=data 
--queryStrings=AAPL --defaultField=cusip

Unauthorized. Reason : TestPrincipal[username=locator] not authorized for 
DATA:WRITE
{noformat}
I think this needs to be re-examined at some point so that the permission for 
searching a lucene index matches that of an OQL query (DATA:READ:\[region\]). 
That would require adding a client operation and server command rather than 
using a function.
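
For reference, here is a minimal client-side sketch of the search path (locator 
host/port, types, and class name are illustrative); the query funnels through 
function execution, which is what triggers the DATA:WRITE check:
{noformat}
import java.util.Collection;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.lucene.LuceneQuery;
import org.apache.geode.cache.lucene.LuceneQueryException;
import org.apache.geode.cache.lucene.LuceneService;
import org.apache.geode.cache.lucene.LuceneServiceProvider;

public class LuceneSearchClient {

  public static void main(String[] args) throws LuceneQueryException {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();
    Region<String, Object> region = cache
        .<String, Object>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("data");
    LuceneService luceneService = LuceneServiceProvider.get(cache);
    LuceneQuery<String, Object> query = luceneService.createLuceneQueryFactory()
        .create("cusip_index", "data", "AAPL", "cusip");
    // findValues executes a function on the servers, so the client principal
    // currently needs DATA:WRITE rather than DATA:READ.
    Collection<Object> results = query.findValues();
    System.out.println("Found " + results.size() + " results");
    cache.close();
  }
}
{noformat}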
*Create Index*
gfsh create lucene index requires DATA:MANAGE:\[region\]:
{noformat}
./createluceneindex.sh 
(2) Executing - create lucene index --name=cusip_index --region=data2 
--field=cusip

Unauthorized. Reason : TestPrincipal[username=locator] not authorized for 
DATA:MANAGE:data2
{noformat}
Creating an OQL index through gfsh requires the same permission.

Creating either a lucene or OQL index on the server through a function only 
requires DATA:WRITE (for the function call). *Is this correct 

[jira] [Resolved] (GEODE-2750) Lucene destroy index should destroy the index on remote members before destroying it in the local member

2017-04-04 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2750.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Lucene destroy index should destroy the index on remote members before 
> destroying it in the local member
> 
>
> Key: GEODE-2750
> URL: https://issues.apache.org/jira/browse/GEODE-2750
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> Destroying the AsyncEventQueue is partially a local and partially a 
> distributed operation. The local part is stopping and destroying the actual 
> AsyncEventQueue and GatewaySender instances, as well as removing the 
> AsyncEventQueue id from the data Region. The distributed part is destroying 
> the underlying co-located AsyncEventQueue and fileAndChunk PartitionedRegions. 
> Co-located PRs cannot be locally destroyed, so they have to be distributed 
> destroys.
> Destroying the local parts of the index in remote members first followed by 
> the local parts in the initiating member and finally the co-located regions 
> should help with RegionDestroyedExceptions occurring when regions are 
> destroyed out from under the index.
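
In sketch form, the ordering the summary describes (method names hypothetical, 
not the actual code):
{noformat}
// Hypothetical ordering inside the destroy index operation:
void destroyIndex() {
  destroyIndexOnRemoteMembers();  // 1. local parts (AEQ, GatewaySender) on remote members
  destroyLocalIndexParts();       // 2. local parts in the initiating member
  destroyFileAndChunkRegions();   // 3. distributed destroy of the co-located PRs
}
{noformat}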



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-04-03 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954095#comment-15954095
 ] 

Barry Oglesby commented on GEODE-2404:
--

I created GEODE-2750 related to this issue:

Lucene destroy index should destroy the index on remote members before 
destroying it in the local member

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: docs, lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two API* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> 
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two API
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}} which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> which does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} 
> collection of {{AsyncEventQueues}} (this should be included in the destroy 
> method above)
> # removing {{LuceneIndex}} from {{LuceneService's}} map of indexes
> I also think the API on {{LuceneService}} should be something like:
> {noformat}
> public void destroyIndexes(String regionPath);
> public void destroyIndex(String indexName, String regionPath);
> {noformat}
> These methods would get the appropriate {{LuceneIndex(es)}} and invoke 
> destroy on them. Then they would remove the index(es) from the 
> {{LuceneService's}} collection of 

[jira] [Resolved] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-04-03 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2404.
--
Resolution: Fixed

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: docs, lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two API* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> 
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two API
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}} which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> which does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} 
> collection of {{AsyncEventQueues}} (this should be included in the destroy 
> method above)
> # removing {{LuceneIndex}} from {{LuceneService's}} map of indexes
> I also think the API on {{LuceneService}} should be something like:
> {noformat}
> public void destroyIndexes(String regionPath);
> public void destroyIndex(String indexName, String regionPath);
> {noformat}
> These methods would get the appropriate {{LuceneIndex(es)}} and invoke 
> destroy on them. Then they would remove the index(es) from the 
> {{LuceneService's}} collection of {{LuceneIndexes}}.
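
As a usage sketch of the *Two API* approach, with the {{destroyIndex}} method 
proposed above (index and region names illustrative):
{noformat}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.lucene.LuceneService;
import org.apache.geode.cache.lucene.LuceneServiceProvider;

public class DestroyIndexedRegion {

  public static void destroy(Cache cache) {
    LuceneService luceneService = LuceneServiceProvider.get(cache);
    // Destroy the index first; per the description this removes the hidden
    // AsyncEventQueue and the underlying file/chunk regions.
    luceneService.destroyIndex("full_index", "data");
    // The application region, now free of internal co-located children, can
    // then be destroyed as it is today.
    cache.getRegion("data").destroyRegion();
  }
}
{noformat}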



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-04-03 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby updated GEODE-2404:
-
Fix Version/s: (was: 1.2.0)

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: docs, lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two API* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> 
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two API
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}} which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> which does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} 
> collection of {{AsyncEventQueues}} (this should be included in the destroy 
> method above)
> # removing {{LuceneIndex}} from {{LuceneService's}} map of indexes
> I also think the API on {{LuceneService}} should be something like:
> {noformat}
> public void destroyIndexes(String regionPath);
> public void destroyIndex(String indexName, String regionPath);
> {noformat}
> These methods would get the appropriate {{LuceneIndex(es)}} and invoke 
> destroy on them. Then they would remove the index(es) from the 
> {{LuceneService's}} collection of {{LuceneIndexes}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2745) The AsyncEventQueueImpl waitUntilFlushed method waits longer than it should for events to be flushed

2017-04-03 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2745:


 Summary: The AsyncEventQueueImpl waitUntilFlushed method waits 
longer than it should for events to be flushed
 Key: GEODE-2745
 URL: https://issues.apache.org/jira/browse/GEODE-2745
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Barry Oglesby


With the changes to waitUntilFlushed to process 10 buckets at a time, if events 
are happening while waitUntilFlushed is in progress, then all the buckets after 
the first 10 will have processed more events than they should before returning.

If the update rate is causing the queue to always contain 113000 events, and 
the events are spread evenly across the 113 buckets (the default number of 
buckets), each bucket will have 1000 events to wait for. The first 10 buckets 
will wait for their 1000 events. When those have been processed, the next 10 
buckets will wait for their 1000 events starting from that point, but they've 
already processed 1000 events. So, these buckets will actually wait for 2000 
events to be processed before returning. This pattern continues until all the 
buckets are done.

The WaitUntilBucketRegionQueueFlushedCallable needs to track not only the 
BucketRegionQueue but also the latestQueuedKey.
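
In sketch form (hypothetical accessor names, not the actual internal code), each 
callable would snapshot the latest queued key when it is created and wait only 
until that key has been dispatched:
{noformat}
import java.util.concurrent.Callable;

import org.apache.geode.internal.cache.BucketRegionQueue;

public class WaitUntilBucketRegionQueueFlushedCallable implements Callable<Boolean> {

  private final BucketRegionQueue brq;
  private final Object latestQueuedKey;

  public WaitUntilBucketRegionQueueFlushedCallable(BucketRegionQueue brq) {
    this.brq = brq;
    // Snapshot the key at creation time, before the callable waits its turn in
    // the 10-at-a-time pool, so events queued later are not waited on.
    this.latestQueuedKey = brq.getLatestQueuedKey(); // hypothetical accessor
  }

  @Override
  public Boolean call() throws Exception {
    // Wait until the bucket has dispatched past the snapshotted key rather
    // than until the bucket is drained of newer events.
    return brq.waitUntilKeyDispatched(latestQueuedKey); // hypothetical method
  }
}
{noformat}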




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2689) If a region containing a Lucene index is created in one group and altered in another, a member in the other group will fail to start

2017-03-17 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2689:


 Summary: If a region containing a Lucene index is created in one 
group and altered in another, a member in the other group will fail to start
 Key: GEODE-2689
 URL: https://issues.apache.org/jira/browse/GEODE-2689
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


Steps to reproduce:

- create lucene index --name=full_index --region=data --field=field1
- create region --name=data --type=PARTITION_REDUNDANT
- alter region --name=data --cache-listener=TestCacheListener --group=group1

At this point, the cluster config xml looks like:
{noformat}
[info 2017/03/15 17:04:17.375 PDT server3  tid=0x1] 
  ***
  Configuration for  'cluster'
  
  Jar files to deployed
  
  <cache xmlns="http://geode.apache.org/schema/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false"
is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache
http://geode.apache.org/schema/cache/cache-1.0.xsd">
    <region name="data">
      <region-attributes data-policy="partition"/>
      <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene"
name="full_index">
        <lucene:field
analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer"
name="field1"/>
      </lucene:index>
    </region>
  </cache>
  
  ***
  Configuration for  'group1'
  
  Jar files to deployed
  
  <cache xmlns="http://geode.apache.org/schema/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false"
is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache
http://geode.apache.org/schema/cache/cache-1.0.xsd">
    <region name="data">
      <region-attributes data-policy="partition">
        <cache-listener>
          <class-name>TestCacheListener</class-name>
        </cache-listener>
      </region-attributes>
      <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene"
name="full_index">
        <lucene:field
analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer"
name="field1"/>
      </lucene:index>
    </region>
  </cache>
{noformat}
If a member is started in the group (group1 in this case), it will fail to 
start with the following error:
{noformat}
[error 2017/03/15 17:04:19.715 PDT  tid=0x1] Lucene index already exists 
in region

Exception in thread "main" java.lang.IllegalArgumentException: Lucene index 
already exists in region
at 
org.apache.geode.cache.lucene.internal.LuceneServiceImpl.registerDefinedIndex(LuceneServiceImpl.java:201)
at 
org.apache.geode.cache.lucene.internal.LuceneServiceImpl.createIndex(LuceneServiceImpl.java:154)
at 
org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation.beforeCreate(LuceneIndexCreation.java:85)
at 
org.apache.geode.internal.cache.extension.SimpleExtensionPoint.beforeCreate(SimpleExtensionPoint.java:77)
at 
org.apache.geode.internal.cache.xmlcache.RegionCreation.createRoot(RegionCreation.java:252)
at 
org.apache.geode.internal.cache.xmlcache.CacheCreation.initializeRegions(CacheCreation.java:544)
at 
org.apache.geode.internal.cache.xmlcache.CacheCreation.create(CacheCreation.java:495)
at 
org.apache.geode.internal.cache.xmlcache.CacheXmlParser.create(CacheXmlParser.java:343)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4479)
at 
org.apache.geode.internal.cache.ClusterConfigurationLoader.applyClusterXmlConfiguration(ClusterConfigurationLoader.java:129)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1243)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:798)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:783)
at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:178)
at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:218)
at TestBase.initializeServerCache(TestBase.java:22)
at TestServer.main(TestServer.java:7)
{noformat}
I made a quick change in {{LuceneIndexCreation beforeCreate}} to just log the 
{{IllegalArgumentException}}. I'm not sure if this is good enough or not.
{noformat}
public void beforeCreate(Extensible<Region<?, ?>> source, Cache cache) {
  LuceneServiceImpl service = (LuceneServiceImpl) LuceneServiceProvider.get(cache);
  Analyzer analyzer = this.fieldAnalyzers == null ? new StandardAnalyzer()
      : new PerFieldAnalyzerWrapper(new StandardAnalyzer(), this.fieldAnalyzers);
  try {
    service.createIndex(getName(), getRegionPath(), analyzer, this.fieldAnalyzers,
        getFieldNames());
  } catch (IllegalArgumentException e) {
    // log a warning or info here
  }
}
{noformat}
We might want to create a {{LuceneIndexExistsException}} to catch here. We also 
might want to compare the indexes to see that they are the same.
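
If we go that route, the catch might become something like this fragment 
({{LuceneIndexExistsException}} is a proposed class here, and the definition 
comparison is only sketched):
{noformat}
try {
  service.createIndex(getName(), getRegionPath(), analyzer, this.fieldAnalyzers,
      getFieldNames());
} catch (LuceneIndexExistsException e) {
  // Proposed: compare the already-registered index definition with this one
  // and only ignore the exception (with a warning) when they match.
  logger.info("Lucene index {} already exists in region {}", getName(),
      getRegionPath());
}
{noformat}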

btw - this same test with OQL works:

In the OQL case, the cluster config looks like:
{noformat}
[info 2017/03/15 17:14:12.364 PDT server3  tid=0x1] 
  ***
  

[jira] [Created] (GEODE-2688) The Lucene xml in the cluster config includes the internal async event queue id

2017-03-17 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2688:


 Summary: The Lucene xml in the cluster config includes the 
internal async event queue id
 Key: GEODE-2688
 URL: https://issues.apache.org/jira/browse/GEODE-2688
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


The cluster config xml contains the internal async-event-queue-ids like:
{noformat}
<region name="data">
  <region-attributes data-policy="partition"
      async-event-queue-ids="full_index#_data"/>
  <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene"
      name="full_index">
    <lucene:field
        analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer"
        name="field1"/>
  </lucene:index>
</region>
{noformat}
This is not necessary since the async event id will be added to the 
{{AttributesFactory}} in the {{RegionListener beforeCreate}} call:
{noformat}
java.lang.Exception: Stack trace
at java.lang.Thread.dumpStack(Thread.java:1329)
at 
org.apache.geode.cache.lucene.internal.LuceneServiceImpl.getUniqueIndexName(LuceneServiceImpl.java:127)
at 
org.apache.geode.cache.lucene.internal.LuceneRegionListener.beforeCreate(LuceneRegionListener.java:88)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.invokeRegionBefore(GemFireCacheImpl.java:3363)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3212)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3190)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3178)
at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
at 
org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:358)
at 
org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:93)
at 
org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:191)
at 
org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:376)
at 
org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
at 
org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
at java.lang.Thread.run(Thread.java:745)
{noformat}
This is not a huge deal, except in the case where the index is destroyed. The 
_destroy lucene index_ command currently removes just the *lucene:index* from 
the cluster config xml. It doesn't do anything with the 
*async-event-queue-ids*. There would have to be a separate {{XmlEntity}} to 
deal with those, so it would be better if they weren't included in the first 
place.
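
For reference, the internal queue id appears to be derived from the index and 
region names (full_index#_data above; compare cusip_index#_data in GEODE-2568). 
A sketch of the observed convention, not the actual internal code:
{noformat}
// Observed naming convention for the hidden AsyncEventQueue id.
static String uniqueIndexName(String indexName, String regionPath) {
  return indexName + "#_" + regionPath.replace('/', '_'); // e.g. "full_index#_data"
}
{noformat}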




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2638) A GatewaySender fails to start if it attempts to connect to a remote locator that is an unresolvable hostname

2017-03-10 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2638.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> A GatewaySender fails to start if it attempts to connect to a remote locator 
> that is an unresolvable hostname
> -
>
> Key: GEODE-2638
> URL: https://issues.apache.org/jira/browse/GEODE-2638
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> A GatewaySender fails to start if it attempts to connect to a remote locator 
> that is an unresolvable hostname.
> An exception like below is thrown:
> {noformat}
> [severe 2017/03/08 16:34:14.927 PST ln-1 <Event Processor for 
> GatewaySender_ny_3> tid=0x3a] Message dispatch failed due to unexpected 
> exception..
> org.apache.geode.InternalGemFireException: Failed getting host from name:  
> unknown
>   at 
> org.apache.geode.internal.admin.remote.DistributionLocatorId.<init>(DistributionLocatorId.java:132)
>   at 
> org.apache.geode.internal.cache.wan.AbstractRemoteGatewaySender.initProxy(AbstractRemoteGatewaySender.java:104)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderEventRemoteDispatcher.initializeConnection(GatewaySenderEventRemoteDispatcher.java:372)
>   at 
> org.apache.geode.internal.cache.wan.GatewaySenderEventRemoteDispatcher.<init>(GatewaySenderEventRemoteDispatcher.java:78)
>   at 
> org.apache.geode.internal.cache.wan.parallel.RemoteParallelGatewaySenderEventProcessor.initializeEventDispatcher(RemoteParallelGatewaySenderEventProcessor.java:74)
>   at 
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.setRunningStatus(AbstractGatewaySenderEventProcessor.java:1063)
>   at 
> org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:1035)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-03-10 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby reopened GEODE-2404:
--
  Assignee: Barry Oglesby

Support for groups needs to be added.

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: docs, lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two API* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> 
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two API
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}} which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> which does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} 
> collection of {{AsyncEventQueues}} (this should be included in the destroy 
> method above)
> # removing {{LuceneIndex}} from {{LuceneService's}} map of indexes
> I also think the API on {{LuceneService}} should be something like:
> {noformat}
> public void destroyIndexes(String regionPath);
> public void destroyIndex(String indexName, String regionPath);
> {noformat}
> These methods would get the appropriate {{LuceneIndex(es)}} and invoke 
> destroy on them. Then they would remove the index(es) from the 
> {{LuceneService's}} collection of {{LuceneIndexes}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-03-10 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905490#comment-15905490
 ] 

Barry Oglesby edited comment on GEODE-2404 at 3/10/17 6:06 PM:
---

Support for groups in gfsh needs to be added.


was (Author: barry.oglesby):
Support for groups needs to be added.

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: docs, lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two API* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> 
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two API
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}} which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> which does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} 
> collection of {{AsyncEventQueues}} (this should be included in the destroy 
> method above)
> # removing {{LuceneIndex}} from {{LuceneService's}} map of indexes
> I also think the API on {{LuceneService}} should be something like:
> {noformat}
> public void destroyIndexes(String regionPath);
> public void destroyIndex(String indexName, String regionPath);
> {noformat}
> These methods would get the appropriate {{LuceneIndex(es)}} and invoke 
> destroy on them. Then they would remove the index(es) from the 
> {{LuceneService's}} 

[jira] [Created] (GEODE-2638) A GatewaySender fails to start if it attempts to connect to a remote locator that is an unresolvable hostname

2017-03-08 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2638:


 Summary: A GatewaySender fails to start if it attempts to connect 
to a remote locator that is an unresolvable hostname
 Key: GEODE-2638
 URL: https://issues.apache.org/jira/browse/GEODE-2638
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Barry Oglesby


A GatewaySender fails to start if it attempts to connect to a remote locator 
that is an unresolvable hostname.

An exception like below is thrown:
{noformat}
[severe 2017/03/08 16:34:14.927 PST ln-1 <Event Processor for 
GatewaySender_ny_3> tid=0x3a] Message dispatch failed due to unexpected 
exception..
org.apache.geode.InternalGemFireException: Failed getting host from name:  
unknown
at 
org.apache.geode.internal.admin.remote.DistributionLocatorId.<init>(DistributionLocatorId.java:132)
at 
org.apache.geode.internal.cache.wan.AbstractRemoteGatewaySender.initProxy(AbstractRemoteGatewaySender.java:104)
at 
org.apache.geode.internal.cache.wan.GatewaySenderEventRemoteDispatcher.initializeConnection(GatewaySenderEventRemoteDispatcher.java:372)
at 
org.apache.geode.internal.cache.wan.GatewaySenderEventRemoteDispatcher.<init>(GatewaySenderEventRemoteDispatcher.java:78)
at 
org.apache.geode.internal.cache.wan.parallel.RemoteParallelGatewaySenderEventProcessor.initializeEventDispatcher(RemoteParallelGatewaySenderEventProcessor.java:74)
at 
org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.setRunningStatus(AbstractGatewaySenderEventProcessor.java:1063)
at 
org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.run(AbstractGatewaySenderEventProcessor.java:1035)
{noformat}
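
The "Failed getting host from name" message presumably wraps the 
UnknownHostException from the host lookup that the DistributionLocatorId 
constructor performs for each remote locator. A standalone sketch of that 
failing lookup ("unknown" stands in for the unresolvable hostname):
{noformat}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LookupCheck {

  public static void main(String[] args) {
    try {
      // The lookup DistributionLocatorId effectively performs.
      InetAddress address = InetAddress.getByName("unknown");
      System.out.println("Resolved: " + address);
    } catch (UnknownHostException e) {
      // The sender should arguably log this and keep trying the other remote
      // locators instead of failing to start.
      System.out.println("Unresolvable host: " + e.getMessage());
    }
  }
}
{noformat}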




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2612) Add option to invoke callbacks while loading a snapshot

2017-03-07 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby updated GEODE-2612:
-
Description: 
As a work-around for recreating a lucene index (which is not currently 
supported), the recommendation is to:

- export user region
- destroy indexes and user region
- recreate index
- recreate user region
- import user region

The lucene index is populated using a hidden AsyncEventQueue, but currently 
import doesn't invoke callbacks. This feature request is to add an option to 
SnapshotOptions to cause callbacks to be invoked, so that the index is 
repopulated during import.
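
A sketch of how the option might be used once added ({{invokeCallbacks}} is the 
proposed method name; region and file are illustrative):
{noformat}
import java.io.File;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.snapshot.RegionSnapshotService;
import org.apache.geode.cache.snapshot.SnapshotOptions;
import org.apache.geode.cache.snapshot.SnapshotOptions.SnapshotFormat;

public class ImportWithCallbacks {

  public static void load(Region<String, Object> region, File snapshot) throws Exception {
    RegionSnapshotService<String, Object> service = region.getSnapshotService();
    SnapshotOptions<String, Object> options =
        service.createOptions().invokeCallbacks(true); // proposed option
    // With callbacks invoked, the hidden AsyncEventQueue sees the imported
    // entries and repopulates the lucene index during the load.
    service.load(snapshot, SnapshotFormat.GEMFIRE, options);
  }
}
{noformat}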


> Add option to invoke callbacks while loading a snapshot
> ---
>
> Key: GEODE-2612
> URL: https://issues.apache.org/jira/browse/GEODE-2612
> Project: Geode
>  Issue Type: New Feature
>  Components: lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> As a work-around for recreating a lucene index (which is not currently 
> supported), the recommendation is to:
> - export user region
> - destroy indexes and user region
> - recreate index
> - recreate user region
> - import user region
> The lucene index is populated using a hidden AsyncEventQueue, but currently 
> import doesn't invoke callbacks. This feature request is to add an option to 
> SnapshotOptions to cause callbacks to be invoked, so that the index is 
> repopulated during import.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2612) Add option to invoke callbacks while loading a snapshot

2017-03-07 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby reassigned GEODE-2612:


Assignee: Barry Oglesby

> Add option to invoke callbacks while loading a snapshot
> ---
>
> Key: GEODE-2612
> URL: https://issues.apache.org/jira/browse/GEODE-2612
> Project: Geode
>  Issue Type: New Feature
>  Components: lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2612) Add option to invoke callbacks while loading a snapshot

2017-03-07 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2612:


 Summary: Add option to invoke callbacks while loading a snapshot
 Key: GEODE-2612
 URL: https://issues.apache.org/jira/browse/GEODE-2612
 Project: Geode
  Issue Type: New Feature
  Components: lucene
Reporter: Barry Oglesby






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2568) When its AsyncEventQueue is destroyed, its JMX MBean is not removed

2017-03-03 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2568.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> When its AsyncEventQueue is destroyed, its JMX MBean is not removed
> ---
>
> Key: GEODE-2568
> URL: https://issues.apache.org/jira/browse/GEODE-2568
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
>
> This results in the exception below if the AsyncEventQueue is re-added:
> {noformat}
> [warning 2017/03/01 10:53:33.500 PST   0> tid=0x3f] javax.management.InstanceAlreadyExistsException: 
> GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
> org.apache.geode.management.ManagementException: 
> javax.management.InstanceAlreadyExistsException: 
> GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
>   at 
> org.apache.geode.management.internal.MBeanJMXAdapter.registerMBean(MBeanJMXAdapter.java:114)
>   at 
> org.apache.geode.management.internal.SystemManagementService.registerInternalMBean(SystemManagementService.java:407)
>   at 
> org.apache.geode.management.internal.beans.ManagementAdapter.handleAsyncEventQueueCreation(ManagementAdapter.java:584)
>   at 
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:186)
>   at 
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2146)
>   at 
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:536)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.addAsyncEventQueue(GemFireCacheImpl.java:4068)
>   at 
> org.apache.geode.cache.asyncqueue.internal.AsyncEventQueueFactoryImpl.create(AsyncEventQueueFactoryImpl.java:197)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneIndexImpl.createAEQ(LuceneIndexImpl.java:184)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneIndexImpl.createAEQ(LuceneIndexImpl.java:150)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneIndexImpl.initialize(LuceneIndexImpl.java:140)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl.afterDataRegionCreated(LuceneServiceImpl.java:227)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl$1.afterCreate(LuceneServiceImpl.java:207)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.invokeRegionAfter(GemFireCacheImpl.java:3370)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3349)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3190)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3178)
>   at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
>   at CreateRegionFunction.createRegion(CreateRegionFunction.java:27)
>   at CreateRegionFunction.execute(CreateRegionFunction.java:22)
>   at 
> org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction66.executeFunctionaLocally(ExecuteFunction66.java:324)
>   at 
> org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction66.cmdExecute(ExecuteFunction66.java:249)
>   at 
> org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction70.cmdExecute(ExecuteFunction70.java:55)
>   at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:141)
>   at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)
>   at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:914)
>   at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1171)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:519)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.management.InstanceAlreadyExistsException: 
> GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>   at 
> 
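
In plain JMX terms, the fix is to unregister the queue's MBean when the 
AsyncEventQueue is destroyed. A standalone sketch using the object name from 
the exception above (the actual removal belongs in Geode's management layer):
{noformat}
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class UnregisterAeqMBean {

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName name = new ObjectName(
        "GemFire:service=AsyncEventQueue,queue=cusip_index#_data,"
            + "type=Member,member=192.168.2.9(74354)-1026");
    // Removing the stale MBean on destroy lets the queue be re-added without
    // an InstanceAlreadyExistsException.
    if (server.isRegistered(name)) {
      server.unregisterMBean(name);
    }
  }
}
{noformat}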

[jira] [Assigned] (GEODE-2568) When its AsyncEventQueue is destroyed, its JMX MBean is not removed

2017-03-01 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby reassigned GEODE-2568:


Assignee: Barry Oglesby

> When its AsyncEventQueue is destroyed, its JMX MBean is not removed
> ---
>
> Key: GEODE-2568
> URL: https://issues.apache.org/jira/browse/GEODE-2568
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> This results in the exception below if the AsyncEventQueue is re-added:
> {noformat}
> [warning 2017/03/01 10:53:33.500 PST   0> tid=0x3f] javax.management.InstanceAlreadyExistsException: 
> GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
> org.apache.geode.management.ManagementException: 
> javax.management.InstanceAlreadyExistsException: 
> GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
>   at 
> org.apache.geode.management.internal.MBeanJMXAdapter.registerMBean(MBeanJMXAdapter.java:114)
>   at 
> org.apache.geode.management.internal.SystemManagementService.registerInternalMBean(SystemManagementService.java:407)
>   at 
> org.apache.geode.management.internal.beans.ManagementAdapter.handleAsyncEventQueueCreation(ManagementAdapter.java:584)
>   at 
> org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:186)
>   at 
> org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2146)
>   at 
> org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:536)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.addAsyncEventQueue(GemFireCacheImpl.java:4068)
>   at 
> org.apache.geode.cache.asyncqueue.internal.AsyncEventQueueFactoryImpl.create(AsyncEventQueueFactoryImpl.java:197)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneIndexImpl.createAEQ(LuceneIndexImpl.java:184)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneIndexImpl.createAEQ(LuceneIndexImpl.java:150)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneIndexImpl.initialize(LuceneIndexImpl.java:140)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl.afterDataRegionCreated(LuceneServiceImpl.java:227)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl$1.afterCreate(LuceneServiceImpl.java:207)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.invokeRegionAfter(GemFireCacheImpl.java:3370)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3349)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3190)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3178)
>   at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
>   at CreateRegionFunction.createRegion(CreateRegionFunction.java:27)
>   at CreateRegionFunction.execute(CreateRegionFunction.java:22)
>   at 
> org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction66.executeFunctionaLocally(ExecuteFunction66.java:324)
>   at 
> org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction66.cmdExecute(ExecuteFunction66.java:249)
>   at 
> org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction70.cmdExecute(ExecuteFunction70.java:55)
>   at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:141)
>   at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)
>   at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:914)
>   at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1171)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:519)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.management.InstanceAlreadyExistsException: 
> GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>   at 
> 

[jira] [Created] (GEODE-2568) When its AsyncEventQueue is destroyed, its JMX MBean is not removed

2017-03-01 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2568:


 Summary: When its AsyncEventQueue is destroyed, its JMX MBean is 
not removed
 Key: GEODE-2568
 URL: https://issues.apache.org/jira/browse/GEODE-2568
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Barry Oglesby


This results in the exception below if the AsyncEventQueue is re-added:
{noformat}
[warning 2017/03/01 10:53:33.500 PST   
tid=0x3f] javax.management.InstanceAlreadyExistsException: 
GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
org.apache.geode.management.ManagementException: 
javax.management.InstanceAlreadyExistsException: 
GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
at 
org.apache.geode.management.internal.MBeanJMXAdapter.registerMBean(MBeanJMXAdapter.java:114)
at 
org.apache.geode.management.internal.SystemManagementService.registerInternalMBean(SystemManagementService.java:407)
at 
org.apache.geode.management.internal.beans.ManagementAdapter.handleAsyncEventQueueCreation(ManagementAdapter.java:584)
at 
org.apache.geode.management.internal.beans.ManagementListener.handleEvent(ManagementListener.java:186)
at 
org.apache.geode.distributed.internal.InternalDistributedSystem.notifyResourceEventListeners(InternalDistributedSystem.java:2146)
at 
org.apache.geode.distributed.internal.InternalDistributedSystem.handleResourceEvent(InternalDistributedSystem.java:536)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.addAsyncEventQueue(GemFireCacheImpl.java:4068)
at 
org.apache.geode.cache.asyncqueue.internal.AsyncEventQueueFactoryImpl.create(AsyncEventQueueFactoryImpl.java:197)
at 
org.apache.geode.cache.lucene.internal.LuceneIndexImpl.createAEQ(LuceneIndexImpl.java:184)
at 
org.apache.geode.cache.lucene.internal.LuceneIndexImpl.createAEQ(LuceneIndexImpl.java:150)
at 
org.apache.geode.cache.lucene.internal.LuceneIndexImpl.initialize(LuceneIndexImpl.java:140)
at 
org.apache.geode.cache.lucene.internal.LuceneServiceImpl.afterDataRegionCreated(LuceneServiceImpl.java:227)
at 
org.apache.geode.cache.lucene.internal.LuceneServiceImpl$1.afterCreate(LuceneServiceImpl.java:207)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.invokeRegionAfter(GemFireCacheImpl.java:3370)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3349)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3190)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3178)
at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
at CreateRegionFunction.createRegion(CreateRegionFunction.java:27)
at CreateRegionFunction.execute(CreateRegionFunction.java:22)
at 
org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction66.executeFunctionaLocally(ExecuteFunction66.java:324)
at 
org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction66.cmdExecute(ExecuteFunction66.java:249)
at 
org.apache.geode.internal.cache.tier.sockets.command.ExecuteFunction70.cmdExecute(ExecuteFunction70.java:55)
at 
org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:141)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:914)
at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1171)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:519)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.management.InstanceAlreadyExistsException: 
GemFire:service=AsyncEventQueue,queue=cusip_index#_data,type=Member,member=192.168.2.9(74354)-1026
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
at 

[jira] [Resolved] (GEODE-2547) Interest registration can cause a CacheLoader to be invoked

2017-02-28 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2547.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Interest registration can cause a CacheLoader to be invoked
> ---
>
> Key: GEODE-2547
> URL: https://issues.apache.org/jira/browse/GEODE-2547
> Project: Geode
>  Issue Type: Bug
>  Components: client queues
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
>
> A simple scenario to reproduce this issue is:
> - configure a Region with a CacheLoader
> - destroy a key (it doesn't matter if the entry exists)
> - register interest in that key
> The CacheLoader will be invoked.
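
A minimal client-side sketch of the reproduction steps (the locator address, region name, and key are illustrative assumptions; the {{CacheLoader}} itself is configured on the matching server-side region):
{noformat}
// Hypothetical repro sketch; names are illustrative and the CacheLoader
// is attached to the matching server-side region.
ClientCache cache = new ClientCacheFactory()
    .addPoolLocator("localhost", 10334)
    .setPoolSubscriptionEnabled(true) // required for registerInterest
    .create();
Region<String, String> region = cache
    .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
    .create("example");
try {
  region.destroy("key0"); // it doesn't matter if the entry exists
} catch (EntryNotFoundException ignored) {
}
// Registering interest in the destroyed key unexpectedly invokes the
// server-side CacheLoader.
region.registerInterest("key0");
{noformat}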



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-02-27 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-2404.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two APIs* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> or
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two APIs
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}}, which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> that does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} 
> collection of {{AsyncEventQueues}} (this should be included in the destroy 
> method above)
> # removing {{LuceneIndex}} from {{LuceneService's}} map of indexes
> I also think the API on {{LuceneService}} should be something like:
> {noformat}
> public void destroyIndexes(String regionPath);
> public void destroyIndex(String indexName, String regionPath);
> {noformat}
> These methods would get the appropriate {{LuceneIndex(es)}} and invoke 
> destroy on them. Then they would remove the index(es) from the 
> {{LuceneService's}} collection of {{LuceneIndexes}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (GEODE-2547) Interest registration can cause a CacheLoader to be invoked

2017-02-24 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby reassigned GEODE-2547:


Assignee: Barry Oglesby

> Interest registration can cause a CacheLoader to be invoked
> ---
>
> Key: GEODE-2547
> URL: https://issues.apache.org/jira/browse/GEODE-2547
> Project: Geode
>  Issue Type: Bug
>  Components: client queues
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> A simple scenario to reproduce this issue is:
> - configure a Region with a CacheLoader
> - destroy a key (it doesn't matter if the entry exists)
> - register interest in that key
> The CacheLoader will be invoked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-01-31 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847700#comment-15847700
 ] 

Barry Oglesby edited comment on GEODE-2404 at 1/31/17 11:18 PM:


I attached a {{Function}} that does all the above steps to destroy an 
application {{Region}}. It doesn't use any destroy API in either 
{{LuceneIndex}} or {{LuceneService}} since those APIs either don't exist yet or 
are incomplete. It's just meant to show all the required steps to destroy the 
application {{Region}}.


was (Author: barry.oglesby):
I attached a {{Function}} that does all the above steps to destroy an 
application {{Region}}. It doesn't use any destroy API in either 
{{LuceneIndex}} or {{LuceneService}} (apart from getting the {{LuceneIndexes}}) 
since those APIs either don't exist yet or are incomplete. It's just meant to 
show all the required steps to destroy the application {{Region}}.

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: lucene
>Reporter: Barry Oglesby
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two APIs* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> or
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two APIs
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}}, which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> that does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} 
> 

[jira] [Comment Edited] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-01-31 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847700#comment-15847700
 ] 

Barry Oglesby edited comment on GEODE-2404 at 1/31/17 11:17 PM:


I attached a {{Function}} that does all the above steps to destroy an 
application {{Region}}. It doesn't use any destroy API in either 
{{LuceneIndex}} or {{LuceneService}} (apart from getting the {{LuceneIndexes}}) 
since those APIs either don't exist yet or are incomplete. It's just meant to 
show all the required steps to destroy the application {{Region}}.


was (Author: barry.oglesby):
Here is a {{Function}} that does all the above steps to destroy an application 
{{Region}}. It doesn't use any destroy API in either {{LuceneIndex}} or 
{{LuceneService}} (apart from getting the {{LuceneIndexes}}) since those APIs 
either don't exist yet or are incomplete. It's just meant to show all the 
required steps to destroy the application {{Region}}.

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: lucene
>Reporter: Barry Oglesby
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two APIs* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> or
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two APIs
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}}, which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> that does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the 

[jira] [Updated] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-01-31 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby updated GEODE-2404:
-
Attachment: DestroyRegionMultipleMembersFunction.java

Here is a {{Function}} that does all the above steps to destroy an application 
{{Region}}. It doesn't use any destroy API in either {{LuceneIndex}} or 
{{LuceneService}} (apart from getting the {{LuceneIndexes}}) since those APIs 
either don't exist yet or are incomplete. It's just meant to show all the 
required steps to destroy the application {{Region}}.

> Add API to destroy a region containing lucene indexes
> -
>
> Key: GEODE-2404
> URL: https://issues.apache.org/jira/browse/GEODE-2404
> Project: Geode
>  Issue Type: New Feature
>  Components: lucene
>Reporter: Barry Oglesby
> Attachments: DestroyRegionMultipleMembersFunction.java
>
>
> h2. Description
> An application {{Region}} containing {{LuceneIndexes}} should be able to be 
> destroyed.
> There are several options, including:
> - Invoke one API to destroy both the application {{Region}} and its 
> {{LuceneIndexes}}
> - Invoke two APIs:
> ## destroy the {{LuceneIndexes}}
> ## destroy the application {{Region}} as is done currently
> h3. One API
> In this case, we would need a callback on {{LuceneService}} to destroy the 
> {{LuceneIndexes}} before destroying the application {{Region}} like:
> {noformat}
> public void beforeDestroyRegion(Region region);
> {noformat}
> This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
> then destroy each one. See the *Two APIs* section below for details on 
> destroying a {{LuceneIndex}}.
> Without changes to the way {{PartitionedRegions}} are destroyed, this causes 
> an issue though.
> The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
> for colocated children. If there are any, the call fails.
> There are two options for adding the call to destroy the {{LuceneIndexes}}:
> # check for colocated children
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # do the rest of the destroy
> or
> # invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
> # check for colocated children
> # do the rest of the destroy
> Both of these options are problematic in different ways.
> In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
> colocated children, so the first option would cause the {{destroyRegion}} 
> call to fail; the second option would succeed. I don't think the first option 
> should fail since the colocated children are internal {{Regions}} that the 
> application knows nothing about.
> In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having 
> an {{AsyncEventQueue}}, there are colocated children, so the first option 
> would cause the {{destroyRegion}} call to fail. This is ok since one of the 
> children is an application-known {{AsyncEventQueue}}. The second option would 
> fail in a bad way. It would first remove the {{LuceneIndexes}}, then fail the 
> colocated children check, so the {{destroyRegion}} call would fail. In this 
> case, the application {{Region}} doesn't get destroyed but its 
> {{LuceneIndexes}} do. This would be bad.
> One option would be to look into changing the check for colocated children to 
> check for application-defined (or not hidden) colocated children. Then the 
> code would be something like:
> # check for application-defined colocated children
> # invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
> # do the rest of the destroy
> I think this would be ok in both cases.
> h3. Two APIs
> The destroy API on {{LuceneIndex}} would be something like:
> {noformat}
> public void destroy();
> {noformat}
> Destroying each {{LuceneIndex}} would require:
> # destroying the chunk {{Region}}
> # destroying the file {{Region}}
> # destroying the {{AsyncEventQueue}}, which would require:
> ## retrieving and stopping the {{AsyncEventQueue's}} underlying 
> {{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
> that does this)
> ## removing the id from the application {{Region's AsyncEventQueue}} ids
> ## destroying the {{AsyncEventQueue}} (this destroys the underlying 
> {{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection 
> of {{GatewaySenders}})
> ## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} 
> collection of {{AsyncEventQueues}} (this should be included in the destroy 
> method above)
> # removing {{LuceneIndex}} from {{LuceneService's}} map of indexes
> I also think the API on {{LuceneService}} should be something like:
> {noformat}
> public void destroyIndexes(String regionPath);
> public void destroyIndex(String indexName, String regionPath);
> {noformat}
> These methods 

[jira] [Created] (GEODE-2404) Add API to destroy a region containing lucene indexes

2017-01-31 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2404:


 Summary: Add API to destroy a region containing lucene indexes
 Key: GEODE-2404
 URL: https://issues.apache.org/jira/browse/GEODE-2404
 Project: Geode
  Issue Type: New Feature
  Components: lucene
Reporter: Barry Oglesby


h2. Description
An application {{Region}} containing {{LuceneIndexes}} should be able to be 
destroyed.

There are several options, including:
- Invoke one API to destroy both the application {{Region}} and its 
{{LuceneIndexes}}
- Invoke two APIs:
## destroy the {{LuceneIndexes}}
## destroy the application {{Region}} as is done currently

h3. One API

In this case, we would need a callback on {{LuceneService}} to destroy the 
{{LuceneIndexes}} before destroying the application {{Region}} like:
{noformat}
public void beforeDestroyRegion(Region region);
{noformat}
This API would get all the {{LuceneIndexes}} for the application {{Region}}, 
then destroy each one. See the *Two APIs* section below for details on 
destroying a {{LuceneIndex}}.

Without changes to the way {{PartitionedRegions}} are destroyed, this causes an 
issue though.

The current behavior of {{PartitionedRegion destroyRegion}} is to first check 
for colocated children. If there are any, the call fails.

There are two options for adding the call to destroy the {{LuceneIndexes}}:

# check for colocated children
# invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
# do the rest of the destroy

or

# invoke {{LuceneService beforeDestroyRegion}} to remove the {{LuceneIndexes}}
# check for colocated children
# do the rest of the destroy

Both of these options are problematic in different ways.

In the case of a {{PartitionedRegion}} with {{LuceneIndexes}}, there are 
colocated children, so the first option would cause the {{destroyRegion}} call 
to fail; the second option would succeed. I don't think the first option should 
fail since the colocated children are internal {{Regions}} that the application 
knows nothing about.

In the case of a {{PartitionedRegion}} defining {{LuceneIndexes}} and having an 
{{AsyncEventQueue}}, there are colocated children, so the first option would 
cause the {{destroyRegion}} call to fail. This is ok since one of the children 
is an application-known {{AsyncEventQueue}}. The second option would fail in a 
bad way. It would first remove the {{LuceneIndexes}}, then fail the colocated 
children check, so the {{destroyRegion}} call would fail. In this case, the 
application {{Region}} doesn't get destroyed but its {{LuceneIndexes}} do. This 
would be bad.

One option would be to look into changing the check for colocated children to 
check for application-defined (or not hidden) colocated children. Then the code 
would be something like:

# check for application-defined colocated children
# invoke LuceneService beforeDestroyRegion to remove the LuceneIndexes
# do the rest of the destroy

I think this would be ok in both cases.

h3. Two APIs

The destroy API on {{LuceneIndex}} would be something like:
{noformat}
public void destroy();
{noformat}
Destroying each {{LuceneIndex}} would require:

# destroying the chunk {{Region}}
# destroying the file {{Region}}
# destroying the {{AsyncEventQueue}}, which would require:
## retrieving and stopping the {{AsyncEventQueue's}} underlying 
{{GatewaySender}} (there should probably be a stop API on {{AsyncEventQueue}} 
that does this)
## removing the id from the application {{Region's AsyncEventQueue}} ids
## destroying the {{AsyncEventQueue}} (this destroys the underlying 
{{GatewaySender}} and removes it from the {{GemFireCacheImpl's}} collection of 
{{GatewaySenders}})
## removing the {{AsyncEventQueue}} from the {{GemFireCacheImpl's}} collection 
of {{AsyncEventQueues}} (this should be included in the destroy method above)
# removing {{LuceneIndex}} from {{LuceneService's}} map of indexes

I also think the API on {{LuceneService}} should be something like:
{noformat}
public void destroyIndexes(String regionPath);
public void destroyIndex(String indexName, String regionPath);
{noformat}
These methods would get the appropriate {{LuceneIndex(es)}} and invoke destroy 
on them. Then they would remove the index(es) from the {{LuceneService's}} 
collection of {{LuceneIndexes}}.
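
For illustration, here is a hedged sketch of what the proposed *Two APIs* flow would look like to an application ({{destroyIndexes}} is the API proposed above, not an existing one, and the region path is illustrative):
{noformat}
// Sketch of the proposed two-API flow; destroyIndexes is the proposed
// API described above and "/data" is an illustrative region path.
LuceneService luceneService = LuceneServiceProvider.get(cache);
luceneService.destroyIndexes("/data"); // proposed: destroys each LuceneIndex and its AEQ
Region<?, ?> region = cache.getRegion("/data");
region.destroyRegion(); // then destroy the application region as is done currently
{noformat}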





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2293) The AckReaderThread incorrectly shuts down when an IllegalStateException is thrown while releasing an off-heap object

2017-01-10 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2293:


 Summary: The AckReaderThread incorrectly shuts down when an 
IllegalStateException is thrown while releasing an off-heap object
 Key: GEODE-2293
 URL: https://issues.apache.org/jira/browse/GEODE-2293
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Barry Oglesby


The regression test run showed the following severe message:
{noformat}
[severe 2017/01/07 09:14:40.789 UTC 
bridgegemfire_1_1_rs-QueuesBTTest-2017-01-06-14-07-15-client-12_3912 
 tid=0x97] Stopping the 
processor because the following exception occurred while processing a batch:
java.lang.IllegalStateException: It looks like off heap memory @7f33a8000238 
was already freed. rawBits=0 history=null
at 
org.apache.geode.internal.offheap.OffHeapStoredObject.release(OffHeapStoredObject.java:675)
at 
org.apache.geode.internal.offheap.OffHeapStoredObject.release(OffHeapStoredObject.java:659)
at 
org.apache.geode.internal.offheap.OffHeapStoredObject.release(OffHeapStoredObject.java:373)
at 
org.apache.geode.internal.offheap.OffHeapHelper.releaseAndTrackOwner(OffHeapHelper.java:138)
at 
org.apache.geode.internal.cache.wan.GatewaySenderEventImpl.release(GatewaySenderEventImpl.java:1213)
at 
org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.remove(ParallelGatewaySenderQueue.java:1096)
at 
org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.remove(ParallelGatewaySenderQueue.java:1531)
at 
org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.eventQueueRemove(AbstractGatewaySenderEventProcessor.java:231)
at 
org.apache.geode.internal.cache.wan.AbstractGatewaySenderEventProcessor.handleSuccessBatchAck(AbstractGatewaySenderEventProcessor.java:981)
at 
org.apache.geode.internal.cache.wan.GatewaySenderEventRemoteDispatcher$AckReaderThread.run(GatewaySenderEventRemoteDispatcher.java:636)
{noformat}
This exception shows that the {{AckReaderThread}} was processing a successful 
batch acknowledgement, and an {{IllegalStateException}} was thrown while 
releasing a {{GatewaySenderEventImpl}} from off-heap memory. This caused the 
{{AckReaderThread}} to shut down. It looks like the {{GatewaySenderEventImpl}} 
had already been released and was being released again. I'm not sure how the 
{{GatewaySenderEventImpl}} got into this state, but the {{AckReaderThread}} 
should not shut down because of this {{IllegalStateException}}.

The code in question is in the finally block of 
{{ParallelGatewaySenderQueue.remove}}:
{noformat}
} finally {
  event.release();
}
{noformat}
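
One possible defensive change (a sketch only, not a committed fix; the logger call is illustrative) is to catch the {{IllegalStateException}} in that finally block so the {{AckReaderThread}} keeps running:
{noformat}
} finally {
  try {
    event.release();
  } catch (IllegalStateException e) {
    // Sketch: an already-released event should not shut down the
    // AckReaderThread; log the failure and keep processing acks.
    logger.warn("Caught exception releasing event {}", event, e);
  }
}
{noformat}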

The test run is here: 
/export/monaco1/users/lhughes/xfer/wanconflationPersist-0107-091328




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (GEODE-2174) Provide more detailed reason when unregistering clients

2017-01-03 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby updated GEODE-2174:
-
Assignee: Hitesh Khamesra  (was: Barry Oglesby)

> Provide more detailed reason when unregistering clients
> ---
>
> Key: GEODE-2174
> URL: https://issues.apache.org/jira/browse/GEODE-2174
> Project: Geode
>  Issue Type: Improvement
>  Components: client/server
>Reporter: Barry Oglesby
>Assignee: Hitesh Khamesra
> Fix For: 1.1.0
>
>
> This is the same as GEM-778.
> When a client is unregistered for an abnormal reason, log the reason.
> In previous versions, when a client was unregistered from the server, a 
> message like this was logged:
> {noformat}
> [info 2015/01/07 16:19:55.992 JST cache1  Thread 3> tid=0x44] ClientHealthMonitor: Unregistering client with member id 
> identity(,connection=1
> {noformat}
> Then, that message was eliminated altogether since it was logged when a 
> client left normally as well as abnormally. Now, the request is to add it 
> back in for clients who unregister abnormally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (GEODE-2242) Destroy operations on PRELOADED regions are not applied in the receiving WAN site

2016-12-21 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2242:


 Summary: Destroy operations on PRELOADED regions are not applied 
in the receiving WAN site
 Key: GEODE-2242
 URL: https://issues.apache.org/jira/browse/GEODE-2242
 Project: Geode
  Issue Type: Bug
  Components: wan
Reporter: Barry Oglesby


In the receiving site, all the processing in 
AbstractRegionEntry.processGatewayTag fails for the destroy event, a 
ConcurrentCacheModificationException is thrown, and the destroy event is not 
applied.
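
For reference, here is a minimal sketch of a receiving-site region using the PRELOADED data policy (the region name and scope are illustrative assumptions, not taken from the failing test):
{noformat}
// Hypothetical receiving-site region configuration.
RegionFactory<String, Object> factory = cache.createRegionFactory();
factory.setDataPolicy(DataPolicy.PRELOADED);
factory.setScope(Scope.DISTRIBUTED_ACK);
Region<String, Object> region = factory.create("data");
{noformat}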



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (GEODE-2242) Destroy operations on PRELOADED regions are not applied in the receiving WAN site

2016-12-21 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby reassigned GEODE-2242:


Assignee: Barry Oglesby

> Destroy operations on PRELOADED regions are not applied in the receiving WAN 
> site
> -
>
> Key: GEODE-2242
> URL: https://issues.apache.org/jira/browse/GEODE-2242
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> In the receiving site, all the processing in 
> AbstractRegionEntry.processGatewayTag fails for the destroy event, a 
> ConcurrentCacheModificationException is thrown, and the destroy event is not 
> applied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (GEODE-2234) Lucene query hit stats shows number higher than number of calls

2016-12-20 Thread Barry Oglesby (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764893#comment-15764893
 ] 

Barry Oglesby commented on GEODE-2234:
--

I moved the queryExecution stat manipulation to the {{LuceneFunction}} execute 
method instead of the {{IndexRepositoryImpl}} query method, which essentially 
counts the bucket queries rather than the top-level query. I renamed the stats 
used by {{IndexRepositoryImpl}} so that we have those as well.

Here is gfsh output with 1 query:
{noformat}
gfsh>list lucene indexes --with-stats
Index Name  | Region Path | Indexed Fields | Field Analyzer | Status      | Query Executions | Updates | Commits | Documents
----------- | ----------- | -------------- | -------------- | ----------- | ---------------- | ------- | ------- | ---------
cusip_index | /data       | [cusip]        | {}             | Initialized | 1                | 328     | 315     | 328
cusip_index | /data       | [cusip]        | {}             | Initialized | 1                | 335     | 323     | 335
cusip_index | /data       | [cusip]        | {}             | Initialized | 1                | 337     | 318     | 337
{noformat}

> Lucene query hit stats shows number higher than number of calls
> ---
>
> Key: GEODE-2234
> URL: https://issues.apache.org/jira/browse/GEODE-2234
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> Scenario:
> System with 0 entries
> Add 2 entries
> Query 1 time.
> Add the same 2 entries (update)
> Query 1 time.
> Result:
> {noformat}
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
> ---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
> customerF1 | /Customer   | [f1]           | {}             | 0                | 0       | 0       | 0
> customerF1 | /Customer   | [f1]           | {}             | 0                | 0       | 0       | 0
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
> ---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
> customerF1 | /Customer   | [f1]           | {}             | 112              | 2       | 2       | 1
> customerF1 | /Customer   | [f1]           | {}             | 114              | 2       | 2       | 1
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
> ---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
> customerF1 | /Customer   | [f1]           | {}             | 224              | 3       | 3       | 1
> customerF1 | /Customer   | [f1]           | {}             | 228              | 3       | 3       | 1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (GEODE-2234) Lucene query hit stats shows number higher than number of calls

2016-12-20 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby reassigned GEODE-2234:


Assignee: Barry Oglesby

> Lucene query hit stats shows number higher than number of calls
> ---
>
> Key: GEODE-2234
> URL: https://issues.apache.org/jira/browse/GEODE-2234
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
>
> Scenario:
> System with 0 entries
> Add 2 entries
> Query 1 time.
> Add the same 2 entries (update)
> Query 1 time.
> Result:
> {noformat}
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
> ---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
> customerF1 | /Customer   | [f1]           | {}             | 0                | 0       | 0       | 0
> customerF1 | /Customer   | [f1]           | {}             | 0                | 0       | 0       | 0
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
> ---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
> customerF1 | /Customer   | [f1]           | {}             | 112              | 2       | 2       | 1
> customerF1 | /Customer   | [f1]           | {}             | 114              | 2       | 2       | 1
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
> ---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
> customerF1 | /Customer   | [f1]           | {}             | 224              | 3       | 3       | 1
> customerF1 | /Customer   | [f1]           | {}             | 228              | 3       | 3       | 1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (GEODE-2234) Lucene query hit stats shows number higher than number of calls

2016-12-20 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2234:


 Summary: Lucene query hit stats shows number higher than number of 
calls
 Key: GEODE-2234
 URL: https://issues.apache.org/jira/browse/GEODE-2234
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Barry Oglesby


Scenario:

System with 0 entries
Add 2 entries
Query 1 time.
Add the same 2 entries (update)
Query 1 time.
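
In code, the scenario looks roughly like the following sketch (the value class, keys, and query string are illustrative assumptions; the index, region, and field names match the output below):
{noformat}
// Hedged sketch of the scenario; exception handling is omitted.
LuceneService luceneService = LuceneServiceProvider.get(cache);
Region<String, Customer> region = cache.getRegion("Customer");
region.put("key1", new Customer("aaa")); // repeating these puts is an update
region.put("key2", new Customer("bbb"));
LuceneQuery<String, Customer> query = luceneService.createLuceneQueryFactory()
    .create("customerF1", "/Customer", "f1:aaa", "f1");
query.findKeys(); // one top-level query, yet Query Executions jumps by ~112
{noformat}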

Result:
{noformat}
gfsh>list lucene indexes --with-stats
Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
customerF1 | /Customer   | [f1]           | {}             | 0                | 0       | 0       | 0
customerF1 | /Customer   | [f1]           | {}             | 0                | 0       | 0       | 0

gfsh>list lucene indexes --with-stats
Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
customerF1 | /Customer   | [f1]           | {}             | 112              | 2       | 2       | 1
customerF1 | /Customer   | [f1]           | {}             | 114              | 2       | 2       | 1

gfsh>list lucene indexes --with-stats
Index Name | Region Path | Indexed Fields | Field Analyzer | Query Executions | Updates | Commits | Documents
---------- | ----------- | -------------- | -------------- | ---------------- | ------- | ------- | ---------
customerF1 | /Customer   | [f1]           | {}             | 224              | 3       | 3       | 1
customerF1 | /Customer   | [f1]           | {}             | 228              | 3       | 3       | 1
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (GEODE-2224) A ClassCastException occurs while attempting to execute a local query in transaction on a client

2016-12-16 Thread Barry Oglesby (JIRA)
Barry Oglesby created GEODE-2224:


 Summary: A ClassCastException occurs while attempting to execute a 
local query in transaction on a client
 Key: GEODE-2224
 URL: https://issues.apache.org/jira/browse/GEODE-2224
 Project: Geode
  Issue Type: Bug
  Components: querying
Reporter: Barry Oglesby
Assignee: Mark Bretl


Code:
{noformat}
CacheTransactionManager cacheTransactionManager = cache.getCacheTransactionManager();
QueryService localQueryService = ((ClientCache) this.cache).getLocalQueryService();
cacheTransactionManager.begin();
Query query = localQueryService.newQuery(QUERY_STRING);
SelectResults results = (SelectResults) query.execute(PARAMETERS);
cacheTransactionManager.commit();
{noformat}
Exception:
{noformat}
Exception in thread "main" java.lang.ClassCastException: 
org.apache.geode.internal.cache.EntrySnapshot cannot be cast to 
org.apache.geode.internal.cache.LocalRegion$NonTXEntry
at 
org.apache.geode.internal.cache.EntriesSet$EntriesIterator.moveNext(EntriesSet.java:179)
at 
org.apache.geode.internal.cache.EntriesSet$EntriesIterator.<init>(EntriesSet.java:118)
at 
org.apache.geode.internal.cache.EntriesSet.iterator(EntriesSet.java:83)
at 
org.apache.geode.cache.query.internal.ResultsCollectionWrapper.iterator(ResultsCollectionWrapper.java:183)
at 
org.apache.geode.cache.query.internal.QRegion.iterator(QRegion.java:243)
at 
org.apache.geode.cache.query.internal.CompiledSelect.doNestedIterations(CompiledSelect.java:848)
at 
org.apache.geode.cache.query.internal.CompiledSelect.doIterationEvaluate(CompiledSelect.java:715)
at 
org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:553)
at 
org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:57)
at 
org.apache.geode.cache.query.internal.DefaultQuery.executeUsingContext(DefaultQuery.java:582)
at 
org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:391)
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (GEODE-1831) Function gets executed twice on server with gateways if groups are configured

2016-12-07 Thread Barry Oglesby (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barry Oglesby resolved GEODE-1831.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

> Function gets executed twice on server with gateways if groups are configured
> -
>
> Key: GEODE-1831
> URL: https://issues.apache.org/jira/browse/GEODE-1831
> Project: Geode
>  Issue Type: Bug
>  Components: functions
>Reporter: Barry Oglesby
>Assignee: Barry Oglesby
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)