Re: Cloudburst from RISELabs

2020-03-13 Thread Diane Hardman
Thanks Sai! Hope you are doing well.

On Wed, Mar 4, 2020 at 6:01 PM Sai Boorlagadda 
wrote:

> Devs,
>
> I came across the Cloudburst paper, which introduces function execution
> integrated into a KV store (Anna) by Berkeley folks @ RISELabs and
> resembles Geode's function execution engine.
>
> I thought I would share this paper with the rest of the community.
>
> https://arxiv.org/pdf/2001.04592.pdf
>
> Sai
>


[PROPOSAL] New WAN callback API & Dead-letter queue example

2018-11-07 Thread Diane Hardman
WAN replication allows two remote data centers to maintain consistency across
their data regions. There are circumstances when one data center cannot
successfully process incoming events delivered over the WAN gateway; in that
case, an exception is sent back to the sending data center along with the
acknowledgment.

This proposal is to add a callback API to the Gateway Sender that will be
executed when an exception is returned from the Gateway Receiver.

Please review the following Wiki page containing more details and give us
your feedback.
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80452478
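For illustration, here is a minimal plain-Java sketch of what such a callback wired to a dead-letter queue might look like. All names here (`GatewayEventFailureListener`, `FailedEvent`, `demo`) are hypothetical stand-ins, not the proposed Geode API; the actual design is on the wiki page above.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class DeadLetterSketch {
    // Stand-in for a replicated event plus the exception the remote site returned.
    record FailedEvent(Object key, Object value, Exception cause) {}

    // The proposed hook: invoked on the sending side when the Gateway
    // Receiver returns an exception with its acknowledgment.
    interface GatewayEventFailureListener {
        void onFailure(FailedEvent event);
    }

    // Wire the listener to a simple in-memory dead-letter queue and
    // simulate one failed event coming back from the remote site.
    public static Queue<FailedEvent> demo() {
        Queue<FailedEvent> deadLetters = new ArrayDeque<>();
        GatewayEventFailureListener listener = deadLetters::add;
        listener.onFailure(new FailedEvent("k1", "v1",
                new IllegalStateException("remote site could not apply event")));
        return deadLetters;
    }

    public static void main(String[] args) {
        Queue<FailedEvent> q = demo();
        System.out.println(q.size() + " event(s) in dead-letter queue"); // prints 1
    }
}
```

A real listener would typically persist the failed events to a region or queue for later replay rather than hold them in memory.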


Re: [DISCUSS] Predictable minor release cadence

2018-10-05 Thread Diane Hardman
+1 to a regular cadence, starting with a 3-month cadence. As we learned
earlier this year, monthly was too frequent for our testing cycles and for
users to keep up with updates.

On Fri, Oct 5, 2018 at 11:54 AM, Robert Houghton 
wrote:

> +1 to Dan
>
> On Fri, Oct 5, 2018, 09:27 Dan Smith  wrote:
>
> > Ok, I buy your arguments to cut the release branch 1 month ahead of time.
> > I'm fine with that plan, as long as we can stick to only putting critical
> > fixes on the release branch. Once the release branch is cut, it ships
> > without further changes unless we find new issues.
> >
> > -Dan
> >
> >
> > On Fri, Oct 5, 2018 at 8:58 AM Alexander Murmann 
> > wrote:
> >
> > > Robert and Sai, I think either release process can be stressful if your
> > > team doesn't understand that there is no faster button, but that the
> only
> > > lever is to cut scope (you can also compromise quality, but let's not
> do
> > > that).
> > > In either scenario there can be release pressure. To me the biggest
> > > difference is that with a fixed schedule I at least have a good chance
> to
> > > see sooner that I need to cut scope to catch the next train. Without a
> > > fixed schedule, I suddenly might find myself in the situation that
> > everyone
> > > else is ready to ship and is waiting on me and getting impatient. I
> might
> > > have not even been able to see that coming unless I am constantly
> > checking
> > > with everyone else to find out when they think they might be ready to
> > ship.
> > >
> > > To the Kafka & Spark point: I'd love to see Geode evolve rapidly and
> > have a
> > > massively growing community 
> > >
> > > On Fri, Oct 5, 2018 at 8:45 AM Anthony Baker 
> wrote:
> > >
> > > > I’ve been advocating for a fixed release schedule for a long time.  3
> > > > months seems like a good rate given the release overhead.
> > > >
> > > > +1 on cutting the next release branch in November and shooting for an
> > > > early December v1.8.0 release.
> > > >
> > > > Anthony
> > > >
> > > >
> > > > > On Oct 4, 2018, at 6:48 PM, Sai Boorlagadda <
> > sai.boorlaga...@gmail.com
> > > >
> > > > wrote:
> > > > >
> > > > > I agree with Robert that we should release based on features that
> go
> > > in.
> > > > >
> > > > > I am not sure if Apache Kafka & Spark are a good reference for
> GEODE.
> > > > > These tools were evolving rapidly in the last couple of years and
> > > > frequent
> > > > > releases would be good for a growing community.
> > > > >
> > > > > Should we do a patch release every few months to include bug fixes?
> > > > >
> > > > > Sai
> > > > >
> > > > >
> > > > > On Thu, Oct 4, 2018 at 6:40 PM Robert Houghton <
> rhough...@pivotal.io
> > >
> > > > wrote:
> > > > >
> > > > >> I have found it refreshing that the Geode release cadence has
> been
> > > > based
> > > > >> on features being done, rather than a set schedule.
> > > > >>
> > > > >> I come from an environment where we had quarterly releases and
> > monthly
> > > > >> patches to all supported previous releases, and I can tell you
> that
> > it
> > > > >> became a grind. That being said, within that release cadence a
> > > one-month
> > > > >> ramp for bug fixes on the release branch was almost always
> > sufficient.
> > > > >>
> > > > >> On Thu, Oct 4, 2018, 18:32 Ryan McMahon 
> > > wrote:
> > > > >>
> > > > >>> +1 for scheduled releases and cutting the branch one month prior
> to
> > > > >>> release. Given the time it took to fully root out, classify, and
> > > solve
> > > > >>> issues with this 1.7 release, I think a month is the right amount
> > of
> > > > time
> > > > >>> between cutting the branch and releasing.  If it ends up being
> too
> > > much
> > > > >> or
> > > > >>> too little, we can always adjust.
> > > > >>>
> > > > >>> I don’t feel strongly about the release cadence, but I generally
> > > favor
> > > > >> more
> > > > >>> frequent releases if possible (3 month over 6 month).  That way
> new
> > > > >>> features can get into the hands of users sooner, assuming the
> > feature
> > > > >> takes
> > > > >>> less than 3 months to complete.  Again, we can adjust the cadence
> > if
> > > > >>> necessary if it is too frequent or infrequent.
> > > > >>>
> > > > >>> Ryan
> > > > >>>
> > > > >>> On Thu, Oct 4, 2018 at 4:18 PM Alexander Murmann <
> > > amurm...@pivotal.io>
> > > > >>> wrote:
> > > > >>>
> > > >  Anil, releasing every 3 months should give us 3 months of dev
> > work.
> > > > >> Don't
> > > >  forget that there will be one month during which the next
> release
> > is
> > > >  already cut, but the vast majority of the work is going to the
> > > release
> > > >  after that. So while we cut 1.7 one month ago and release 1.7
> > today,
> > > > we
> > > >  already have one month of work on develop again. It's not going
> to
> > > > work
> > > > >>> out
> > > >  for this first release though, due to my suggestion to cut a
> month
> > > > >> early
> > > > >>> to
> > > >  avoid holidays. If I recall correctly our 

Re: [ANNOUNCE] Apache Geode 1.7.0

2018-10-05 Thread Diane Hardman
Woohoo! Awesome news. Congratulations to all the Geode committers, and to
Naba for the great work shepherding the release process!

On Thu, Oct 4, 2018 at 10:29 AM, Nabarun Nag  wrote:

> The Apache Geode community is pleased to announce the availability of
> Apache Geode 1.7.0.
>
> Apache Geode is a data management platform that provides a database-like
> consistency model, reliable transaction processing and a shared-nothing
> architecture to maintain very low latency performance with high concurrency
> processing.
>
> Geode 1.7.0 contains a number of improvements and bug fixes. It includes
> performance improvements in OQL order-by and distinct queries in
> client/server when security is enabled. New GFSH commands were added to
> get/set cluster config and to destroy gateway receivers. A new post
> processor was added to the new client protocol. Pulse now supports legacy
> SSL options. Auto-reconnecting members no longer reuse old addresses and IDs.
> Duplicated or member-specific receivers are removed from cluster config
> during rolling upgrades. Users are encouraged to upgrade to the latest
> release.
>
> For the full list of changes please review the release notes:
> https://cwiki.apache.org/confluence/display/GEODE/
> Release+Notes#ReleaseNotes-1.7.0
>
> The release artifacts can be downloaded from the project website:
> http://geode.apache.org/releases/
>
> The release documentation is available at:
> http://geode.apache.org/docs/guide/17/about_geode.html
>
> We would like to thank all the contributors that made the release possible.
>
> Regards,
> Nabarun Nag on behalf of the Apache Geode team
>


Re: [Proposal]: behavior change when region doesn't exist in cluster configuration

2018-04-27 Thread Diane Hardman
I talked with Barbara and understand the long-term effort to deprecate
cache.xml in favor of cluster config, and I heartily agree.
I think a good step in that direction is to provide a migration tool for
users that reads all cache.xml files for current members and stores them in
cluster config, throwing exceptions and logging errors when region
definitions conflict (for the same region name) on different servers in the
same cluster.
We might then consider removing the cache.xml files and relying on gfsh and
(in the future, hopefully) Java APIs to keep cluster config up-to-date.
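In plain Java, the merge-with-conflict-detection step of such a tool might look roughly like this (a hypothetical sketch with made-up names, not an actual tool, using strings in place of real region definitions):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheXmlMigrationSketch {
    // Merge each member's cache.xml region definitions into one cluster
    // config, failing loudly when the same region name maps to different
    // definitions on different servers.
    static Map<String, String> merge(Map<String, Map<String, String>> perMember) {
        Map<String, String> clusterConfig = new HashMap<>();
        perMember.forEach((member, regions) ->
            regions.forEach((region, definition) -> {
                String existing = clusterConfig.putIfAbsent(region, definition);
                if (existing != null && !existing.equals(definition)) {
                    throw new IllegalStateException(
                        "Conflicting definitions for region " + region);
                }
            }));
        return clusterConfig;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> members = new HashMap<>();
        members.put("server1", Map.of("/regionA", "PARTITION"));
        members.put("server2", Map.of("/regionA", "REPLICATE"));
        try {
            merge(members);
        } catch (IllegalStateException expected) {
            // /regionA is PARTITION on one server and REPLICATE on the other
            System.out.println(expected.getMessage());
        }
    }
}
```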

Thanks!

On Fri, Apr 27, 2018 at 12:56 PM, Jinmei Liao <jil...@pivotal.io> wrote:

> The decision is to go with the new behavior (I believe :-)).  The region
> does not exist in the cluster configuration to begin with since it's not
> created using gfsh, so we have no way of checking unless we make an extra
> trip to the region to find out what kind of region it is, but again
> different servers might have different opinions about what it is.
>
> On Fri, Apr 27, 2018 at 12:49 PM, Diane Hardman <dhard...@pivotal.io>
> wrote:
>
> > Since we are working on enhancing Lucene support to allow adding a Lucene
> > index to an existing region containing data, I am very interested in the
> > decision here.
> > Like Mike, I also prefer keeping the original behavior of updating
> cluster
> > config with both the region and the index if it was not there before.
> > Is there something preventing you from checking cluster config for a
> region
> > of the same name but different properties and, if so, throwing an
> exception
> > (or warning)
> > that cluster config could not be updated due to this collision?
> >
> > On Thu, Apr 19, 2018 at 3:35 PM, Michael Stolz <mst...@pivotal.io>
> wrote:
> >
> > > Ok. Yes we do have to take the leap :)
> > > Let's keep thinking that way.
> > >
> > > --
> > > Mike Stolz
> > > Principal Engineer, GemFire Product Lead
> > > Mobile: +1-631-835-4771
> > > Download the GemFire book here.
> > > <https://content.pivotal.io/ebooks/scaling-data-services-
> > > with-pivotal-gemfire>
> > >
> > > On Thu, Apr 19, 2018 at 6:29 PM, Jinmei Liao <jil...@pivotal.io>
> wrote:
> > >
> > > > but this proposed change is one of the efforts toward "deprecating
> > > > cache.xml". I think we've got to take the leap at some point.
> > > >
> > > > On Thu, Apr 19, 2018 at 3:14 PM, Michael Stolz <mst...@pivotal.io>
> > > wrote:
> > > >
> > > > > Hmmm...I think I liked the old behavior better. It was a kind of
> > bridge
> > > > to
> > > > > cluster config.
> > > > >
> > > > > I still think we need to be putting much more effort into
> deprecating
> > > > > cache.xml and much less effort into fixing the (possibly) hundreds
> of
> > > > bugs
> > > > > related to using both cache.xml and cluster configuration at the
> same
> > > > time.
> > > > > If we can make cluster config complete enough, and deprecate
> > cache.xml,
> > > > > people will stop using cache.xml.
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Mike Stolz
> > > > > Principal Engineer, GemFire Product Lead
> > > > > Mobile: +1-631-835-4771
> > > > > Download the GemFire book here.
> > > > > <https://content.pivotal.io/ebooks/scaling-data-services-
> > > > > with-pivotal-gemfire>
> > > > >
> > > > > On Thu, Apr 19, 2018 at 5:58 PM, Jinmei Liao <jil...@pivotal.io>
> > > wrote:
> > > > >
> > > > > > Scenario:
> > > > > > a locator with cluster configuration enabled and a server started
> > > with
> > > > a
> > > > > > cache.xml that has /regionA defined and connected to this
> locator.
> > So
> > > > the
> > > > > > initial state is the locator has an empty cluster configuration
> for
> > > the
> > > > > > cluster, but the server has a region defined in its cache.
> > > > > >
> > > > > > Old behavior:
> > > > > > when a user executes the "create index --region=/regionA " command
> > using
> > > > > gfsh,
> > > > > > the index creation is successful on the server, and the server
> > > returns
> > &

Re: [Proposal]: behavior change when region doesn't exist in cluster configuration

2018-04-27 Thread Diane Hardman
Since we are working on enhancing Lucene support to allow adding a Lucene
index to an existing region containing data, I am very interested in the
decision here.
Like Mike, I also prefer keeping the original behavior of updating cluster
config with both the region and the index if it was not there before.
Is there something preventing you from checking cluster config for a region
of the same name but different properties and, if so, throwing an exception
(or warning)
that cluster config could not be updated due to this collision?

On Thu, Apr 19, 2018 at 3:35 PM, Michael Stolz  wrote:

> Ok. Yes we do have to take the leap :)
> Let's keep thinking that way.
>
> --
> Mike Stolz
> Principal Engineer, GemFire Product Lead
> Mobile: +1-631-835-4771
> Download the GemFire book here.
> <https://content.pivotal.io/ebooks/scaling-data-services-with-pivotal-gemfire>
>
> On Thu, Apr 19, 2018 at 6:29 PM, Jinmei Liao  wrote:
>
> > but this proposed change is one of the efforts toward "deprecating
> > cache.xml". I think we've got to take the leap at some point.
> >
> > On Thu, Apr 19, 2018 at 3:14 PM, Michael Stolz 
> wrote:
> >
> > > Hmmm...I think I liked the old behavior better. It was a kind of bridge
> > to
> > > cluster config.
> > >
> > > I still think we need to be putting much more effort into deprecating
> > > cache.xml and much less effort into fixing the (possibly) hundreds of
> > bugs
> > > related to using both cache.xml and cluster configuration at the same
> > time.
> > > If we can make cluster config complete enough, and deprecate cache.xml,
> > > people will stop using cache.xml.
> > >
> > >
> > >
> > > --
> > > Mike Stolz
> > > Principal Engineer, GemFire Product Lead
> > > Mobile: +1-631-835-4771
> > > Download the GemFire book here.
> > > <https://content.pivotal.io/ebooks/scaling-data-services-with-pivotal-gemfire>
> > >
> > > On Thu, Apr 19, 2018 at 5:58 PM, Jinmei Liao 
> wrote:
> > >
> > > > Scenario:
> > > > a locator with cluster configuration enabled and a server started
> with
> > a
> > > > cache.xml that has /regionA defined and connected to this locator. So
> > the
> > > > initial state is the locator has an empty cluster configuration for
> the
> > > > cluster, but the server has a region defined in its cache.
> > > >
> > > > Old behavior:
> > > > when a user executes the "create index --region=/regionA " command using
> > > gfsh,
> > > > the index creation is successful on the server, and the server
> returns
> > a
> > > > xml section that contains both <region> and <index> elements, CC is
> > > updated
> > > > with this xml, so the end result is: both region and index end up in
> > the
> > > > cluster configuration.
> > > >
> > > > Problem with old behavior:
> > > > > Not sure if the region is a cluster-wide configuration. What if a
> > region
> > > > with the same name but a different type exists on different servers?
> The
> > > xml
> > > > returned by different servers might be different.
> > > >
> > > > New behavior:
> > > > when a user executes the "create index --region=/regionA " command using
> > > gfsh,
> > > > the index creation is successful on the server. If the region is not
> > > > found in the existing cluster configuration, then the cluster
> configuration
> > > will
> > > > NOT be updated.
> > > >
> > > > I would also suggest that this would not apply to indexes alone; any
> > > element
> > > > inside a region would have the same behavior change if we approve this.
> > > >
> > > > Thanks!
> > > >
> > > > --
> > > > Cheers
> > > >
> > > > Jinmei
> > > >
> > >
> >
> >
> >
> > --
> > Cheers
> >
> > Jinmei
> >
>
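As a rough plain-Java sketch of the proposed rule (illustrative only, not Geode internals, with made-up names): an element created via gfsh is persisted to the cluster configuration only when its parent region is already present there; a region known only from a member's cache.xml leaves the configuration untouched.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ClusterConfigSketch {
    // region path -> element names recorded in the cluster configuration
    private final Map<String, Set<String>> config = new HashMap<>();

    // Region created through gfsh: it enters the cluster configuration.
    public void createRegionViaGfsh(String region) {
        config.putIfAbsent(region, new HashSet<>());
    }

    // New behavior: persist the index only when its region is already in
    // the cluster configuration.
    public boolean recordIndex(String region, String index) {
        Set<String> elements = config.get(region);
        if (elements == null) {
            return false; // skipped: region came from cache.xml, not gfsh
        }
        elements.add(index);
        return true;
    }

    public static void main(String[] args) {
        ClusterConfigSketch cc = new ClusterConfigSketch();
        System.out.println(cc.recordIndex("/regionA", "testIndex")); // false
        cc.createRegionViaGfsh("/regionA");
        System.out.println(cc.recordIndex("/regionA", "testIndex")); // true
    }
}
```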


Re: [VOTE] Apache Geode 1.6.0 RC1

2018-04-26 Thread Diane Hardman
Here is the correct link to the Release notes:
https://cwiki.apache.org/confluence/display/GEODE/Release+Notes#ReleaseNotes-1.6.0

On Thu, Apr 26, 2018 at 2:13 PM, Diane Hardman <dhard...@pivotal.io> wrote:

> The link to the Release notes seems to be incorrect.
>
>
>
> On Thu, Apr 26, 2018 at 11:05 AM, Mike Stolz <mikest...@apache.org> wrote:
>
>> This is the first release candidate for Apache Geode, version 1.6.0.
>> Thanks to all the community members for their contributions to this
>> release!
>>
>> *** Please download, test and vote by Monday, April 30, 1500 hrs US
>> Pacific. ***
>>
>> It fixes 157 issues. Release notes can be found at:
>> https://cwiki.apache.org/confluence/display/GEODE/
>> Release+Notes#ReleaseNotes-1.6.0.
>>
>> Note that we are voting upon the source tags: rel/v1.6.0.RC1
>> https://github.com/apache/geode/tree/rel/v1.6.0.RC1
>> https://github.com/apache/geode-examples/tree/rel/v1.6.0.RC1
>>
>> Commit ID:
>> b4ba77f5131018d36b79608ef007dd3cbd761cd9 (geode)
>> 45d174a1280e539108341b286ff79938f9729bc7 (geode-examples)
>>
>> Source and binary files:
>> https://dist.apache.org/repos/dist/dev/geode/1.6.0.RC1
>>
>> Maven staging repo:
>> https://repository.apache.org/content/repositories/orgapachegeode-1041
>>
>>
>>
>> Geode's KEYS file containing PGP keys we use to sign the release:
>> https://github.com/apache/geode/blob/develop/KEYS
>>
>> Release Signed with Fingerprint:
>>
>> pub   rsa4096 2018-04-12 [SC] [expires: 2022-04-12]
>>
>>  876331B45A97E382D1BDFB820F9CABF4396F
>>
>
>


Re: [VOTE] Apache Geode 1.6.0 RC1

2018-04-26 Thread Diane Hardman
The link to the Release notes seems to be incorrect.



On Thu, Apr 26, 2018 at 11:05 AM, Mike Stolz  wrote:

> This is the first release candidate for Apache Geode, version 1.6.0.
> Thanks to all the community members for their contributions to this
> release!
>
> *** Please download, test and vote by Monday, April 30, 1500 hrs US
> Pacific. ***
>
> It fixes 157 issues. Release notes can be found at:
> https://cwiki.apache.org/confluence/display/GEODE/
> Release+Notes#ReleaseNotes-1.6.0.
>
> Note that we are voting upon the source tags: rel/v1.6.0.RC1
> https://github.com/apache/geode/tree/rel/v1.6.0.RC1
> https://github.com/apache/geode-examples/tree/rel/v1.6.0.RC1
>
> Commit ID:
> b4ba77f5131018d36b79608ef007dd3cbd761cd9 (geode)
> 45d174a1280e539108341b286ff79938f9729bc7 (geode-examples)
>
> Source and binary files:
> https://dist.apache.org/repos/dist/dev/geode/1.6.0.RC1
>
> Maven staging repo:
> https://repository.apache.org/content/repositories/orgapachegeode-1041
>
>
>
> Geode's KEYS file containing PGP keys we use to sign the release:
> https://github.com/apache/geode/blob/develop/KEYS
>
> Release Signed with Fingerprint:
>
> pub   rsa4096 2018-04-12 [SC] [expires: 2022-04-12]
>
>  876331B45A97E382D1BDFB820F9CABF4396F
>


Re: [VOTE] Apache Geode 1.5.0 RC1

2018-03-23 Thread Diane Hardman
-1
I filed GEODE-4913 based on scripts I have that create 2 clusters connected
by a WAN gateway on my local machine.
*Problem*: We open a cache server on the default port (40404) even when a
port is explicitly specified in cache.xml.
*Symptoms*: Users who start more than one Geode server on the same “box”
and use cache.xml to specify the port get bind exceptions when they start
the second server.
*Workaround*:
   1. Specify `--disable-default-server` on the `start server` command
while still specifying the port in cache.xml.
   2. Specify the port on the `start server` command itself and remove any
references to the port from cache.xml.

This problem will break a lot of existing scripts and, though we have a
workaround, it will cause headaches for many when they encounter it.

On Wed, Mar 21, 2018 at 2:52 PM, Dan Smith  wrote:

> +1
>
> Ran geode-release-check. Verified no MD5 files in the distribution.
>
> -Dan
>
> On Wed, Mar 21, 2018 at 11:57 AM, Anthony Baker  wrote:
>
> > +1
> >
> > - verified signatures and checksums
> > - checked source release for binaries
> > - basic gfsh testing
> > - ran all the examples
> >
> > Anthony
> >
> > > On Mar 20, 2018, at 3:09 PM, Swapnil Bawaskar 
> > wrote:
> > >
> > > This is the first release candidate for Apache Geode, version 1.5.0.
> > > Thanks to all the community members for their contributions to this
> > > release!
> > >
> > > *** Please download, test and vote by Friday, March 23, 1500 hrs
> > > US Pacific. ***
> > >
> > > It fixes 234 issues. Release notes can be found at:
> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > projectId=12318420&version=12342395
> > >
> > > Note that we are voting upon the source tags: rel/v1.5.0.RC1
> > > https://github.com/apache/geode/tree/rel/v1.5.0.RC1
> > > https://github.com/apache/geode-examples/tree/rel/v1.5.0.RC1
> > >
> > > Commit ID:
> > > 4ef51dacd79ff69336fb024f3d4b07271e90 (geode)
> > > 4941f05c86d928949fbcdb3fb12295ccecc219eb (geode-examples)
> > >
> > > Source and binary files:
> > > https://dist.apache.org/repos/dist/dev/geode/1.5.0.RC1
> > >
> > > Maven staging repo:
> > > https://repository.apache.org/content/repositories/orgapachegeode-1038
> > >
> > >
> > > Geode's KEYS file containing PGP keys we use to sign the release:
> > > https://github.com/apache/geode/blob/develop/KEYS
> > >
> > > Release Signed with Key: pub 4096R/18F902DB 2016-04-07
> > > Fingerprint: E1B1 ABE3 4753 E7BA 8097 4285 8F8F 2BCC 18F9 02DB
> >
> >
>


Proposal: Adding a Lucene index to existing region with data

2017-11-01 Thread Diane Hardman
Since the 1.2.0 release, Lucene text search has been fully integrated
into Geode. We appreciate feedback and are responding to requests
to improve usability.

This proposal provides a significant improvement to the current workflow for
using Lucene text search with Geode. Currently a user must create a Lucene
index BEFORE creating the region it will index and before loading data.

This proposal describes the requirements and design approach for supporting
the user's ability to add a Lucene index to an existing region that
contains data.

Please review the proposal in the following wiki page and give us your
feedback.

https://cwiki.apache.org/confluence/display/GEODE/Lucene+Index+Creation+on+Existing+Region
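The core idea can be illustrated with a small plain-Java sketch (hypothetical names, not Geode's implementation; a string set stands in for the Lucene index): when the index is defined after data is loaded, the existing entries must be backfilled into it, while later writes are indexed on the normal path.

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class ReindexSketch {
    private final Map<String, String> region = new LinkedHashMap<>();
    private final Set<String> indexedKeys = new LinkedHashSet<>();
    private boolean indexDefined = false;

    public void put(String key, String value) {
        region.put(key, value);
        if (indexDefined) {
            indexedKeys.add(key); // normal path: index new writes as they arrive
        }
    }

    // The proposed capability: define the index after data is loaded and
    // backfill it from the entries that already exist.
    public void createIndexOnExistingRegion() {
        indexDefined = true;
        indexedKeys.addAll(region.keySet());
    }

    public Set<String> indexedKeys() {
        return indexedKeys;
    }

    public static void main(String[] args) {
        ReindexSketch s = new ReindexSketch();
        s.put("1", "value1");               // loaded before the index exists
        s.createIndexOnExistingRegion();    // backfills key "1"
        s.put("2", "value2");               // indexed on the normal path
        System.out.println(s.indexedKeys()); // prints [1, 2]
    }
}
```

The hard parts the proposal addresses, which this sketch glosses over, are doing the backfill across a distributed, partitioned region while concurrent updates continue.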


Re: Requesting wiki edit privileges

2017-07-12 Thread Diane Hardman
Thanks!

On Wed, Jul 12, 2017 at 10:17 AM, Dan Smith <dsm...@pivotal.io> wrote:

> I added you, you should have access now.
>
> Thanks!
> -Dan
>
> On Wed, Jul 12, 2017 at 10:12 AM, Diane Hardman <dhard...@pivotal.io>
> wrote:
>
> > Will someone please give me edit privileges for Geode wiki pages?
> >
> > Thanks!
> > Diane Hardman
> >
>


Proposal: Lucene indexing/searching for nested objects

2017-07-12 Thread Diane Hardman
The Geode 1.2.0 release includes Lucene text search, fully integrated and
tested (no longer experimental). We are now proposing enhancements to
improve Lucene usability in Geode.

Some Geode users create data models that include nested and complex
objects. The current Geode Lucene integration supports indexing and
querying only the top-level fields in the data object. The objective of
this proposal is to support indexing and querying an arbitrary depth of
nested objects.


Please review the proposal in the following wiki page and give us your
feedback.

https://cwiki.apache.org/confluence/display/GEODE/Lucene+Text+Search+on+Nested+Object
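One common way to index nested objects in a flat index is to flatten them into dotted field names (e.g. "address.city"). The sketch below is purely illustrative plain Java (maps standing in for data objects), not the design on the wiki page:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NestedFieldSketch {
    // Recursively flatten a nested map into dotted field names, e.g.
    // {name: Ada, address: {city: Portland}} -> {name, address.city}.
    static Map<String, Object> flatten(String prefix, Map<String, Object> obj) {
        Map<String, Object> flat = new LinkedHashMap<>();
        obj.forEach((field, value) -> {
            String name = prefix.isEmpty() ? field : prefix + "." + field;
            if (value instanceof Map<?, ?> nested) {
                @SuppressWarnings("unchecked")
                Map<String, Object> m = (Map<String, Object>) nested;
                flat.putAll(flatten(name, m)); // recurse to arbitrary depth
            } else {
                flat.put(name, value);
            }
        });
        return flat;
    }

    public static void main(String[] args) {
        Map<String, Object> person = new LinkedHashMap<>();
        person.put("name", "Ada");
        person.put("address", Map.of("city", "Portland"));
        System.out.println(flatten("", person)); // prints {name=Ada, address.city=Portland}
    }
}
```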


Requesting wiki edit privileges

2017-07-12 Thread Diane Hardman
Will someone please give me edit privileges for Geode wiki pages?

Thanks!
Diane Hardman


[jira] [Comment Edited] (GEODE-2979) Adding server after defining Lucene index results in error

2017-06-01 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16033494#comment-16033494
 ] 

Diane Hardman edited comment on GEODE-2979 at 6/1/17 7:07 PM:
--

We are keeping this bug open for future evaluation and a possible future fix in 
the code to handle this situation.
The documentation request was submitted in GEODE-3014.


was (Author: dhardman):
We are keeping this bug open for future evaluation and a possible future fix in 
the code to handle this situation.

> Adding server after defining Lucene index results in error
> --
>
> Key: GEODE-2979
> URL: https://issues.apache.org/jira/browse/GEODE-2979
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Diane Hardman
>  Labels: workaround
>
> Here are the gfsh commands I used:
> {noformat}
> ## start locator
> start locator --name=locator1 --port=12345
> ## start first server
> start server --name=server50505 --server-port=50505 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> ## create lucene index on region testRegion
> create lucene index --name=testIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> ## start second server
> start server --name=server50506 --server-port=50506 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> ## list indexes - NOTE lucene index only listed on first server
> gfsh>list members
>Name | Id
> --- | -
> locator1| 192.168.1.57(locator1:60525:locator):1024
> server50505 | 192.168.1.57(server50505:60533):1025
> server50506 | 192.168.1.57(server50506:60587):1026
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Server Name | Inde.. | Field Anal.. | Status  | 
> Query Executions | Updates | Commits | Documents
> -- | --- | --- | -- |  | --- | 
>  | --- | --- | -
> testIndex  | /testRegion | server50505 | [__R.. | {__REGION_.. | Defined | NA 
>   | NA  | NA  | NA
> ## Create region testRegion
> gfsh>create region --name=testRegion --type=PARTITION_REDUNDANT_PERSISTENT
>   Member| Status
> --- | 
> 
> server50506 | ERROR: Must create Lucene index testIndex on region /testRegion 
> because it is defined in another member.
> server50505 | Region "/testRegion" created on "server50505"
> ## Add data to region - NOTE this causes a crash with an NPE
> gfsh>put --key=1 --value=value1 --region=testRegion
> Exception in thread "Gfsh Launcher" java.lang.NoClassDefFoundError: 
> org/apache/commons/collections/CollectionUtils
>   at 
> org.apache.geode.management.internal.cli.commands.DataCommands.put(DataCommands.java:895)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:216)
>   at 
> org.apache.geode.management.internal.cli.remote.RemoteExecutionStrategy.execute(RemoteExecutionStrategy.java:91)
>   at 
> org.apache.geode.management.internal.cli.remote.CommandProcessor.executeCommand(CommandProcessor.java:113)
>   at 
> org.apache.geode.management.internal.cli.remote.CommandStatementImpl.process(CommandStatementImpl.java:71)
>   at 
> org.apache.geode.management.internal.cli.remote.MemberCommandService.processCommand(MemberCommandService.java:52)
>   at 
> org.apache.geode.management.internal.beans.MemberMBeanBridge.processCommand(MemberMBeanBridge.java:1597)
>   at 
> org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:404)
>   at 
> org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:397)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  

[jira] [Commented] (GEODE-2979) Adding server after defining Lucene index results in error

2017-06-01 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16033494#comment-16033494
 ] 

Diane Hardman commented on GEODE-2979:
--

We are keeping this bug open for future evaluation and a possible future fix in 
the code to handle this situation.

> Adding server after defining Lucene index results in error
> --
>
> Key: GEODE-2979
> URL: https://issues.apache.org/jira/browse/GEODE-2979
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Diane Hardman
>  Labels: workaround
>
> Here are the gfsh commands I used:
> {noformat}
> ## start locator
> start locator --name=locator1 --port=12345
> ## start first server
> start server --name=server50505 --server-port=50505 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> ## create lucene index on region testRegion
> create lucene index --name=testIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> ## start second server
> start server --name=server50506 --server-port=50506 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> ## list indexes - NOTE lucene index only listed on first server
> gfsh>list members
>Name | Id
> --- | -
> locator1| 192.168.1.57(locator1:60525:locator):1024
> server50505 | 192.168.1.57(server50505:60533):1025
> server50506 | 192.168.1.57(server50506:60587):1026
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Server Name | Inde.. | Field Anal.. | Status  | 
> Query Executions | Updates | Commits | Documents
> -- | --- | --- | -- |  | --- | 
>  | --- | --- | -
> testIndex  | /testRegion | server50505 | [__R.. | {__REGION_.. | Defined | NA 
>   | NA  | NA  | NA
> ## Create region testRegion
> gfsh>create region --name=testRegion --type=PARTITION_REDUNDANT_PERSISTENT
>   Member| Status
> --- | 
> 
> server50506 | ERROR: Must create Lucene index testIndex on region /testRegion 
> because it is defined in another member.
> server50505 | Region "/testRegion" created on "server50505"
> ## Add data to region - NOTE this causes a crash with an NPE
> gfsh>put --key=1 --value=value1 --region=testRegion
> Exception in thread "Gfsh Launcher" java.lang.NoClassDefFoundError: 
> org/apache/commons/collections/CollectionUtils
>   at 
> org.apache.geode.management.internal.cli.commands.DataCommands.put(DataCommands.java:895)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:216)
>   at 
> org.apache.geode.management.internal.cli.remote.RemoteExecutionStrategy.execute(RemoteExecutionStrategy.java:91)
>   at 
> org.apache.geode.management.internal.cli.remote.CommandProcessor.executeCommand(CommandProcessor.java:113)
>   at 
> org.apache.geode.management.internal.cli.remote.CommandStatementImpl.process(CommandStatementImpl.java:71)
>   at 
> org.apache.geode.management.internal.cli.remote.MemberCommandService.processCommand(MemberCommandService.java:52)
>   at 
> org.apache.geode.management.internal.beans.MemberMBeanBridge.processCommand(MemberMBeanBridge.java:1597)
>   at 
> org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:404)
>   at 
> org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:397)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.De

[jira] [Updated] (GEODE-2979) Adding server after defining Lucene index results in error

2017-06-01 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2979:
-
Summary: Adding server after defining Lucene index results in error  (was: 
Adding server after defining Lucene index results in unusable cluster)

> Adding server after defining Lucene index results in error
> --
>
> Key: GEODE-2979
> URL: https://issues.apache.org/jira/browse/GEODE-2979
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Diane Hardman
>  Labels: workaround
>
> Here are the gfsh commands I used:
> {noformat}
> ## start locator
> start locator --name=locator1 --port=12345
> ## start first server
> start server --name=server50505 --server-port=50505 --locators=localhost[12345] --start-rest-api --http-service-port=8080 --http-service-bind-address=localhost
> ## create lucene index on region testRegion
> create lucene index --name=testIndex --region=testRegion --field=__REGION_VALUE_FIELD
> ## start second server
> start server --name=server50506 --server-port=50506 --locators=localhost[12345] --start-rest-api --http-service-port=8080 --http-service-bind-address=localhost
> ## list indexes - NOTE lucene index only listed on first server
> gfsh>list members
>    Name     | Id
> ----------- | -----------------------------------------
> locator1    | 192.168.1.57(locator1:60525:locator):1024
> server50505 | 192.168.1.57(server50505:60533):1025
> server50506 | 192.168.1.57(server50506:60587):1026
> gfsh>list lucene indexes --with-stats
> Index Name | Region Path | Server Name | Inde.. | Field Anal.. | Status  | Query Executions | Updates | Commits | Documents
> ---------- | ----------- | ----------- | ------ | ------------ | ------- | ---------------- | ------- | ------- | ---------
> testIndex  | /testRegion | server50505 | [__R.. | {__REGION_.. | Defined | NA               | NA      | NA      | NA
> ## Create region testRegion
> gfsh>create region --name=testRegion --type=PARTITION_REDUNDANT_PERSISTENT
>   Member    | Status
> ----------- | ------------------------------------------------------------------------------------------------------
> server50506 | ERROR: Must create Lucene index testIndex on region /testRegion because it is defined in another member.
> server50505 | Region "/testRegion" created on "server50505"
> ## Add data to region - NOTE this causes a crash with an NPE
> gfsh>put --key=1 --value=value1 --region=testRegion
> Exception in thread "Gfsh Launcher" java.lang.NoClassDefFoundError: org/apache/commons/collections/CollectionUtils
>   at org.apache.geode.management.internal.cli.commands.DataCommands.put(DataCommands.java:895)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:216)
>   at org.apache.geode.management.internal.cli.remote.RemoteExecutionStrategy.execute(RemoteExecutionStrategy.java:91)
>   at org.apache.geode.management.internal.cli.remote.CommandProcessor.executeCommand(CommandProcessor.java:113)
>   at org.apache.geode.management.internal.cli.remote.CommandStatementImpl.process(CommandStatementImpl.java:71)
>   at org.apache.geode.management.internal.cli.remote.MemberCommandService.processCommand(MemberCommandService.java:52)
>   at org.apache.geode.management.internal.beans.MemberMBeanBridge.processCommand(MemberMBeanBridge.java:1597)
>   at org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:404)
>   at org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:397)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at sun.reflect.De
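
The "workaround" label on GEODE-2979 reflects the ordering constraint demonstrated in the reproduction steps: a Lucene index defined through gfsh must be visible to every member that will host the region before the region itself is created. The following is a hedged sketch of that ordering, reusing the member names and ports from the reproduction (the --start-rest-api and http-service flags are omitted here as incidental to the bug):

```
## Workaround sketch: start every server first, then define the index,
## then create the region, so all members share the index definition.
start locator --name=locator1 --port=12345
start server --name=server50505 --server-port=50505 --locators=localhost[12345]
start server --name=server50506 --server-port=50506 --locators=localhost[12345]
create lucene index --name=testIndex --region=testRegion --field=__REGION_VALUE_FIELD
create region --name=testRegion --type=PARTITION_REDUNDANT_PERSISTENT
```

With the index defined after both servers have joined, `create region` should succeed on every member instead of failing on the late joiner.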

[jira] [Updated] (GEODE-2979) Adding server after defining Lucene index results in unusable cluster

2017-06-01 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2979:
-
Labels: workaround  (was: )


[jira] [Updated] (GEODE-3011) gfsh example to do a Lucene query is incorrect

2017-05-30 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-3011:
-
Description: 
In the Lucene integration section of the docs is the following incorrect gfsh 
command example:
gfsh> lucene search --regionName=/orders -queryStrings="John*" --defaultField=field1

Instead it should be:
gfsh> search lucene --name=indexName --region=/orders --queryStrings="John*" --defaultField=customer --limit=100

  was:
In the Lucene integration section of the docs is the following incorrect gfsh 
command example:
gfsh> lucene search --regionName=/orders -queryStrings="John*" 
--defaultField=field1

Instead it should be:
gfsh> search lucene --name=indexName --region=/orders -queryStrings="John*" 
--defaultField=customer


> gfsh example to do a Lucene query is incorrect
> --
>
> Key: GEODE-3011
> URL: https://issues.apache.org/jira/browse/GEODE-3011
> Project: Geode
>  Issue Type: Bug
>  Components: docs
>Reporter: Diane Hardman
>
> In the Lucene integration section of the docs is the following incorrect gfsh 
> command example:
> gfsh> lucene search --regionName=/orders -queryStrings="John*" --defaultField=field1
> Instead it should be:
> gfsh> search lucene --name=indexName --region=/orders --queryStrings="John*" --defaultField=customer --limit=100



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-3011) gfsh example to do a Lucene query is incorrect

2017-05-30 Thread Diane Hardman (JIRA)
Diane Hardman created GEODE-3011:


 Summary: gfsh example to do a Lucene query is incorrect
 Key: GEODE-3011
 URL: https://issues.apache.org/jira/browse/GEODE-3011
 Project: Geode
  Issue Type: Bug
  Components: docs
Reporter: Diane Hardman


In the Lucene integration section of the docs is the following incorrect gfsh 
command example:
gfsh> lucene search --regionName=/orders -queryStrings="John*" 
--defaultField=field1

Instead it should be:
gfsh> search lucene --name=indexName --region=/orders -queryStrings="John*" 
--defaultField=customer



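
The corrected syntax reported in GEODE-3011 can be read as part of a complete gfsh session. The sketch below is hedged: the index name, region name, and `customer` field are illustrative stand-ins, not taken verbatim from the product docs, and it assumes region entries are objects carrying a `customer` field.

```
## Define the index before creating the region, then query it.
create lucene index --name=indexName --region=orders --field=customer
create region --name=orders --type=PARTITION
search lucene --name=indexName --region=/orders --queryStrings="John*" --defaultField=customer --limit=100
```

Note the corrected form uses the `search lucene` command name, a double dash on --queryStrings, and requires the index --name.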


[jira] [Resolved] (GEODE-2949) Creating a lucene index containing a slash and then the region using gfsh causes an inconsistent state

2017-05-26 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman resolved GEODE-2949.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

Resolved with fix for GEODE-2950.

> Creating a lucene index containing a slash and then the region using gfsh 
> causes an inconsistent state
> --
>
> Key: GEODE-2949
> URL: https://issues.apache.org/jira/browse/GEODE-2949
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
>    Assignee: Diane Hardman
> Fix For: 1.2.0
>
>
> Creating an index containing a slash with gfsh is successful:
> {noformat}
> gfsh>create lucene index --name='/slashed with spaces' --region=sam --field=text
>             Member              | Status
> ------------------------------- | ---------------------------------
> 192.168.2.4(server2:52699):1026 | Successfully created lucene index
> 192.168.2.4(server1:52692):1025 | Successfully created lucene index
> {noformat}
> And creating the region with gfsh fails:
> {noformat}
> gfsh>create region --name=sam --type=PARTITION
> Member  | Status
> ------- | -----------------------------------------------
> server2 | ERROR: name cannot contain the separator ' / '
> server1 | ERROR: name cannot contain the separator ' / '
> {noformat}
> But the logs show the async event queue and region have been created:
> {noformat}
> [info 2017/05/18 11:25:53.089 PDT server2  tid=0x41] Started ParallelGatewaySender{id=AsyncEventQueue_/slashed with spaces#_sam,remoteDsId=-1,isRunning =true}
> [info 2017/05/18 11:25:53.094 PDT server2  tid=0x41] Partitioned Region /sam is born with prId=11 ident:#sam
> {noformat}
> And destroying the index says no indexes were found:
> {noformat}
> gfsh>destroy lucene index --region=/data
>             Member              | Status
> ------------------------------- | ---------------------------------------------
> 192.168.2.4(server1:52692):1025 | No Lucene indexes were found in region /data
> 192.168.2.4(server2:52699):1026 | No Lucene indexes were found in region /data
> {noformat}





[jira] [Assigned] (GEODE-2949) Creating a lucene index containing a slash and then the region using gfsh causes an inconsistent state

2017-05-26 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman reassigned GEODE-2949:


Assignee: Diane Hardman






[jira] [Commented] (GEODE-2979) Adding server after defining Lucene index results in unusable cluster

2017-05-23 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022058#comment-16022058
 ] 

Diane Hardman commented on GEODE-2979:
--

Please ignore the NPE after the put command. This was a known issue 
(GEODE-2964) that I stumbled across.


[jira] [Updated] (GEODE-2979) Adding server after defining Lucene index results in unusable cluster

2017-05-23 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2979:
-

[jira] [Updated] (GEODE-2979) Adding server after defining Lucene index results in unusable cluster

2017-05-23 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2979:
-

[jira] [Updated] (GEODE-2979) Adding server after defining Lucene index results in unusable cluster

2017-05-23 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2979:
-
Summary: Adding server after defining Lucene index results in unusable 
cluster  (was: Adding server after creating Lucene index results in unusable 
cluster)


[jira] [Created] (GEODE-2979) Adding server after creating Lucene index results in unusable cluster

2017-05-23 Thread Diane Hardman (JIRA)
Diane Hardman created GEODE-2979:


 Summary: Adding server after creating Lucene index results in 
unusable cluster
 Key: GEODE-2979
 URL: https://issues.apache.org/jira/browse/GEODE-2979
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Diane Hardman


Here are the gfsh commands I used:
## start locator
start locator --name=locator1 --port=12345
## start first server
start server --name=server50505 --server-port=50505 --locators=localhost[12345] 
--start-rest-api --http-service-port=8080 --http-service-bind-address=localhost
## create lucene index on region testRegion
create lucene index --name=testIndex --region=testRegion 
--field=__REGION_VALUE_FIELD
## start second server
start server --name=server50506 --server-port=50506 --locators=localhost[12345] 
--start-rest-api --http-service-port=8080 --http-service-bind-address=localhost
## list indexes - NOTE lucene index only listed on first server
gfsh>list members
   Name     | Id
----------- | ------------------------------------------
locator1    | 192.168.1.57(locator1:60525:locator):1024
server50505 | 192.168.1.57(server50505:60533):1025
server50506 | 192.168.1.57(server50506:60587):1026

gfsh>list lucene indexes --with-stats
Index Name | Region Path | Server Name | Inde.. | Field Anal.. | Status  | Query Executions | Updates | Commits | Documents
---------- | ----------- | ----------- | ------ | ------------ | ------- | ---------------- | ------- | ------- | ---------
testIndex  | /testRegion | server50505 | [__R.. | {__REGION_.. | Defined | NA               | NA      | NA      | NA
## Create region testRegion
gfsh>create region --name=testRegion --type=PARTITION_REDUNDANT_PERSISTENT
  Member    | Status
----------- | ---------------------------------------------------------------------------------------------------
server50506 | ERROR: Must create Lucene index testIndex on region /testRegion because it is defined in another member.
server50505 | Region "/testRegion" created on "server50505"

## Add data to region - NOTE this causes a crash with an NPE
gfsh>put --key=1 --value=value1 --region=testRegion
Exception in thread "Gfsh Launcher" java.lang.NoClassDefFoundError: org/apache/commons/collections/CollectionUtils
	at org.apache.geode.management.internal.cli.commands.DataCommands.put(DataCommands.java:895)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:216)
	at org.apache.geode.management.internal.cli.remote.RemoteExecutionStrategy.execute(RemoteExecutionStrategy.java:91)
	at org.apache.geode.management.internal.cli.remote.CommandProcessor.executeCommand(CommandProcessor.java:113)
	at org.apache.geode.management.internal.cli.remote.CommandStatementImpl.process(CommandStatementImpl.java:71)
	at org.apache.geode.management.internal.cli.remote.MemberCommandService.processCommand(MemberCommandService.java:52)
	at org.apache.geode.management.internal.beans.MemberMBeanBridge.processCommand(MemberMBeanBridge.java:1597)
	at org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:404)
	at org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:397)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
	at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
	at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
	at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
	at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
	at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
	at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
	at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
	at c

[jira] [Commented] (GEODE-2774) CI failure: LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts

2017-05-17 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014548#comment-16014548
 ] 

Diane Hardman commented on GEODE-2774:
--

Latest comment from Nabarun in Tracker story:
Unable to reproduce.

> CI failure: LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts
> -
>
> Key: GEODE-2774
> URL: https://issues.apache.org/jira/browse/GEODE-2774
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>Reporter: Shelley Lynn Hughes-Godfrey
>    Assignee: Diane Hardman
>  Labels: CI
>
> {noformat}
> :geode-lucene:testClasses
> at org.apache.geode.internal.Assert.fail(Assert.java:68)
> at org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts(LuceneIndexDestroyDUnitTest.java:215)
> org.apache.geode.cache.RegionDestroyedException: Partitioned Region 
> @1a3e3379 [path='/region'; dataPolicy=PERSISTENT_PARTITION; prId=76; 
> isDestroyed=false; isClosed=false; retryTimeout=360; serialNumber=1315; 
> partition 
> attributes=PartitionAttributes@1060958737[redundantCopies=0;localMaxMemory=100;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=null;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770], caused by 
> org.apache.geode.cache.RegionDestroyedException: 
> 172.17.0.5(154):32770@org.apache.geode.internal.cache.PartitionedRegionDataStore@983990329
>  name: /AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE bucket 
> count: 2, caused by org.apache.geode.cache.RegionDestroyedException: 
> Partitioned Region @32c5de26 
> [path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
> dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
> retryTimeout=360; serialNumber=1340; partition 
> attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770]
> at org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:1954)
> at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:151)
> at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5194)
> at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1605)
> at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1592)
> at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:279)
> at org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.doPutsUntilStopped(LuceneIndexDestroyDUnitTest.java:523)
> at org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.lambda$verifyDestroyAllIndexesWhileDoingPuts$b814fe7d$1(LuceneIndexDestroyDUnitTest.java:197)
> org.apache.geode.cache.RegionDestroyedException: 
> 172.17.0.5(154):32770@org.apache.geode.internal.cache.PartitionedRegionDataStore@983990329
>  name: /AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE bucket 
> count: 2, caused by org.apache.geode.cache.RegionDestroyedException: 
> Partitioned Region @32c5de26 
> [path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
> dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
> retryTimeout=360; serialNumber=1340; partition 
> attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770]
> at org.apache.geode.internal.cache.PartitionedRegionDataStore.grabFreeBucket(PartitionedRegionDataStore.java:482)
> at org.apache.geode.internal.cache.PartitionedRegionDataStore.grabFreeBucketRecursively(PartitionedRegionDataStore.java:282)
> at org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:1916)
> ... 7 more
> Caused by:
> org.apache.geode.cache.RegionDestroyedException: Partitioned 
> Region @32c5de26 
> [path='/AsyncEventQueue_

[jira] [Assigned] (GEODE-2774) CI failure: LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts

2017-05-17 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman reassigned GEODE-2774:


Assignee: Diane Hardman

> CI failure: LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts
> -
>
> Key: GEODE-2774
> URL: https://issues.apache.org/jira/browse/GEODE-2774
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2.0
>Reporter: Shelley Lynn Hughes-Godfrey
>    Assignee: Diane Hardman
>  Labels: CI
>
> {noformat}
> :geode-lucene:testClasses
> at org.apache.geode.internal.Assert.fail(Assert.java:68)
> at org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.verifyDestroyAllIndexesWhileDoingPuts(LuceneIndexDestroyDUnitTest.java:215)
> org.apache.geode.cache.RegionDestroyedException: Partitioned Region 
> @1a3e3379 [path='/region'; dataPolicy=PERSISTENT_PARTITION; prId=76; 
> isDestroyed=false; isClosed=false; retryTimeout=360; serialNumber=1315; 
> partition 
> attributes=PartitionAttributes@1060958737[redundantCopies=0;localMaxMemory=100;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=null;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770], caused by 
> org.apache.geode.cache.RegionDestroyedException: 
> 172.17.0.5(154):32770@org.apache.geode.internal.cache.PartitionedRegionDataStore@983990329
>  name: /AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE bucket 
> count: 2, caused by org.apache.geode.cache.RegionDestroyedException: 
> Partitioned Region @32c5de26 
> [path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
> dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
> retryTimeout=360; serialNumber=1340; partition 
> attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770]
> at org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:1954)
> at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:151)
> at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5194)
> at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1605)
> at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1592)
> at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:279)
> at org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.doPutsUntilStopped(LuceneIndexDestroyDUnitTest.java:523)
> at org.apache.geode.cache.lucene.LuceneIndexDestroyDUnitTest.lambda$verifyDestroyAllIndexesWhileDoingPuts$b814fe7d$1(LuceneIndexDestroyDUnitTest.java:197)
> org.apache.geode.cache.RegionDestroyedException: 
> 172.17.0.5(154):32770@org.apache.geode.internal.cache.PartitionedRegionDataStore@983990329
>  name: /AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE bucket 
> count: 2, caused by org.apache.geode.cache.RegionDestroyedException: 
> Partitioned Region @32c5de26 
> [path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
> dataPolicy=PERSISTENT_PARTITION; prId=78; isDestroyed=true; isClosed=false; 
> retryTimeout=360; serialNumber=1340; partition 
> attributes=PartitionAttributes@2128111693[redundantCopies=0;localMaxMemory=1000;totalMaxMemory=2147483647;totalNumBuckets=10;partitionResolver=null;colocatedWith=/region;recoveryDelay=-1;startupRecoveryDelay=0;FixedPartitionAttributes=null;partitionListeners=null];
>  on VM 172.17.0.5(154):32770]
> at org.apache.geode.internal.cache.PartitionedRegionDataStore.grabFreeBucket(PartitionedRegionDataStore.java:482)
> at org.apache.geode.internal.cache.PartitionedRegionDataStore.grabFreeBucketRecursively(PartitionedRegionDataStore.java:282)
> at org.apache.geode.internal.cache.PartitionedRegion.virtualPut(PartitionedRegion.java:1916)
> ... 7 more
> Caused by:
> org.apache.geode.cache.RegionDestroyedException: Partitioned 
> Region @32c5de26 
> [path='/AsyncEventQueue_index1#_region_PARALLEL_GATEWAY_SENDER_QUEUE'; 
> dataPolicy=

[jira] [Commented] (GEODE-2905) CI failure: org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > searchWithoutIndexShouldReturnError

2017-05-17 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014545#comment-16014545
 ] 

Diane Hardman commented on GEODE-2905:
--

Adding the comment Naba made in the Tracker story:
Changed the assertTrue to assertEquals so that the result string is printed out 
and we have more information if the failure happens again.
Currently we have very little information on what caused the test to fail, and 
we are also unable to reproduce the failure.

> CI failure: 
> org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > 
> searchWithoutIndexShouldReturnError 
> --
>
> Key: GEODE-2905
> URL: https://issues.apache.org/jira/browse/GEODE-2905
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Shelley Lynn Hughes-Godfrey
>Assignee: nabarun
>
> This test failed in Apache Jenkins build #830.
> {noformat}
> org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest > 
> searchWithoutIndexShouldReturnError FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at org.apache.geode.cache.lucene.internal.cli.LuceneIndexCommandsDUnitTest.searchWithoutIndexShouldReturnError(LuceneIndexCommandsDUnitTest.java:462)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2913) Update Lucene documentation

2017-05-11 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007277#comment-16007277
 ] 

Diane Hardman commented on GEODE-2913:
--

I noticed that the following 2 bullets were missing from the list of 
corrections:
 - Add gfsh commands: 'destroy lucene index' and 'describe lucene index'
 - To specify the Lucene index field which represents the entire object, use 
__REGION_VALUE_FIELD

Are these doc updates covered elsewhere?

> Update Lucene documentation
> ---
>
> Key: GEODE-2913
> URL: https://issues.apache.org/jira/browse/GEODE-2913
> Project: Geode
>  Issue Type: Bug
>  Components: docs
>Reporter: Karen Smoler Miller
>Assignee: Karen Smoler Miller
>
> Improvements to the code base that need to be reflected in the docs:
> * Change LuceneService.createIndex to use a factory pattern
> {code:java}
> luceneService.createIndex(region, index, ...)
> {code}
> changes to
> {code:java}
> luceneService.createIndexFactory()
> .setXXX()
> .setYYY()
> .create()
> {code}
> *  Lucene indexes will *NOT* be stored in off-heap memory.
> * Document how to configure an index on accessors - you still need to create 
> the Lucene index before creating the region, even though this member does not 
> hold any region data.
> If the index is not defined on the accessor, an exception like this will be 
> thrown while attempting to create the region:
> {quote}
> [error 2017/05/02 15:19:26.018 PDT  tid=0x1] 
> java.lang.IllegalStateException: Must create Lucene index full_index on 
> region /data because it is defined in another member.
> Exception in thread "main" java.lang.IllegalStateException: Must create 
> Lucene index full_index on region /data because it is defined in another 
> member.
> at org.apache.geode.internal.cache.CreateRegionProcessor$CreateRegionMessage.handleCacheDistributionAdvisee(CreateRegionProcessor.java:478)
> at org.apache.geode.internal.cache.CreateRegionProcessor$CreateRegionMessage.process(CreateRegionProcessor.java:379)
> {quote}
> * Do not need to create a Lucene index on a client with a Proxy cache. The 
> Lucene search will always be done on the server.  Besides, _you can't create 
> an index on a client._
> * If you configure Invalidates for region entries (alone or as part of 
> expiration), these will *NOT* invalidate the Lucene indexes.
> The problem with this is the index contains the keys, but the region doesn't, 
> so the query produces results that don't exist.
> In this test, the first time the query is run, it produces N valid results. 
> The second time it is run it produces N empty results:
> ** load entries
> ** run query
> ** invalidate entries
> ** run query again
> *  Destroying a region will *NOT* automatically destroy any Lucene index 
> associated with that region. Instead, attempting to destroy a region with a 
> Lucene index will throw a colocated region exception. 
> An IllegalStateException is thrown:
> {quote}
> java.lang.IllegalStateException: The parent region [/data] in colocation 
> chain cannot be destroyed, unless all its children 
> [[/cusip_index#_data.files]] are destroyed
> at org.apache.geode.internal.cache.PartitionedRegion.checkForColocatedChildren(PartitionedRegion.java:7231)
> at org.apache.geode.internal.cache.PartitionedRegion.destroyRegion(PartitionedRegion.java:7243)
> at org.apache.geode.internal.cache.AbstractRegion.destroyRegion(AbstractRegion.java:308)
> at DestroyLuceneIndexesAndRegionFunction.destroyRegion(DestroyLuceneIndexesAndRegionFunction.java:46)
> {quote}
> * The process to change a Lucene index using gfsh: 
>   1. export region data
>   2. destroy Lucene index, destroy region 
>   3. create new index, create new region without user-defined business 
> logic callbacks
>   4. import data with option to turn on callbacks (to invoke Lucene Async 
> Event Listener to index the data)
>   5. alter region to add user-defined business logic callbacks
> * Make sure there are no references to replicated regions as they are not 
> supported.
> * Document security implementation and defaults.  If a user has security 
> configured for their cluster, creating a Lucene index requires DATA:MANAGE 
> privilege (similar to OQL), but doing Lucene queries requires DATA:WRITE 
> privilege because a function is called (different from OQL which requires 
> only DATA:READ privilege). Here are all the required privileges for the gfsh 
> commands:
> ** create index requires DATA:MANAGE:region
> ** describe index requires CLUSTER:READ
> ** list
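
The factory-pattern API change described in the first bullet can be sketched in application code roughly as follows. This is a minimal, hedged sketch, not the authoritative sequence: it assumes the Geode 1.2 `LuceneService`/`LuceneIndexFactory` API (`createIndexFactory()`, `addField()`, `create(indexName, regionPath)`) and reuses the `testIndex`/`testRegion` names from the gfsh examples above; it only runs inside a started Geode member.

```java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.lucene.LuceneService;
import org.apache.geode.cache.lucene.LuceneServiceProvider;

public class CreateLuceneIndexExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();
    LuceneService luceneService = LuceneServiceProvider.get(cache);

    // Factory pattern: the index must be defined BEFORE the region is created.
    luceneService.createIndexFactory()
        .addField("field1")                    // field to index
        .create("testIndex", "testRegion");    // indexName, regionPath

    // Creating the region after the index definition attaches the index to it.
    cache.createRegionFactory(RegionShortcut.PARTITION_REDUNDANT)
        .create("testRegion");
  }
}
```

Note that, per the accessor bullet above, this same `createIndexFactory()` call is also required on accessor members that hold no region data.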

[jira] [Updated] (GEODE-2913) Update Lucene documentation

2017-05-11 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2913:
-
Description: 
Improvements to the code base that need to be reflected in the docs:
* Change LuceneService.createIndex to use a factory pattern
{code:java}
luceneService.createIndex(region, index, ...)
{code}
changes to
{code:java}
luceneService.createIndexFactory()
.setXXX()
.setYYY()
.create()
{code}
*  Lucene indexes will *NOT* be stored in off-heap memory.
* Document how to configure an index on accessors - you still need to create 
the Lucene index before creating the region, even though this member does not 
hold any region data.
If the index is not defined on the accessor, an exception like this will be 
thrown while attempting to create the region:
{quote}
[error 2017/05/02 15:19:26.018 PDT  tid=0x1] 
java.lang.IllegalStateException: Must create Lucene index full_index on region 
/data because it is defined in another member.
Exception in thread "main" java.lang.IllegalStateException: Must create Lucene 
index full_index on region /data because it is defined in another member.
at org.apache.geode.internal.cache.CreateRegionProcessor$CreateRegionMessage.handleCacheDistributionAdvisee(CreateRegionProcessor.java:478)
at org.apache.geode.internal.cache.CreateRegionProcessor$CreateRegionMessage.process(CreateRegionProcessor.java:379)
{quote}
* Do not need to create a Lucene index on a client with a Proxy cache. The 
Lucene search will always be done on the server.  Besides, _you can't create an 
index on a client._
* If you configure Invalidates for region entries (alone or as part of 
expiration), these will *NOT* invalidate the Lucene indexes.
The problem with this is the index contains the keys, but the region doesn't, 
so the query produces results that don't exist.
In this test, the first time the query is run, it produces N valid results. The 
second time it is run it produces N empty results:
** load entries
** run query
** invalidate entries
** run query again
*  Destroying a region will *NOT* automatically destroy any Lucene index 
associated with that region. Instead, attempting to destroy a region with a 
Lucene index will throw a colocated region exception. 
An IllegalStateException is thrown:
{quote}
java.lang.IllegalStateException: The parent region [/data] in colocation chain 
cannot be destroyed, unless all its children [[/cusip_index#_data.files]] are 
destroyed
at org.apache.geode.internal.cache.PartitionedRegion.checkForColocatedChildren(PartitionedRegion.java:7231)
at org.apache.geode.internal.cache.PartitionedRegion.destroyRegion(PartitionedRegion.java:7243)
at org.apache.geode.internal.cache.AbstractRegion.destroyRegion(AbstractRegion.java:308)
at DestroyLuceneIndexesAndRegionFunction.destroyRegion(DestroyLuceneIndexesAndRegionFunction.java:46)
{quote}
* The process to change a Lucene index using gfsh: 
  1. export region data
  2. destroy Lucene index, destroy region 
  3. create new index, create new region without user-defined business 
logic callbacks
  4. import data with option to turn on callbacks (to invoke Lucene Async 
Event Listener to index the data)
  5. alter region to add user-defined business logic callbacks
* Make sure there are no references to replicated regions as they are not 
supported.
* Document security implementation and defaults.  If a user has security 
configured for their cluster, creating a Lucene index requires DATA:MANAGE 
privilege (similar to OQL), but doing Lucene queries requires DATA:WRITE 
privilege because a function is called (different from OQL which requires only 
DATA:READ privilege). Here are all the required privileges for the gfsh 
commands:
** create index requires DATA:MANAGE:region
** describe index requires CLUSTER:READ
** list indexes requires CLUSTER:READ
** search index requires DATA:WRITE
** destroy index requires DATA:MANAGE:region
* A user cannot create a Lucene index on a region that has eviction configured 
with local destroy. If using Lucene indexing, eviction can only be configured 
with overflow to disk. In this case, only the region data is overflowed to 
disk, *NOT* the Lucene index. An UnsupportedOperationException is thrown:
{quote}
[error 2017/05/02 16:12:32.461 PDT  tid=0x1] 
java.lang.UnsupportedOperationException: Lucene indexes on regions with 
eviction and action local destroy are not supported
Exception in thread "main" java.lang.UnsupportedOperationException: Lucene 
indexes on regions with eviction and action local destroy are not supported
at org.apache.geode.cache.lucene.internal.LuceneRegionListener.beforeCreate(LuceneRegionListener.java:85)
at org.apache.geode.internal.cache.GemFireCacheImpl.invokeRegionBefore(GemFireCacheImpl.java:3154)
at org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFir

[jira] [Commented] (GEODE-2518) Developer can pass Collections via REST API

2017-05-08 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001793#comment-16001793
 ] 

Diane Hardman commented on GEODE-2518:
--

Note: This came up today in my meeting with Humana regarding Lucene status. 
This is a showstopper issue for them (Anup and Christian Ceballos). They were 
looking for an update on status. FYI.

> Developer can pass Collections via REST API
> ---
>
> Key: GEODE-2518
> URL: https://issues.apache.org/jira/browse/GEODE-2518
> Project: Geode
>  Issue Type: Wish
>  Components: rest (dev)
>Reporter: Addison
>
> Requesting that the Gemfire REST Api allow the passing of a collection in the 
> JSON payload.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2830) Required permission for executing a function should be DATA:WRITE

2017-04-26 Thread Diane Hardman (JIRA)
Diane Hardman created GEODE-2830:


 Summary: Required permission for executing a function should be 
DATA:WRITE
 Key: GEODE-2830
 URL: https://issues.apache.org/jira/browse/GEODE-2830
 Project: Geode
  Issue Type: Bug
  Components: docs
Reporter: Diane Hardman


The required permission for executing a function as listed in the gfsh command 
table (2nd table) is wrong in the docs:
http://gemfire.docs.pivotal.io/geode/managing/security/implementing_authorization.html

It is listed as DATA:MANAGE in the gfsh command table, but should be DATA:WRITE.
The correct permission is listed in the client operation table above the gfsh 
table.
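
For reference, the client operation the DATA:WRITE permission applies to is a plain function execution. The sketch below is a hedged illustration, not code from this issue: it assumes the standard Geode `FunctionService.onRegion(...).execute(id)` API, and `"myFunction"` is a placeholder id for a function already registered on the servers; it requires a connected client cache.

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Execution;
import org.apache.geode.cache.execute.FunctionService;
import org.apache.geode.cache.execute.ResultCollector;

public class ExecuteFunctionExample {
  // Under integrated security, this call needs DATA:WRITE on the caller,
  // not DATA:MANAGE as the gfsh command table in the docs currently states.
  static Object executeOnRegion(Region<?, ?> region, String functionId) {
    Execution execution = FunctionService.onRegion(region);
    ResultCollector<?, ?> results = execution.execute(functionId);
    return results.getResult();
  }
}
```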



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-04-24 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2605:
-
Description: 
I have configured a small cluster with security and am testing the privileges I 
need for creating a Lucene index and then executing a query/search using 
Lucene. 
I have confirmed that DATA:MANAGE privilege allows me to create a lucene index 
(similar to creating OQL indexes).
I assumed I needed DATA:WRITE privilege to execute 'search lucene' because the 
implementation uses a function. Instead, I am getting an error that I need 
CLUSTER:READ privilege. I don't know why.

As an aside, we may want to document that all DATA privileges automatically 
include CLUSTER:READ as I found I could create indexes with DATA:MANAGE, but 
could not list the indexes I created without CLUSTER:READ... go figure.

  was:
I have configured a small cluster with security and am testing the privileges I 
need for creating a Lucene index and then executing a query/search using 
Lucene. 
I have confirmed that DATA:MANAGE privilege allows me to create a lucene index 
(similar to creating OQL indexes).
I assumed I needed DATA:WRITE privilege to execute 'search lucene' because the 
implementation uses a function. Instead, I am getting an error that I need 
CLUSTER:READ privilege. I don't know why.

As an aside, we may want to document that all DATA privileges automatically 
include CLUSTER:READ as I found I could create indexes with DATA:WRITE, but 
could not list the indexes I created without CLUSTER:READ... go figure.


> Unable to do a Lucene query without CLUSTER:READ privilege
> --
>
> Key: GEODE-2605
> URL: https://issues.apache.org/jira/browse/GEODE-2605
> Project: Geode
>  Issue Type: Bug
>  Components: docs, lucene, security
>    Reporter: Diane Hardman
>Assignee: Barry Oglesby
> Fix For: 1.2.0
>
> Attachments: security.json
>
>
> I have configured a small cluster with security and am testing the privileges 
> I need for creating a Lucene index and then executing a query/search using 
> Lucene. 
> I have confirmed that DATA:MANAGE privilege allows me to create a lucene 
> index (similar to creating OQL indexes).
> I assumed I needed DATA:WRITE privilege to execute 'search lucene' because 
> the implementation uses a function. Instead, I am getting an error that I 
> need CLUSTER:READ privilege. I don't know why.
> As an aside, we may want to document that all DATA privileges automatically 
> include CLUSTER:READ as I found I could create indexes with DATA:MANAGE, but 
> could not list the indexes I created without CLUSTER:READ... go figure.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2822) gfsh commands 'list index' and 'describe index' should not require CLUSTER:READ permission

2017-04-24 Thread Diane Hardman (JIRA)
Diane Hardman created GEODE-2822:


 Summary: gfsh commands 'list index' and 'describe index' should not 
require CLUSTER:READ permission
 Key: GEODE-2822
 URL: https://issues.apache.org/jira/browse/GEODE-2822
 Project: Geode
  Issue Type: Bug
  Components: gfsh, security
Reporter: Diane Hardman


To create either an OQL index or a Lucene index requires DATA:MANAGE 
permission. Once I've created an index, I should be able to get a list of the 
indexes and/or a description of the indexes with the same permission.

Instead, today, listing or describing indexes requires CLUSTER:READ permission. 
This would require every developer with DATA:MANAGE permission to also have 
CLUSTER:READ permission to examine what they were able to create. This doesn't 
make sense to me.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2689) If a region containing a Lucene index is created in one group and altered in another, a member in the other group will fail to start

2017-04-12 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966756#comment-15966756
 ] 

Diane Hardman commented on GEODE-2689:
--

Barry,
When you say "the same test with OQL works", do you mean that the member is 
started WITH the index or without the index, due to the 
IndexNameConflictException?

My preference is to mimic as much as the OQL behavior as possible. I like your 
suggestion of throwing a  LuceneIndexExistsException and verifying that the 
indexes are the same. Is this done with OQL?

Thanks!

> If a region containing a Lucene index is created in one group and altered in 
> another, a member in the other group will fail to start
> 
>
> Key: GEODE-2689
> URL: https://issues.apache.org/jira/browse/GEODE-2689
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
>
> Steps to reproduce:
> - create lucene index --name=full_index --region=data --field=field1
> - create region --name=data --type=PARTITION_REDUNDANT
> - alter region --name=data --cache-listener=TestCacheListener --group=group1
> At this point, the cluster config xml looks like:
> {noformat}
> [info 2017/03/15 17:04:17.375 PDT server3  tid=0x1] 
>   ***
>   Configuration for  'cluster'
>   
>   Jar files to deployed
>   
> <cache xmlns="http://geode.apache.org/schema/cache"
>     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false"
>     is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
>     version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache
>     http://geode.apache.org/schema/cache/cache-1.0.xsd">
>   <region name="data">
>     <region-attributes data-policy="partition">
>     </region-attributes>
>     <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene"
>         name="full_index">
>       <lucene:field
>           analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer"
>           name="field1"/>
>     </lucene:index>
>   </region>
> </cache>
>   ***
>   Configuration for  'group1'
>   
>   Jar files to deployed
>   
>   <cache xmlns="http://geode.apache.org/schema/cache"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false"
> is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
> version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache
> http://geode.apache.org/schema/cache/cache-1.0.xsd">
>     <region name="data">
>       <region-attributes data-policy="partition">
>         <cache-listener>
>           <class-name>TestCacheListener</class-name>
>         </cache-listener>
>       </region-attributes>
>       <lucene:index xmlns:lucene="http://geode.apache.org/schema/lucene"
> name="full_index">
>         <lucene:field
> analyzer="org.apache.lucene.analysis.standard.StandardAnalyzer"
> name="field1"/>
>       </lucene:index>
>     </region>
>   </cache>
> {noformat}
> If a member is started in the group (group1 in this case), it will fail to 
> start with the following error:
> {noformat}
> [error 2017/03/15 17:04:19.715 PDT  tid=0x1] Lucene index already 
> exists in region
> Exception in thread "main" java.lang.IllegalArgumentException: Lucene index 
> already exists in region
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl.registerDefinedIndex(LuceneServiceImpl.java:201)
>   at 
> org.apache.geode.cache.lucene.internal.LuceneServiceImpl.createIndex(LuceneServiceImpl.java:154)
>   at 
> org.apache.geode.cache.lucene.internal.xml.LuceneIndexCreation.beforeCreate(LuceneIndexCreation.java:85)
>   at 
> org.apache.geode.internal.cache.extension.SimpleExtensionPoint.beforeCreate(SimpleExtensionPoint.java:77)
>   at 
> org.apache.geode.internal.cache.xmlcache.RegionCreation.createRoot(RegionCreation.java:252)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheCreation.initializeRegions(CacheCreation.java:544)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheCreation.create(CacheCreation.java:495)
>   at 
> org.apache.geode.internal.cache.xmlcache.CacheXmlParser.create(CacheXmlParser.java:343)
>   at 
> org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4479)
>   at 
> org.apache.geode.internal.cache.ClusterConfigurationLoader.applyClusterXmlConfiguration(ClusterConfigurationLoader.java:129)
>   at 
> org.apache.geode.internal.cach

[jira] [Updated] (GEODE-2703) Improve error message that Lucene queries are not supported in the context of a transaction

2017-04-07 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2703:
-
Description: 
We currently do not support Lucene queries in the context of a transaction. The 
exception thrown, however, may be confusing to the user.

ERROR org.apache.geode.cache.TransactionException: Function inside a 
transaction cannot execute on more than one node

org.apache.geode.cache.TransactionException: Function inside a transaction 
cannot execute on more than one node
at 
org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.validateExecution(PartitionedRegionFunctionExecutor.java:344)
at 
org.apache.geode.internal.cache.PartitionedRegion.executeOnAllBuckets(PartitionedRegion.java:3840)
at 
org.apache.geode.internal.cache.PartitionedRegion.executeFunction(PartitionedRegion.java:3353)
at 
org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.executeFunction(PartitionedRegionFunctionExecutor.java:228)
at 
org.apache.geode.internal.cache.execute.AbstractExecution.execute(AbstractExecution.java:376)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findTopEntries(LuceneQueryImpl.java:115)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findPages(LuceneQueryImpl.java:95)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findResults(LuceneQueryImpl.java:81)
at lucene.LuceneTest.executeLuceneQuery(LuceneTest.java:154)
at parReg.ParRegTest.doEntryOperations(ParRegTest.java:2929)
at parReg.ParRegTest.doRROpsAndVerify(ParRegTest.java:1709)
at parReg.ParRegTest.HydraTask_doRROpsAndVerify(ParRegTest.java:958)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at hydra.MethExecutor.execute(MethExecutor.java:182)
at hydra.MethExecutor.execute(MethExecutor.java:150)
at hydra.TestTask.execute(TestTask.java:192)
at hydra.RemoteTestModule$1.run(RemoteTestModule.java:212)



  was:
We currently do not support Lucene queries in the context of a transaction. The 
exception thrown, however, may be confusing to the user.

CLIENT vm_0_thr_0_client1_rs-GEM1332-client-2_20970
TASK[0] parReg.ParRegTest.HydraTask_doRROpsAndVerify
ERROR org.apache.geode.cache.TransactionException: Function inside a 
transaction cannot execute on more than one node

org.apache.geode.cache.TransactionException: Function inside a transaction 
cannot execute on more than one node
at 
org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.validateExecution(PartitionedRegionFunctionExecutor.java:344)
at 
org.apache.geode.internal.cache.PartitionedRegion.executeOnAllBuckets(PartitionedRegion.java:3840)
at 
org.apache.geode.internal.cache.PartitionedRegion.executeFunction(PartitionedRegion.java:3353)
at 
org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.executeFunction(PartitionedRegionFunctionExecutor.java:228)
at 
org.apache.geode.internal.cache.execute.AbstractExecution.execute(AbstractExecution.java:376)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findTopEntries(LuceneQueryImpl.java:115)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findPages(LuceneQueryImpl.java:95)
at 
org.apache.geode.cache.lucene.internal.LuceneQueryImpl.findResults(LuceneQueryImpl.java:81)
at lucene.LuceneTest.executeLuceneQuery(LuceneTest.java:154)
at parReg.ParRegTest.doEntryOperations(ParRegTest.java:2929)
at parReg.ParRegTest.doRROpsAndVerify(ParRegTest.java:1709)
at parReg.ParRegTest.HydraTask_doRROpsAndVerify(ParRegTest.java:958)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at hydra.MethExecutor.execute(MethExecutor.java:182)
at hydra.MethExecutor.execute(MethExecutor.java:150)
at hydra.TestTask.execute(TestTask.java:192)
at hydra.RemoteTestModule$1.run(RemoteTestModule.java:212)




> Improve error message that Lucene queries are not supported in the context of 
> a transaction
> ---
>
> Key: GEODE-2703
> URL: https://issues.apache.org/jira/browse/GEODE-2703
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Diane Hardman
>
> We currently do not support Lucene queries in the context of a transaction. 
> The exception thrown, however, may be confusing to the user.
> ERROR org.apache.geode.cache.TransactionException: Function inside a 
>

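The error-message improvement requested in GEODE-2703 can be sketched as catching the generic function-execution exception and rethrowing it with a message that names the actual limitation. This is a hedged, self-contained sketch: `LuceneTxnGuard`, `findResults`, and the boolean `inTxn` flag are hypothetical stand-ins (the real code would consult the transaction manager), and the local `TransactionException` merely models `org.apache.geode.cache.TransactionException`.

```java
public class LuceneTxnGuard {
    /** Stand-in for org.apache.geode.cache.TransactionException. */
    public static class TransactionException extends RuntimeException {
        public TransactionException(String msg) { super(msg); }
    }

    /** Simulates the Lucene query path; inTxn models "a transaction is active". */
    public static String findResults(boolean inTxn) {
        try {
            if (inTxn) {
                // The confusing message users currently see:
                throw new TransactionException(
                    "Function inside a transaction cannot execute on more than one node");
            }
            return "hits";
        } catch (TransactionException cause) {
            // Rethrow with a message that states the real, user-facing limitation,
            // preserving the original exception as the cause.
            throw new UnsupportedOperationException(
                "Lucene queries cannot be executed within a transaction", cause);
        }
    }
}
```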
[jira] [Resolved] (GEODE-2669) Add gfsh command to destroy lucene index

2017-03-15 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman resolved GEODE-2669.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

This command has been implemented on develop (Geode v1.2.0).

> Add gfsh command to destroy lucene index
> 
>
> Key: GEODE-2669
> URL: https://issues.apache.org/jira/browse/GEODE-2669
> Project: Geode
>  Issue Type: Sub-task
>  Components: lucene
>Reporter: Swapnil Bawaskar
>    Assignee: Diane Hardman
> Fix For: 1.2.0
>
>
> Currently, there is a {{create lucene index}} gfsh command, however, there is 
> no corresponding {{destroy lucene index}} command.





[jira] [Assigned] (GEODE-2669) Add gfsh command to destroy lucene index

2017-03-15 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman reassigned GEODE-2669:


Assignee: Diane Hardman

> Add gfsh command to destroy lucene index
> 
>
> Key: GEODE-2669
> URL: https://issues.apache.org/jira/browse/GEODE-2669
> Project: Geode
>  Issue Type: Sub-task
>  Components: lucene
>Reporter: Swapnil Bawaskar
>    Assignee: Diane Hardman
>
> Currently, there is a {{create lucene index}} gfsh command, however, there is 
> no corresponding {{destroy lucene index}} command.





[jira] [Updated] (GEODE-2553) After deleting and recreating my Lucene index and region, my Lucene query hung.

2017-03-09 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2553:
-
Summary: After deleting and recreating my Lucene index and region, my 
Lucene query hung.  (was: After deleting and recreating my Lucene index and 
region, my Lucene query should be successful.)

> After deleting and recreating my Lucene index and region, my Lucene query 
> hung.
> ---
>
> Key: GEODE-2553
> URL: https://issues.apache.org/jira/browse/GEODE-2553
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Diane Hardman
> Fix For: 1.2.0
>
> Attachments: server50505.log, stack.log
>
>
> While manually testing in gfsh the process of deleting Lucene indexes, 
> deleting the region, creating new indexes and a new empty region, I was able 
> to hang gfsh while doing a Lucene search on the new region with no data.
> Here are the steps I used:
>     _________________________     __
>    / _____/ ______/ ______/ /____/ /
>   / /  __/ /___  /_____  / _____  /
>  / /__/ / ____/  _____/ / /    / /
> /______/_/      /______/_/    /_/    1.2.0-SNAPSHOT
> Monitor and Manage Apache Geode
> gfsh>start locator --name=locator1 --port=12345
> gfsh>start server --name=server50505 --server-port=50505 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> gfsh>create lucene index --name=testIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>put --key=1 --value=value1 --region=testRegion
> gfsh>put --key=2 --value=value2 --region=testRegion
> gfsh>put --key=3 --value=value3 --region=testRegion
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>destroy lucene index --region=/testRegion --name=testIndex
> gfsh>list lucene indexes --with-stats
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> 
> gfsh>destroy lucene index --region=/testRegion
> gfsh>list lucene indexes --with-stats
> gfsh>destroy region --name=/testRegion
> gfsh>create lucene index --name=testIndex --region=testRegion 
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> The gfsh process hangs at this point.
> I'll attach the stacktrace for the server.





[jira] [Updated] (GEODE-2553) After deleting and recreating my Lucene index and region, my Lucene query should be successful.

2017-03-09 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2553:
-
Summary: After deleting and recreating my Lucene index and region, my 
Lucene query should be successful.  (was: Lucene search hangs on recreated 
region with no data)

> After deleting and recreating my Lucene index and region, my Lucene query 
> should be successful.
> ---
>
> Key: GEODE-2553
> URL: https://issues.apache.org/jira/browse/GEODE-2553
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Diane Hardman
> Fix For: 1.2.0
>
> Attachments: server50505.log, stack.log
>
>
> While manually testing in gfsh the process of deleting Lucene indexes, 
> deleting the region, creating new indexes and a new empty region, I was able 
> to hang gfsh while doing a Lucene search on the new region with no data.
> Here are the steps I used:
>     _________________________     __
>    / _____/ ______/ ______/ /____/ /
>   / /  __/ /___  /_____  / _____  /
>  / /__/ / ____/  _____/ / /    / /
> /______/_/      /______/_/    /_/    1.2.0-SNAPSHOT
> Monitor and Manage Apache Geode
> gfsh>start locator --name=locator1 --port=12345
> gfsh>start server --name=server50505 --server-port=50505 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> gfsh>create lucene index --name=testIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>put --key=1 --value=value1 --region=testRegion
> gfsh>put --key=2 --value=value2 --region=testRegion
> gfsh>put --key=3 --value=value3 --region=testRegion
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>destroy lucene index --region=/testRegion --name=testIndex
> gfsh>list lucene indexes --with-stats
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> 
> gfsh>destroy lucene index --region=/testRegion
> gfsh>list lucene indexes --with-stats
> gfsh>destroy region --name=/testRegion
> gfsh>create lucene index --name=testIndex --region=testRegion 
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> The gfsh process hangs at this point.
> I'll attach the stacktrace for the server.





[jira] [Updated] (GEODE-2617) LuceneResultStruct should be Serializable

2017-03-08 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2617:
-
Component/s: lucene

> LuceneResultStruct should be Serializable
> -
>
> Key: GEODE-2617
> URL: https://issues.apache.org/jira/browse/GEODE-2617
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>
> Make LuceneResultStruct Serializable so that customers do not have to 
> define their own Serializable class to hold results.





[jira] [Comment Edited] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-03-07 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900383#comment-15900383
 ] 

Diane Hardman edited comment on GEODE-2605 at 3/7/17 11:40 PM:
---

Here are the gfsh commands to reproduce this behavior:
In first VM using gfsh, start up the cluster with 1 locator and 1 server 
configured with security as ‘super-user’ (all cluster and data privileges):
  start locator --name=loc2 
--J=-Dgemfire.security-manager=org.apache.geode.examples.security.ExampleSecurityManager
 --classpath=.
  start server --name=serv2 --start-rest-api --http-service-port=8080 
--http-service-bind-address=localhost --locators=localhost[10334] --classpath=. 
--user=super-user
  connect
  list members

In second VM using gfsh, connect to running cluster as ‘dataAdmin’ (all data 
privileges):
  connect
  create lucene index --name=testIndex --region=testRegion 
--field=__REGION_VALUE_FIELD
  list lucene indexes --with-stats=true   NOTE: This will fail as it 
needs CLUSTER:READ privilege. I can however execute this command on the first 
VM 
  create region --name=testRegion --type=PARTITION_PERSISTENT
  put --key=1 --value=value1 --region=testRegion
  put --key=2 --value=value2 --region=testRegion
  put --key=3 --value=value3 --region=testRegion
  search lucene --name=testIndex --region=testRegion --queryStrings=value* 
--defaultField=__REGION_VALUE_FIELD
  NOTE: This fails with message that I need CLUSTER:READ privilege 

The Lucene query will execute a function so I assumed that I needed DATA:WRITE 
privilege and am surprised that I need CLUSTER:READ.
Here is a link to the Lucene Integration spec, illustrating the implementation: 
https://cwiki.apache.org/confluence/display/GEODE/Text+Search+With+Lucene


was (Author: dhardman):
Here are the gfsh commands to reproduce this behavior:
In first VM using gfsh, start up the cluster with 1 locator and 1 server 
configured with security as ‘super-user’ (all cluster and data privileges):
  start locator --name=loc2 
--J=-Dgemfire.security-manager=org.apache.geode.examples.security.ExampleSecurityManager
 --classpath=.
  start server --name=serv2 --start-rest-api --http-service-port=8080 
--http-service-bind-address=localhost --locators=localhost[10334] --classpath=. 
--user=super-user
  connect
  list members

In second VM using gfsh, connect to running cluster as ‘dataAdmin’ (all data 
privileges):
  connect
  create lucene index --name=testIndex --region=testRegion 
--field=__REGION_VALUE_FIELD
  list lucene indexes --with-stats=true   NOTE: This will fail as it needs 
CLUSTER:READ privilege. I can however execute this command on the first VM 
  create region --name=testRegion --type=PARTITION_PERSISTENT
  put --key=1 --value=value1 --region=testRegion
  put --key=2 --value=value2 --region=testRegion
  put --key=3 --value=value3 --region=testRegion
  search lucene --name=testIndex --region=testRegion --queryStrings=value* 
--defaultField=__REGION_VALUE_FIELD
  NOTE: This fails with message that I need CLUSTER:READ privilege 

The Lucene query will execute a function so I assumed that I needed DATA:WRITE 
privilege and am surprised that I need CLUSTER:READ.
Here is a link to the Lucene Integration spec, illustrating the implementation: 
https://cwiki.apache.org/confluence/display/GEODE/Text+Search+With+Lucene

> Unable to do a Lucene query without CLUSTER:READ privilege
> --
>
> Key: GEODE-2605
> URL: https://issues.apache.org/jira/browse/GEODE-2605
> Project: Geode
>  Issue Type: Bug
>  Components: lucene, security
>    Reporter: Diane Hardman
> Attachments: security.json
>
>
> I have configured a small cluster with security and am testing the privileges 
> I need for creating a Lucene index and then executing a query/search using 
> Lucene. 
> I have confirmed that DATA:MANAGE privilege allows me to create a lucene 
> index (similar to creating OQL indexes).
> I assumed I needed DATA:WRITE privilege to execute 'search lucene' because 
> the implementation uses a function. Instead, I am getting an error that I 
> need CLUSTER:READ privilege. I don't know why.
> As an aside, we may want to document that all DATA privileges automatically 
> include CLUSTER:READ as I found I could create indexes with DATA:WRITE, but 
> could not list the indexes I created without CLUSTER:READ... go figure.





[jira] [Updated] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-03-07 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2605:
-
Attachment: security.json

The attached security.json file contains the roles and users with different 
privileges assigned to use with this test. This file must be placed under 
./loc2 and ./serv2 directories when security is configured.

> Unable to do a Lucene query without CLUSTER:READ privilege
> --
>
> Key: GEODE-2605
> URL: https://issues.apache.org/jira/browse/GEODE-2605
> Project: Geode
>  Issue Type: Bug
>  Components: lucene, security
>    Reporter: Diane Hardman
> Attachments: security.json
>
>
> I have configured a small cluster with security and am testing the privileges 
> I need for creating a Lucene index and then executing a query/search using 
> Lucene. 
> I have confirmed that DATA:MANAGE privilege allows me to create a lucene 
> index (similar to creating OQL indexes).
> I assumed I needed DATA:WRITE privilege to execute 'search lucene' because 
> the implementation uses a function. Instead, I am getting an error that I 
> need CLUSTER:READ privilege. I don't know why.
> As an aside, we may want to document that all DATA privileges automatically 
> include CLUSTER:READ as I found I could create indexes with DATA:WRITE, but 
> could not list the indexes I created without CLUSTER:READ... go figure.





[jira] [Commented] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-03-07 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900383#comment-15900383
 ] 

Diane Hardman commented on GEODE-2605:
--

Here are the gfsh commands to reproduce this behavior:
In first VM using gfsh, start up the cluster with 1 locator and 1 server 
configured with security as ‘super-user’ (all cluster and data privileges):
  start locator --name=loc2 
--J=-Dgemfire.security-manager=org.apache.geode.examples.security.ExampleSecurityManager
 --classpath=.
  start server --name=serv2 --start-rest-api --http-service-port=8080 
--http-service-bind-address=localhost --locators=localhost[10334] --classpath=. 
--user=super-user
  connect
  list members

In second VM using gfsh, connect to running cluster as ‘dataAdmin’ (all data 
privileges):
  connect
  create lucene index --name=testIndex --region=testRegion 
--field=__REGION_VALUE_FIELD
  list lucene indexes --with-stats=true   NOTE: This will fail as it needs 
CLUSTER:READ privilege. I can however execute this command on the first VM 
  create region --name=testRegion --type=PARTITION_PERSISTENT
  put --key=1 --value=value1 --region=testRegion
  put --key=2 --value=value2 --region=testRegion
  put --key=3 --value=value3 --region=testRegion
  search lucene --name=testIndex --region=testRegion --queryStrings=value* 
--defaultField=__REGION_VALUE_FIELD
  NOTE: This fails with message that I need CLUSTER:READ privilege 

The Lucene query will execute a function so I assumed that I needed DATA:WRITE 
privilege and am surprised that I need CLUSTER:READ.
Here is a link to the Lucene Integration spec, illustrating the implementation: 
https://cwiki.apache.org/confluence/display/GEODE/Text+Search+With+Lucene

> Unable to do a Lucene query without CLUSTER:READ privilege
> --
>
> Key: GEODE-2605
> URL: https://issues.apache.org/jira/browse/GEODE-2605
> Project: Geode
>  Issue Type: Bug
>  Components: lucene, security
>    Reporter: Diane Hardman
>
> I have configured a small cluster with security and am testing the privileges 
> I need for creating a Lucene index and then executing a query/search using 
> Lucene. 
> I have confirmed that DATA:MANAGE privilege allows me to create a lucene 
> index (similar to creating OQL indexes).
> I assumed I needed DATA:WRITE privilege to execute 'search lucene' because 
> the implementation uses a function. Instead, I am getting an error that I 
> need CLUSTER:READ privilege. I don't know why.
> As an aside, we may want to document that all DATA privileges automatically 
> include CLUSTER:READ as I found I could create indexes with DATA:WRITE, but 
> could not list the indexes I created without CLUSTER:READ... go figure.




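The behavior reproduced above can be modeled with a toy privilege check. This is a sketch of the *observed* semantics only, not Geode's SecurityManager API: privileges are exact `RESOURCE:OPERATION` pairs, and a grant on the DATA resource never implies anything on CLUSTER, so a user holding every DATA privilege is still denied an operation gated on CLUSTER:READ. The class and method names here are hypothetical.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PrivilegeModel {
    private final Set<String> granted = new HashSet<>();

    public PrivilegeModel(String... privileges) {
        granted.addAll(Arrays.asList(privileges));
    }

    /** required has the form "RESOURCE:OPERATION", e.g. "CLUSTER:READ".
     *  Exact match only: DATA grants do not cross into the CLUSTER resource. */
    public boolean isAuthorized(String required) {
        return granted.contains(required);
    }
}
```

With this model, a `dataAdmin` holding DATA:MANAGE, DATA:READ, and DATA:WRITE can create indexes and put entries, but `search lucene` fails until CLUSTER:READ is added, matching the comment above that adding CLUSTER:READ made the query succeed.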

[jira] [Created] (GEODE-2613) Incorrect security privilege listed for 'execute function'

2017-03-07 Thread Diane Hardman (JIRA)
Diane Hardman created GEODE-2613:


 Summary: Incorrect security privilege listed for 'execute function'
 Key: GEODE-2613
 URL: https://issues.apache.org/jira/browse/GEODE-2613
 Project: Geode
  Issue Type: Bug
  Components: docs, security
Reporter: Diane Hardman


Under 'Implementing Authorization' are two tables: one for client operations and 
another for general gfsh commands. 'execute function' is listed in both tables 
but with different privilege requirements.

The correct privilege required is DATA:WRITE for 'execute function' in either 
context.





[jira] [Commented] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-03-06 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898640#comment-15898640
 ] 

Diane Hardman commented on GEODE-2605:
--

When I add CLUSTER:READ to my dataAdmin user (which already had DATA:MANAGE, 
DATA:READ, and DATA:WRITE privileges) the Lucene query succeeds.

> Unable to do a Lucene query without CLUSTER:READ privilege
> --
>
> Key: GEODE-2605
> URL: https://issues.apache.org/jira/browse/GEODE-2605
> Project: Geode
>  Issue Type: Bug
>  Components: lucene, security
>    Reporter: Diane Hardman
>
> I have configured a small cluster with security and am testing the privileges 
> I need for creating a Lucene index and then executing a query/search using 
> Lucene. 
> I have confirmed that DATA:MANAGE privilege allows me to create a lucene 
> index (similar to creating OQL indexes).
> I assumed I needed DATA:WRITE privilege to execute 'search lucene' because 
> the implementation uses a function. Instead, I am getting an error that I 
> need CLUSTER:READ privilege. I don't know why.
> As an aside, we may want to document that all DATA privileges automatically 
> include CLUSTER:READ as I found I could create indexes with DATA:WRITE, but 
> could not list the indexes I created without CLUSTER:READ... go figure.





[jira] [Created] (GEODE-2605) Unable to do a Lucene query without CLUSTER:READ privilege

2017-03-06 Thread Diane Hardman (JIRA)
Diane Hardman created GEODE-2605:


 Summary: Unable to do a Lucene query without CLUSTER:READ privilege
 Key: GEODE-2605
 URL: https://issues.apache.org/jira/browse/GEODE-2605
 Project: Geode
  Issue Type: Bug
  Components: lucene, security
Reporter: Diane Hardman


I have configured a small cluster with security and am testing the privileges I 
need for creating a Lucene index and then executing a query/search using 
Lucene. 
I have confirmed that DATA:MANAGE privilege allows me to create a lucene index 
(similar to creating OQL indexes).
I assumed I needed DATA:WRITE privilege to execute 'search lucene' because the 
implementation uses a function. Instead, I am getting an error that I need 
CLUSTER:READ privilege. I don't know why.

As an aside, we may want to document that all DATA privileges automatically 
include CLUSTER:READ as I found I could create indexes with DATA:WRITE, but 
could not list the indexes I created without CLUSTER:READ... go figure.





[jira] [Updated] (GEODE-2553) Lucene search hangs on recreated region with no data

2017-02-27 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2553:
-
Attachment: server50505.log

This is the server logfile generated by GemFire.

> Lucene search hangs on recreated region with no data
> 
>
> Key: GEODE-2553
> URL: https://issues.apache.org/jira/browse/GEODE-2553
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Diane Hardman
> Fix For: 1.2.0
>
> Attachments: server50505.log, stack.log
>
>
> While manually testing in gfsh the process of deleting Lucene indexes, 
> deleting the region, creating new indexes and a new empty region, I was able 
> to hang gfsh while doing a Lucene search on the new region with no data.
> Here are the steps I used:
>     _________________________     __
>    / _____/ ______/ ______/ /____/ /
>   / /  __/ /___  /_____  / _____  /
>  / /__/ / ____/  _____/ / /    / /
> /______/_/      /______/_/    /_/    1.2.0-SNAPSHOT
> Monitor and Manage Apache Geode
> gfsh>start locator --name=locator1 --port=12345
> gfsh>start server --name=server50505 --server-port=50505 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> gfsh>create lucene index --name=testIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>put --key=1 --value=value1 --region=testRegion
> gfsh>put --key=2 --value=value2 --region=testRegion
> gfsh>put --key=3 --value=value3 --region=testRegion
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>destroy lucene index --region=/testRegion --name=testIndex
> gfsh>list lucene indexes --with-stats
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> 
> gfsh>destroy lucene index --region=/testRegion
> gfsh>list lucene indexes --with-stats
> gfsh>destroy region --name=/testRegion
> gfsh>create lucene index --name=testIndex --region=testRegion 
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> The gfsh process hangs at this point.
> I'll attach the stacktrace for the server.





[jira] [Updated] (GEODE-2553) Lucene search hangs on recreated region with no data

2017-02-27 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2553:
-
Attachment: stack.log

This is the stack trace acquired by running jstack on the PID for the server. 

> Lucene search hangs on recreated region with no data
> 
>
> Key: GEODE-2553
> URL: https://issues.apache.org/jira/browse/GEODE-2553
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>    Reporter: Diane Hardman
> Fix For: 1.2.0
>
> Attachments: stack.log
>
>
> While manually testing in gfsh the process of deleting Lucene indexes, 
> deleting the region, creating new indexes and a new empty region, I was able 
> to hang gfsh while doing a Lucene search on the new region with no data.
> Here are the steps I used:
> [gfsh ASCII-art startup banner]  1.2.0-SNAPSHOT
> Monitor and Manage Apache Geode
> gfsh>start locator --name=locator1 --port=12345
> gfsh>start server --name=server50505 --server-port=50505 
> --locators=localhost[12345] --start-rest-api --http-service-port=8080 
> --http-service-bind-address=localhost
> gfsh>create lucene index --name=testIndex --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>put --key=1 --value=value1 --region=testRegion
> gfsh>put --key=2 --value=value2 --region=testRegion
> gfsh>put --key=3 --value=value3 --region=testRegion
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>destroy lucene index --region=/testRegion --name=testIndex
> gfsh>list lucene indexes --with-stats
> gfsh>search lucene --name=testIndex2 --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> 
> gfsh>destroy lucene index --region=/testRegion
> gfsh>list lucene indexes --with-stats
> gfsh>destroy region --name=/testRegion
> gfsh>create lucene index --name=testIndex --region=testRegion 
> gfsh>create lucene index --name=testIndex2 --region=testRegion 
> --field=__REGION_VALUE_FIELD
> gfsh>list lucene indexes --with-stats
> gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
> gfsh>search lucene --name=testIndex --region=/testRegion 
> --queryStrings=value* --defaultField=__REGION_VALUE_FIELD
> The gfsh process hangs at this point.
> I'll attach the stacktrace for the server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2553) Lucene search hangs on recreated region with no data

2017-02-27 Thread Diane Hardman (JIRA)
Diane Hardman created GEODE-2553:


 Summary: Lucene search hangs on recreated region with no data
 Key: GEODE-2553
 URL: https://issues.apache.org/jira/browse/GEODE-2553
 Project: Geode
  Issue Type: Bug
  Components: lucene
Reporter: Diane Hardman
 Fix For: 1.2.0


While manually testing in gfsh the process of deleting Lucene indexes, deleting 
the region, creating new indexes and a new empty region, I was able to hang 
gfsh while doing a Lucene search on the new region with no data.
Here are the steps I used:
[gfsh ASCII-art startup banner]  1.2.0-SNAPSHOT

Monitor and Manage Apache Geode
gfsh>start locator --name=locator1 --port=12345

gfsh>start server --name=server50505 --server-port=50505 
--locators=localhost[12345] --start-rest-api --http-service-port=8080 
--http-service-bind-address=localhost

gfsh>create lucene index --name=testIndex --region=testRegion 
--field=__REGION_VALUE_FIELD

gfsh>create lucene index --name=testIndex2 --region=testRegion 
gfsh>list lucene indexes --with-stats

gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT

gfsh>put --key=1 --value=value1 --region=testRegion

gfsh>put --key=2 --value=value2 --region=testRegion

gfsh>put --key=3 --value=value3 --region=testRegion

gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings=value* 
--defaultField=__REGION_VALUE_FIELD

gfsh>search lucene --name=testIndex2 --region=/testRegion --queryStrings=value* 
--defaultField=__REGION_VALUE_FIELD

gfsh>destroy lucene index --region=/testRegion --name=testIndex

gfsh>list lucene indexes --with-stats

gfsh>search lucene --name=testIndex2 --region=/testRegion --queryStrings=value* 
--defaultField=__REGION_VALUE_FIELD

gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings=value* 
--defaultField=__REGION_VALUE_FIELD


gfsh>destroy lucene index --region=/testRegion

gfsh>list lucene indexes --with-stats

gfsh>destroy region --name=/testRegion

gfsh>create lucene index --name=testIndex --region=testRegion 

gfsh>create lucene index --name=testIndex2 --region=testRegion 
--field=__REGION_VALUE_FIELD
gfsh>list lucene indexes --with-stats

gfsh>create region --name=testRegion --type=PARTITION_PERSISTENT
gfsh>search lucene --name=testIndex --region=/testRegion --queryStrings=value* 
--defaultField=__REGION_VALUE_FIELD

The gfsh process hangs at this point.

I'll attach the stacktrace for the server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2483) Allow developer to set security permissions within functions

2017-02-14 Thread Diane Hardman (JIRA)
Diane Hardman created GEODE-2483:


 Summary: Allow developer to set security permissions within 
functions
 Key: GEODE-2483
 URL: https://issues.apache.org/jira/browse/GEODE-2483
 Project: Geode
  Issue Type: Improvement
  Components: functions, security
Reporter: Diane Hardman


Currently users need DATA:WRITE permission to execute a function on the 
servers. As an application/function developer, I would like to specify in my 
function what permissions are required to execute it. 
Additionally, when I deploy my function to a cluster with security configured 
and I have not specified the required permissions, the deploy should fail.
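One possible shape for such an API is sketched below. This is an illustration only: `DeclaredPermissionFunction` and `ResourcePermission` here are simplified stand-in types invented for the sketch, not the actual Geode classes, and the real API may differ.

```java
import java.util.Collection;
import java.util.List;

// Simplified stand-in for a resource permission; NOT the real Geode type.
class ResourcePermission {
    final String resource, operation, region;
    ResourcePermission(String resource, String operation, String region) {
        this.resource = resource;
        this.operation = operation;
        this.region = region;
    }
    @Override public String toString() { return resource + ":" + operation + ":" + region; }
}

// Stand-in for the function interface with the proposed hook: the function
// itself declares the permissions it needs, instead of the execution engine
// assuming a blanket DATA:WRITE.
interface DeclaredPermissionFunction {
    void execute(String regionName);

    default Collection<ResourcePermission> getRequiredPermissions(String regionName) {
        return List.of(new ResourcePermission("DATA", "WRITE", regionName));
    }
}

// A read-only function can narrow its requirement to DATA:READ on the region.
class ReadOnlyAuditFunction implements DeclaredPermissionFunction {
    @Override public void execute(String regionName) {
        // read-only work against the region would go here
    }
    @Override public Collection<ResourcePermission> getRequiredPermissions(String regionName) {
        return List.of(new ResourcePermission("DATA", "READ", regionName));
    }
}
```

With this shape, the server could also reject a deploy when a function neither overrides the hook nor is covered by the default, which addresses the second half of the request.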



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2267) Add gfsh command to export all cluster artifacts

2017-01-20 Thread Diane Hardman (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diane Hardman updated GEODE-2267:
-
Issue Type: New Feature  (was: Bug)

> Add gfsh command to export all cluster artifacts
> 
>
> Key: GEODE-2267
> URL: https://issues.apache.org/jira/browse/GEODE-2267
> Project: Geode
>  Issue Type: New Feature
>  Components: configuration, docs, gfsh
>    Reporter: Diane Hardman
>
> We would like a single gfsh command to collect and export all logfiles and 
> stat files into a single package. This package (zipfile) can then be saved 
> and attached to emails and Jira tickets to help evaluate the Geode cluster 
> status.
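Until such a command exists, roughly what it would automate can be sketched with `java.util.zip`: walk a member's working directory and pack every log and stat file into one zip. The directory layout and file extensions below are assumptions for illustration, not a description of any existing gfsh behavior.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Sketch of the proposed export: gather *.log and *.gfs files from a
// member directory into a single zip archive.
class ArtifactExporter {
    static Path export(Path memberDir, Path zipFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile));
             var files = Files.walk(memberDir)) {
            for (Path p : (Iterable<Path>) files::iterator) {
                String name = p.getFileName().toString();
                if (name.endsWith(".log") || name.endsWith(".gfs")) {
                    zos.putNextEntry(new ZipEntry(memberDir.relativize(p).toString()));
                    zos.write(Files.readAllBytes(p));
                    zos.closeEntry();
                }
            }
        }
        return zipFile;
    }

    public static void main(String[] args) throws IOException {
        // Demo against a throwaway directory with a stand-in log file.
        Path dir = Files.createTempDirectory("server50505");
        Files.writeString(dir.resolve("server50505.log"), "sample log line\n");
        Path zip = export(dir, dir.resolve("artifacts.zip"));
        System.out.println(Files.exists(zip)); // expect: true
    }
}
```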



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (GEODE-2198) import cluster-config should continue if the running servers have no data in their application regions

2017-01-10 Thread Diane Hardman (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816651#comment-15816651
 ] 

Diane Hardman commented on GEODE-2198:
--

Below is a specific condition where this behavior is not desired.
For the use case where a GemFire region exists but has no data (e.g. it is used 
for messaging only), I should not be able to import a new cluster 
configuration. In this case we need to:
1. check for the existence of a region on the currently running cluster
2. verify that the region attributes match what is being imported.
3. if the attributes are different, then throw an error and do not import.
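The three-step check above can be sketched as follows. This is a simplified illustration over stand-in attribute maps (`RegionSpec` is invented for the sketch), not Geode's actual cluster-configuration classes.

```java
import java.util.Map;
import java.util.Objects;

// Stand-in validation for the import: region attributes keyed by region name.
class ClusterConfigImportCheck {
    record RegionSpec(String type, boolean persistent) {}

    // Step 1: does the region already exist on the running cluster?
    // Step 2: do its attributes match what is being imported?
    // Step 3: if not, fail the import instead of silently overwriting.
    static void validate(Map<String, RegionSpec> running, Map<String, RegionSpec> imported) {
        for (var e : imported.entrySet()) {
            RegionSpec existing = running.get(e.getKey());                    // step 1
            if (existing != null && !Objects.equals(existing, e.getValue())) { // step 2
                throw new IllegalStateException(                               // step 3
                    "Region " + e.getKey() + " exists with different attributes; import aborted");
            }
        }
    }
}
```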

> import cluster-config should continue if the running servers have no data in 
> their application regions
> --
>
> Key: GEODE-2198
> URL: https://issues.apache.org/jira/browse/GEODE-2198
> Project: Geode
>  Issue Type: Sub-task
>  Components: management
>Reporter: Jinmei Liao
>Assignee: Kirk Lund
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)