Re: Added Support for JUnit 5

2021-10-13 Thread Gregory Green
This is awesome, thank you 

-
Gregory Green | Advisor Solution Engineer | Tanzu Data 
Mobile: 732.737.7119| Email: grego...@vmware.com 
--
Articles/Videos
Monoliths to Microservices? Don’t Forget to Transform the Data Layer 
<https://content.pivotal.io/engineers/moving-from-monoliths-to-microservices-don-t-forget-to-transform-the-data-layer-here-s-how>
A Caching Approach to Data Transformation of Legacy RDBMS 
<https://www.youtube.com/watch?v=h5UvIJo7eBc>
How to Build Modern Data Pipelines with GemFire and SCDF 
<https://content.pivotal.io/blog/how-to-build-modern-data-pipelines-with-pivotal-gemfire-and-spring-cloud-data-flow>
GemFire AWS Quickstart <https://youtu.be/QqWKzZ2MeOY>
 
 

On 10/13/21, 12:27 PM, "Kirk Lund"  wrote:

Good job Dale and thanks!

On Tue, Oct 12, 2021 at 3:37 PM Dale Emery  wrote:

> In September 2021, Geode added support for writing and running tests using
> JUnit 5.
>
> Most Geode modules now support JUnit 5. In those modules, you can write
> each test class using either JUnit 5's "Jupiter" API or the legacy JUnit 4
> API.
>
> Which modules support JUnit 5? Any source set that depends on geode-junit
> or geode-dunit already has JUnit 5 support. For those source sets you can
> start writing tests using the JUnit Jupiter API now, and Gradle will run
> them.
>
> To add JUnit 5 support to a module or source set: Add lines like these to
> the "dependencies" configuration of the module’s build.gradle file:
>
>
> testImplementation('org.junit.jupiter:junit-jupiter-api')
>
> testRuntimeOnly('org.junit.jupiter:junit-jupiter-engine')
>
> The first line allows you to write unit tests using the JUnit Jupiter API.
> The second line allows Gradle to run your JUnit Jupiter unit tests.
>
> To use JUnit Jupiter to write and run other kinds of tests (e.g.
> integrationTest or distributedTest), add similar lines to configure the
> appropriate source sets.
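As a sketch of what "similar lines" might look like: Gradle derives dependency configuration names from source set names, so an integrationTest or distributedTest source set would typically expose configurations like the ones below. (These configuration names are assumptions based on Gradle's naming convention, not verified against Geode's build scripts.)

```groovy
dependencies {
  // Hypothetical: JUnit Jupiter for an "integrationTest" source set
  integrationTestImplementation('org.junit.jupiter:junit-jupiter-api')
  integrationTestRuntimeOnly('org.junit.jupiter:junit-jupiter-engine')

  // Hypothetical: JUnit Jupiter for a "distributedTest" source set
  distributedTestImplementation('org.junit.jupiter:junit-jupiter-api')
  distributedTestRuntimeOnly('org.junit.jupiter:junit-jupiter-engine')
}
```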
>
> LIMITATIONS
>
>   *   Because Geode support for JUnit Jupiter is so new, we have not yet
> added test framework code that takes advantage of its features.
>   *   JUnit Jupiter does not support the use of Rules.
>
> SEE ALSO
>
>   *   The JUnit 5 User Guide:
> 
https://junit.org/junit5/docs/current/user-guide/
>   *   Using JUnit 5 (a copy of this message on the Geode wiki):
> 
https://cwiki.apache.org/confluence/display/GEODE/Using+JUnit+5
>
> Dale Emery
>
>



Re: CFP for ApacheCon 2021 closes in ONE WEEK

2021-06-28 Thread Gregory Green
Hello team,

I do not see where I can accept the invite in the following link/info?

Dear Gregory Green,

Congratulations! We are pleased to tell you that your talk, "OLTP Application
Data Services with Apache Geode," has been accepted for ApacheCon 2021. (If you
submitted additional proposals, you will receive separate notifications
regarding each proposal.)

Please confirm that you will be attending by responding to this email. You will 
also need to register for the event, at 
https://www.apachecon.com/acah2021/register.html,
 in order to be able to give your presentation. Please do not put this off, as 
that will make the scheduling process more difficult - please go do that now. 
Thanks.

With regards,
The team behind ApacheCon 2021




On 4/23/21, 11:03 AM, "Rich Bowen"  wrote:

[You are receiving this because you're subscribed to one or more dev@
mailing lists for an Apache project, or the ApacheCon Announce list.]

Time is running out to submit your talk for ApacheCon 2021.

The Call for Presentations for ApacheCon @Home 2021, focused on Europe
and North America time zones, closes May 3rd, and is at

https://www.apachecon.com/acah2021/cfp.html

The CFP for ApacheCon Asia, focused on Asia/Pacific time zones, is at

https://apachecon.com/acasia2021/cfp.html
 and also closes on May 3rd.

ApacheCon is our main event, featuring content from any and all of our
projects, and is your best opportunity to get your project in front of
the largest audience of enthusiasts.

Please don't wait for the last minute. Get your talks in today!

-- 
Rich Bowen, VP Conferences
The Apache Software Foundation

https://apachecon.com/
@apachecon



Re: [DISCUSS] changes to Redis implementation

2017-02-24 Thread Gregory Green
Hello,

I just pushed the changes to remove the @author tags and updated the
RegionProviderTest.

On Fri, Feb 24, 2017 at 7:36 PM, Jason Huynh <jhu...@pivotal.io> wrote:

> It looks like Travis CI failed on that PR?  Also there are some @author
> tags that should probably be scrubbed out.
>
> On Fri, Feb 24, 2017 at 4:33 PM Michael Stolz <mst...@pivotal.io> wrote:
>
> > +1 experimental means changing. Go for it.
> >
> > --
> > Mike Stolz
> > Principal Engineer - Gemfire Product Manager
> > Mobile: 631-835-4771 <(631)%20835-4771>
> >
> > On Feb 24, 2017 7:30 PM, "Kirk Lund" <kl...@apache.org> wrote:
> >
> > > +1 for merging in these changes even though they break rolling upgrade
> > for
> > > redis storage format -- it should be ok to break API or data format if
> it
> > > was "experimental" in all releases so far
> > >
> > > On Fri, Feb 24, 2017 at 3:25 PM, Bruce Schuchardt <
> > bschucha...@pivotal.io>
> > > wrote:
> > >
> > > > Gregory Green has posted a pull request that warrants discussion. It
> > > > improves performance for Sets and Hashes by altering the storage
> format
> > > for
> > > > these collections.  As such it will not permit a rolling upgrade,
> > though
> > > > the Redis adapter is labelled "experimental" so maybe that's okay.
> > > >
> > > > https://github.com/apache/geode/pull/404
> > > >
> > > > The PR also fixes GEODE-2469, inability to handle hash keys having
> > > colons.
> > > >
> > > > There was some discussion about altering the storage format that was
> > > > initiated by Hitesh.  Personally I think Gregory's changes are better
> > > than
> > > > the current implementation and we should accept them, though I
> haven't
> > > gone
> > > > through the code changes extensively.
> > > >
> > > >
> > >
> >
>



-- 
*Gregory Green* (Senior Data Engineer)
ggr...@pivotal.io
201.736.1016


[jira] [Created] (GEODE-2533) Export CSV

2017-02-23 Thread Gregory Green (JIRA)
Gregory Green created GEODE-2533:


 Summary: Export CSV
 Key: GEODE-2533
 URL: https://issues.apache.org/jira/browse/GEODE-2533
 Project: Geode
  Issue Type: Wish
  Components: gfsh
Reporter: Gregory Green
 Fix For: 1.2.0


I would like the ability to export region data into a CSV format.
The interface should be exposed through a gfsh command.

Example: gfsh>exportCSV --region=name

A header column should exist for each object property. 

User
{
   String name
   String email 
}

[CSV]
"key","name","mail"
"1","test","t...@test.io"

Properties in nested objects will contain a dot notation for the property name.

class User
{
   String name
   String email 
   class Address
   {
  String street
  String city
   }
}

[CSV]
"key","name","mail", "address.street", "address.city"
"1","test","t...@test.io","123","charlotte"


Arrays, collections, iterators, and multi-valued object properties will be 
converted to strings using the Object.toString() method.
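The proposed dot-notation flattening can be sketched as follows. This is an illustrative sketch only, not Geode API: `CsvFlattener` is a hypothetical helper, and plain Maps stand in for object-property introspection.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: flattens nested values into the proposed
// dot-notation CSV columns, preserving insertion order of properties.
class CsvFlattener {
    @SuppressWarnings("unchecked")
    static Map<String, String> flatten(String prefix, Map<String, Object> value) {
        Map<String, String> columns = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : value.entrySet()) {
            String column = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            Object v = e.getValue();
            if (v instanceof Map) {
                // Nested object: recurse, extending the dot-notation prefix.
                columns.putAll(flatten(column, (Map<String, Object>) v));
            } else {
                // Scalars (and, per the proposal, collections) fall back to toString().
                columns.put(column, String.valueOf(v));
            }
        }
        return columns;
    }
}
```

For the nested Address example above, this produces columns named "name", "address.street", and "address.city".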




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: GeodeRedisAdapter improvements/feedback

2017-02-15 Thread Gregory Green
Hitesh and Team,

Also, I think geospatial support in core GemFire, exposed through the
following Redis GEO... commands, would be great:

GEOADD
GEODIST
GEOHASH
GEOPOS
GEORADIUS
GEORADIUSBYMEMBER




On Wed, Feb 15, 2017 at 10:48 AM, Gregory Green <ggr...@pivotal.io> wrote:

> Hello Hitesh,
>
> The following is my feedback.
>
> *1. Redis Type String*
>   I like the idea of creating a region upfront. If we are still using the
> convention that internal region names start with "__" , then I would
> suggest something like a region named "__RedisString"
>
> *2. List Type*
>
> I propose using a single partition region (ex: "__RedisList") for the List
> commands.
>
> Region<ByteArrayWrapper, ArrayList> region;
>
> //Note: ByteArrayWrapper is what the current RedisAdapter uses as its data
> type. It converts strings to bytes using UTF8 encoding
>
> Example Redis commands
>
> RPUSH mylist A =>
>
>  Region<ByteArrayWrapper, List> region = getRegion("__RedisList");
>  List list = getOrCreateList(mylist);
>  list.add(A);
>  region.put(mylist, list);
>
> *3. Hashes*
>
> Based on my Spring Data Redis testing for Hash/object support.
>
> HMSET and similar Hash commands are submitted in the following format:
> HMSET region:key [field value]+ I proposed creating regions with the
> following format:
>
> Region<ByteArrayWrapper,Map<ByteArrayWrapper,ByteArrayWrapper>> region;
>
> Also see the Hashes section at the following URL: https://redis.io/topics/data-types
>
> Example Redis command:
>
> HMSET companies:100 _class io.pivotal.redis.gemfire.example.repository.Company
> id 100 name nylaInc email i...@nylainc.io website nylaInc.io taxID id:1
> address.address1 address1 address.address2 address2 address.cityTown
> cityTown address.stateProvince stateProvince address.zip zip
> address.country country
>
> =>
>
> //Pseudo Access code
> Region<ByteArrayWrapper,Map<ByteArrayWrapper,ByteArrayWrapper>> 
> companiesRegion = getRegion("companies")
> companiesRegion.put(100, toMap(fieldValues))
>
> //--
>
> // HGETALL region:key
>
> HGETALL companies:100 =>
>
> Region<key,Map<field,value>> companiesRegion = getRegion("companies")
> return companiesRegion.get(100)
>
> //HSET region:key field value
>
> HSET companies:100 email upda...@pivotal.io =>
>
> Region<key,Map<field,value>> companiesRegion = getRegion("companies");
> Map map = companiesRegion.get(100);
> map.put(email, upda...@pivotal.io);
> companiesRegion.put(100, map);
>
> FYI - I started to implement this and hope to submit a pull request soon
> related to GEODE-2469.
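The HSET/HGETALL mapping proposed above can be exercised with a small stand-in. This is an illustrative sketch only: `HashRegionSketch` is hypothetical, and a ConcurrentHashMap plays the role of the proposed Region<key, Map<field, value>>.

```java
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a partitioned region holding one map per hash key.
class HashRegionSketch {
    private final Map<String, Map<String, String>> region = new ConcurrentHashMap<>();

    // HSET region:key field value
    void hset(String key, String field, String value) {
        region.computeIfAbsent(key, k -> new ConcurrentHashMap<>()).put(field, value);
    }

    // HGETALL region:key
    Map<String, String> hgetall(String key) {
        return region.getOrDefault(key, Collections.emptyMap());
    }
}
```

A real implementation would also have to re-put the map into the region so the change replicates, as the pseudocode above does.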
>
>
> *4. Set*
>
> I propose using a single partition region (ex: __RedisSET) for the SET
> commands.
>
> Region<ByteArrayWrapper, HashSet> region;
>
> Example Redis commands
>
> SADD myset "Hello" =>
>
> Region<ByteArrayWrapper, Set> region = getRegion("__RedisSET");
> Set set = region.get(myset);
> boolean added = set.add(Hello);
> if (added) {
>   region.put(myset, set);
> }
> return added;
>
> SISMEMBER myset "Hello" =>
>
> Region<ByteArrayWrapper, Set> region = getRegion("__RedisSET");
> Set set = region.get(myset);
> return set.contains(Hello);
>
> FYI - I started to implement this and hope to submit a pull request soon
> related to GEODE-2469.
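The SADD/SISMEMBER mapping above can be sketched the same way. This is illustrative only, not the Geode API: `SetRegionSketch` is hypothetical, with one map entry per Redis set, as in the single "__RedisSET" region proposed above.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a single partitioned region holding one Set per key.
class SetRegionSketch {
    private final ConcurrentHashMap<String, Set<String>> region = new ConcurrentHashMap<>();

    // SADD key member -> true if the member was newly added
    boolean sadd(String key, String member) {
        return region.computeIfAbsent(key, k -> ConcurrentHashMap.newKeySet()).add(member);
    }

    // SISMEMBER key member
    boolean sismember(String key, String member) {
        Set<String> set = region.get(key);
        return set != null && set.contains(member);
    }
}
```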
>
>
> *5. SortedSets *
>
> I propose using a single partition region for the SET commands.
>
> Region<ByteArrayWrapper, TreeSet> region;
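One wrinkle with a plain TreeSet as the region value: Redis sorted-set members are ordered by an explicit score (see ZADD), so the value likely needs a member-to-score map rather than natural ordering. A minimal sketch of one sorted set's value, under that assumption (`SortedSetSketch` is hypothetical, not Geode API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of one sorted set's region value: member -> score,
// with a ZRANGE-style ascending view ordered by (score, member).
class SortedSetSketch {
    private final Map<String, Double> scores = new TreeMap<>();

    // ZADD key score member
    void zadd(String member, double score) {
        scores.put(member, score);
    }

    // Members in ascending score order, ties broken lexicographically.
    List<String> range() {
        List<String> members = new ArrayList<>(scores.keySet());
        members.sort(Comparator.comparingDouble((String m) -> scores.get(m))
                .thenComparing(Comparator.naturalOrder()));
        return members;
    }
}
```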
>
> 6. Default config for geode-region (vote)
>
> I think the default setting should be partitioned with persistence and no
> redundant copies.
>
> 7. It seems redis knows the type (list, hashes, string, set ..) of each key...
>
> I suggest most operations can assume all keys are strings in UTF8 byte
> encoding; I am not sure if there are any mathematical, number-based Redis
> commands that need numbers.
>
> *8. Transactions:*
>
> +1 I agree to not support transactions
>
> *9. Redis COMMAND* (https://redis.io/commands/command)
>
> +1 for implementing the "COMMAND"
>
>
> -- Forwarded message --
> From: Hitesh Khamesra <hitesh...@yahoo.com.invalid>
> Date: Tue, Feb 14, 2017 at 5:36 PM
> Subject: GeodeRedisAdapter improvements/feedback
> To: Geode <dev@geode.apache.org>, "u...@geode.apache.org" <
> u...@geode.apache.org>
>
>
> Current GeodeRedisAdapter implementation is based on
> https://

Fwd: GeodeRedisAdapter improvements/feedback

2017-02-15 Thread Gregory Green
h hashes to geode-partition-region(i.e.
user1000 is geode-partition-region)
  d. Feedback/vote
-- Should we map hashes to region-entry
-- region-key = user1000
-- region-value = map
-- This would provide java-bean-like behaviour for objects with tens of field-value pairs
-- Personally I would prefer this..
  e. Feedback/vote: are both behaviours desirable?

4. Sets
  a. This represents unique keys in set
  b. usage "sadd myset 1 2 3"
  c. Current implementation maps each sadd to geode-partition-region(i.e.
myset is geode-partition-region)
  d. Feedback/vote
-- Should we map set to region-entry
-- region-key = myset
-- region-value = Hashset
  e. Feedback/vote: are both behaviours desirable?

5. SortedSets
  a. This represents unique keys in set with score (usecase Query top-10)
  b. usage "zadd hackers 1940 "Alan Kay""
  c. Current implementation maps each zadd to geode-partition-region(i.e.
hackers is geode-partition-region)
  d. Feedback/vote
-- Should we map set to region-entry
-- region-key = hackers
-- region-value = Sorted Hashset
  e. Feedback/vote: are both behaviours desirable?

6. HyperLogLogs
  a. A HyperLogLog is a probabilistic data structure used to count
unique things (technically, this is referred to as estimating the cardinality
of a set).
  b. usage "pfadd hll a b c d"
  c. Current implementation creates "HLL_REGION" geode-partition-region
upfront
  d. hll becomes region-key and value is HLL object
  e. any feedback?

7. Default config for geode-region (vote)
   a. partition region
   b. 1 redundant copy
   c. Persistence
   d. Eviction
   e. Expiration
   f. ?

8. It seems redis knows the type (list, hashes, string, set ..) of each key.
Thus for each operation we need to check the type of the key. In the current
implementation we have a different region for each redis type, plus
another region (metaTypeRegion) which keeps the type for each key. This makes
any operation in geode slow, as it needs to verify that type. For instance,
creating a new key requires checking whether it already exists. Should we
allow the type of a key to change or not?
  a. Feedback/vote
 -- type change of key
 -- Can we allow two keys with the same name but two different types (as they
will end up in two different geode-regions)?
String type "key1" in string region
HLL type "key1" in HLL region
  b. any other feedback

9. Transactions:
  a. we will not support transactions in the redisAdapter, as geode transactions
are limited to a single node.
  b. feedback?

10. Redis COMMAND (https://redis.io/commands/command)
  a. should we implement this "COMMAND" ?

11. Any other redis command we should consider?


Thanks,
Hitesh






[jira] [Commented] (GEODE-2469) Redis adapter Hash key support

2017-02-13 Thread Gregory Green (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863976#comment-15863976
 ] 

Gregory Green commented on GEODE-2469:
--

Please see https://redis.io/topics/data-types

Hashes
Redis Hashes are maps between string fields and string values, so they are the 
perfect data type to represent objects (e.g. A User with a number of fields 
like name, surname, age, and so forth):
@cli
HMSET user:1000 username antirez password P1pp0 age 34
HGETALL user:1000
HSET user:1000 password 12345
HGETALL user:1000
A hash with a few fields (where few means up to one hundred or so) is stored in 
a way that takes very little space, so you can store millions of objects in a 
small Redis instance.
While Hashes are used mainly to represent objects, they are capable of storing 
many elements, so you can use Hashes for many other tasks as well.
Every hash can store up to 2^32 - 1 field-value pairs (more than 4 billion).
Check the full list of Hash commands for more information, or read the 
introduction to Redis data types.

> Redis adapter Hash key support
> --
>
> Key: GEODE-2469
> URL: https://issues.apache.org/jira/browse/GEODE-2469
> Project: Geode
>  Issue Type: Bug
>  Components: redis
>    Reporter: Gregory Green
>
> The Redis adapter does not appear to handle hash keys correctly.
> The following Example: Redis CLI works.
> localhost:11211>  HSET companies name "John Smith"
> Using a  HSET :id  .. produces an error
> Example:
> localhost:11211>  HSET companies:1000 name "John Smith"
> [Server error]
> [fine 2017/02/10 16:04:33.289 EST server1  
> tid=0x6a] Region names may only be alphanumeric and may contain hyphens or 
> underscores: companies: 1000
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: companies: 1000
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
> at 
> org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> //Example Spring Data Redis Object sample
> @Data
> @EqualsAndHashCode()
> @RedisHash(value="companies")
> @NoArgsConstructor
> public class Company
> {
>   private @Id String id;
>
> //Repository
> public interface CompanyRepository extends CrudRepository<Company, String> 
> {
>  
> }
> //When saving using a repository
> repository.save(this.myCompany);
> [Same Server error]
> java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
> may contain hyphens or underscores: 
> companies:f05405c2-86f2-4aaf-bd0c-6fecd483bf28
> at 
> org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
> at 
> org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
> at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
> at 
> org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(R

[jira] [Created] (GEODE-2468) The Redis adapter (start server --name=server1 --r...

2017-02-11 Thread Gregory Green (JIRA)
Gregory Green created GEODE-2468:


 Summary: The Redis adapter (start server --name=server1 --r...
 Key: GEODE-2468
 URL: https://issues.apache.org/jira/browse/GEODE-2468
 Project: Geode
  Issue Type: Improvement
Reporter: Gregory Green


The Redis adapter (start server --name=server1 --redis-port=11211 
--redis-bind-address=127.0.0.1  --use-cluster-configuration) does not appear to 
handle hash keys correctly.

The following Example: Redis CLI works.
localhost:11211>  HSET companies name "John Smith"


Using a  HSET :id  .. produces an error
Example:
localhost:11211>  HSET companies:1000 name "John Smith"

[Server error]
[fine 2017/02/10 16:04:33.289 EST server1  
tid=0x6a] Region names may only be alphanumeric and may contain hyphens or 
underscores: companies: 1000
java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
may contain hyphens or underscores: companies: 1000
at 
org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
at 
org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
at 
org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
at 
org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
at 
org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
at 
org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
at java.lang.Thread.run(Thread.java:745)


//Example Spring Data Redis Object sample
@Data
@EqualsAndHashCode()
@RedisHash(value="companies")
@NoArgsConstructor
public class Company
{
private @Id String id;
   

//Repository

public interface CompanyRepository extends CrudRepository<Company, String> 
{
 
}

//When saving using a repository
repository.save(this.myCompany);


[Same Server error]

java.lang.IllegalArgumentException: Region names may only be alphanumeric and 
may contain hyphens or underscores: 
companies:f05405c2-86f2-4aaf-bd0c-6fecd483bf28
at 
org.apache.geode.internal.cache.LocalRegion.validateRegionName(LocalRegion.java:7618)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createVMRegion(GemFireCacheImpl.java:3201)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.basicCreateRegion(GemFireCacheImpl.java:3181)
at 
org.apache.geode.internal.cache.GemFireCacheImpl.createRegion(GemFireCacheImpl.java:3169)
at org.apache.geode.cache.RegionFactory.create(RegionFactory.java:762)
at 
org.apache.geode.management.internal.cli.functions.RegionCreateFunction.createRegion(RegionCreateFunction.java:355)
at 
org.apache.geode.management.internal.cli.functions.RegionCreateFunction.execute(RegionCreateFunction.java:90)
at 
org.apache.geode.internal.cache.execute.AbstractExecution.executeFunctionLocally(AbstractExecution.java:333)
at 
org.apache.geode.internal.cache.execute.AbstractExecution$2.run(AbstractExecution.java:303)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:621)
at 
org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1067)
at java.lang.Thread.run(Thread.java:745)



*Reporter*: Gregory Green
*E-mail*: [mailto:ggr...@pivotoal.io]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2269) It seems the gfsh "remove" command cannot remove r...

2017-01-05 Thread Gregory Green (JIRA)
Gregory Green created GEODE-2269:


 Summary: It seems the gfsh "remove" command cannot remove r...
 Key: GEODE-2269
 URL: https://issues.apache.org/jira/browse/GEODE-2269
 Project: Geode
  Issue Type: Improvement
Reporter: Gregory Green


It seems the gfsh "remove" command cannot remove region entries with a 0 length 
string key.

gfsh>query --query="select toString().length() from /Recipient.keySet()"

Result : true
startCount : 0
endCount   : 20
Rows   : 3

Result
--
0
2
5


gfsh>remove --region=/Recipient --key=""
Message : Key is either empty or Null
Result  : false

gfsh>remove --region=/Recipient --key="''"
Message : Key is either empty or Null
Result  : false





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)