I just started with a new effort and it looks like the downloads are broken.
From this page:
[screenshot: downloads page]
Clicking the TGZ yields this URL
https://apache.org/dyn/closer.cgi/geode/1.15.2/apache-geode-1.15.2.tgz
which has this page in Chrome:
[screenshot: error page shown in Chrome]
Tried 3 Browsers and 2 on m
+API
> --
> Cheers
>
> Jinmei
>
--
Charlie Black | cbl...@pivotal.io
received together on the other site?
>
> There is a risk of having a site with an inconsistent state if there is a
> network problem. Let's say you have two sites, and communication between
> them is broken when not all events of a transaction on site 1 were sent to
> site 2. How are you dealing with this risk? Has anyone experimented with
> this?
>
> Thanks,
>
> BR/
>
> Alberto
>
>
> --
> Is there any easy way to debug Geode functions?
>
> With best regards,
> Ashish
>
stop server/servers, but sometimes it doesn't work and we
>>> end up using kill -9, which I don't think is the right way to kill it, as
>>> we have persistence enabled and it terminates the process abruptly.
>>>
>>> Then the option left is SIGABRT vs SIGTERM?
>>>
>>>
>>> With best regards,
>>> Ashish
>>>
>>
cated region. The exception we see on the server side
>>>>>> is that the default limit of 800 gets maxed out, and on the client side
>>>>>> we see retry attempts to each server, but they fail, even though when we
>>>>>> re-ran the same job it completed without any issue.
>>>>>>
>>>>>> In the code, the problem I could see is that we are connecting to Geode
>>>>>> using a client cache in forEachPartition, which I think could be the
>>>>>> issue. So for each partition we are making a connection to Geode. In the
>>>>>> stats file we could see connections getting timed out, and there are
>>>>>> thread bursts too, sometimes >4000.
>>>>>>
>>>>>> What is the recommended way to connect to geode using spark?
>>>>>>
>>>>>> But this one specific job fails most of the time, and it uses a
>>>>>> replicated region. Also, when we change the region type to partitioned,
>>>>>> the job completes. We have disk persistence enabled for both types of
>>>>>> regions.
>>>>>>
>>>>>> Thoughts?
>>>>>>
>>>>>>
>>>>>>
>>>>>> With best regards,
>>>>>> Ashish
>>>>>>
>>>>>
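On the per-partition connection issue raised above, the usual pattern is to create the client connection once per JVM (i.e. once per Spark executor) and reuse it across partitions, rather than opening one inside every forEachPartition call. Below is a minimal pure-Java sketch of that pattern; `Object` stands in for the real Geode ClientCache, and the class and method names here are illustrative, not Geode API.

```java
import java.util.function.Supplier;

// Sketch: create the expensive client handle once per JVM and share it
// across partitions. "Object" stands in for the real client cache.
public class ConnectionReuseDemo {
    private static volatile Object instance;

    // Double-checked locking: the factory runs at most once per JVM.
    static Object getOrCreate(Supplier<Object> factory) {
        Object local = instance;
        if (local == null) {
            synchronized (ConnectionReuseDemo.class) {
                local = instance;
                if (local == null) {
                    local = factory.get();
                    instance = local;
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        Object a = getOrCreate(Object::new);
        Object b = getOrCreate(Object::new);
        // Both calls observe the same shared instance.
        System.out.println(a == b);
    }
}
```

With this shape, each Spark task calls `getOrCreate` at the top of its partition function and the connection cost is paid once per executor instead of once per partition.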
ork cable fully || ..).
Charlie
On Mon, Apr 15, 2019 at 10:33 AM Claudiu Balciza wrote:
> This solves my problem.
>
> I can easily work around the "same region name"
>
>
>
> Thank you Charlie Black and Michael Stolz
>
>
>
>
>
> *Claudiu Balciza* BSc
:
>>>
>>> Caught: java.lang.IllegalStateException: Existing cache's default pool
>>> was not compatible
>>>
>>>
>>>
>>> I could work around this by having one cache open at a time but I'd like
>>> to have them both open.
>>>
>>> Any ideas?
>>>
>>>
>>>
>>> *Claudiu Balciza* BSc
>>>
>>> Database Architect
>>> Architecte de base de données
>>> Arhitect Baze de Date
>>>
>>>
>>>
>>
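On the "default pool was not compatible" error above: a client cache has a single default pool, so two differently configured connections collide. The common workaround is to give each configuration its own named pool (in real Geode code this is done through PoolManager). This is a plain-Java sketch of the idea only; the class names are hypothetical, not Geode API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: register each connection configuration under its own name so
// nothing fights over one shared default. Real code would use Geode's
// PoolManager to create named pools.
public class NamedPoolDemo {
    static final class PoolConfig {
        final String locatorHost;
        final int locatorPort;
        PoolConfig(String host, int port) {
            locatorHost = host;
            locatorPort = port;
        }
    }

    private static final Map<String, PoolConfig> POOLS = new ConcurrentHashMap<>();

    static PoolConfig register(String name, String host, int port) {
        // Each name gets its own configuration; two sites can coexist.
        return POOLS.computeIfAbsent(name, n -> new PoolConfig(host, port));
    }

    public static void main(String[] args) {
        PoolConfig a = register("siteA", "locator-a.example", 10334);
        PoolConfig b = register("siteB", "locator-b.example", 10334);
        // Distinct names yield distinct, independently configured pools.
        System.out.println(a != b);
    }
}
```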
30 seconds is long for any operation, single-hop or not. I would check some
other environmental aspect. A couple that come to mind: client garbage
collection, virtualization neighbors, downstream applications using data
from the getAll() operation.
Maybe just throw a
It's not normal for Geode to be not servicing requests. I *do not*
recommend changing the fault tolerances until you find out why things
aren't responding in 10 seconds to 1 minute. Imagine your users waiting
for a minute or more for an in-memory system to return a value.
Some things to look
I made a video a while back in an effort to show how Geode handles this use
case. I walk through:
- Starting up Geode with 3 locators and 4 servers
- Create a region
- Load some data with Spring Boot
- kill -9 a data node
- See what happens
- Restart the down node
- Rebalance th
; method - try taking that out and see how things
are.
Regards,
Charlie
On Wed, Oct 17, 2018 at 9:15 AM Charlie Black wrote:
> One thing I run into more often than not is how teams shutdown Geode. If
> the shutdown process is killing each of the processes one by one (gfsh stop
>
One thing I run into more often than not is how teams shut down Geode. If
the shutdown process is killing each of the processes one by one (gfsh stop
or kill -9), it actually creates distrust among the remaining
members. Try using the gfsh shutdown command and see how much better or
worse th
g.apache.calcite.adapter.geode.rel.GeodeSchemaFactory", "operand": {
> "locatorHost": "localhost", "locatorPort": "10334", "regions": "Zips",
> "pdxSerializablePackagePath": ".*" } } ] }
>
> Thanks,
> Ashish
From a Geode perspective, Calcite is just another application. So any
data operations will be covered by the Geode Role-Based access control.
As for LDAP - some commercial customers use this implementation which
extends Shiro.
https://github.com/Pivotal-Field-Engineering/pivotal-gemfire-ldap Ho
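For context, the operand fragment quoted earlier belongs in a Calcite model file. A complete model might look like the sketch below; the operand values (host, port, region, PDX package path) are taken from the quoted fragment, while the surrounding structure follows the standard Calcite model format and should be checked against the Calcite Geode adapter docs:

```json
{
  "version": "1.0",
  "defaultSchema": "geode",
  "schemas": [
    {
      "name": "geode",
      "type": "custom",
      "factory": "org.apache.calcite.adapter.geode.rel.GeodeSchemaFactory",
      "operand": {
        "locatorHost": "localhost",
        "locatorPort": "10334",
        "regions": "Zips",
        "pdxSerializablePackagePath": ".*"
      }
    }
  ]
}
```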
That error makes me think about something Anthony mentioned: steal
time. If this is running on vSphere, most "security minded" organizations
turn off steal-time reporting. So one might want to look at the vSphere
console and check out the metric Ready Time for the VM. Ready time is the
am
I use the technique Jens mentions... pause the current thread. Works like a
champ no matter what you are debugging.
Charlie
On Tue, Jul 10, 2018 at 10:26 AM John Blum wrote:
> Hi Pieter - Yes, set the member-timeout Geode property when debugging,
> and then set breakpoints in whatever user-defined co
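The member-timeout suggestion can be applied in gemfire.properties. A sketch, assuming the standard property name and its millisecond units (the default is 5000 ms); the exact value is illustrative:

```properties
# gemfire.properties - raise member-timeout while debugging so a JVM
# paused at a breakpoint is not kicked out of the distributed system.
# Value is in milliseconds; the default is 5000.
member-timeout=600000
```

Remember to restore the default before any performance or failover testing, since a large timeout also delays real failure detection.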
Are we redeploying the geode jar to the jvm? Or have some kind of
interesting classloader?
On Thu, Jul 5, 2018 at 8:36 AM Anthony Baker wrote:
> Thanks for the error report. Can you share the portion of the log that
> dumps the classpath? It looks like you're starting Geode with Spring Boot
> an
Don't forget Java has issues with microbenchmarks: just starting up a JVM,
doing something in a tight loop, and done. What happens is the JIT compiler
hasn't compiled the code and done its job. So we need to give the JVM time
to do its thing; otherwise, Java is interpreting the code (slow).
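The warm-up point can be sketched in plain Java: run the code under test many times before timing it, so the JIT has a chance to compile it. For real measurements a harness such as JMH is the safer tool; this is only an illustration, and the iteration counts are arbitrary.

```java
// Sketch: discard a warm-up phase before timing, so the measured phase
// runs (mostly) JIT-compiled code rather than interpreted code.
public class WarmupDemo {
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i * 31L % 7;
        return sum;
    }

    public static void main(String[] args) {
        // Warm-up phase: results discarded, gives the JIT time to compile.
        for (int i = 0; i < 10_000; i++) work(1_000);

        // Measured phase, after compilation has (likely) happened.
        long start = System.nanoTime();
        long result = work(1_000_000);
        long elapsed = System.nanoTime() - start;

        System.out.println(result >= 0 && elapsed > 0);
    }
}
```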
In the
Thanks, Dan - so per thread, the max one can expect is 5,000 gets and puts
per second.
There will be some theoretical max due to some bottleneck in the 7-layer
OSI model or some external issue like the internal switching fabric.
Running on the local host actually eliminates the physical layer.
With the
Don't forget about the physical nature of a distributed call. The local
client get doesn't go through the switch and the remote client get goes
through the switch.
As a quick check, try a simple ping from one machine to the other and ping
localhost. Notice how much quicker the local ping is than the remote
What are the parameters to tune the socket buffer?
>
> On Wed, Nov 29, 2017 at 1:06 AM, Charlie Black wrote:
>
>> Socket Buffers - good catch.
>>
>> On Tue, Nov 28, 2017 at 11:33 AM Udo Kohlmeyer
>> wrote:
>>
>>> Another thing to keep in mind put
levant to a
> server to that server...
> But as everybody else has already stated... you'll have to test what your
> "optimal" batch size is AND... maybe tune your buffers to match
>
> --Udo
>
> On Tue, Nov 28, 2017 at 11:21 AM, Charlie Black wrote:
>
>
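One of the socket-buffer parameters referred to above is Geode's socket-buffer-size property in gemfire.properties. A sketch; the size shown is an example to tune and measure against, not a recommendation:

```properties
# gemfire.properties - enlarge the per-connection socket buffer.
# Value is in bytes; test different sizes under your real workload.
socket-buffer-size=1048576
```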
Amit Pandey
wrote:
> Thanks guys. Much appreciated.
>
> Charlie do you mean batches of say 50-100 for putAlls ?
>
> Regards
>
> On Tue, Nov 28, 2017 at 11:15 PM, Charlie Black wrote:
>
>> Both are correct and incorrect at the same time - it depends on
>> you
Both are correct and incorrect at the same time - it depends on
your application, domain model, workload and physical environment. I
would recommend adding some metrics and follow what Akihiro mentioned and
use what works for your environment.
As a side note: I would also recommend trying smalle
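The batching suggestion above (e.g. putAlls of 50-100 entries at a time) can be sketched as follows. For Geode the bulk writer would be Region.putAll; here a plain Consumer stands in so the sketch is self-contained, and the batch size is the parameter to experiment with.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch: split a large map into fixed-size chunks and hand each chunk
// to a bulk writer (for Geode, Region.putAll).
public class BatchPutAll {
    static <K, V> int putAllInBatches(Map<K, V> data, int batchSize,
                                      Consumer<Map<K, V>> bulkWriter) {
        Map<K, V> batch = new LinkedHashMap<>();
        int batches = 0;
        for (Map.Entry<K, V> e : data.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() == batchSize) {
                bulkWriter.accept(batch);   // flush a full batch
                batch = new LinkedHashMap<>();
                batches++;
            }
        }
        if (!batch.isEmpty()) {             // flush the final partial batch
            bulkWriter.accept(batch);
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        Map<Integer, String> data = new HashMap<>();
        for (int i = 0; i < 250; i++) data.put(i, "v" + i);

        Map<Integer, String> sink = new HashMap<>();
        int batches = putAllInBatches(data, 100, sink::putAll);

        // 250 entries in batches of 100 -> 100 + 100 + 50 = 3 batches.
        System.out.println(batches == 3 && sink.size() == 250);
    }
}
```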
ata-geode-100incubating-release-released>
---
Charlie Black
858.480.9722 | cbl...@pivotal.io
> On May 29, 2017, at 2:16 AM, Ratika Prasad wrote:
>
> We see the below exception while using Apache Geode; we are just getting
> started and have set up Apache Geode version 7.0 - and we
---
Charlie Black
858.480.9722 | cbl...@pivotal.io
> On Mar 6, 2017, at 10:42 AM, Amit Pandey wrote:
>
> Hey Jake,
>
> Thanks. I am a bit confused, so a put should be faster than putAll?
>
> John,
>
> I need to setup all data so that they can be queried. So I don'
Hello fellow Apache Geode users.
I thought I would share a project with the community where I showcase how to
add geospatial indexing to Geode. In this project I simulate vehicle sensors
reporting their geo location as they travel the California roadways. Geode
will be indexing the data
It feels like Java hasn't hit the threshold when the GC would kick in.
From a Geode perspective, if the entry count is zero, all of the references
to the objects stored in Geode should be removed and are waiting to be
reclaimed by Java.
If you would like to force a GC try the gfsh command "gc".