Hi,
I don’t think it is directly related to the discovery message itself. Even
before that you have long JVM pauses; it was probably a full GC, which suggests
you don’t have enough heap on the client. What do you do there? What kind of
operations do you run? I’d suggest collecting a heap dump and c
I see. Thank you. I am still a bit unsure about what the second value in
the tuple represents. Are these indeed the nanoseconds? Apparently a Python
datetime can have differing precisions (perhaps depending on platform)...
Kind regards,
Stéphane Thibaud
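If the second tuple element is indeed the leftover nanoseconds (an assumption consistent with Python's `datetime` stopping at microsecond precision), the split can be sketched in plain Python; `split_epoch_ns` is a hypothetical helper, not part of any client API:

```python
from datetime import datetime, timezone

def split_epoch_ns(epoch_ns):
    """Split a nanosecond-precision epoch timestamp into a
    (datetime, nanoseconds) pair: the datetime carries everything down to
    microseconds, the second element the remaining 0-999 nanoseconds."""
    seconds, sub_second_ns = divmod(epoch_ns, 1_000_000_000)
    micros, nanos = divmod(sub_second_ns, 1000)
    dt = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=micros)
    return dt, nanos
```

For example, `split_epoch_ns(5_123_456_789)` yields a datetime of 00:00:05.123456 UTC on 1970-01-01 plus 789 extra nanoseconds.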
Thu, May 16, 2019, 0:10 Igor Sapego :
> I be
Are you available for a verbal conversation? I would invite a solution
architect to a call to figure out whether Ignite works for your use case.
-
Denis
On Wed, May 15, 2019 at 10:17 PM Denis Magda wrote:
> Hi,
>
> But after seeing your explanation below I understand that option 2 above is
>
Hi,
But after seeing your explanation below I understand that option 2 above is
> not really the way Ignite is supposed to be used - even if it is on top of
> hadoop. Did I get that right?
In this configuration, Ignite will not be on top of Hadoop; it will be
close to it - deployed as separate s
Hello!
We are using it standalone, in order to create a distributed file
system for a (hopefully) fast cache of a few hours of data. We are not
using it in front of Hadoop, although there is some discussion of
eventually backing it with Cassandra, should we decide to keep more than a
few hou
Hi,
Whenever a client loses connection with the Ignite cluster, it throws an
OutOfMemory error.
Can you please see if you can explain this behavior?
The Ignite client had 1G of memory set for -Xms and -Xmx... and very few
clients active.
Now it is quite possible that ignite clients lose connectio
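As a hedged illustration of how the client heap could be raised above the 1G mentioned: the flags below are standard HotSpot options, but the sizes are only examples, not a recommendation:

```
# Give the client more headroom than 1G and capture evidence on OOM:
-Xms2g -Xmx2g
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp
```

The heap-dump flags cost nothing at runtime and make a post-mortem of the OutOfMemory error possible.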
I believe it's OK to pass a tuple for a timestamp in Python, but you
should also add a hint for the client to inform it that you are going to
store a timestamp value.
Take a look at the tests for an example: [1]
1 -
https://github.com/apache/ignite/blob/master/modules/platforms/python/tests/test_datatypes.py#L80
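A minimal sketch of what such a hint could look like with the pyignite thin client. The cache name, address, and values are hypothetical, and the snippet needs a running Ignite node plus the `pyignite` package, so the imports are kept inside the function:

```python
def store_timestamp_example():
    # Sketch only: requires pyignite and a running Ignite node on
    # 127.0.0.1:10800; cache name 'test_cache' is a placeholder.
    from datetime import datetime
    from pyignite import Client
    from pyignite.datatypes import TimestampObject

    client = Client()
    client.connect('127.0.0.1', 10800)
    cache = client.get_or_create_cache('test_cache')
    # The value is a (datetime, nanoseconds) tuple; the value hint tells
    # the client to serialize it as an Ignite Timestamp rather than as a
    # generic tuple.
    cache.put(1, (datetime(2019, 5, 16), 0), value_hint=TimestampObject)
```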
I created a JIRA for this, but I am curious whether this was intentional or
anyone has a workaround. Basically, when building the thin client as described,
no .pc file is created. This is something I'm going to need to make it easier
for others to link in this code.
https://issues.apache.org/jira/browse/IGNITE-1184
Gupabhi,
Memory-related metrics are available at the data region level. You can see how
much space is occupied by each data region on every node.
You can find available metrics here:
https://apacheignite.readme.io/docs/memory-metrics
Denis
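The per-region metrics mentioned above have to be enabled in the node configuration. A sketch of the relevant Spring XML fragment, assuming the standard `DataStorageConfiguration` bean layout (the region shown is the default one; named regions take the same property):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="metricsEnabled" value="true"/>
            </bean>
        </property>
    </bean>
</property>
```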
Wed, May 15, 2019, 01:21 gupabhi :
> Hello,
> I'm
Thank you. I will try to create a code snippet to reproduce it, but I remember
the following: when you do a 'select' query on a timestamp column with the
Python thin client, you get a tuple back. Because of that, I assumed that a
tuple also had to be written in an 'update' query.
Kind regards,
Stéphane Th
Stéphane,
Can you share a code snippet showing how you try to store the timestamp value?
Best Regards,
Igor
On Tue, May 14, 2019 at 7:23 PM Denis Mekhanikov
wrote:
> Stéphane,
>
> Could you provide the code that results in this exception?
> Do you try to insert the tuple as a single field via SQL? The
Hi Denis,
Thank you very much for a good response. Definitely helps. Hope I can ask
one follow-up question which I feel I did not make so clear from the
beginning:
The business has a very strong (non-negotiable) requirement that the data
warehouse be modeled with high normalisation. Th