Hi there,
I am running the Ignite service from an SSH terminal.
I use a command like this: ./ignite.sh -J-Xms8g -J-Xmx8g &
However, when I close the terminal, the Ignite server shuts down as well.
Is there any way to keep the service running without depending on the terminal
session?
Cheers,
Kevin
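One common fix (a sketch, not from the thread): detach the process from the terminal with nohup so it survives the SSH session ending. The log file name here is just an example.

```shell
# Start Ignite detached from the terminal; the -J flags pass JVM options
# through ignite.sh exactly as in the command above.
nohup ./ignite.sh -J-Xms8g -J-Xmx8g > ignite-node.log 2>&1 &
disown  # drop the job from the shell's job table so it gets no SIGHUP on logout
```

Running the node inside screen or tmux, or as a system service, achieves the same effect.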
If adding the ignite-log4j module slows you down, then you're definitely
producing too much log output. Can you double-check your settings?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/ignite-logging-not-captured-in-log-file-log4j-tp4334p4474.html
How to materialize it?
On 23-Apr-2016 02:53, "Alexey Goncharuk" wrote:
> Val,
>
> StringBuilder.append() is what javac generates for `"Read value: " + val`
> in the code.
>
> Ravi,
>
> Hibernate returned you a collection proxy for your map, however by the
> time control flow leaves the CacheStore
On Fri, Apr 22, 2016, 4:26 PM vkulichenko
wrote:
>
> Hi Matt,
>
> I'm confused. The locking does happen at the per-entry level; otherwise it's
> impossible to guarantee data consistency. Two concurrent updates or reads
> for the same key will wait for each other on this lock. But this should
I have an application that is using Ignite for a clustered cache. Each member
of the cache will have connections open with a third-party application. When a
cluster member stops, its connections must be re-established on other cluster
members.
I can do this manually if I have a way of detecting
Val,
StringBuilder.append() is what javac generates for `"Read value: " + val`
in the code.
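Alexey's point can be verified directly: for non-constant operands, javac (before the JDK 9 indified string concatenation) lowers `"Read value: " + val` into StringBuilder.append() calls. A minimal sketch:

```java
public class ConcatDemo {
    public static void main(String[] args) {
        int val = 42;
        // What the source code says:
        String concat = "Read value: " + val;
        // Roughly what javac generates for that expression:
        String built = new StringBuilder().append("Read value: ").append(val).toString();
        System.out.println(concat.equals(built)); // prints "true"
    }
}
```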
Ravi,
Hibernate returned you a collection proxy for your map, however by the time
control flow leaves the CacheStore your hibernate session gets closed and
this collection proxy cannot be used anymore. Yo
Binti,
It sounds like clients were not able to connect due to instability on the
server, and increasing networkTimeout gave them a better chance to join, but
most likely they could not work properly anyway. As I said, most likely this
was caused by memory issues.
Any particular reason you're start
Hi Ravi,
I'm confused by the trace. It shows that the executeTransaction() method called
StringBuilder.append(), but I don't see this in the code you provided. Is it
the full trace? If so, are you sure that you properly rebuilt the project?
-Val
--
View this message in context:
http://apache-igni
I had set it to the ERROR level.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/ignite-logging-not-captured-in-log-file-log4j-tp4334p4467.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Binti,
Did you set DEBUG level for org.apache.ignite? If so, this will produce too
much output and will definitely slow you down. You should use DEBUG only when
it's needed, and preferably only for the particular categories you need DEBUG
output from.
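For reference, a category-scoped setup in log4j 1.x properties format might look like this (a sketch; the appender name and the org.apache.ignite.internal.processors.cache category are example choices, not from the thread):

```properties
# Keep the root logger quiet so the node is not flooded with output
log4j.rootLogger=ERROR, file
# Enable DEBUG only for the particular category under investigation
log4j.logger.org.apache.ignite.internal.processors.cache=DEBUG
```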
-Val
--
View this message in context:
http:
Hi Matt,
I'm confused. The locking does happen at the per-entry level; otherwise it's
impossible to guarantee data consistency. Two concurrent updates or reads
for the same key will wait for each other on this lock. But this should not
cause performance degradation, unless you have very few keys and v
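As an illustration of what per-entry locking means (an analogy sketch, not Ignite's actual implementation): each key maps to its own monitor, so two operations on the same key serialize while different keys do not contend.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerEntryLockDemo {
    // One lock object per key, created lazily and shared by all accessors of that key.
    private final Map<String, Object> locks = new ConcurrentHashMap<>();

    Object lockFor(String key) {
        return locks.computeIfAbsent(key, k -> new Object());
    }

    public static void main(String[] args) {
        PerEntryLockDemo d = new PerEntryLockDemo();
        // Same key -> same monitor: concurrent updates serialize on it.
        System.out.println(d.lockFor("a") == d.lockFor("a")); // prints "true"
        // Different keys -> different monitors: no contention.
        System.out.println(d.lockFor("a") == d.lockFor("b")); // prints "false"
    }
}
```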
Hi Arthi,
In Scala you can use the Java API directly; they are fully compatible. C++
currently supports only the Data Grid, but not the Compute Grid.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Compute-Grid-API-in-C-Scala-tp4456p4464.html
Hello Arthi,
There is no Compute API in the C++ client yet, but we are planning to
add it soon.
Best Regards,
Igor
On Fri, Apr 22, 2016 at 4:50 PM, arthi
wrote:
> Hi Team,
>
> Is there a C++/Scala API for the compute grid?
>
> Thanks
> Arthi
>
>
>
> --
> View this message in context:
> http://apac
Hi,
Data will be persisted unless the cache update fails on the server (in this
case you will get an exception on the client side). To avoid out-of-memory
issues, you can use evictions [1], which will remove less frequently used
entries from memory. If you need an evicted entry again, it will be loaded fr
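For example, an LRU policy can be configured per cache in the Spring XML (a sketch; the class name is from the Ignite 1.x API, and the maxSize value is just an example):

```xml
<property name="evictionPolicy">
    <!-- Keep at most 1,000,000 entries in memory; least recently used are evicted first -->
    <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
        <property name="maxSize" value="1000000"/>
    </bean>
</property>
```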
Hi,
Generally speaking, continuous queries are designed to listen for data
updates, not for heavy computational tasks. Can you provide more information
about your use case? What are you trying to achieve?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Con
Val, adding the dependency worked. The clients also started logging at the
root level. We will have to modify the client log4j files to log only at the
ERROR level. Thanks, but we saw another issue now. When we tried to bring up
the grid (server nodes at ERROR level) and if old clients were already connected,
2. I see your point, but setting joinTimeout looks like a good solution. Does
it work for you?
joinTimeout was working earlier with 5 seconds; for some clients we had to
raise it, but eventually some clients could not connect at all with any
joinTimeout. We had to remove joinTimeout and add networ
Hi,
So I followed the suggestion that was made to run the server node in
Java instead of in .NET. The Java server node itself started fine and
loaded the data from SQL Server upon startup into the cache. Next I tried
to fire up the C++ node as a client to access the cache and ran into the
below
I'm trying to install and integrate Ignite with Spark under CDH by following
the recommendation at
https://apacheignite-fs.readme.io/docs/installation-deployment for
Standalone Deployment. I modified the existing $SPARK_HOME/conf/spark-env.sh
as recommended but the following command still can't rec
Hi Team,
Is there a C++/Scala API for the compute grid?
Thanks
Arthi
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Compute-Grid-API-in-C-Scala-tp4456.html
I'm assuming you're seeing a lot of threads that are BLOCKED waiting on
that locked GridLocalCacheEntry (<70d32489> in that example you pasted
above). Looking at the code, it looks like it does block on individual
cache entries (so two reads of the same key within the same JVM will
block). In your
BTW, I created 4 + 3 nodes on two servers.
For each node I ran a command like this: ./ignite.sh -J-Xmx8g -J-Xms8g
kind regards,
Kevin
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-data-size-problem-tp4449p4454.html
Hi Vladimir,
My table is very simple; it contains the following columns:
OrgId (varchar), oId (varchar), fnum (int), gId (number), msg (varchar),
num (number), date (date)
gId is the primary index.
---
Hi,
It looks like you have relatively small entries (somewhere around 60-70
bytes per key-value pair). Ignite also has some intrinsic overhead, which
could be more than the actual data in this case. However, I certainly would
not expect it to exceed 80GB.
Could you please share your key and
Hi,
Could you please explain why you think the thread is blocked? I see
it is in the RUNNABLE state.
Vladimir.
On Fri, Apr 22, 2016 at 2:41 AM, ccanning wrote:
> We seem to be having some serious performance issues after adding an Apache
> Ignite local cache to our APIs. Looking at a heap
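The distinction Vladimir is drawing can be reproduced in isolation: a thread executing code shows as RUNNABLE in a dump, while one waiting for a monitor shows as BLOCKED. A minimal sketch (not from the thread):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        final Object lock = new Object();
        Thread holder = new Thread(() -> {
            synchronized (lock) {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
            }
        });
        holder.start();
        Thread.sleep(100); // give the holder time to acquire the monitor
        Thread waiter = new Thread(() -> {
            synchronized (lock) { /* cannot enter until holder releases */ }
        });
        waiter.start();
        Thread.sleep(100); // give the waiter time to block on the monitor
        System.out.println(waiter.getState()); // BLOCKED, not RUNNABLE
        holder.join();
        waiter.join();
    }
}
```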
Wait,
I don't understand.
I am writing data with write-through.
What happens when memory runs out?
Is this data not persisted in Postgres?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/What-will-happend-in-case-of-run-out-of-memory-tp4446p4450.html
Hi there,
I am trying to load a table with 47 million records, in which the data size is
less than 3GB. However, when I load it into memory (two VMs with 48 + 32 =
80GB), it still crashes due to insufficient memory. This problem
occurred when I instantiated 6 + 4 nodes.
Why the cache model
Hi,
If OOM happens while persisting data, for example in the JDBC driver, the
operation will fail and the update will be lost.
2016-04-22 12:15 GMT+03:00 tomk :
> Hello,
> I am wondering what will happen in the case of out of memory.
> I mean write-through mode. Will my data be lost? I assume that it is always
> saved
server side
--
At the server side I loaded my data (Person objects) and also configured my
database:
cache.loadCache(null, 100_000);
cache.size(); // returns the number of entries in the cache
client side
--
public static void main(String[] args) {
    Ignition.setClientMode(true
Hello,
I am wondering what will happen in the case of out of memory.
I mean write-through mode. Will my data be lost? I assume that it is always
saved into the underlying database.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/What-will-happend-in-case-of-run-out-of-memory
Hi,
You should not use write-behind mode in such a scenario.
Concerning the performance issues: are you using a connection pooling library?
If not, I recommend using a library like c3p0 for connection pooling
and prepared statement caching.
2016-04-21 18:02 GMT+03:00 vetko :
> Hi All,
>
> I am exp