Re: different application code on two ignite nodes on same machine

2016-03-19 Thread vkulichenko
Hi Saurabh,

I'm not sure I completely understand the use case. Can you give more
details? If the application node creates the objects, how is it possible that
it doesn't have the classes?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/different-application-code-on-two-ignite-nodes-on-same-machine-tp3558p3564.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: about mr accelerator question.

2016-03-19 Thread Vladimir Ozerov
Hi,

The fact that you can work with 29G of data with only 8G of memory might be
explained by the following:
1) Your job doesn't use all the data from the cluster and hence only part of
it is cached. This is the most likely case.
2) You have an eviction policy configured for the IGFS data cache (see the
sketch below).
3) Or maybe you use off-heap memory.
Please provide the full XML configuration and we will be able to understand
it.
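For reference, a minimal sketch of what item 2 could look like in code, assuming
Ignite 1.5 APIs and an illustrative cache name "igfs-data"; this is not taken
from the configuration in this thread. With such a policy only a bounded number
of blocks stays on-heap:

import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgfsDataCacheEviction {
    public static CacheConfiguration<Object, Object> dataCacheCfg() {
        CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>("igfs-data");

        // Keep at most ~100000 entries on-heap; least recently used blocks are evicted.
        LruEvictionPolicy<Object, Object> plc = new LruEvictionPolicy<>();
        plc.setMaxSize(100000);

        cfg.setEvictionPolicy(plc);

        return cfg;
    }
}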

Anyway, your initial question was about out-of-memory errors. Could you provide
the exact error message? Is it about heap memory or maybe PermGen?

As for execution time, this depends on your workload. If there are lots of map
tasks and very active work with data, you will see an improvement in speed. If
there are lots of operations on the file system (e.g. mkdirs, move, etc.) and
very few map jobs, chances are there will be no speedup at all. Provide
more details on the job you test and the type of data you use and we will be
able to give you more ideas on what to do.

Vladimir.

On Wed, Mar 16, 2016 at 10:53 AM, l...@runstone.com 
wrote:

> Hi,
> I previously posted a problem in the forum; the link is:
> http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tc3502.html
> I will follow your suggestions on the Ignite cluster, but I am sorry that
> there are still some confusions.
>
> On the single node, with 29G of data, my Ignite configuration is almost the
> default; I just modified ignite.sh
> as follows:
> if [ -z "$JVM_OPTS" ] ; then
> JVM_OPTS="-Xms8g -Xmx8g -server -XX:+AggressiveOpts -XX:MaxPermSize=8g"
> fi
>
> and modified default-config.xml. [The XML bean definitions were stripped by
> the mail archive; the surviving fragments show a bean of class
> org.apache.ignite.configuration.FileSystemConfiguration with parent
> "igfsCfgBase", an org.apache.ignite.igfs.IgfsIpcEndpointConfiguration, and an
> org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem with a URI of
> hdfs://localhost:9000, a cfgPath property, and a client-user-name value.]
>
> The result is very cool.
>
> The on-heap memory is just 8G but the data is 29G. It works well. Why?
>
> Another problem: I started up another Ignite node on the same server, so I
> have 16G of heap with two Ignite nodes. My job finished after 7 minutes, the
> same as on the single node. Why does the performance not improve
> when I add memory?
>
> thanks
> --
>
>


Re: JMS Data Streamer: Not writing data to cache when message received by Streamer

2016-03-19 Thread techbysample
That worked.  Thank you very much.

Would you please elaborate on how this property is used?
Does it imply that by default autoFlushFrequency is disabled and the
DataStreamer will NOT write data to the cache unless it is explicitly set?

I basically modeled my JMSStreamer after the following documentation:
https://apacheignite.readme.io/docs/jms-data-streamer
The docs do not explicitly mention setting 'autoFlushFrequency'.


Here is the definition of dataStreamer.autoFlushFrequency from the Apache
Ignite docs:


"Sets automatic flush frequency. Essentially, this is the time after which
the streamer will make an attempt to submit all data added so far to remote
nodes. Note that there is no guarantee that data will be delivered after
this concrete attempt (e.g., it can fail when topology is changing), but it
won't be lost anyway.

If set to 0, automatic flush is disabled.

Automatic flush is disabled by default (default value is 0)."
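For illustration, a minimal sketch of the two ways to get streamed data
delivered, given an Ignite instance `ignite` and a String/String cache named
"words" (not taken from the thread):

IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("words");

// Option 1: enable periodic automatic flush (disabled by default).
streamer.autoFlushFrequency(1000); // flush roughly every second

streamer.addData("key", "value");

// Option 2: keep automatic flush disabled and flush explicitly,
// e.g. after each batch; close() also flushes any remaining data.
streamer.flush();
streamer.close();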


Please advise.

Thank you






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/JMS-Data-Streamer-Not-writing-data-to-cache-when-message-received-by-Streamer-tp3590p3592.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: websession clustering problem

2016-03-19 Thread boulik
Hi Val. 
I'm a newbie with Apache Ignite, and after reading the OptimizedMarshaller
javadoc I was happy that classes do not have to implement the Serializable
interface.

I haven't found which class is referencing "GeneratedMethodAccessor246";
it's not intentional. After reading this blog
(https://blogs.oracle.com/buck/entry/inflation_system_properties), these
generated objects should be the ones used by reflection.

I did some tests marshalling the objects used in the HTTP session with
different marshallers. With BinaryMarshaller I have a problem putting an
object into the cache. The stack trace is here:

mar 16, 2016 10:33:36 AM org.apache.catalina.core.StandardWrapperValve
invoke
SEVERE: Servlet.service() for servlet [jsp] in context with path [/ais]
threw exception [javax.cache.CacheException: class
org.apache.ignite.IgniteCheckedException:
org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to
ais.bo.ss.sp.SPAplikacnyModul] with root cause
java.lang.ClassCastException:
org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to
ais.bo.ss.sp.SPAplikacnyModul
at ais.bo.ss.sp.SPAplikacnyModul$1.compare(SPAplikacnyModul.java:1)
at java.util.TreeMap.compare(TreeMap.java:1291)
at java.util.TreeMap.put(TreeMap.java:538)
at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:495)
at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:856)
at
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheObject(CacheObjectBinaryProcessorImpl.java:801)
at
org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheObject(GridCacheContext.java:1807)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture$UpdateState.mapSingleUpdate(GridNearAtomicUpdateFuture.java:1176)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture$UpdateState.map(GridNearAtomicUpdateFuture.java:868)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:417)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:283)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$21.apply(GridDhtAtomicCache.java:1006)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$21.apply(GridDhtAtomicCache.java:1004)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:737)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsync0(GridDhtAtomicCache.java:1004)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:465)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAsync(GridCacheAdapter.java:2491)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putIfAbsentAsync(GridDhtAtomicCache.java:514)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putIfAbsent(GridDhtAtomicCache.java:507)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.putIfAbsent(IgniteCacheProxy.java:1199)
at test.TestIgnite.lambda$4(TestIgnite.java:118)
at 
java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1683)
at
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at test.TestIgnite.testCezJsp(TestIgnite.java:116)
at org.apache.jsp.testIgnite2_jsp._jspService(testIgnite2_jsp.java:140)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:439)
at 
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:395)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:339)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter

Re: Reducing memory footprint

2016-03-19 Thread alexGalushka
Val,

What do you mean by the question "Do you have a node embedded in the
application"?
Our real-world application will be represented by a set of nodes which are
Docker containers running inside the host VM (CentOS), which processes a
high volume of HTTP requests.

--Alexander.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Reducing-memory-footprint-tp3494p3583.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: JMS Data Streamer: Not writing data to cache when message received by Streamer

2016-03-19 Thread Raul Kripalani
Hi,

I see you're using clientMode=true.

Could you try setting a low autoFlushFrequency on the streamer, say of
value 1 (ms)?

dataStreamer.autoFlushFrequency(1);

Cheers,
Raúl.

JMS Data Streamer: Not writing data to cache when message received by Streamer

2016-03-19 Thread techbysample
Support,

Given the following:

Background:
JBoss ESB 4.12(JMS messaging)
Apache Ignite 1.5 final

1. To begin, I am starting 2 server nodes with the same configuration that
comes with Ignite (i.e.:
..\apache-ignite-fabric-1.5.0.final-bin\\examples\\config\\example-ignite.xml)
2. Next I start JBoss ESB and deploy a Topic called
"/topic/quickstart_jmstopic_topic"
3. Next, I execute JMSStreamWords.
4. Finally, I send a message to "/topic/quickstart_jmstopic_topic"
5. No data is written to cache.

Problem:

I am attempting to use the JMS Data Streamer to inject data into an Ignite
cache.
I am using a JMS Topic as the destination.

When I debug the JMSStreamWords class, I can see that the message is
received and the 'answer.put(tokens[0], tokens[1]);' statement completes
without exceptions/errors.

Any ideas on why the data is not written to the cache as expected?
I would appreciate any suggestions for resolving this issue.

Please advise.

Thanks in advance.
Source Code:

package netmille.examples.streaming.jms;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import javax.jms.JMSException;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.stream.jms11.JmsStreamer;
import org.apache.ignite.stream.jms11.MessageTransformer;

import netmille.examples.ExamplesUtils;


public class JMSStreamWords {

private static Ignite ignite = null;
private static JmsStreamer<TextMessage, String, String> jmsStreamer =
new JmsStreamer<>();
 
/**
 * Starts JMS words streaming.
 *
 * @param args Command line arguments (none required).
 * @throws Exception If failed.
 */
public static void main(String[] args) throws Exception {
// Mark this cluster member as client.
Ignition.setClientMode(true);
  
try {

ignite =
Ignition.start("C:\\netmilleRoot\\tools\\apache-ignite-fabric-1.5.0.final-bin\\examples\\config\\example-ignite.xml");
if (!ExamplesUtils.hasServerNodes(ignite))
return;

IgniteCache<String, String> stmCache =
ignite.getOrCreateCache(CacheConfig.wordCache());

IgniteDataStreamer<String, String> dataStreamer =
ignite.dataStreamer("words");
dataStreamer.allowOverwrite(true);

Properties properties1 = new Properties();
properties1.put(Context.INITIAL_CONTEXT_FACTORY,
"org.jnp.interfaces.NamingContextFactory");
properties1.put(Context.URL_PKG_PREFIXES,
"org.jboss.naming:org.jnp.interfaces");
properties1.put(Context.PROVIDER_URL, "jnp://127.0.0.1:1199");
InitialContext iniCtx = new InitialContext(properties1);

TopicConnectionFactory tcf = (TopicConnectionFactory)
iniCtx.lookup("ConnectionFactory");
  
 // create a JMS streamer and plug the data streamer into it

jmsStreamer.setIgnite(ignite);
jmsStreamer.setStreamer(dataStreamer);
jmsStreamer.setConnectionFactory(tcf);
Topic topic = (Topic)
iniCtx.lookup("/topic/quickstart_jmstopic_topic");

jmsStreamer.setDestination(topic);
jmsStreamer.setTransacted(true);
jmsStreamer.setTransformer(new MessageTransformer<TextMessage, String, String>() {
@Override
public Map<String, String> apply(TextMessage message) {
final Map<String, String> answer = new HashMap<>();
String text;
try {
text = message.getText();
}
catch (JMSException e) {
System.out.println("Could not parse message." + e);
return Collections.emptyMap();
}
for (String s : text.split("\n")) {
String[] tokens = s.split(",");
answer.put(tokens[0], tokens[1]);
System.out.println("added " + tokens[0]);
}
return answer;
}
});
jmsStreamer.start();
}
catch(Exception e)
{
 System.out.println(e);
}
}
}




package netmille.examples.streaming.jms;


import org.apache.ignite.configuration.CacheConfiguration;

public class CacheConfig {
/**
 * Configure streaming cache.
 */
public static CacheConfiguration<String, String> wordCache() {
CacheConfiguration<String, String> cfg = new CacheConfiguration<>("words");

// Index all words streamed into cache.
cfg.setIndexedTypes(String.class, String.class);

return cfg;
}
}

Re: websession clustering problem

2016-03-19 Thread vkulichenko
Hi Slavo,

ClassCastException is caused by this issue:
https://issues.apache.org/jira/browse/IGNITE-2852

Essentially, you have a TreeMap somewhere and it can't be converted to a
binary object. As a workaround you can create a wrapper class that will
contain this map as a field. Can you try this and see if it helps?
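A minimal sketch of such a wrapper, assuming String keys and values (adjust the
types to whatever you keep in the session):

import java.util.TreeMap;

public class TreeMapWrapper implements java.io.Serializable {
    // Holding the TreeMap as a field avoids marshalling it as a top-level binary object.
    private TreeMap<String, String> map = new TreeMap<>();

    public TreeMap<String, String> getMap() { return map; }

    public void setMap(TreeMap<String, String> map) { this.map = map; }
}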

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/websession-clustering-problem-tp3437p3547.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Server node info when client disconnects

2016-03-19 Thread Alper Tekinalp
Hi.

If I have 2 servers (serv1, serv2) and a client (cli1), and I connect to
serv1 with a static IP configuration, then when I close serv1 I get an
EVT_CLIENT_NODE_DISCONNECTED event on the client side.
If I close serv2 instead of serv1, this time I get an EVT_NODE_LEFT event.

From my tests it seems that I should provide all server IPs to the client, so
that the client stays in the cluster when a server is closed.

I have a dynamic environment where servers can start and stop, and I want
the client to stay in the cluster as long as there are some servers, even if
those servers were started after the client connected to the cluster. Is
there a way to accomplish that?

Now it works like this:
- start server1
- start client with the IP of server1
- start server2
- stop server1
What happens is that the client disconnects even though there is still a server in the cluster.

What I want is:
- start server1
- start client with the IP of server1
- start server2
- stop server1
The client must stay in the cluster and be aware of server2.

On Fri, Mar 18, 2016 at 9:35 AM, Alper Tekinalp  wrote:
> Hi Val.
>
> When I listen discovery events from server I get EVT_NODE_LEFT events
> even if client or server disconnects. If I listen events from client
> side if I have 1 server 1 client when I close server I get
> EVT_CLIENT_NODE_DISCONNECTED event and that event not contains any
> information about server node.
>
> On Thu, Mar 17, 2016 at 11:10 PM, vkulichenko
>  wrote:
>> Hi Alper,
>>
>> You can listen for EVT_NODE_FAILED and EVT_NODE_LEFT to get notifications
>> about nodes that leave topology. Will this work for you?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: 
>> http://apache-ignite-users.70518.x6.nabble.com/Re-Server-node-info-when-client-disconnects-tp3562p3565.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>
> --
> Alper Tekinalp
>
> Software Developer
>
> Evam Stream Analytics
>
> Atatürk Mah.Turgut Özal Bulvarı Gardenya 1 Plaza 42/B K:4 Ataşehir / İSTANBUL
>
> Tlf : +90216 688 45 46 Fax : +90216 688 45 47 Gsm:+90 536 222 76 01
>
> www.evam.com



-- 
Alper Tekinalp

Software Developer

Evam Stream Analytics

Atatürk Mah.Turgut Özal Bulvarı Gardenya 1 Plaza 42/B K:4 Ataşehir / İSTANBUL

Tlf : +90216 688 45 46 Fax : +90216 688 45 47 Gsm:+90 536 222 76 01

www.evam.com


Re: org.apache.ignite.IgniteCheckedException: Failed to register query type: TypeDescriptor

2016-03-19 Thread Vasiliy Sisko
Also I noticed that in the generated index declaration the database field name
was used instead of the Java name.

Please change the index field names to the Java names.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/org-apache-ignite-IgniteCheckedException-Failed-to-register-query-type-TypeDescriptor-tp3447p3554.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: org.apache.ignite.IgniteCheckedException: Failed to register query type: TypeDescriptor

2016-03-19 Thread Vasiliy Sisko
I am always glad to help you.

Writing to the cache with an SQL string is not implemented yet, but an issue
for its implementation already exists.
You can watch this issue:
https://issues.apache.org/jira/browse/IGNITE-2294



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/org-apache-ignite-IgniteCheckedException-Failed-to-register-query-type-TypeDescriptor-tp3447p3578.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Semaphore blocking on tryAcquire() while holding a cache-lock

2016-03-19 Thread Dmitriy Setrakyan
On Fri, Mar 18, 2016 at 7:57 AM, Vladisav Jelisavcic 
wrote:

> Hi Yakov,
>
> yes, thanks for the comments, I think everything should be ok now,
> please review the PR and tell me if you think anything else is needed.
>
> Once ignite-642 is merged into master,
> I'll submit a PR for IgniteReadWriteLock (hopefully on time for 1.6.
> release).
>

This would be awesome :)


>
> Best regards,
> Vladisav
>
>
>
> On Fri, Mar 18, 2016 at 11:56 AM, Yakov Zhdanov 
> wrote:
>
> > Vlad, did you have a chance to review my latest comments?
> >
> > Thanks!
> > --
> > Yakov Zhdanov, Director R&D
> > *GridGain Systems*
> > www.gridgain.com
> >
> > 2016-03-06 12:21 GMT+03:00 Yakov Zhdanov :
> >
> > > Vlad and all (esp Val and Anton V.),
> > >
> > > I reviewed the PR. My comments are in the ticket.
> > >
> > > Anton V. there is a question regarding optimized-classnames.properties.
> > > Can you please respond in ticket?
> > >
> > >
> > > --Yakov
> > >
> > > 2016-02-29 16:00 GMT+06:00 Yakov Zhdanov :
> > >
> > >> Vlad, that's great! I will take a look this week. Reassigning ticket
> to
> > >> myself.
> > >>
> > >> --Yakov
> > >>
> > >> 2016-02-26 18:37 GMT+03:00 Vladisav Jelisavcic :
> > >>
> > >>> Hi,
> > >>>
> > >>> i recently implemented distributed ReentrantLock - IGNITE-642,
> > >>> i made a pull request, so hopefully this could be added to the next
> > >>> release.
> > >>>
> > >>> Best regards,
> > >>> Vladisav
> > >>>
> > >>> On Thu, Feb 18, 2016 at 10:49 AM, Alexey Goncharuk <
> > >>> alexey.goncha...@gmail.com> wrote:
> > >>>
> > >>> > Folks,
> > >>> >
> > >>> > The current implementation of IgniteCache.lock(key).lock() has the
> > same
> > >>> > semantics as the transactional locks - cache topology cannot be
> > changed
> > >>> > while there exists an ongoing transaction or an explicit lock is
> > held.
> > >>> The
> > >>> > restriction for transactions is quite fundamental, the lock() issue
> > >>> can be
> > >>> > fixed if we re-implement locking the same way IgniteSemaphore
> > currently
> > >>> > works.
> > >>> >
> > >>> > As for the "Failed to find semaphore with the given name" message,
> my
> > >>> first
> > >>> > guess is that DataStructures were configured with 1 backups which
> led
> > >>> to
> > >>> > the data loss when two nodes were stopped. Mario, can you please
> > >>> re-test
> > >>> > your semaphore scenario with 2 backups configured for data
> > structures?
> > >>> > From my side, I can also take a look at the semaphore issue when
> I'm
> > >>> done
> > >>> > with IGNITE-2610.
> > >>> >
> > >>>
> > >>
> > >>
> > >
> >
>


Re: Ignite still support Scala API

2016-03-19 Thread vkulichenko
Hi Timothy,

I believe you're referring to the Scalar:
https://ignite.apache.org/releases/1.5.0.final/scaladoc/scalar/#org.apache.ignite.scalar.scalar$

It's still in the project, but it's a bit of a legacy and I would not recommend
using it unless there is no other way. In the vast majority (if not all)
of cases our Java API can be used directly from Scala without any
limitations. If something doesn't work for you, please let us know.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-still-support-Scala-API-tp3567p3570.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Server node info when client disconnects

2016-03-19 Thread vkulichenko
Hi Alper,

You can listen for EVT_NODE_FAILED and EVT_NODE_LEFT to get notifications
about nodes that leave topology. Will this work for you?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Re-Server-node-info-when-client-disconnects-tp3562p3565.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How can I achieve performance

2016-03-19 Thread Andrew
Thank you for your reply.
I'll try it.

Sincerely,
Andrew.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-can-I-achieve-performance-tp3535p3553.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Reducing memory footprint

2016-03-19 Thread alexGalushka
Val,

Good point on the clients vs. server nodes; we might not necessarily need
all of the 16 nodes as servers. As I understand from the Ignite doc on
server vs. client nodes, the major
difference between server and client nodes is that client nodes will not
participate in distributed caches. Would it be possible to reconfigure a
node on the fly to be a client or a server based on a predicate, like you
would do with node groups? The memory consumption requirements apply to each
individual server node.

--Alexander.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Reducing-memory-footprint-tp3494p3541.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Nodes not rebalancing evenly

2016-03-19 Thread kberthelot
I've created a partitioned cache with the following configuration: 

cacheConfiguration.setCacheMode(CacheMode.PARTITIONED); 
cacheConfiguration.setName(CACHE_NAME); 
cacheConfiguration.setRebalanceMode(CacheRebalanceMode.SYNC); 

When I bring up a new node some of the keys get "rebalanced" to the new
node, but the nodes do not end up balanced evenly (the new node often ends
up with only one or very few entries). Should the default behavior be that
the entries are evenly sharded across all nodes, or is there a setting that
forces this behavior? I would expect all the nodes to have close to the same
number of entries after rebalancing.
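For what it's worth, a sketch (assuming the default RendezvousAffinityFunction,
an Ignite instance `ignite`, the same CACHE_NAME and the usual Ignite imports)
for checking how partitions and primary entries are actually distributed; with
a small data set the partition counts can be even while entry counts look skewed:

Affinity<Object> aff = ignite.affinity(CACHE_NAME);

for (ClusterNode node : ignite.cluster().forServers().nodes()) {
    // Number of partitions for which this node is the primary owner.
    int parts = aff.primaryPartitions(node).length;
    System.out.println("Node " + node.id() + " owns " + parts + " primary partitions");
}

// Entries stored on the local node as primary copies.
System.out.println("Local primary size: "
    + ignite.cache(CACHE_NAME).localSize(CachePeekMode.PRIMARY));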



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Nodes-not-rebalancing-evenly-tp3589.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Reducing memory footprint

2016-03-19 Thread alexGalushka
Val,

Yes you are correct. We have a Vert.x instance that embeds each node.

--Alexander.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Reducing-memory-footprint-tp3494p3588.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to deploy CacheJdbcPojoStore on a cluster of "non-embedded" Ignite node?

2016-03-19 Thread Vasiliy Sisko
Hello. 

Your server node's classpath should contain the store factory class.

To run the example, follow these steps:
1. Execute the steps from the file IGNITE_HOME/examples/schema-import/README.txt
2. Build the example with the command “mvn clean package” in the
“$IGNITE_HOME/examples/schema-import” directory
3. Copy the
“$IGNITE_HOME/examples/schema-import/target/ignite-schema-import-demo-1.5.8.jar”
file to the “$IGNITE_HOME/libs” folder
4. Run a node in the $IGNITE_HOME folder with the command “./bin/ignite.sh
examples/config/example-ignite.xml”
5. Run the Demo example from your IDE



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-deploy-CacheJdbcPojoStore-on-a-cluster-of-non-embedded-Ignite-node-tp3555p3557.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Server node info when client disconnects

2016-03-19 Thread Alper Tekinalp
Hi Val.

When I listen for discovery events from the server I get EVT_NODE_LEFT events
whether a client or a server disconnects. If I listen for events on the client
side, with 1 server and 1 client, then when I close the server I get an
EVT_CLIENT_NODE_DISCONNECTED event, and that event does not contain any
information about the server node.

On Thu, Mar 17, 2016 at 11:10 PM, vkulichenko
 wrote:
> Hi Alper,
>
> You can listen for EVT_NODE_FAILED and EVT_NODE_LEFT to get notifications
> about nodes that leave topology. Will this work for you?
>
> -Val
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Re-Server-node-info-when-client-disconnects-tp3562p3565.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



-- 
Alper Tekinalp

Software Developer

Evam Stream Analytics

Atatürk Mah.Turgut Özal Bulvarı Gardenya 1 Plaza 42/B K:4 Ataşehir / İSTANBUL

Tlf : +90216 688 45 46 Fax : +90216 688 45 47 Gsm:+90 536 222 76 01

www.evam.com


Re: How to deploy CacheJdbcPojoStore on a cluster of "non-embedded" Ignite node?

2016-03-19 Thread tomli
Thanks Sisko, it works.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-deploy-CacheJdbcPojoStore-on-a-cluster-of-non-embedded-Ignite-node-tp3555p3559.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Hibernate loadcache error?

2016-03-19 Thread vkulichenko
Ravi,

Is it failing on the server node? How do you start it? Did you enable
ignite-hibernate module?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Hibernate-loadcache-error-tp3534p3544.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Semaphore blocking on tryAcquire() while holding a cache-lock

2016-03-19 Thread Yakov Zhdanov
Vlad, did you have a chance to review my latest comments?

Thanks!
--
Yakov Zhdanov, Director R&D
*GridGain Systems*
www.gridgain.com

2016-03-06 12:21 GMT+03:00 Yakov Zhdanov :

> Vlad and all (esp Val and Anton V.),
>
> I reviewed the PR. My comments are in the ticket.
>
> Anton V. there is a question regarding optimized-classnames.properties.
> Can you please respond in ticket?
>
>
> --Yakov
>
> 2016-02-29 16:00 GMT+06:00 Yakov Zhdanov :
>
>> Vlad, that's great! I will take a look this week. Reassigning ticket to
>> myself.
>>
>> --Yakov
>>
>> 2016-02-26 18:37 GMT+03:00 Vladisav Jelisavcic :
>>
>>> Hi,
>>>
>>> i recently implemented distributed ReentrantLock - IGNITE-642,
>>> i made a pull request, so hopefully this could be added to the next
>>> release.
>>>
>>> Best regards,
>>> Vladisav
>>>
>>> On Thu, Feb 18, 2016 at 10:49 AM, Alexey Goncharuk <
>>> alexey.goncha...@gmail.com> wrote:
>>>
>>> > Folks,
>>> >
>>> > The current implementation of IgniteCache.lock(key).lock() has the same
>>> > semantics as the transactional locks - cache topology cannot be changed
>>> > while there exists an ongoing transaction or an explicit lock is held.
>>> The
>>> > restriction for transactions is quite fundamental, the lock() issue
>>> can be
>>> > fixed if we re-implement locking the same way IgniteSemaphore currently
>>> > works.
>>> >
>>> > As for the "Failed to find semaphore with the given name" message, my
>>> first
>>> > guess is that DataStructures were configured with 1 backups which led
>>> to
>>> > the data loss when two nodes were stopped. Mario, can you please
>>> re-test
>>> > your semaphore scenario with 2 backups configured for data structures?
>>> > From my side, I can also take a look at the semaphore issue when I'm
>>> done
>>> > with IGNITE-2610.
>>> >
>>>
>>
>>
>


How to deploy CacheJdbcPojoStore on a cluster of "non-embedded" Ignite node?

2016-03-19 Thread tomli
Hi All,

I want to deploy a CacheJdbcPojoStore on a cluster formed by some
"non-embedded" Ignite nodes. The closest thing I can find in
"https://apacheignite.readme.io/" is
"https://apacheignite.readme.io/docs/automatic-persistence".

By following the instructions, I am able to run the Demo application, which
starts an "embedded" Ignite node with a CacheJdbcPojoStore that connects to an
H2 database.

I then tried to start a "non-embedded" Ignite node, hoping it could form a
cluster with the "embedded" node, but it failed to start with the following
log message:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
MaxPermSize=256m; support was removed in 8.0
[12:33:14]__  
[12:33:14]   /  _/ ___/ |/ /  _/_  __/ __/
[12:33:14]  _/ // (7 7// /  / / / _/
[12:33:14] /___/\___/_/|_/___/ /_/ /___/
[12:33:14]
[12:33:14] ver. 1.5.0-final#20151229-sha1:f1f8cda2
[12:33:14] 2015 Copyright(C) Apache Software Foundation
[12:33:14]
[12:33:14] Ignite documentation: http://ignite.apache.org
[12:33:14]
[12:33:14] Quiet mode.
[12:33:14]   ^-- Logging to file
'C:\work\apache-ignite-fabric-1.5.0.final-bin\work\log\ignite-6c54d7fb.0.log'
[12:33:14]   ^-- To see **FULL** console log here add
-DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[12:33:14]
[12:33:14] OS: Windows 7 6.1 amd64
[12:33:14] VM information: Java(TM) SE Runtime Environment 1.8.0_25-b18
Oracle Corporation Java HotSpot(TM) 64-Bit Serve
r VM 25.25-b02
[12:33:16] Configured plugins:
[12:33:16]   ^-- None
[12:33:16]
[12:33:17] Security status [authentication=off, tls/ssl=off]
[12:33:19,008][SEVERE][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi]
Failed to unmarshal discovery data for component: 1
class org.apache.ignite.IgniteCheckedException: Failed to find class
with given class loader for unmarshalling (make sure same versions of all
classes are available on all nodes or enable peer-class-loading):
sun.misc.Launcher$AppClassLoader@c387f44
at
org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal(JdkMarshaller.java:108)
at
org.apache.ignite.marshaller.AbstractMarshaller.unmarshal(AbstractMarshaller.java:78)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1717)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:3683)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2252)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:5784)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2161)
at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: java.lang.ClassNotFoundException:
org.apache.ignite.schema.Demo$H2DemoStoreFactory
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:344)
at
org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8172)
at
org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.resolveClass(JdkMarshallerObjectInputStream.java:54)
at
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at
java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at java.util.ArrayList.readObject(ArrayList.java:791)
   

Ignite still support Scala API

2016-03-19 Thread timothy
Does Ignite still support the Scala API? I cannot find its API documentation on
the Ignite official website.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-still-support-Scala-API-tp3567.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cross cache query cannot query data

2016-03-19 Thread vkulichenko
Hi,

It's really hard to understand your issue. Please refer to CacheQueryExample
[1], which shows how to execute cross-cache queries.

Most likely you get an empty result because persons and organizations are not
properly collocated. Note how the AffinityKey class is used as a key for Person
to make sure that all persons that belong to a particular organization are
stored on the same node where this organization is stored.

[1]
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryExample.java
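A minimal sketch of the collocation part, assuming Long ids, a cache named
"persons" and a hypothetical Person(id, orgId, name) constructor:

IgniteCache<AffinityKey<Long>, Person> personCache = ignite.cache("persons");

Long personId = 1L;
Long orgId = 10L;

// The second argument is the affinity key: this person is stored on the same
// node as the organization whose key is orgId.
personCache.put(new AffinityKey<>(personId, orgId), new Person(personId, orgId, "John"));

The organization itself is stored under its plain orgId key, so both entries map
to the same partition.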

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cross-cache-query-cannot-query-data-tp3556p3563.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Two nodes in one JVM/Tomcat

2016-03-19 Thread vkulichenko
Hi,


hemanta wrote
> I hit on subscription button but I am not getting any email. I tried
> couple of times.

Can you try another email?


hemanta wrote
> Thanks for your reply. I would like to share the data between applications
> but we have two web applications running in same tomcat. They have two
> web.xml files. Right now I have two ignite configuration files that I am
> loading from each web.xml. It works but in this approach data in
> duplicated between two applications which is increasing memory footprint. 

You need to start a single node per app server, not per application. In
Tomcat you can implement a LifecycleListener [1] and use the Ignition.start()
method to start the node within this implementation; a sketch is below. Will
this work for you?

[1] https://tomcat.apache.org/tomcat-9.0-doc/config/listeners.html
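A minimal sketch of such a listener, assuming a hypothetical config path
/opt/ignite/config/ignite.xml; register it with a <Listener> element in
server.xml:

import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleEvent;
import org.apache.catalina.LifecycleListener;
import org.apache.ignite.Ignition;

public class IgniteLifecycleListener implements LifecycleListener {
    @Override
    public void lifecycleEvent(LifecycleEvent event) {
        // Start one Ignite node when Tomcat starts; both web applications share it.
        if (Lifecycle.AFTER_START_EVENT.equals(event.getType()))
            Ignition.start("/opt/ignite/config/ignite.xml");
        // Stop the node when Tomcat shuts down.
        else if (Lifecycle.BEFORE_STOP_EVENT.equals(event.getType()))
            Ignition.stop(true);
    }
}

Each web application can then obtain the shared node via Ignition.ignite()
instead of starting its own.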

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Two-nodes-in-one-JVM-Tomcat-tp3519p3546.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


any example of iptables rules for apache ignite ?

2016-03-19 Thread X Yang


Dear All,

I have set up a cluster of 6 nodes (RHEL6.6) using 
apache-ignite-fabric-1.5.0.final-bin and it works without iptables.

However, once I enable iptables, even with the most generous rules shown below,
it stops working. Any tips?

-
Yang

--IPTABLES--
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -s 10.140.151.68,10.140.151.200/31 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
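
(For reference, a rough port-based alternative, assuming the default discovery
range 47500..47509 used in the config below and TcpCommunicationSpi's default
local port 47100 with its port range; not tested, adjust to your SPI settings:)

-A INPUT -m state --state NEW -p tcp --dport 47500:47509 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 47100:47199 -j ACCEPT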



--Config-

[The Spring XML bean definitions in this section were stripped by the mail
archive. The surviving fragments show the standard Spring beans/util schema
declarations (http://www.springframework.org/schema/beans,
http://www.springframework.org/schema/util) and a static discovery address
list:]

10.140.151.200:47500..47509
10.140.151.201:47500..47509
10.140.151.202:47500..47509
10.140.151.203:47500..47509
10.140.151.204:47500..47509
10.140.151.205:47500..47509





Re: How to implement a custom ExpiryPolicy based on Entity attributes

2016-03-19 Thread vkulichenko
Hi,

You can define the expiry policy for a particular update when doing a put:

if (!employee.has401k)
    cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 5)))
        .put(key, employee);
else
    cache.put(key, employee);

Will this work for you?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-implement-a-custom-ExpiryPolicy-based-on-Entity-attributes-tp3537p3545.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Reducing memory footprint

2016-03-19 Thread vkulichenko
Alexander,

No, there is no way to change the cache topology at runtime. A client node
doesn't store any data and by default never executes jobs or runs services. It
also uses a more lightweight discovery protocol, which allows a lot of
clients to be connected to a single cluster.

But I'm still a bit confused about what your deployment will look like. Do you
have a node embedded in the application? Is it a client node or a server node?
Where do the other nodes reside?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Reducing-memory-footprint-tp3494p3549.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


How to implement a custom ExpiryPolicy based on Entity attributes

2016-03-19 Thread techbysample

Support,

I would like to implement a custom ExpiryPolicy that is based on entity (i.e.
Employee object)
attributes and time.

For example, when Employee objects are added to the cache, I want the Employee
objects where Employee.has401k == false removed after 5 seconds.

Example:

public class Employee {

    private boolean has401k;
    // ...
}

Is this possible?  Please advise.

Thanks in advance!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-implement-a-custom-ExpiryPolicy-based-on-Entity-attributes-tp3537.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Reducing memory footprint

2016-03-19 Thread Andrey Gura
Alexander,

from my point of view you should use only server nodes, because Vert.x uses
the cluster manager's distributed data structures (caches in the case of
Ignite) for storing information about event bus subscribers and deployed
verticles.

In order to reduce the memory footprint you can also reduce the following
property values (see the sketch below):

- the number of partitions in the cache affinity function;
- the size of the cache delete history.

For example, see the changes in the pom.xml and ignite.xml files in vertx-ignite
that reduce memory consumption for tests (this commit:
https://github.com/vert-x3/vertx-ignite/commit/49404258560a5766359f9271c5a3e0a860817eb3
).
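A rough sketch of both knobs in code, assuming Ignite 1.x APIs (the actual
vertx-ignite change is in the XML files linked above); the cache name and the
chosen values are illustrative:

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class LowFootprintCacheConfig {
    public static CacheConfiguration<Object, Object> create(String name) {
        // Set before the node starts: shorter delete history for atomic caches.
        System.setProperty("IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE", "1000");

        CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>(name);

        // Fewer partitions than the default 1024 reduces per-cache bookkeeping.
        cfg.setAffinity(new RendezvousAffinityFunction(false, 128));

        return cfg;
    }
}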

On Thu, Mar 17, 2016 at 1:33 AM, vkulichenko 
wrote:

> Alexander,
>
> No, there is no way to change cache topology in runtime. Client node
> doesn't
> store any data and by default never executes jobs or runs services. It also
> uses more lightweight discovery protocol, which allows to have a lot of
> clients to be connected to a single cluster.
>
> But I'm still a bit confused, how will your deployment look like? Do you
> have a node embedded the application? Is it a client node or a server node?
> Where other nodes reside?
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Reducing-memory-footprint-tp3494p3549.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: Server node info when client disconnects

2016-03-19 Thread vkulichenko
Hi Alper,

I'm not sure I understand the issue and the fix.

EVT_NODE_LEFT/FAILED and EVT_CLIENT_NODE_DISCONNECTED are different events
that are fired in different situations. For example,
EVT_CLIENT_NODE_DISCONNECTED can happen due to a network outage, even without
any server failures. And on the other hand, it will not happen if the server
the client was physically connected to dies, but the client immediately
restores the connection with the cluster through another server.

Having said that, if you need to know when a server left or failed, you
should listen for EVT_NODE_LEFT/FAILED events. You will receive events for
clients as well, but that's OK, because the DiscoveryEvent contains a
ClusterNode instance that has all the information about the failed node. I.e.,
you can filter them out by the ClusterNode.isClient() flag, as in the sketch below.
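A minimal sketch, assuming EVT_NODE_LEFT and EVT_NODE_FAILED are enabled via
IgniteConfiguration.setIncludeEventTypes(...) on the listening node:

import org.apache.ignite.Ignite;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class ServerLeftListener {
    public static void register(Ignite ignite) {
        ignite.events().localListen(new IgnitePredicate<Event>() {
            @Override public boolean apply(Event evt) {
                DiscoveryEvent discoEvt = (DiscoveryEvent) evt;

                // Ignore client nodes; react only when a server leaves or fails.
                if (!discoEvt.eventNode().isClient())
                    System.out.println("Server left/failed: " + discoEvt.eventNode().id());

                return true; // Keep listening.
            }
        }, EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }
}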

Makes sense?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Re-Server-node-info-when-client-disconnects-tp3562p3585.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: org.apache.ignite.IgniteCheckedException: Failed to register query type: TypeDescriptor

2016-03-19 Thread minisoft_rm
super cool 

So before ver1.6, let me test more things about "SELECT".



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/org-apache-ignite-IgniteCheckedException-Failed-to-register-query-type-TypeDescriptor-tp3447p3579.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to solve the data lost?

2016-03-19 Thread Vladimir Ozerov
Hi,

Can you please properly subscribe to the mailing list so that the community
receives email notifications? Follow the instructions here:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1

I am not sure I understand your question. Do you want to lose data on node
shutdown?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-solve-the-data-lost-tp3572p3580.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: CacheStore handles persistence in client node for transactional cache

2016-03-19 Thread knowak
Thanks Vladimir. Unfortunately we need write-through to guarantee persistence,
which means we'll need to modify our approach so that client nodes are allowed
to write to the store.

You also wrote that an ATOMIC cache doesn't give any transactional guarantees.
Does it mean that if we have a write-through cache store configured, it's
possible that a cache entry write may fail, but the change will still be
propagated to the store?




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/CacheStore-handles-persistence-in-client-node-for-transactional-cache-tp3428p3566.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: org.apache.ignite.IgniteCheckedException: Failed to register query type: TypeDescriptor

2016-03-19 Thread Vasiliy Sisko
Hello.

I reproduced this issue. The schema import utility generates wrong code in a
special case.
You may watch this issue: https://issues.apache.org/jira/browse/IGNITE-2856.

The reason for that exception is that the columns [CARD_NO] for @navy and
[P_PRODUCTORDERLIMIT] for @minisoft_rm are configured as index fields but are
not configured as query fields.

To quickly fix this issue you need to configure all fields specified in
indexes as query fields, or not create indexes for the absent fields (see the
sketch below).
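A rough sketch of keeping the two in sync with a QueryEntity-based
configuration (type and field names here are illustrative, not your actual
model):

import java.util.Arrays;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class CardCacheConfig {
    public static CacheConfiguration<Long, Object> create() {
        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("cards");

        QueryEntity entity = new QueryEntity(Long.class.getName(), "Card");

        // Every field referenced by an index must also be declared as a query field.
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("cardNo", String.class.getName());
        entity.setFields(fields);

        entity.setIndexes(Arrays.asList(new QueryIndex("cardNo")));

        cfg.setQueryEntities(Arrays.asList(entity));

        return cfg;
    }
}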

Also, you can try to configure the cache with the GridGain Web Console (this
issue is already fixed for the Web Console), which is described on this page:
http://ignite.apache.org/addons.html.

Please let me know if my answer helped you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/org-apache-ignite-IgniteCheckedException-Failed-to-register-query-type-TypeDescriptor-tp3447p3551.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.