Re: Evictions started (cache may have reached its capacity). You may wish to increase 'maxSize' on eviction policy being used for cache

2018-01-03 Thread Evgenii Zhuravlev
Hi Hitesh,

Could you give a little bit more information about this case? Do you have
enough space for new entries? Could you share the config files for your Ignite
nodes?

Evgenii

2018-01-03 8:05 GMT+03:00 Hitesh :

> I am using the FIFO eviction policy. It is showing that eviction started,
> but it is still showing older entries as well.
> I am using Apache Ignite v2.3.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Evictions started (cache may have reached its capacity). You may wish to increase 'maxSize' on eviction policy being used for cache

2018-01-03 Thread Denis Mekhanikov
Hi Hitesh!

What do you mean by older entries?
The FIFO eviction policy can limit the size of a cache, and it specifies the
order in which values will be evicted.

Does it work in some other way?
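For reference, enabling it with a maxSize typically looks something like this
(a minimal sketch; the cache name and the value 100 are just placeholders):

// Minimal sketch: on-heap cache with a FIFO eviction policy capped at 100 entries.
CacheConfiguration<String, String> cfg = new CacheConfiguration<>("myCache");
cfg.setOnheapCacheEnabled(true);                // eviction policies apply to the on-heap cache

FifoEvictionPolicy<String, String> plc = new FifoEvictionPolicy<>();
plc.setMaxSize(100);                            // the 'maxSize' mentioned in the warning
cfg.setEvictionPolicy(plc);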

Denis

Wed, Jan 3, 2018 at 8:05, Hitesh :

> I am using the FIFO eviction policy. It is showing that eviction started,
> but it is still showing older entries as well.
> I am using Apache Ignite v2.3.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite throwing Out Of Memory

2018-01-03 Thread userx
Hi All,

Any reasoning on the same? Let me know if any more details are required.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite throwing Out Of Memory

2018-01-03 Thread Denis Mekhanikov
Hi!

I see in the CSV that you provided that there are a lot of BinaryObjects
on heap. It looks like you are streaming data using a DataStreamer, and its
batches are stored on heap.

I don't see the reason for your confusion. The app consumed more memory
than during the last 6 months, so it failed with OOM. Given that you only gave
it 1GB of heap, that's not really surprising.
Am I missing something here?

Denis

Wed, Jan 3, 2018 at 12:18, userx :

> Hi All,
>
> Any reasoning on the same ? Let me know if any more detail are required.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Evictions started (cache may have reached its capacity). You may wish to increase 'maxSize' on eviction policy being used for cache

2018-01-03 Thread Hitesh
Hey @Denis Mekhanikov, I am saying that I am still getting the older entries,
which should have been removed from the cache.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Evictions started (cache may have reached its capacity). You may wish to increase 'maxSize' on eviction policy being used for cache

2018-01-03 Thread Denis Mekhanikov
Hitesh,

We need more details to confirm your problem.
Please provide a project that reproduces your issue. An archive attached to
an email or a GitHub repository would be great.
I'd like to see what your cache configuration is and how you check that
old values are evicted.

Denis

Wed, Jan 3, 2018 at 13:22, Hitesh :

> hey @Denis Mekhanikov I am asking that I am getting older entries also
> which
> should be removed from cache
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Evictions started (cache may have reached its capacity). You may wish to increase 'maxSize' on eviction policy being used for cache

2018-01-03 Thread Hitesh
I am getting all the entries, new and old, including entries which should have
been removed from the cache after new entries were added.

Why am I getting the old entries as well?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite throwing Out Of Memory

2018-01-03 Thread userx
Hi Denis,

Thanks for the reply. Yes, I am using the streamer, but the whole point of
using it is that it's the best API available for optimal memory utilization,
rather than a putAll.

I am currently running on 1G; if I abruptly increase it to 5G, then what is the
point of a streamer in such a case? I could use putAll instead.

So the question is how to decide on the max memory in such a case, given
that the volume of the data cannot always be predicted. I can afford for Ignite
to perform slowly during data load, but definitely not to fail because of OOM.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Evictions started (cache may have reached its capacity). You may wish to increase 'maxSize' on eviction policy being used for cache

2018-01-03 Thread Hitesh
Here is my cache configuration:

CacheConfiguration cacheCfg = new CacheConfiguration(cacheName);

cacheCfg.setOnheapCacheEnabled(true);
cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(2));
cacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
cacheCfg.setIndexedTypes(String.class, WebSocketClient.class);
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

I have set the max size to 2 entries. When I try to put another entry it shows
that eviction started, but when I query it returns all 3 entries in the cache
instead of the most recent two.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Simple GETs/PUTs which one is better - Rest API, Thin , Thick OR Java API

2018-01-03 Thread Naveen
Hi Evgenii

Yes, I have tried BinaryObjects; it still does not seem to be working.

Here is the code snippet I have used:

Ignite DDL is

   DROP TABLE IF EXISTS TEST_CACHE;
   CREATE TABLE TEST_CACHE 
   (
ASSOCIATE_ID VARCHAR(200) NULL, 
MAPPING_ID VARCHAR(4000), 
SYNCREQUIRED VARCHAR(100) NULL, 
SYNCTO VARCHAR(10) NULL, 
ADB_SOURCE CHAR(1) NULL, 
PRIMARY KEY (MAPPING_ID))
WITH "template=partitioned,backups=1,cache_name=TEST_CACHE,
key_type=com.ril.edif.model.TEST_CACHE.Key,
value_type=com.ril.edif.model.TEST_CACHE.Value";

When I tried the REST API with the below URL

http://10.144.96.142:8080/ignite?cmd=get&key=M111&cacheName=TEST_CACHE

Here is the response; it did not return the data, and I don't see anything in
the logs either.

{"successStatus":0,"affinityNodeId":"b47daca7-5aae-470e-a86b-ec793cc90d48","sessionToken":null,"error":null,"response":null}

However size command works fine for the same cache

http://10.144.96.142:8080/ignite?cmd=size&cacheName=TEST_CACHE

Response:
{"successStatus":0,"affinityNodeId":null,"sessionToken":null,"error":null,"response":2}

I was thinking that it may not be working because the key and value classes
are not on the node's classpath, so I then created the Java POJOs below - the
Key and Value classes.


package com.ril.edif.model;


public class TEST_CAHCE_KEY {

    private String MAPPING_ID;

    public String getMAPPING_ID() {
        return MAPPING_ID;
    }

    public void setMAPPING_ID(String mAPPING_ID) {
        MAPPING_ID = mAPPING_ID;
    }
}
***
package com.ril.edif.model;

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class TEST_CACHE_VALUE {

    private String ASSOCIATE_ID;
    private String MAPPING_ID;
    private String SYNCREQUIRED;
    private String SYNCTO;
    private String ADB_SOURCE;

    public String getASSOCIATE_ID() {
        return ASSOCIATE_ID;
    }
    public void setASSOCIATE_ID(String aSSOCIATE_ID) {
        ASSOCIATE_ID = aSSOCIATE_ID;
    }
    public String getMAPPING_ID() {
        return MAPPING_ID;
    }
    public void setMAPPING_ID(String mAPPING_ID) {
        MAPPING_ID = mAPPING_ID;
    }
    public String getSYNCREQUIRED() {
        return SYNCREQUIRED;
    }
    public void setSYNCREQUIRED(String sYNCREQUIRED) {
        SYNCREQUIRED = sYNCREQUIRED;
    }
    public String getSYNCTO() {
        return SYNCTO;
    }
    public void setSYNCTO(String sYNCTO) {
        SYNCTO = sYNCTO;
    }
    public String getADB_SOURCE() {
        return ADB_SOURCE;
    }
    public void setADB_SOURCE(String aDB_SOURCE) {
        ADB_SOURCE = aDB_SOURCE;
    }
}
**
Still, no luck; I was getting the same response, and no data is returned.

What could be wrong with this?

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Simple GETs/PUTs which one is better - Rest API, Thin , Thick OR Java API

2018-01-03 Thread Evgenii Zhuravlev
Naveen, you should analyze the logs; they will show the exception. If you
really can't get anything from the logs, please provide them to the community.
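Also, it may be worth checking what the stored keys actually look like - with
key_type set in the DDL the key may not be a plain string, in which case
key=M111 in the REST URL would never match (this is just a guess). A rough
diagnostic sketch from a thick client:

// Sketch: dump the actual key types and values stored in the cache.
IgniteCache<Object, Object> c = ignite.cache("TEST_CACHE").withKeepBinary();
for (Cache.Entry<Object, Object> e : c.query(new ScanQuery<Object, Object>())) {
    Object k = e.getKey();
    System.out.println(k.getClass().getName() + " -> " + k);
}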

Evgenii

2018-01-03 13:58 GMT+03:00 Naveen :

> Hi Evgenii
>
> Yes, I have tried BinaryObjects, still does not seems to be working.
>
> Here is the code  snippet I have used
>
> Ignite DDL is
>
>DROP TABLE IF EXISTS TEST_CACHE;
>CREATE TABLE TEST_CACHE
>(
> ASSOCIATE_ID VARCHAR(200) NULL,
> MAPPING_ID VARCHAR(4000),
> SYNCREQUIRED VARCHAR(100) NULL,
> SYNCTO VARCHAR(10) NULL,
> ADB_SOURCE CHAR(1) NULL,
> PRIMARY KEY (MAPPING_ID))
> WITH "template=partitioned,backups=1,cache_name=TEST_CACHE,
> key_type=com.ril.edif.model.TEST_CACHE.Key,
> value_type=com.ril.edif.model.TEST_CACHE.Value";
>
> When I tried the rest API with the below URL
>
> http://10.144.96.142:8080/ignite?cmd=get&key=M111&cacheName=TEST_CACHE
>
> Here is the response, it did not return the data and I dont see anything in
> teh logs as well
>
> {"successStatus":0,"affinityNodeId":"b47daca7-
> 5aae-470e-a86b-ec793cc90d48","sessionToken":null,"error":
> null,"response":null}
>
> However size command works fine for the same cache
>
> http://10.144.96.142:8080/ignite?cmd=size&cacheName=TEST_CACHE
>
> Response:
> {"successStatus":0,"affinityNodeId":null,"sessionToken":null,"error":
> null,"response":2}
>
> I was thinking that, because of key and value classes are on the node's
> classpath it may not be working.  Then I have created below mentioned Java
> POJOs - Key and Value classes
>
> 
> package com.ril.edif.model;
>
>
> public class TEST_CAHCE_KEY {
>
> private String MAPPING_ID;
>
> public String getMAPPING_ID() {
> return MAPPING_ID;
> }
>
> public void setMAPPING_ID(String mAPPING_ID) {
> MAPPING_ID = mAPPING_ID;
> }
>
>
> }
> ***
> package com.ril.edif.model;
>
> import org.apache.ignite.cache.query.annotations.QuerySqlField;
>
> public class TEST_CACHE_VALUE {
>
> public String getASSOCIATE_ID() {
> return ASSOCIATE_ID;
> }
> public void setASSOCIATE_ID(String aSSOCIATE_ID) {
> ASSOCIATE_ID = aSSOCIATE_ID;
> }
> public String getMAPPING_ID() {
> return MAPPING_ID;
> }
> public void setMAPPING_ID(String mAPPING_ID) {
> MAPPING_ID = mAPPING_ID;
> }
> public String getSYNCREQUIRED() {
> return SYNCREQUIRED;
> }
> public void setSYNCREQUIRED(String sYNCREQUIRED) {
> SYNCREQUIRED = sYNCREQUIRED;
> }
> public String getSYNCTO() {
> return SYNCTO;
> }
> public void setSYNCTO(String sYNCTO) {
> SYNCTO = sYNCTO;
> }
> public String getADB_SOURCE() {
> return ADB_SOURCE;
> }
> public void setADB_SOURCE(String aDB_SOURCE) {
> ADB_SOURCE = aDB_SOURCE;
> }
>
> private String ASSOCIATE_ID;
>
> private String MAPPING_ID;
>
> private String SYNCREQUIRED;
>
> private String SYNCTO;
>
> private String ADB_SOURCE;
>
> }
> **
> Still, no luck, I was getting teh same response, no data is returned.
>
> What could be wrong with this
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite throwing Out Of Memory

2018-01-03 Thread Denis Mekhanikov
DataStreamer is designed to insert data in the fastest way possible, not to
save memory.
If memory consumption is so critical for you, you can try tuning DataStreamer
properties like perNodeBufferSize and perNodeParallelOperations.
Decreasing them should save you some heap, but it will certainly affect
performance.
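Something along these lines, for example (a rough sketch; the cache name, the
MyValue type, the data map and the numbers are just illustrative):

// Sketch: trade some ingestion throughput for a smaller heap footprint.
try (IgniteDataStreamer<String, MyValue> streamer = ignite.dataStreamer("myCache")) {
    streamer.perNodeBufferSize(128);          // smaller per-node buffers -> less data parked on heap
    streamer.perNodeParallelOperations(4);    // fewer in-flight batches per node
    for (Map.Entry<String, MyValue> e : data.entrySet())
        streamer.addData(e.getKey(), e.getValue());
}   // close() flushes any remaining buffered data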

But unless you are running Ignite on a Raspberry Pi or something similar, and
you have some memory in reserve, I would recommend simply increasing the
available heap space.

Denis

Wed, Jan 3, 2018 at 13:30, userx :

> Hi Denis,
>
> Thanks for the reply, yes I am using Streamer but the whole point of using
> the streamer is that its the best api available for optimum memory
> utilization rather than a putAll.
>
> I am currently running on 1G, if I abruptly increase it to 5G then whats
> the
> importance of a Streamer in such a case ? I can use putAll in that case.
>
> So the question is how do I decide upon the max memory in such a case,
> given
> that the volume of the data could not always be judged. I can afford Ignite
> to perform slowly in terms of data load but definitely not have a failure
> because of OOM.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Cache persistence question

2018-01-03 Thread Mikael
Is it OK to use both third-party persistence (setWriteBehindEnabled(true)) on
a cache and an expiration policy at the same time? Will any items in the
cache that expire and have been modified be saved before they are removed
from the cache?






Re: Ignite throwing Out Of Memory

2018-01-03 Thread userx
Thanks Denis. For now, I will increase the memory.

But for the record, I will quote the Javadoc comment from the
IgniteDataStreamer interface:

 * Data streamer is responsible for streaming external data into cache. It achieves it by
 * properly buffering updates and properly mapping keys to nodes responsible for the data
 * to make sure that there is the least amount of data movement possible and optimal
 * network and memory utilization.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache persistence question

2018-01-03 Thread Evgenii Zhuravlev
The cache stores only hot data, and its expiration shouldn't affect the
underlying third-party persistence.

Evgenii

2018-01-03 14:23 GMT+03:00 Mikael :

> Is it ok to use both third party persistence (setWriteBehindEnabled(
> true)) on a cache and also have an expiration policy set at the same time,
> will any items in the cache that expire and have been modified be saved
> before they are removed from the cache ?
>
>
>
>


Re: Cache persistence question

2018-01-03 Thread Denis Mekhanikov
Hi Mikael!

Yes, this is fine. An expiration policy is intended to remove from memory those
entries that are already saved to the persistent data storage.
So, if you query expired data once more, it will be loaded from persistence.

> will any items in the cache that expire and have been modified be saved
before they are removed from the cache ?
Could you clarify this? I didn't really understand the question.

Denis

Wed, Jan 3, 2018 at 14:23, Mikael :

> Is it ok to use both third party persistence (setWriteBehindEnabled(
> true)) on a cache and also have an expiration policy set at the same
> time, will any items in the cache that expire and have been modified be
> saved before they are removed from the cache ?
>
>
>
>


Re: Cache persistence question

2018-01-03 Thread Denis Mekhanikov
If you put a value into a cache and the previous value has already expired,
then it will be written to the persistent storage once more. It works just like
for non-expired entries.
Expiry only specifies when entries are removed from memory, not from the
persistent storage.
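For example, a cache can combine both roughly like this (a sketch; the store
class, the value type and the timings are placeholders):

// Sketch: third-party persistence with write-behind plus an expiry policy.
CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<>("persons");
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyJdbcStore.class)); // your CacheStore implementation
cfg.setReadThrough(true);
cfg.setWriteThrough(true);
cfg.setWriteBehindEnabled(true);
cfg.setWriteBehindFlushFrequency(5_000);   // flush dirty entries every 5 seconds
cfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10)));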

Denis

Wed, Jan 3, 2018 at 14:55, Denis Mekhanikov :

> Hi Mikael!
>
> Yes, this is fine. Expiration policy is intended to remove those entries
> from memory, that are saved to the persistent data storage.
> So, if you query expired data one more, it will be loaded from persistence.
>
> > will any items in the cache that expire and have been modified be saved
> before they are removed from the cache ?
> Could you clarify this? I didn't really understand the question.
>
> Denis
>
> Wed, Jan 3, 2018 at 14:23, Mikael :
>
>> Is it ok to use both third party persistence (setWriteBehindEnabled(
>> true)) on a cache and also have an expiration policy set at the same
>> time, will any items in the cache that expire and have been modified be
>> saved before they are removed from the cache ?
>>
>>
>>
>>


Re: Cache persistence question

2018-01-03 Thread Mikael
If I put an item in the cache and the expiration time is set shorter than
the write-behind delay, the item will expire before it has been written to
the store. Will the expiration handler make sure that the cache entry is
written to storage before it removes it?

Same thing if I use setWriteBehindFlushSize(): I don't know when any modified
entries are saved to storage, so they may expire before they are written to
storage.


On 2018-01-03 at 12:55, Denis Mekhanikov wrote:

Hi Mikael!

Yes, this is fine. Expiration policy is intended to remove those 
entries from memory, that are saved to the persistent data storage.
So, if you query expired data one more, it will be loaded from 
persistence.


> will any items in the cache that expire and have been modified be saved before they are 
removed from the cache ?

Could you clarify this? I didn't really understand the question.

Denis

Wed, Jan 3, 2018 at 14:23, Mikael :


Is it ok to use both third party persistence (setWriteBehindEnabled(
true)) on a cache and also have an expiration policy set at the same
time, will any items in the cache that expire and have been
modified be
saved before they are removed from the cache ?







Re: Cache persistence question

2018-01-03 Thread Denis Mekhanikov
Mikael,

Normally expiration shouldn't affect writing to the persistent storage in
any way.
The only possible problem I see here is when you put a value into a cache and
then try to get it after it has expired but has not yet been persisted. In this
case you may get null instead of the actual value.

I suggest you check this behaviour and report if anything doesn't work as
expected.
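A minimal check could look roughly like this (a sketch; it assumes a cache
configured with read-through, write-behind and a short expiry as above, and
the value type is a placeholder):

// Sketch: expire an entry before the write-behind flush and see what get() returns.
IgniteCache<Integer, Person> cache = ignite.getOrCreateCache(cfg); // cfg: write-behind + short expiry
cache.put(1, new Person("Alice"));

Thread.sleep(2_000);                      // longer than the expiry, shorter than the flush delay

Person p = cache.get(1);                  // read-through should reload it once it has been persisted
System.out.println(p == null ? "lost before persisting" : "reloaded: " + p);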

Denis

Wed, Jan 3, 2018 at 15:17, Mikael :

> If I put an item in the cache and the expiration time is set shorter than
> the write behind delay, the item will expire before it has been written to
> the cache, will the expiration handler make sure that the cache entry is
> written to storage before it removes it ?
>
> Same thing if I use setWriteBehindFlushSize(), then I don't know when any
> modified entries are saved to storage so they may expire before they are
> written to storage.
>
> On 2018-01-03 at 12:55, Denis Mekhanikov wrote:
>
> Hi Mikael!
>
> Yes, this is fine. Expiration policy is intended to remove those entries
> from memory, that are saved to the persistent data storage.
> So, if you query expired data one more, it will be loaded from persistence.
>
> > will any items in the cache that expire and have been modified be saved
> before they are removed from the cache ?
> Could you clarify this? I didn't really understand the question.
>
> Denis
>
> Wed, Jan 3, 2018 at 14:23, Mikael :
>
>> Is it ok to use both third party persistence (setWriteBehindEnabled(
>> true)) on a cache and also have an expiration policy set at the same
>> time, will any items in the cache that expire and have been modified be
>> saved before they are removed from the cache ?
>>
>>
>>
>>
>


Re: Evictions started (cache may have reached its capacity). You may wish to increase 'maxSize' on eviction policy being used for cache

2018-01-03 Thread Denis Mekhanikov
Hitesh,

The problem that you described looks similar to the one described in the
following ticket: https://issues.apache.org/jira/browse/IGNITE-1535
Please consider submitting a patch if this functionality is critical for
you.
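One thing worth checking on your side: in Ignite 2.x the eviction policy
applies to the optional on-heap cache, while entries also live in off-heap page
memory, so queries can still see entries that were evicted from heap. A rough
sketch of how to compare the two (cacheName and the value type as in your
snippet):

// Sketch: compare how many entries are on heap vs. in the cache overall.
IgniteCache<String, WebSocketClient> cache = ignite.cache(cacheName);
System.out.println("on-heap entries: " + cache.size(CachePeekMode.ONHEAP));
System.out.println("total entries:   " + cache.size(CachePeekMode.ALL));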

Denis

Wed, Jan 3, 2018 at 13:51, Hitesh :

> Here its my cache configuration,
>
> CacheConfiguration cacheCfg = new
> CacheConfiguration(cacheName);
>
> cacheCfg.setOnheapCacheEnabled(true);
> cacheCfg.setEvictionPolicy(new
> FifoEvictionPolicy<>(2));
>
>cacheCfg.setCacheMode(CacheMode.PARTITIONED); //
> Default.
>
> cacheCfg.setIndexedTypes(String.class,
> WebSocketClient.class);
>
>
> cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>
> i have set max size 2 entries,when i am trying to put another entry its
> showing eviction policy started ,but when i query its returning all 3
> entries in the cache instead of recent two entries.
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Cursor in TextQuery - first hasNext() is slow

2018-01-03 Thread zbyszek
Hello Igniters,

I would like to consult you on the following:
I am trying to understand the performance of text search in Ignite 2.3 on 1
million entities with one field: name.
I execute the search and take the first 10 entries from the cursor.
I have noticed that the time to get the first entry (or to call the first
hasNext()) is equal to the time to get all entries.
It seems that internally the cursor is prepared by fetching and caching all
query data.
Also, the processing time grows significantly with the number of query hits.
For example, when the query returns 89242 items (see below), the execution
time of the first hasNext() is 1327 ms, which is rather unacceptable for our
auto-suggest solution (we are planning to search over hundreds of millions
of entries). Example output and code are below.
Could you confirm whether this is a bug or a feature?

Regards,
zbyszek


All time: (393 items ) 42 ms.
Cursor time: 1 ms.
Iterator time: 0 ms.
Next time: (0) 45 ms.
Next time: (1) 0 ms.
Next time: (2) 0 ms.
Next time: (3) 0 ms.
Next time: (4) 0 ms.
Next time: (5) 0 ms.
Next time: (6) 0 ms.
Next time: (7) 0 ms.
Next time: (8) 0 ms.
Next time: (9) 0 ms.
Result time: 45 ms.
All time: (2385 items ) 79 ms.
Cursor time: 1 ms.
Iterator time: 0 ms.
Next time: (0) 122 ms.
Next time: (1) 0 ms.
Next time: (2) 0 ms.
Next time: (3) 0 ms.
Next time: (4) 0 ms.
Next time: (5) 0 ms.
Next time: (6) 0 ms.
Next time: (7) 0 ms.
Next time: (8) 0 ms.
Next time: (9) 0 ms.
Result time: 122 ms.
All time: (28026 items ) 485 ms.
Cursor time: 0 ms.
Iterator time: 0 ms.
Next time: (0) 500 ms.
Next time: (1) 0 ms.
Next time: (2) 0 ms.
Next time: (3) 0 ms.
Next time: (4) 0 ms.
Next time: (5) 0 ms.
Next time: (6) 0 ms.
Next time: (7) 0 ms.
Next time: (8) 0 ms.
Next time: (9) 0 ms.
Result time: 500 ms.
All time: (89242 items ) 1383 ms.
Cursor time: 0 ms.
Iterator time: 0 ms.
Next time: (0) 1327 ms.
Next time: (1) 0 ms.
Next time: (2) 0 ms.
Next time: (3) 0 ms.
Next time: (4) 0 ms.
Next time: (5) 0 ms.
Next time: (6) 0 ms.
Next time: (7) 0 ms.
Next time: (8) 0 ms.
Next time: (9) 0 ms.
Result time: 1327 ms.


public QueryEntity createSearchQuery(String entity) {
    QueryEntity queryEntity = new QueryEntity(String.class.getTypeName(), entity);
    List<QueryIndex> indexes = new ArrayList<>();
    Set<String> fields = new LinkedHashSet<>(Arrays.asList("name"));
    fields.forEach(f -> {
        queryEntity.addQueryField(f, String.class.getTypeName(), f);
        LinkedHashMap<String, Boolean> indexField = new LinkedHashMap<>();
        indexField.put(f, false);
        QueryIndex index = new QueryIndex(indexField, QueryIndexType.FULLTEXT);
        index.setName(f + "Index");
        indexes.add(index);
    });
    queryEntity.setIndexes(indexes);
    return queryEntity;
}


public SearchResult search(Search search) {
    String query = search.getTerm();
    TextQuery<String, BinaryObject> textQuery = new TextQuery<>("Search", query);
    textQuery.setLocal(true);
    textQuery.setPageSize(10);
    IgniteCache<String, Object> cache = ignite.cache("search");

    QueryCursor<Cache.Entry<String, BinaryObject>> cursor = cache.withKeepBinary().query(textQuery);
    long startTime = System.currentTimeMillis();
    int size = cursor.getAll().size();
    System.out.println("All time: " + "(" + size + " items ) " +
        (System.currentTimeMillis() - startTime) + " ms.");

    startTime = System.currentTimeMillis();
    cursor = cache.withKeepBinary().query(textQuery);
    System.out.println("Cursor time: " + (System.currentTimeMillis() - startTime) + " ms.");
    startTime = System.currentTimeMillis();
    Iterator<Cache.Entry<String, BinaryObject>> iterator = cursor.iterator();
    System.out.println("Iterator time: " + (System.currentTimeMillis() - startTime) + " ms.");
    List<SearchResult.Entry> res = new ArrayList<>();
    int counter = 0;
    startTime = System.currentTimeMillis();
    while (/*iterator.hasNext() && */ counter < search.getMaxRecords()) {
        long nextStartTime = System.currentTimeMillis();
        Cache.Entry<String, BinaryObject> entry = iterator.next();
        System.out.println("Next time: " + "(" + counter + ") " +
            (System.currentTimeMillis() - nextStartTime) + " ms.");
        String key = entry.getKey();
        BinaryObject obj = entry.getValue();
        StringBuilder sb = new StringBuilder();
        for (String f : fields) {
            String v = obj.field(f);
            if (v != null && v.toLowerCase().contains(search.getTerm().toLowerCase())) {
                sb.append(f).append(": ").append(v).append(", ");
            }
        }
        SearchResult.Entry dataRow = new SearchResult.Entry(key,
            extractType(key), obj.field("name"), sb.toString());
        res.add(dataRow);
        counter++;
    }
    System.out.println("Result time: " + (System.currentTimeMillis() - startTime) + " ms.");
    return new SearchResult(System.currentTimeMillis() - startTime,
        search.getTerm(), res.toArray(new SearchResult.Entry[0]));
}


Re: Connection problem between client and server

2018-01-03 Thread Denis Mekhanikov
Hi Jeff!

1 MB sounds like a lot of data, especially for the discovery protocol. I
haven't checked what the volume of discovery data normally is, but I don't
think it should be measured in megabytes - unless you create thousands of
caches, of course.
What workload do you run on your cluster? How many caches do you create? Can
you provide the node configuration files?

Node startup can be slow if you have a long ping between server nodes. Make
sure the network between the nodes is stable and has low latency.

Denis

Wed, Jan 3, 2018 at 9:40, Jeff Jiao :

> Hi Ignite community,
>
> we are encountering a connection problem, in our production environment,
> we have 3 Ignite servers with 500G+ of data each, and about 30 Ignite
> clients running.
> then we tried to start a simple Ignite client in different environment,
> Citrix environment, it cannot connect to the existing servers. (firewalls
> are open in both way for 47500-47509 and 47100-47109)
>
>
> we did some investigating,
> 1. if we start only one Ignite server node, this Ignite client from Citrix
> can connect to it very fast, about 2-4 secs. (Server=1, Client=1)
> 2. then we start all of the 3 Ignite servers, it still can connect to
> servers but took longer, about 20 secs. (Server=3, Client=1)
> 3. then we start 2 of our services which contains an Ignite client node,
> then start this Ignite client in Citrix, it took 27-30 secs. (Server=3,
> Client=3)
>
>
> it looks like this connection establishment process depends on the amount
> of
> Ignite nodes.
> then we did some testing in our development environment and QA environment,
> we used a tool to catch the packages transport from Ignite server to Ignite
> client through port 47500 when they connect,
> in DEV env, we have 1 server and 4 clients running, then start a client
> node, the size of packages from server to this client is about 1MB.
> in QA env, we have 3 servers and 12 clients running, then start a client
> node, the size of packages from servers to this client is about 4MB.
> The result also can prove the point.
>
>
> we guess that between our Production env and Citrix env, when the client
> tries to connect, there is too much info to transport, and it hits some
> limit or timeout...
>
> so are we in the correct direction for debugging? do you have any ideas or
> suggestions for this situation? or does Ignite has some configurations
> which
> can control the information transportation when connect?
>
>
> Thanks a lot,
> Jeff
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache.query with ScanQuery Hangs forever

2018-01-03 Thread Timay
It looks like it was a classpath loading issue with "Entry::getValue". Since
this is a lambda, it will try to serialize the containing class, which is
neither expected on the cluster nor necessary. When I killed the process I got
a stack trace (below) pointing to that. So I assume some issue around
classpath loading.

class org.apache.ignite.IgniteCheckedException: Query execution failed:
GridCacheQueryBean [qry=GridCacheQueryAdapter [type=SCAN, clsName=null,
clause=null, filter=ManagerServiceImpl$InactivePredicate@220fd437,
transform=ManagerServiceImpl$$Lambda$32/1251084807@7b343e3

This is something I have seen before during a compute operation: during
exception handling we had a library that was not supplied to the cluster get
referenced, and it failed silently but would not release the thread. Are any
known issues around that? I may take a look if I can, but any info would help.

Thanks. 
Tim



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Spark data frames integration merged

2018-01-03 Thread Revin Chalil
Thank you and this is great news.

We currently use the Ignite cache as a reference dataset RDD in Spark, convert
it into a Spark DataFrame and then join this DF with the incoming-data DF. I
hope we can change this 3-step process to a single step with the Spark DF
integration. If so, would indexes / affinity keys on the join columns help with
performance? We currently do not have them defined on the reference dataset.
Are there examples available of joining an Ignite DF with a Spark DF? Also,
what is the best way to get the latest executables with IGNITE-3084 included?
Thanks again.


On 12/29/17, 10:34 PM, "Nikolay Izhikov"  wrote:

Thank you, guys.

Val, thanks for all reviews, advices and patience.

Anton, thanks for ignite wisdom you share with me.

Looking forward for next issues :)

P.S Happy New Year for all Ignite community!

On Fri, 29/12/2017 at 13:22 -0800, Valentin Kulichenko wrote:
> Igniters,
> 
> Great news! We completed and merged first part of integration with
> Spark data frames [1]. It contains implementation of Spark data
> source which allows to use DataFrame API to query Ignite data, as
> well as join it with other data frames originated from different
> sources.
> 
> Next planned steps are the following:
> - Implement custom execution strategy to avoid transferring data from
> Ignite to Spark when possible [2]. This should give serious
> performance improvement in cases when only Ignite tables participate
> in a query.
> - Implement ability to save a data frame into Ignite via
> DataFrameWrite API [3].
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-3084
> [2] https://issues.apache.org/jira/browse/IGNITE-7077
> [3] https://issues.apache.org/jira/browse/IGNITE-7337
> 
> Nikolay Izhikov, thanks for the contribution and for all the hard
> work!
> 
> -Val




Re: Ignite throwing Out Of Memory

2018-01-03 Thread vkulichenko
"Optimal consumption" doesn't mean that you give high ingestion throughput
for free. Data streamer is highly optimized for a particular use case and if
you try to achieve same results with putAll API, you will likely get worse
consumption.

If low memory consumption is more important for you than high throughput,
then putAll probably suites you better. However, 1GB per node is a VERY low
memory allocation for modern hardware and modern applications. I generally
recommend to have at least 4GB per node regardless of use case.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache.query with ScanQuery Hangs forever

2018-01-03 Thread vkulichenko
Tim,

Can you try replacing lambda with a static class? Does it help?
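Something roughly like this (a sketch; the key/value types and the cache
reference are assumed, since the original code isn't shown):

// Sketch: a static transformer class instead of Entry::getValue, so nothing
// from the enclosing service class is pulled into serialization.
private static class ValueTransformer implements IgniteClosure<Cache.Entry<String, MyValue>, MyValue> {
    @Override public MyValue apply(Cache.Entry<String, MyValue> e) {
        return e.getValue();
    }
}

// Usage:
QueryCursor<MyValue> cur = cache.query(new ScanQuery<String, MyValue>(), new ValueTransformer());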

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cursor in TextQuery - first hasNext() is slow

2018-01-03 Thread vkulichenko
zbyszek,

Ignite fetches query results in pages. When you call next() or hasNext() for
the first time, the client requests the first page and receives it from the
server. It then iterates through this page, which is obviously much faster (no
network communication). Once the page is exhausted, the next page is requested,
and so on. So the behavior you see is correct.

BTW, the default page size is 1024 entries; you can change it via the
Query#setPageSize parameter.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: H2 console - GridH2QueryContext is not initialized

2018-01-03 Thread vkulichenko
Which version are you on?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Spark data frames integration merged

2018-01-03 Thread vkulichenko
Indexes would not be used during joins, at least in the current implementation.
The current integration is implemented as a regular Spark data source which
provides each relation separately. Spark then performs the join by itself, so
Ignite indexes do not help.

The easiest way to get binaries would be to use a nightly build [1], but it
seems to be broken for some reason (the latest is from May 31). I guess the
only option at the moment is to build from source.

[1]
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Spark data frames integration merged

2018-01-03 Thread Revin Chalil
Thanks Val for the info on indexes with DF. Do you know if adding indexes /
affinity keys on the cache helps with the join when the IgniteRDD is joined
with a Spark DF? The docs say:

“IgniteRDD also provides affinity information to Spark via
getPrefferredLocations method so that RDD computations use data locality.”

I was wondering if the affinity key on the cache can be utilized in the Spark
join?


On 1/3/18, 12:27 PM, "vkulichenko"  wrote:

Indexes would not be used during joins, at least in current implementation.
Current integration is implemented as a regular Spark data source which
provides each relation separately. Spark then performs join by itself, so
Ignite indexes do not help.

The easiest way to get binaries would be to use a nightly build [1] , but it
seems to be broken for some reason (latest is from May 31). I guess the only
option at the moment is to build from source.

[1]

https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2018-01-03 Thread Abeneazer Chafamo
Is there any update on the suggested functionality to resolve cache entry
classes based on the caller's context first instead of relying on Ignite's
classloader?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2018-01-03 Thread Valentin Kulichenko
The ticket is still open. Vladimir, it looks like it's assigned to you. Do you
have any plans to work on it?

https://issues.apache.org/jira/browse/IGNITE-5038

-Val

On Wed, Jan 3, 2018 at 1:26 PM, Abeneazer Chafamo  wrote:

> Is there any update on the suggested functionality to resolve cache entry
> classes based on the caller's context first instead of relying on Ignite's
> classloader?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Spark data frames integration merged

2018-01-03 Thread Valentin Kulichenko
Revin,

I doubt IgniteRDD#getPrefferredLocations has any effect on data frames, but
this is an interesting point. Nikolay, as the developer of this functionality,
can you please comment on this?

-Val

On Wed, Jan 3, 2018 at 1:22 PM, Revin Chalil  wrote:

> Thanks Val for the info on indexes with DF. Do you know if adding index /
> affinitykeys on the cache help with the join, when the IgniteRDD is joined
> with a spark DF? The below from docs say that
>
> “IgniteRDD also provides affinity information to Spark via
> getPrefferredLocations method so that RDD computations use data locality.”
>
> I was wondering, if the affinitykey on the cache can be utilized in the
> spark join?
>
>
> On 1/3/18, 12:27 PM, "vkulichenko"  wrote:
>
> Indexes would not be used during joins, at least in current
> implementation.
> Current integration is implemented as a regular Spark data source which
> provides each relation separately. Spark then performs join by itself,
> so
> Ignite indexes do not help.
>
> The easiest way to get binaries would be to use a nightly build [1] ,
> but it
> seems to be broken for some reason (latest is from May 31). I guess
> the only
> option at the moment is to build from source.
>
> [1]
> https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/
> lastSuccessfulBuild/
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Re: H2 console - GridH2QueryContext is not initialized

2018-01-03 Thread Rajarshi Pain
I am using 2.3

On Thu 4 Jan, 2018, 01:43 vkulichenko, 
wrote:

> Which version are you on?
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
-- 

Thanks
Rajarshi


Re: Ignite throwing Out Of Memory

2018-01-03 Thread userx
Thanks Val.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Connection problem between client and server

2018-01-03 Thread Jeff Jiao
dev2.cap

Hi Denis,

The attachment contains the packets that the server transports to the client
when it connects; we captured them using a tool (Microsoft Network Monitor).
Can you please take a look and see what you can find?

We only configured 12 caches in our Ignite; most of them have 3 or 4 indexes,
and only a few have 7 or 8 indexes configured...




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Best practice while creating tables - specifying the length of the data type

2018-01-03 Thread Denis Mekhanikov
Hi Naveen!

CLOB is not supported in Ignite as an SQL type. You can use VARCHAR for
character data or BINARY for binary data. VARCHAR corresponds to the
java.lang.String type, and BINARY to byte[].
There is no need to specify a length for VARCHAR; it has no effect. VARCHAR
columns can always hold values of unlimited length.

Denis

Wed, Jan 3, 2018 at 10:40, Naveen :

> Hi
>
> I am using 2.3
>
> Looks like ignite does not support CLOB, can I use varchar instead of CLOB
> if my requirement is to store 10 characters
>
> CREATE TABLE MAP_CUST
> (
> PARTY_ID CLOB,
> MAPPING_ID VARCHAR(1000) NULL,
> UPDATEDBY VARCHAR(4000 CHAR) NULL,
> SYNCREQUIRED VARCHAR(100) NULL,
> ADB_SOURCE CHAR(1) NULL,
> SYNCTO VARCHAR(10) NULL,
> PRIMARY KEY (MAPPING_ID)
> )WITH "template=partitioned,backups=1,cache_name=MAP_CUST";
>
> And, what is the best practice, do we need to specify the exact size or
> omitting the size completely, which  one is efficient.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>