Re: Mysql and ignite persistence inheritance issues

2019-02-28 Thread Denis Magda
This won't be supported: our SQL engine can't be integrated with a 3rd-party
storage like MySQL. If you can't fit all the data in RAM, then use native
persistence, which is supported by SQL.

-
Denis


On Thu, Feb 21, 2019 at 11:54 PM hulitao198758  wrote:

> Ignite does not have the persistence function enabled; on startup it
> connects to the MySQL database. SQL statements run in Ignite should
> automatically load the data from MySQL: for select, insert, update, delete,
> etc., if the data does not exist in memory, it should be loaded from MySQL
> into the cache automatically, achieving real-time synchronization between
> MySQL and Ignite. Can this function be implemented, or is there any
> development plan for this feature in a future Ignite version?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: same cache cannot update twice in one transaction

2019-02-28 Thread Павлухин Иван
Hi,

MVCC in Ignite is targeted at providing transactional consistency
guarantees. I suppose that with an eventually consistent 3rd-party store
it would be impossible to give many guarantees in general. Do you think
that such an eventually consistent store would be widely used? What kind
of guarantees should it provide? Is it easy to use properly?
Currently we do not have answers to these questions. Feedback is
appreciated.

Also, I must say that the MVCC feature is currently in beta stage and its
limitations are listed in the documentation [1].

[1] 
https://apacheignite.readme.io/docs/multiversion-concurrency-control#section-other-limitations

Thu, 28 Feb 2019 at 22:34, xmw45688 :
>
> Hi Ilya,
>
> It'd be better if this were mentioned in the Ignite docs.
>
> MVCC seems very limited if it only supports Ignite native persistence.
> Yes, supporting MVCC in 3rd-party persistence is challenging. However, do
> we really need MVCC when the data from the cache (where MVCC is already
> enabled) is ready to be written to a 3rd-party persistence store? I think
> that "eventual consistency" for writing cached data into a 3rd-party
> persistence layer seems sufficient when Ignite is used as a cache store
> and the data in the cache store is persistent.
>
> Does Ignite have a plan to support MVCC in the cache layer and write the
> data from the cache store into a 3rd-party persistence store with a
> limited guarantee like "eventual consistency"?
>
> Can some gurus shed some light on this subject?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


How to avoid starting multiple instances on a single machine

2019-02-28 Thread jameswsp
Hi Support,

I hope to host only one instance/node on each machine.

But I am worried that I might start another node on a machine by mistake.
How can I configure Ignite to avoid this?
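Would pinning discovery to a single port work as a guard? A sketch of what I
mean (assuming the standard TcpDiscoverySpi; as I understand it, a local port
range of 0 makes a node fail to start if the port is already taken):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

IgniteConfiguration cfg = new IgniteConfiguration();

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setLocalPort(47500);
discoSpi.setLocalPortRange(0); // bind only 47500; a second local node cannot bind it

cfg.setDiscoverySpi(discoSpi);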

Thank you.
James Wang
M/WeChat: 135 1215 1134 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-02-28 Thread edvance CN
OS: 4C +8GB
Data Region = 4GB

start command:
nohup $IGNITE_HOME/bin/ignite.sh -Xmx1024m -XX:+UseG1GC 
-XX:MaxDirectMemorySize=1G grtip-config.xml > ignite.log 2>&1 &

How should I adjust the memory tuning?
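Would something like this be the correct way to pass the JVM options (a
sketch, assuming the stock ignite.sh, which picks up the JVM_OPTS environment
variable; the 2 GB heap is only a guess for an 8 GB host with a 4 GB data
region)?

export JVM_OPTS="-Xms2g -Xmx2g -XX:+UseG1GC -XX:MaxDirectMemorySize=1g"
nohup $IGNITE_HOME/bin/ignite.sh grtip-config.xml > ignite.log 2>&1 &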

[21:29:22,777][SEVERE][mgmt-#33519%234-236-237-241%][GridJobWorker] Runtime 
error caught during grid runnable execution: GridJobWorker 
[createTime=1551360450243, startTime=1551360453071, finishTime=1551360483944, 
taskNode=TcpDiscoveryNode [id=a7839266-6396-4f7c-9ef7-c3a4b2355782, 
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.1.236], 
sockAddrs=[/192.168.1.236:47500, /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], 
discPort=47500, order=56, intOrder=31, lastExchangeTime=1551151457630, 
loc=false, ver=2.7.0#20181201-sha1:256ae401, isClient=false], internal=true, 
marsh=BinaryMarshaller [], ses=GridJobSessionImpl [ses=GridTaskSessionImpl 
[taskName=o.a.i.i.v.service.VisorServiceTask, dep=GridDeployment 
[ts=1551094729510, depMode=SHARED, 
clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, 
clsLdrId=3aef2742961-5a63d018-0eb4-493e-91a4-be6d41caff85, userVer=0, loc=true, 
sampleClsName=o.a.i.i.processors.cache.CachesRegistry, pendingUndeploy=false, 
undeployed=false, usage=0], taskClsName=o.a.i.i.v.service.VisorServiceTask, 
sesId=8a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782, 
startTime=1551360450899, endTime=9223372036854775807, 
taskNodeId=a7839266-6396-4f7c-9ef7-c3a4b2355782, 
clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, closed=false, cpSpi=null, 
failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=true, 
topPred=ContainsNodeIdsPredicate [], 
subjId=a7839266-6396-4f7c-9ef7-c3a4b2355782, mapFut=IgniteFuture 
[orig=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, 
hash=327805292]], execName=null], 
jobId=9a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782], 
jobCtx=GridJobContextImpl 
[jobId=9a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782, timeoutObj=null, 
attrs={}], dep=GridDeployment [ts=1551094729510, depMode=SHARED, 
clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, 
clsLdrId=3aef2742961-5a63d018-0eb4-493e-91a4-be6d41caff85, userVer=0, loc=true, 
sampleClsName=o.a.i.i.processors.cache.CachesRegistry, pendingUndeploy=false, 
undeployed=false, usage=0], finishing=true, masterLeaveGuard=false, 
timedOut=false, sysCancelled=false, sysStopping=false, isStarted=true, 
job=VisorServiceJob [], held=0, partsReservation=null, reqTopVer=null, 
execName=null]
java.lang.OutOfMemoryError: GC overhead limit exceeded
[21:29:22,781][SEVERE][nio-acceptor-tcp-comm-#28%234-236-237-241%][] JVM will 
be halted immediately due to the failure: [failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=java.lang.OutOfMemoryError: GC overhead limit 
exceeded]]
[21:29:22,814][SEVERE][grid-timeout-worker-#23%234-236-237-241%][GridTimeoutProcessor]
 Error when executing timeout callback: CancelableTask 
[id=94ef2742961-0742c515-5b96-4aad-b07a-9f1ec60af5f3, endTime=1551360514795, 
period=3000, cancel=false, task=MetricsUpdater [prevGcTime=38806, 
prevCpuTime=2195727, 
super=o.a.i.i.managers.discovery.GridDiscoveryManager$MetricsUpdater@73766331]]
java.lang.OutOfMemoryError: GC overhead limit exceeded


James Wang / edvance
Mobile/WeChat: +86 135 1215 1134


Re: pre-load data (Apache ignite native persistence store or Cassandra) into two partitioned cache tables

2019-02-28 Thread xmw45688
Can someone comment on the following questions from my previous post?


4. Collocating fact_purchase_line, invoice, and invoice_line via factLineId
and invoiceId does not work; please see the annotated key class below.

public class InvoiceLineKey {
    /** Primary key. */
    private long id;

    /** Foreign key to fact_purchase_line. */
    @AffinityKeyMapped
    private long factLineId;

    /** Foreign key to invoice. */
    @AffinityKeyMapped
    private long invoiceId;
}


5. I don't quite understand why the invoiceId affinity mapping between
invoice and invoice_line does not also require a factLineId mapping between
fact_purchase_line and invoice_line. Is this because of the factId affinity
between purchase_fact and purchase_fact_line, and between purchase_fact
and invoice?

So I just have the following affinity key mappings:

purchase_fact -> factId-> purchase_fact_line 
purchase_fact -> factId -> invoice 
invoice -> invoiceId -> invoice_line 
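For reference, here is the single-affinity-key variant I am considering (a
sketch; as far as I understand, Ignite allows only one @AffinityKeyMapped
field per key class, which may be why the two-field version in point 4 does
not work; picking invoiceId as the affinity field is an assumption):

import java.io.Serializable;

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class InvoiceLineKey implements Serializable {
    /** Primary key. */
    private long id;

    /** Foreign key to fact_purchase_line (a plain field, not affinity-mapped). */
    private long factLineId;

    /** Foreign key to invoice; collocates invoice_line with invoice. */
    @AffinityKeyMapped
    private long invoiceId;
}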

Interestingly, joining invoice_line with fact_purchase_line works fine (see
the queries below). Can someone please shed some light on this?

-- expected
SELECT count(*) FROM PARTITION.invoice inv, PARTITION.invoiceline il
WHERE inv.id = il.invoiceid;

-- Why does this query work? Note the join li.id = il.factlineid, which is
-- not an affinity-mapped key.
SELECT count(*)
  FROM PARTITION.factpurchaseline li, PARTITION.invoice inv,
       PARTITION.invoiceline il
WHERE li.id = il.factlineid
  AND inv.id = il.invoiceid;

-- Why does this query work? Note the join li.id = il.factlineid, which is
-- not an affinity-mapped key.
SELECT count(*) FROM PARTITION.factpurchaseline li, PARTITION.invoiceline il
WHERE li.id = il.factlineid;






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: same cache cannot update twice in one transaction

2019-02-28 Thread xmw45688
Hi Ilya, 

It'd be better if this were mentioned in the Ignite docs.

MVCC seems very limited if it only supports Ignite native persistence.
Yes, supporting MVCC in 3rd-party persistence is challenging. However, do
we really need MVCC when the data from the cache (where MVCC is already
enabled) is ready to be written to a 3rd-party persistence store? I think
that "eventual consistency" for writing cached data into a 3rd-party
persistence layer seems sufficient when Ignite is used as a cache store
and the data in the cache store is persistent.

Does Ignite have a plan to support MVCC in the cache layer and write the
data from the cache store into a 3rd-party persistence store with a
limited guarantee like "eventual consistency"?

Can some gurus shed some light on this subject?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance degradation in case of high volumes

2019-02-28 Thread Ilya Kasnacheev
Hello!

What's the delay of the checkpoint?

I fail to see how 82206 is catastrophically worse than 65681. To me it
sounds like a sensible increase.

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 19:01, Antonio Conforti :

> Hello Ilya,
>
>
> If I interpret the log correctly, for example for the node with consistentId
> 7, what I see is a constant increase of dirty pages:
>
> 2019-02-27 17:04:42.206  INFO 41153 --- [oint-thread-#67]
> i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> [cpId=b3d5b34c-60d3-4c39-af0f-5de6d41686f3, pages=65681,
> markPos=FileWALPointer [idx=16, fileOff=24231459, len=75645],
> walSegmentsCleared=6, walSegmentsCovered=[10 - 15], markDuration=74ms,
> pagesWrite=1429ms, fsync=101ms, total=1604ms]
> 2019-02-27 17:07:42.244  INFO 41153 --- [oint-thread-#67]
> i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> [cpId=83cc0c0a-9427-465d-a290-f8a56c366528, pages=69139,
> markPos=FileWALPointer [idx=22, fileOff=41402244, len=75645],
> walSegmentsCleared=6, walSegmentsCovered=[16 - 21], markDuration=26ms,
> pagesWrite=1521ms, fsync=101ms, total=1648ms]
> 2019-02-27 17:10:42.553  INFO 41153 --- [oint-thread-#67]
> i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> [cpId=4fe0396f-5ece-4016-a269-119d5ecd8d16, pages=72896,
> markPos=FileWALPointer [idx=29, fileOff=6850845, len=75645],
> walSegmentsCleared=12, walSegmentsCovered=[22 - 28], markDuration=28ms,
> pagesWrite=1713ms, fsync=207ms, total=1948ms]
> 2019-02-27 17:13:45.214  INFO 41153 --- [oint-thread-#67]
> i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> [cpId=a309e6a2-865d-439e-ab6d-11ef9641c8b0, pages=75381,
> markPos=FileWALPointer [idx=35, fileOff=48279077, len=75645],
> walSegmentsCleared=7, walSegmentsCovered=[29 - 34], markDuration=44ms,
> pagesWrite=4446ms, fsync=112ms, total=4602ms]
> 2019-02-27 17:16:42.728  INFO 41153 --- [oint-thread-#67]
> i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> [cpId=0299f7d3-8ed0-4f0b-8499-14e46d3bef13, pages=78944,
> markPos=FileWALPointer [idx=42, fileOff=39209416, len=75645],
> walSegmentsCleared=6, walSegmentsCovered=[35 - 41], markDuration=28ms,
> pagesWrite=1989ms, fsync=95ms, total=2112ms]
> 2019-02-27 17:19:40.829  INFO 41153 --- [r-stripe-10-#27]
> o.a.i.i.p.c.p.pagemem.PageMemoryImpl : Throttling is applied to page
> modifications [percentOfPartTime=0.31, markDirty=22777 pages/sec,
> checkpointWrite=11469 pages/sec, estIdealMarkDirty=226519 pages/sec,
> curDirty=0.00, maxDirty=0.07, avgParkTime=122057 ns, pages: (total=82206,
> evicted=0, written=10035, synced=0, cpBufUsed=27, cpBufTotal=518215)]
> 2019-02-27 17:19:50.524  INFO 41153 --- [oint-thread-#67]
> i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> [cpId=f323c2fd-d358-469e-8e12-d2ae1391a546, pages=82206,
> markPos=FileWALPointer [idx=49, fileOff=42053202, len=75645],
> walSegmentsCleared=8, walSegmentsCovered=[42 - 48], markDuration=27ms,
> pagesWrite=9749ms, fsync=123ms, total=9899ms]
> 2019-02-27 17:22:40.885  INFO 41153 --- [r-stripe-11-#28]
> o.a.i.i.p.c.p.pagemem.PageMemoryImpl : Throttling is applied to page
> modifications [percentOfPartTime=0.23, markDirty=14449 pages/sec,
> checkpointWrite=6729 pages/sec, estIdealMarkDirty=166922 pages/sec,
> curDirty=0.00, maxDirty=0.09, avgParkTime=108997 ns, pages: (total=68092,
> evicted=0, written=13266, synced=0, cpBufUsed=12, cpBufTotal=518215)]
>
>
> I'll try to explain better:
>
> At every finished checkpoint I see a constant increase of dirty pages and a
> consequent increase of the checkpoint duration:
>
> 1) at 17:04:42.206 there are 65681 dirty pages and a checkpoint duration of
> 1604ms
> 2) at 17:19:50.524 there are 82206 dirty pages and a checkpoint duration of
> 9899ms
>
>
> So in only 15 minutes, at a constant rate of 4000 entries/sec, I observe the
> checkpoint delay increase by about 9 times with respect to the start of the
> simulation, with the consequent application of throttling, as reported in
> the log at 17:19:40.
>
> I expected a constant number of dirty pages and a consequently constant
> checkpoint delay.
>
> Do you have any suggestions about that?
>
> Thanks a lot.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance degradation in case of high volumes

2019-02-28 Thread Antonio Conforti
Hello Ilya,


If I interpret the log correctly, for example for the node with consistentId
7, what I see is a constant increase of dirty pages:

2019-02-27 17:04:42.206  INFO 41153 --- [oint-thread-#67]
i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
[cpId=b3d5b34c-60d3-4c39-af0f-5de6d41686f3, pages=65681,
markPos=FileWALPointer [idx=16, fileOff=24231459, len=75645],
walSegmentsCleared=6, walSegmentsCovered=[10 - 15], markDuration=74ms,
pagesWrite=1429ms, fsync=101ms, total=1604ms]
2019-02-27 17:07:42.244  INFO 41153 --- [oint-thread-#67]
i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
[cpId=83cc0c0a-9427-465d-a290-f8a56c366528, pages=69139,
markPos=FileWALPointer [idx=22, fileOff=41402244, len=75645],
walSegmentsCleared=6, walSegmentsCovered=[16 - 21], markDuration=26ms,
pagesWrite=1521ms, fsync=101ms, total=1648ms]
2019-02-27 17:10:42.553  INFO 41153 --- [oint-thread-#67]
i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
[cpId=4fe0396f-5ece-4016-a269-119d5ecd8d16, pages=72896,
markPos=FileWALPointer [idx=29, fileOff=6850845, len=75645],
walSegmentsCleared=12, walSegmentsCovered=[22 - 28], markDuration=28ms,
pagesWrite=1713ms, fsync=207ms, total=1948ms]
2019-02-27 17:13:45.214  INFO 41153 --- [oint-thread-#67]
i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
[cpId=a309e6a2-865d-439e-ab6d-11ef9641c8b0, pages=75381,
markPos=FileWALPointer [idx=35, fileOff=48279077, len=75645],
walSegmentsCleared=7, walSegmentsCovered=[29 - 34], markDuration=44ms,
pagesWrite=4446ms, fsync=112ms, total=4602ms]
2019-02-27 17:16:42.728  INFO 41153 --- [oint-thread-#67]
i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
[cpId=0299f7d3-8ed0-4f0b-8499-14e46d3bef13, pages=78944,
markPos=FileWALPointer [idx=42, fileOff=39209416, len=75645],
walSegmentsCleared=6, walSegmentsCovered=[35 - 41], markDuration=28ms,
pagesWrite=1989ms, fsync=95ms, total=2112ms]
2019-02-27 17:19:40.829  INFO 41153 --- [r-stripe-10-#27]
o.a.i.i.p.c.p.pagemem.PageMemoryImpl : Throttling is applied to page
modifications [percentOfPartTime=0.31, markDirty=22777 pages/sec,
checkpointWrite=11469 pages/sec, estIdealMarkDirty=226519 pages/sec,
curDirty=0.00, maxDirty=0.07, avgParkTime=122057 ns, pages: (total=82206,
evicted=0, written=10035, synced=0, cpBufUsed=27, cpBufTotal=518215)]
2019-02-27 17:19:50.524  INFO 41153 --- [oint-thread-#67]
i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
[cpId=f323c2fd-d358-469e-8e12-d2ae1391a546, pages=82206,
markPos=FileWALPointer [idx=49, fileOff=42053202, len=75645],
walSegmentsCleared=8, walSegmentsCovered=[42 - 48], markDuration=27ms,
pagesWrite=9749ms, fsync=123ms, total=9899ms]
2019-02-27 17:22:40.885  INFO 41153 --- [r-stripe-11-#28]
o.a.i.i.p.c.p.pagemem.PageMemoryImpl : Throttling is applied to page
modifications [percentOfPartTime=0.23, markDirty=14449 pages/sec,
checkpointWrite=6729 pages/sec, estIdealMarkDirty=166922 pages/sec,
curDirty=0.00, maxDirty=0.09, avgParkTime=108997 ns, pages: (total=68092,
evicted=0, written=13266, synced=0, cpBufUsed=12, cpBufTotal=518215)]


I'll try to explain better:

At every finished checkpoint I see a constant increase of dirty pages and a
consequent increase of the checkpoint duration:

1) at 17:04:42.206 there are 65681 dirty pages and a checkpoint duration of
1604ms
2) at 17:19:50.524 there are 82206 dirty pages and a checkpoint duration of
9899ms


So in only 15 minutes, at a constant rate of 4000 entries/sec, I observe the
checkpoint delay increase by about 9 times with respect to the start of the
simulation, with the consequent application of throttling, as reported in
the log at 17:19:40.

I expected a constant number of dirty pages and a consequently constant
checkpoint delay.

Do you have any suggestions about that?

Thanks a lot. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance degradation in case of high volumes

2019-02-28 Thread Ilya Kasnacheev
Hello!

It's hard to say. Maybe the problem is caused by some other index.

Can you make a reproducer project which will exhibit this behavior so that
we could check?

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 17:41, Justin Ji :

> Ilya -
>
> I have tried to increase QueryIndex.setInlineSize and
> CacheConfiguration.setSqlIndexMaxInlineSize to 128, 256, and 512, but the
> performance became worse.
>
> Am I missing some configuration?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Streamer for updates ?

2019-02-28 Thread Ilya Kasnacheev
Hello!

There is definitely an advantage on the side of DataStreamer.
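A minimal sketch of that usage (the cache name and types are placeholders;
assumes an already-started Ignite instance):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
    streamer.allowOverwrite(true); // required when re-streaming existing keys

    for (int i = 0; i < 100_000; i++)
        streamer.addData(i, "value-" + i);
} // close() flushes any remaining buffered entries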

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 17:29, Mikael :

> Hi!
>
> If I have to update a large number of items in a cache (same keys, new
> values every few seconds), but it's the same keys so I need to have
> allow overwrite enabled, is there any advantage of using a streamer for
> this or is it better to just collect them in a map and use putAll ?
>
> Mikael
>
>
>


Re: Performance degradation in case of high volumes

2019-02-28 Thread Justin Ji
Ilya -

I have tried to increase QueryIndex.setInlineSize and
CacheConfiguration.setSqlIndexMaxInlineSize to 128, 256, and 512, but the
performance became worse.

Am I missing some configuration?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Streamer for updates ?

2019-02-28 Thread Mikael

Hi!

If I have to update a large number of items in a cache (same keys, new 
values every few seconds), but it's the same keys so I need to have 
allow overwrite enabled, is there any advantage of using a streamer for 
this or is it better to just collect them in a map and use putAll ?


Mikael




Re: Access a cache loaded by DataStreamer with SQL

2019-02-28 Thread Ilya Kasnacheev
Hello!

I have linked the documentation page; there are also some code examples in
the distribution.
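For reference, a minimal sketch of such a setup (Person is a hypothetical
value class; the cache and table names are placeholders):

import java.util.Collections;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setQueryEntities(Collections.singletonList(
    new QueryEntity(Long.class, Person.class).setTableName("PERSON")));
ccfg.setSqlSchema("PUBLIC"); // query it as SELECT * FROM PUBLIC.PERSON

IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);
// Load it with a DataStreamer as usual; SQL sees the same data.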

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 17:10, Mike Needham :

> Are there any examples that show the steps to do this correctly? I
> stumbled upon this approach but have no idea if it is the best way to do it.
>
> On Thu, Feb 28, 2019 at 6:27 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> There's no restriction on cache name but setting it up for the first time
>> may be tricky indeed.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
Wed, 27 Feb 2019 at 19:48, needbrew99 :
>>
>>> OK, was able to get it working.  Apparently the cache name has to be
>>> PUBLIC
>>> and it will create a table based on the object definition that I have.
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>
> --
> *Some days it just not worth chewing through the restraints*
>


Re: Access a cache loaded by DataStreamer with SQL

2019-02-28 Thread Mike Needham
Are there any examples that show the steps to do this correctly? I stumbled
upon this approach but have no idea if it is the best way to do it.

On Thu, Feb 28, 2019 at 6:27 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> There's no restriction on cache name but setting it up for the first time
> may be tricky indeed.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Wed, 27 Feb 2019 at 19:48, needbrew99 :
>
>> OK, was able to get it working.  Apparently the cache name has to be
>> PUBLIC
>> and it will create a table based on the object definition that I have.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

-- 
*Some days it just not worth chewing through the restraints*


Re: Performance degradation in case of high volumes

2019-02-28 Thread Ilya Kasnacheev
Hello!

Since your key is composite, it's hard to estimate its length (BinaryObject
has a lot of overhead). It's recommended to increase
IGNITE_MAX_INDEX_PAYLOAD_SIZE
until you no longer see this problem. Please try e.g. 128.

You can also try specifying that via setSqlIndexMaxInlineSize().
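A sketch of both options (the cache name is a placeholder; note that the
system property must be set before the node starts and applies to all
indexes without an explicit inline size):

import org.apache.ignite.configuration.CacheConfiguration;

// (1) Per cache, via the configuration:
CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("myCache");
cacheCfg.setSqlIndexMaxInlineSize(128);

// (2) Node-wide, via the system property, before Ignition.start():
System.setProperty("IGNITE_MAX_INDEX_PAYLOAD_SIZE", "128");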

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 16:21, BinaryTree :

> Hi Ilya -
> First of all, thanks for your reply!
> Here is my cache configuration:
>
> private static CacheConfiguration<DpKey, DpCache> getCacheConfiguration(IgniteConfiguration cfg) {
>     CacheConfiguration<DpKey, DpCache> cacheCfg = new CacheConfiguration<>();
>     cacheCfg.setName(IgniteCacheKey.DATA_POINT_NEW.getCode());
>     cacheCfg.setCacheMode(CacheMode.PARTITIONED);
>     cacheCfg.setBackups(1);
>     cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>     cacheCfg.setDataRegionName(Constants.FIVE_GB_PERSISTENCE_REGION);
>     cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
>     cacheCfg.setWriteThrough(true);
>     cacheCfg.setWriteBehindEnabled(true);
>     cacheCfg.setWriteBehindFlushThreadCount(2);
>     cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
>     cacheCfg.setWriteBehindFlushSize(409600);
>     cacheCfg.setWriteBehindBatchSize(1024);
>     cacheCfg.setStoreKeepBinary(true);
>     cacheCfg.setQueryParallelism(16);
>
>     // 2M
>     cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
>     cacheCfg.setRebalanceThrottle(100);
>
>     cacheCfg.setSqlIndexMaxInlineSize(256);
>
>     List<QueryEntity> entities = getQueryEntities();
>     cacheCfg.setQueryEntities(entities);
>
>     CacheKeyConfiguration cacheKeyConfiguration = new CacheKeyConfiguration(DpKey.class);
>     cacheCfg.setKeyConfiguration(cacheKeyConfiguration);
>
>     RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
>     affinityFunction.setPartitions(128);
>     affinityFunction.setExcludeNeighbors(true);
>     cacheCfg.setAffinity(affinityFunction);
>
>     cfg.setCacheConfiguration(cacheCfg);
>     return cacheCfg;
> }
>
>
> private static List<QueryEntity> getQueryEntities() {
>     List<QueryEntity> entities = Lists.newArrayList();
>
>     // Configure the visible (queryable) fields.
>     QueryEntity entity = new QueryEntity(DpKey.class.getName(), DpCache.class.getName());
>     entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());
>
>     LinkedHashMap<String, String> map = new LinkedHashMap<>();
>     map.put("id", "java.lang.String");
>     map.put("gmtCreate", "java.lang.Long");
>     map.put("gmtModified", "java.lang.Long");
>     map.put("devId", "java.lang.String");
>     map.put("dpId", "java.lang.Integer");
>     map.put("code", "java.lang.String");
>     map.put("name", "java.lang.String");
>     map.put("customName", "java.lang.String");
>     map.put("mode", "java.lang.String");
>     map.put("type", "java.lang.String");
>     map.put("value", "java.lang.String");
>     map.put("rawValue", byte[].class.getName());
>     map.put("time", "java.lang.Long");
>     map.put("status", "java.lang.Boolean");
>     map.put("uuid", "java.lang.String");
>
>     entity.setFields(map);
>
>     // Configure the index information.
>     QueryIndex devIdIdx = new QueryIndex("devId");
>     devIdIdx.setName("idx_devId");
>     devIdIdx.setInlineSize(32);
>     List<QueryIndex> indexes = Lists.newArrayList(devIdIdx);
>     entity.setIndexes(indexes);
>
>     entities.add(entity);
>
>     return entities;
> }
>
> public class DpKey implements Serializable {
>     private String key;
>
>     @AffinityKeyMapped
>     private String devId;
>
>     public DpKey() {
>     }
>
>     public DpKey(String key, String devId) {
>         this.key = key;
>         this.devId = devId;
>     }
>
>     public String getKey() {
>         return this.key;
>     }
>
>     public void setKey(String key) {
>         this.key = key;
>     }
>
>     public String getDevId() {
>         return this.devId;
>     }
>
>     public void setDevId(String devId) {
>         this.devId = devId;
>     }
>
>     public boolean equals(Object o) {
>         if (this == o) {
>             return true;
>         } else if (o != null && this.getClass() == o.getClass()) {
>             DpKey key = (DpKey) o;
>             return this.key.equals(key.key);
>         } else {
>             return false;
>         }
>     }
>
>     public int hashCode() {
>         return this.key.hashCode();
>     }
> }
>
> And I have described my issue in this post and some tests I have done :
>
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Hung-after-a-period-td21161.html
>
> -- Original message --
> *From:* "ilya.kasnacheev";
> *Sent:* Thursday, Feb 28, 2019, 9:03 PM
> *To:* "user";
> *Subject:* Re: Performance degradation in case of high volumes
>
> Hello Justin!
>
> Ignite 2.6 does have IGNITE_MAX_INDEX_PAYLOAD_SIZE system property.
>
> We are talking about primary key here. What is your primary key type? What
> other indexes do you have? Can you provide complete configuration for
> affected tables (including POJOs if applicable?)
>
> Regards,
> 

Re: Performance degradation in case of high volumes

2019-02-28 Thread BinaryTree
Hi Ilya - 
First of all, thanks for your reply!
Here is my cache configuration:
private static CacheConfiguration<DpKey, DpCache> getCacheConfiguration(IgniteConfiguration cfg) {
    CacheConfiguration<DpKey, DpCache> cacheCfg = new CacheConfiguration<>();
    cacheCfg.setName(IgniteCacheKey.DATA_POINT_NEW.getCode());
    cacheCfg.setCacheMode(CacheMode.PARTITIONED);
    cacheCfg.setBackups(1);
    cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
    cacheCfg.setDataRegionName(Constants.FIVE_GB_PERSISTENCE_REGION);
    cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
    cacheCfg.setWriteThrough(true);
    cacheCfg.setWriteBehindEnabled(true);
    cacheCfg.setWriteBehindFlushThreadCount(2);
    cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
    cacheCfg.setWriteBehindFlushSize(409600);
    cacheCfg.setWriteBehindBatchSize(1024);
    cacheCfg.setStoreKeepBinary(true);
    cacheCfg.setQueryParallelism(16);

    // 2M
    cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
    cacheCfg.setRebalanceThrottle(100);

    cacheCfg.setSqlIndexMaxInlineSize(256);

    List<QueryEntity> entities = getQueryEntities();
    cacheCfg.setQueryEntities(entities);

    CacheKeyConfiguration cacheKeyConfiguration = new CacheKeyConfiguration(DpKey.class);
    cacheCfg.setKeyConfiguration(cacheKeyConfiguration);

    RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
    affinityFunction.setPartitions(128);
    affinityFunction.setExcludeNeighbors(true);
    cacheCfg.setAffinity(affinityFunction);

    cfg.setCacheConfiguration(cacheCfg);
    return cacheCfg;
}


private static List<QueryEntity> getQueryEntities() {
    List<QueryEntity> entities = Lists.newArrayList();

    // Configure the visible (queryable) fields.
    QueryEntity entity = new QueryEntity(DpKey.class.getName(), DpCache.class.getName());
    entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());

    LinkedHashMap<String, String> map = new LinkedHashMap<>();
    map.put("id", "java.lang.String");
    map.put("gmtCreate", "java.lang.Long");
    map.put("gmtModified", "java.lang.Long");
    map.put("devId", "java.lang.String");
    map.put("dpId", "java.lang.Integer");
    map.put("code", "java.lang.String");
    map.put("name", "java.lang.String");
    map.put("customName", "java.lang.String");
    map.put("mode", "java.lang.String");
    map.put("type", "java.lang.String");
    map.put("value", "java.lang.String");
    map.put("rawValue", byte[].class.getName());
    map.put("time", "java.lang.Long");
    map.put("status", "java.lang.Boolean");
    map.put("uuid", "java.lang.String");

    entity.setFields(map);

    // Configure the index information.
    QueryIndex devIdIdx = new QueryIndex("devId");
    devIdIdx.setName("idx_devId");
    devIdIdx.setInlineSize(32);
    List<QueryIndex> indexes = Lists.newArrayList(devIdIdx);
    entity.setIndexes(indexes);

    entities.add(entity);

    return entities;
}

public class DpKey implements Serializable {
    private String key;

    @AffinityKeyMapped
    private String devId;

    public DpKey() {
    }

    public DpKey(String key, String devId) {
        this.key = key;
        this.devId = devId;
    }

    public String getKey() {
        return this.key;
    }

    public void setKey(String key) {
        this.key = key;
    }

    public String getDevId() {
        return this.devId;
    }

    public void setDevId(String devId) {
        this.devId = devId;
    }

    public boolean equals(Object o) {
        if (this == o) {
            return true;
        } else if (o != null && this.getClass() == o.getClass()) {
            DpKey key = (DpKey) o;
            return this.key.equals(key.key);
        } else {
            return false;
        }
    }

    public int hashCode() {
        return this.key.hashCode();
    }
}

And I have described my issue in this post and some tests I have done :
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Hung-after-a-period-td21161.html


-- Original message --
From: "ilya.kasnacheev";
Sent: Thursday, Feb 28, 2019, 9:03 PM
To: "user";
Subject: Re: Performance degradation in case of high volumes



Hello Justin!


Ignite 2.6 does have IGNITE_MAX_INDEX_PAYLOAD_SIZE system property.


We are talking about primary key here. What is your primary key type? What 
other indexes do you have? Can you provide complete configuration for affected 
tables (including POJOs if applicable?)


Regards,

-- 

Ilya Kasnacheev









Thu, 28 Feb 2019 at 15:29, Justin Ji :

Ilya - 
 
I use Ignite 2.6.0, which does not have the IGNITE_MAX_INDEX_PAYLOAD_SIZE
system property.
But our index field has a fixed length of 25 characters, so where can I find
the algorithm to calculate the 'index inline size'?
 
 Looking forward to your reply.
 
 
 
 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Can Custom SQL functions get loaded after server start?

2019-02-28 Thread Ilya Kasnacheev
Hello!

The same class won't work; you will need to generate different classes (by
name + package name) for different iterations of your functions class.

On a related note, you could try to build a generic Java function that calls
back into your .NET code, passing the function name + parameters to it.
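A rough sketch of that idea (the registry and dispatch scheme are
hypothetical, not an Ignite API): one stable SQL function that looks up a
named implementation at call time, so new functions can be registered
without generating new classes.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

import org.apache.ignite.cache.query.annotations.QuerySqlFunction;

public class GenericFunc {
    /** Hypothetical registry, populated at runtime (e.g. by a bridge to the .NET side). */
    public static final Map<String, Function<Object[], Object>> REGISTRY =
        new ConcurrentHashMap<>();

    /** Dispatches e.g. SELECT "wayne".call('square', 4) to a registered implementation. */
    @QuerySqlFunction
    public static Object call(String name, Object... args) {
        Function<Object[], Object> f = REGISTRY.get(name);

        if (f == null)
            throw new IllegalArgumentException("Unknown function: " + name);

        return f.apply(args);
    }
}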

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 16:09, wt :

> The problem is that it will work in Java using the API, but all my code is
> in .NET, and unfortunately the .NET API doesn't have any of this capability
> in it. I guess we could have a separate Java app that does this. I tried to
> fudge it by compiling the same class with an additional function, but it
> looks like Ignite loads these up once, so it doesn't find it. I am trying to
> build a financial app that allows users to upload their own functions and
> run them over the data in the system. Not a train smash - I can work around
> it. Thanks Ilya
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can Custom SQL functions get loaded after server start?

2019-02-28 Thread wt
The problem is that it will work in Java using the API, but all my code is in
.NET, and unfortunately the .NET API doesn't have any of this capability in
it. I guess we could have a separate Java app that does this. I tried to
fudge it by compiling the same class with an additional function, but it
looks like Ignite loads these up once, so it doesn't find it. I am trying to
build a financial app that allows users to upload their own functions and
run them over the data in the system. Not a train smash - I can work around
it. Thanks Ilya



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance degradation in case of high volumes

2019-02-28 Thread Ilya Kasnacheev
Hello Justin!

Ignite 2.6 does have IGNITE_MAX_INDEX_PAYLOAD_SIZE system property.

We are talking about primary key here. What is your primary key type? What
other indexes do you have? Can you provide complete configuration for
affected tables (including POJOs if applicable?)

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 15:29, Justin Ji :

> Ilya -
>
> I use Ignite 2.6.0, which does not have the IGNITE_MAX_INDEX_PAYLOAD_SIZE
> system property.
> But our index field has a fixed length of 25 characters, so where can I find
> the algorithm to calculate the 'index inline size'?
>
> Looking forward to your reply.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance degradation in case of high volumes

2019-02-28 Thread Ilya Kasnacheev
Hello Antonio!

I have reviewed your logs but don't understand what the issue is here. Can
you please clarify what the problem is once the warning went away?

Of course you should expect an eventual slowdown, since the height of the
B-tree will still increase over time.

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 15:27, Antonio Conforti :

> Hello Ilya,
> there was a misunderstanding.
> The warning log message was the explanation of how I chose the value 66 as
> the max payload.
> After setting the environment variable IGNITE_MAX_INDEX_PAYLOAD_SIZE, the
> warning message disappeared.
> But the test seems to have failed.
> You can see my evaluation in the message above.
>
> Thanks a lot and I look forward to hearing from you.
>
> Regards
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can Custom SQL functions get loaded after server start?

2019-02-28 Thread Ilya Kasnacheev
Hello!

I don't see why you can't put new classes on the classpath (or even define
them on the fly) and then create new caches with a fresh set of functions to
bind them to a schema.

However, I guess you will have to put them on the classpath of all nodes
before doing that.

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 15:10, wt :

> I am looking to build a capability to add new functions without restarting
> the service.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance degradation in case of high volumes

2019-02-28 Thread Justin Ji
Ilya - 

I use Ignite 2.6.0, which does not have the IGNITE_MAX_INDEX_PAYLOAD_SIZE
system property.
But our index field has a fixed length of 25 characters, so where can I find
the algorithm to calculate the 'index inline size'?

Looking forward to your reply.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Access a cache loaded by DataStreamer with SQL

2019-02-28 Thread Ilya Kasnacheev
Hello!

There's no restriction on cache name but setting it up for the first time
may be tricky indeed.

Regards,
-- 
Ilya Kasnacheev


Wed, 27 Feb 2019 at 19:48, needbrew99 :

> OK, was able to get it working.  Apparently the cache name has to be PUBLIC
> and it will create a table based on the object definition that I have.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: same cache cannot update twice in one transaction

2019-02-28 Thread Ilya Kasnacheev
Hello!

Yes, unfortunately MVCC does not support 3rd party persistence (and I'm not
sure that it is possible by design).

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 00:29, xmw45688 :

> Hi Ilya,
>
> Since I'm using Cassandra as the data store, it raises the following
> exception once MVCC is enabled -
> 
>
>
> class org.apache.ignite.IgniteCheckedException: Grid configuration
> parameter
> invalid: readThrough cannot be used with TRANSACTIONAL_SNAPSHOT atomicity
> mode
> at
>
> org.apache.ignite.internal.processors.GridProcessorAdapter.assertParameter(GridProcessorAdapter.java:140)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.validate(GridCacheProcessor.java:527)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCacheContext(GridCacheProcessor.java:1543)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheContext(GridCacheProcessor.java:2324)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$null$fd62dedb$1(GridCacheProcessor.java:2163)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$5(GridCacheProcessor.java:2086)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$937cbe24$1(GridCacheProcessor.java:2160)
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance degradation in case of high volumes

2019-02-28 Thread Antonio Conforti
Hello Ilya,
there was a misunderstanding.
The warning log message was the explanation of how I chose the value 66 as
the max payload.
After setting the environment variable IGNITE_MAX_INDEX_PAYLOAD_SIZE, the
warning message disappeared.
But the test seems to have failed.
You can see my evaluation in the message above.

Thanks a lot and I look forward to hearing from you.

Regards




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance degradation in case of high volumes

2019-02-28 Thread Justin Ji
Ilya - 

I use Ignite 2.6.0, which does not have the IGNITE_MAX_INDEX_PAYLOAD_SIZE
system property.
But our index field has a fixed length of 25 characters, so where can I find
the algorithm to calculate the 'index inline size'?

Looking forward to your reply.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Can Custom SQL functions get loaded after server start?

2019-02-28 Thread wt
I am looking to build a capability to add new functions without restarting
the service.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite.net custom sql functions

2019-02-28 Thread wt
That works - super happy with the result. You are always so helpful, thank
you Ilya.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance degradation in case of high volumes

2019-02-28 Thread ilya.kasnacheev
Hello!

I'm afraid you will have to increase IGNITE_MAX_INDEX_PAYLOAD_SIZE until you
no longer see "Indexed columns of a row cannot be fully inlined" warning.
Can you try?

Alternatively you can try specifying it with
cacheCfg.setSqlIndexMaxInlineSize().

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite.net custom sql functions

2019-02-28 Thread Ilya Kasnacheev
Hello!

It does not seem that you define any tables for this cache, but

0: jdbc:ignite:thin://localhost> SELECT "wayne".square(4);
++
|   "wayne".SQUARE(4)|
++
| 16 |
++
1 row selected (0,096 seconds)

seems to work for me! Note the quotes since cache names/schemas are case
sensitive.

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 14:49, wt :

> I don't know why I didn't think of that - superb, thanks Ilya.
>
> Are you certain the function needs to be in libs\home? It can't seem to
> find the function, and I would have expected to at least see something in
> the log about loading it.
>
> ENV setup
>
> system variable  IGNITE_HOME = c:\Ignite_2.7
>
> test function code (no FQDN namespace it is just a class at the root):
>
> import org.apache.ignite.cache.query.annotations.QuerySqlFunction;
>
> public class testfunc {
>     @QuerySqlFunction
>     public static int square(int x) {
>         return x * x;
>     }
> }
>
> I copied the jar file to c:\Ignite_2.7\libs (at the root of that folder)
>
> XML config section:
>
> <property name="cacheConfiguration">
>     <list>
>         <bean class="org.apache.ignite.configuration.CacheConfiguration">
>             <property name="name" value="wayne"/>
>             <property name="sqlFunctionClasses">
>                 <list>
>                     <value>testfunc</value>
>                 </list>
>             </property>
>         </bean>
>     </list>
> </property>
>
> I have no data in the cache, but if I try to run the following from DBeaver
> I get an error:
>
> SELECT square(10) FROM WAYNE.WAYNE
>
> here is the error; note it is trying to access the PUBLIC schema.
>
> [11:41:11,859][SEVERE][client-connector-#63][JdbcRequestHandler] Failed to
> execute SQL query [reqId=0, req=JdbcQueryExecuteRequest [schemaName=PUBLIC,
> pageSize=1024, maxRows=200, sqlQry=SELECT square(10) FROM WAYNE.WAYNE,
> args=Object[] [],
> stmtType=ANY_STATEMENT_TYPE, autoCommit=true]]
> class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed
> to parse query. Function "SQUARE" not found; SQL statement:
>
>
> So I created a basic table in PUBLIC called tes and tried to run this:
>
> SELECT wayne.square(10) FROM tes
>
> Same issue - function not found.
>
> I then tried this:
>
> SELECT wayne.wayne.square(10) FROM tes
>
> and it states that the database is not found.
>
> Seems to me like the jar file is not being loaded.
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Data Streamer Hung after a period

2019-02-28 Thread Ilya Kasnacheev
Hello!

The reason for the slow index building, as well as a workaround, is described in:
http://apache-ignite-users.70518.x6.nabble.com/Performance-degradation-in-case-of-high-volumes-tp27150p27204.html

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 14:34, Justin Ji :

> I have tried to load the data without indexes, but it did not help!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite.net custom sql functions

2019-02-28 Thread wt
I don't know why I didn't think of that - superb, thanks Ilya.

Are you certain the function needs to be in libs\home? It can't seem to
find the function, and I would have expected to at least see something in the
log about loading it.

ENV setup

system variable  IGNITE_HOME = c:\Ignite_2.7

test function code (no FQDN namespace it is just a class at the root):

import org.apache.ignite.cache.query.annotations.QuerySqlFunction;

public class testfunc {
    @QuerySqlFunction
    public static int square(int x) {
        return x * x;
    }
}

I copied the jar file to c:\Ignite_2.7\libs (at the root of that folder)

XML config section:

<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="wayne"/>
            <property name="sqlFunctionClasses">
                <list>
                    <value>testfunc</value>
                </list>
            </property>
        </bean>
    </list>
</property>

I have no data in the cache, but if I try to run the following from DBeaver I
get an error:

SELECT square(10) FROM WAYNE.WAYNE

here is the error; note it is trying to access the PUBLIC schema.

[11:41:11,859][SEVERE][client-connector-#63][JdbcRequestHandler] Failed to
execute SQL query [reqId=0, req=JdbcQueryExecuteRequest [schemaName=PUBLIC,
pageSize=1024, maxRows=200, sqlQry=SELECT square(10) FROM WAYNE.WAYNE,
args=Object[] [],
stmtType=ANY_STATEMENT_TYPE, autoCommit=true]]
class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed
to parse query. Function "SQUARE" not found; SQL statement:


So I created a basic table in PUBLIC called tes and tried to run this:

SELECT wayne.square(10) FROM tes

Same issue - function not found.

I then tried this:

SELECT wayne.wayne.square(10) FROM tes

and it states that the database is not found.

Seems to me like the jar file is not being loaded.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Data Streamer Hung after a period

2019-02-28 Thread Justin Ji
I have tried to load the data without indexes, but it did not help!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite.net custom sql functions

2019-02-28 Thread Ilya Kasnacheev
Hello!

I think you can refer to functions as "schemaName".function().
This means you can create a single cache with a known schema and all the
functions, and always refer to it.

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 11:29, wt :

> One question regarding this configuration: I noticed it is cache-specific,
> and to me that implies every cache will need to have this configuration
> section set. Is there perhaps a way to configure this globally once instead
> of replicating it for every cache?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite.net custom sql functions

2019-02-28 Thread wt
I have just realized an issue with this process. I have built my own version
of the web console that generates caches for tables in databases. This is an
automated process in the UI so i do not know what additional caches will be
created beforehand. This leaves me with the dilemma of this custom function
configuration which seems to be associated with specific caches and not
globally available. I hope there is a solution to this else i will have a
lot of code to change to shift things over to the Public Cache which isnt
ideal as i need isolation. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance degradation in case of high volumes

2019-02-28 Thread Antonio Conforti
Hello Ilya.
Thank you so much for your answer.
I tried your solution in the same scenario:
- PARTITIONED cache, SQL indexed by key and by a field (type long, the last
key field), descending
- checkpoint frequency restored to default
- 4000 msg/sec rate (fixed)
I set the environment variable IGNITE_MAX_INDEX_PAYLOAD_SIZE to 66, the
size recommended by this warning:
(WARN 59726 --- [0-#17%Datafeed%] o.a.i.i.p.query.h2.database.H2TreeIndex  :
 Indexed columns of a row cannot be fully inlined into index
what may lead to slowdown due to additional data page reads, increase index
inline size if needed (set system property IGNITE_MAX_INDEX_PAYLOAD_SIZE
with recommended size (be aware it will be used by default for all indexes
without explicit inline size)) [cacheName=IGN_DF_CMF_QUOTE,
tableName=DF_CMF_QUOTE, idxName=_key_PK, idxCols=(_KEY), idxType=PRIMARY
KEY, curSize=10, recommendedInlineSize=66])
I set it on all cluster nodes and ran the test again.
We had to stop the test after 3 million messages, and what I noticed was
that the checkpoints collected a number of pages that grew over and over.
This increment grew much faster than in the previous tests
(IGNITE_MAX_INDEX_PAYLOAD_SIZE not set).
So I couldn't observe the performance degradation itself (because I had to
stop the test), but the increasing number of pages in every checkpoint
leads me to expect that it will happen.

I attach the server logs of this test.

log_ignite_190227_setIndexPayload.gz


What do you think?

Thanks a lot

Regards




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite.net custom sql functions

2019-02-28 Thread wt
one question regarding this configuration. I noticed it is cache specific and
to me that implies every cache will need to have this configuration section
set. Is there perhaps a way to configure this globally once instead of
replicating it for every cache?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Data Streamer Hung after a period

2019-02-28 Thread Justin Ji
After digging into the issue, I found that the streamer threads are waiting
on index building.
This looks normal for a database-backed system: the more data, the slower the
insertion.
But Ignite is a widely used system; I think other people may have encountered
this problem and may have ways to improve the performance.

I would appreciate anyone who can give me some advice.

"data-streamer-stripe-2-#11%nx-s-ignite-001%" #30 prio=5 os_prio=0
tid=0x5571cbdba800 nid=0x95 waiting on condition [0x7f8e8c9ed000]
   java.lang.Thread.State: WAITING (parking)
   at sun.misc.Unsafe.park(Native Method)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
   at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
   at
org.apache.ignite.internal.util.future.GridFutureAdapter.getUninterruptibly(GridFutureAdapter.java:145)
   at
org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.read(AsyncFileIO.java:95)
   at
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:351)
   at
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:328)
   at
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:312)
   at
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:779)
   at
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:624)
   at
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:140)
   at
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
   at
org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:61)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.createRowFromLink(H2Tree.java:149)
   at
org.apache.ignite.internal.processors.query.h2.database.io.H2LeafIO.getLookupRow(H2LeafIO.java:67)
   at
org.apache.ignite.internal.processors.query.h2.database.io.H2LeafIO.getLookupRow(H2LeafIO.java:33)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:167)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:46)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.getRow(BPlusTree.java:4482)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:209)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:46)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.compare(BPlusTree.java:4469)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findInsertionPoint(BPlusTree.java:4389)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$1500(BPlusTree.java:83)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0(BPlusTree.java:278)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4816)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4801)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage(PageHandler.java:158)
   at
org.apache.ignite.internal.processors.cache.persistence.DataStructure.read(DataStructure.java:332)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2336)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2348)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2348)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2086)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2066)
   at
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:247)
   at
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:466)
   at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:659)
   at
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1866)
   at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:403)
   at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:1393)
   at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1257)
   at

Re: ignite.net custom sql functions

2019-02-28 Thread wt
Thank you both for your assistance here, it is greatly appreciated - Ignite
Rocks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Data Streamer Hung after a period

2019-02-28 Thread BinaryTree
After digging into the issue, I found that the streamer threads are waiting
on index building.
This looks normal for a database-backed system: the more data, the slower the
insertion.
But Ignite is a widely used system; I think other people may have encountered
this problem and may have ways to improve the performance.


I would appreciate anyone who can give me some advice.


"data-streamer-stripe-2-#11%nx-s-ignite-001%" #30 prio=5 os_prio=0 
tid=0x5571cbdba800 nid=0x95 waiting on condition [0x7f8e8c9ed000]
   java.lang.Thread.State: WAITING (parking)
   at sun.misc.Unsafe.park(Native Method)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
   at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
   at 
org.apache.ignite.internal.util.future.GridFutureAdapter.getUninterruptibly(GridFutureAdapter.java:145)
   at 
org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.read(AsyncFileIO.java:95)
   at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:351)
   at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:328)
   at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:312)
   at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:779)
   at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:624)
   at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:140)
   at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
   at 
org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:61)
   at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.createRowFromLink(H2Tree.java:149)
   at 
org.apache.ignite.internal.processors.query.h2.database.io.H2LeafIO.getLookupRow(H2LeafIO.java:67)
   at 
org.apache.ignite.internal.processors.query.h2.database.io.H2LeafIO.getLookupRow(H2LeafIO.java:33)
   at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:167)
   at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:46)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.getRow(BPlusTree.java:4482)
   at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:209)
   at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:46)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.compare(BPlusTree.java:4469)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findInsertionPoint(BPlusTree.java:4389)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$1500(BPlusTree.java:83)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0(BPlusTree.java:278)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4816)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4801)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage(PageHandler.java:158)
   at 
org.apache.ignite.internal.processors.cache.persistence.DataStructure.read(DataStructure.java:332)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2336)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2348)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2348)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2086)
   at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2066)
   at 
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:247)
   at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:466)
   at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:659)
   at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1866)
   at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:403)
   at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:1393)
   at