Re: GridProcessorAdapter fails to start due to failure to initialise WAL segment on Ignite startup

2018-11-30 Thread Raymond Wilson
Hi Ilya,

We don’t change the WAL segment size from the default values.

The only activity that occurred was stopping a node, making a minor change
(not related to persistence) and rerunning the node.
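
For reference, both settings in question live on DataStorageConfiguration; below
is a minimal Java sketch of what we effectively run with (the C# configuration
mirrors these properties; the 64 MB figure is simply the documented default, not
something we set explicitly):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalConfigSketch {
    // Sketch only: the WAL segment size is left at the 64 MB default and must
    // stay identical across restarts; FSYNC is the strict WAL mode mentioned
    // in the original post below.
    public static IgniteConfiguration configuration() {
        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setWalSegmentSize(64 * 1024 * 1024)
            .setWalMode(WALMode.FSYNC);

        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}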

Raymond.

Sent from my iPhone

On 1/12/2018, at 2:52 AM, Ilya Kasnacheev  wrote:

Hello!

"WAL segment size change is not supported"

Is there a chance that you have changed WAL segment size setting between
launches?

Regards,
-- 
Ilya Kasnacheev


On Thu, Nov 29, 2018 at 02:39, Raymond Wilson:

> I'm using Ignite 2.6 with the C# client.
>
> I have a running cluster that I was debugging. All requests were read only
> (there were no state-mutating operations running in the cluster).
>
> I terminated the one server node in the grid (running in the debugger) to
> make a small code change and re-run it (I do this frequently). The node may
> have been stopped for longer than the partitioning timeout.
>
> On re-running the server node it failed to start. On re-running the
> complete cluster it still failed to start, and all other nodes report
> failure to connect to an inactive grid.
>
> Looking at the log for the server node that is failing I get the following
> log showing an exception while initializing a WAL segment. This failure
> seems permanent and is unexpected, as we are using the strict WAL atomicity
> mode (WalMode.Fsync) for all persisted regions. Is this a recoverable error,
> or does this imply data loss? [NB: This is a dev system so no prod data is
> affected]
>
>
> 2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer
> >>>    __________  ________________
> >>>   /  _/ ___/ |/ /  _/_  __/ __/
> >>>  _/ // (7 7    // /  / / / _/
> >>> /___/\___/_/|_/___/ /_/ /___/
> >>>
> >>> ver. 2.6.0#20180710-sha1:669feacc
> >>> 2018 Copyright(C) Apache Software Foundation
> >>>
> >>> Ignite documentation: http://ignite.apache.org
> 2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer Config URL:
> n/a
> 2018-11-29 12:26:09,948 [1] INFO  ImmutableCacheComputeServer
> IgniteConfiguration [igniteInstanceName=TRex-Immutable, pubPoolSize=50,
> svcPoolSize=12, callbackPoolSize=12, stripedPoolSize=12, sysPoolSize=12,
> mgmtPoolSize=4, igfsPoolSize=12, dataStreamerPoolSize=12,
> utilityCachePoolSize=12, utilityCacheKeepAliveTime=6, p2pPoolSize=2,
> qryPoolSize=12, igniteHome=null,
> igniteWorkDir=C:\Users\rwilson\AppData\Local\Temp\TRexIgniteData\Immutable,
> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6e4784bc,
> nodeId=8f32d0a6-539c-40dd-bc42-d044f28bac73,
> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@e4487af,
> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000,
> ackTimeout=5000, marsh=null, reconCnt=10, reconDelay=2000,
> maxAckTimeout=60, forceSrvMode=false, clientReconnectDisabled=false,
> internalLsnr=null], segPlc=STOP, segResolveAttempts=2,
> waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1,
> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@10d68fcd,
> locAddr=127.0.0.1, locHost=null, locPort=47100, locPortRange=100,
> shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=3,
> connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
> sockRcvBuf=32768, msgQueueLimit=1024, slowClientQueueLimit=0, nioSrvr=null,
> shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
> tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=16,
> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
> boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
> ctxInitLatch=java.util.concurrent.CountDownLatch@117e949d[Count = 1],
> stopping=false,
> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@6db9f5a4],
> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@5f8edcc5,
> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7a675056,
> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
> txCfg=org.apache.ignite.configuration.TransactionConfiguration@d21a74c,
> cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED,
> p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
> timeSrvPortRange=100, failureDetectionTimeout=1,
> clientFailureDetectionTimeout=3, metricsLogFreq=1, hadoopCfg=null,
> connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@6e509ffa,
> odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
> [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
> grpName=null], classLdr=null, sslCtxFactory=null,
> 

RE: Continuous queries and duplicates

2018-11-30 Thread Sobolewski, Krzysztof
I will take a look, thanks!

But, upon further investigation, it appears that there is no isolation
whatsoever between the initial query and the listener in ContinuousQuery. The
initial query can pick up entries added after it started which had already been
sent to the local listener. This way the entry is reported twice, and hence the
duplicates. This happens regardless of the type of the initial query (ScanQuery
or SqlQuery). It reduces the usefulness of ContinuousQuery a lot, because we
have to find a way to rule out these duplicates, which is difficult and can
incur a significant overhead.
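
For context, here is a minimal Java sketch of the pattern we use (hypothetical
Long/String cache; Ignite 2.6 API). With no isolation, an entry added while the
cursor below is being iterated can be reported by both paths:

import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ContinuousQuerySketch {
    // Sketch only: 'cache' is assumed to already exist.
    static void listen(IgniteCache<Long, String> cache) {
        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

        qry.setInitialQuery(new ScanQuery<Long, String>());
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Long, ? extends String> e : events)
                System.out.println("listener: " + e.getKey());
        });

        QueryCursor<Cache.Entry<Long, String>> cur = cache.query(qry);

        for (Cache.Entry<Long, String> entry : cur)
            System.out.println("initial query: " + entry.getKey());

        // Keep 'cur' open to continue receiving listener notifications.
    }
}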

So my follow-up question is this: is it behaving as designed, or is there some 
mechanism to prevent these duplicates from happening?
-KS


From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: 30 listopada 2018 14:02
To: user@ignite.apache.org
Subject: Re: Continuous queries and duplicates

Hello!

There should be isolation in AI 2.7 as an experimental feature.

Regards,
--
Ilya Kasnacheev


On Mon, Nov 26, 2018 at 12:53, Sobolewski, Krzysztof:
Thanks. This is a little disappointing. ScanQuery would probably work, but it’s 
not as efficient (can’t use indexes etc.). Are there any plans to enable 
isolation on SqlQuery?
-KS

From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: 26 listopada 2018 08:01
To: user@ignite.apache.org
Subject: Re: Continuous queries and duplicates

Hello!

SQL queries have no isolation currently so it is not possible to avoid the 
problem that you described. You could try switching to ScanQuery, see if it 
helps; or learning to deal with duplicates.

Regards,
--
Ilya Kasnacheev


On Fri, Nov 23, 2018 at 19:25, Sobolewski, Krzysztof:
Hi,

I want to use a ContinuousQuery and there is a slight issue with how it
transitions from the initial query to the notifications phase. It turns out
that if there are additions to the cache happening while the continuous query
runs, an entry may be reported twice - once by the initial query and once by
the listener. This is confirmed by experiment, BTW :) The initial query in this
case is an SqlQuery.

So my question is: is this intentional? Or is it a bug? Is there something I
can do to mitigate this? Is this an issue of isolation level?

Thanks a lot for any pointers :)
-KS



Your Personal Data: We may collect and process information about you that may 
be subject to data protection laws. For more information about how we use and 
disclose your personal data, how we protect your information, our legal basis 
to use your information, your rights and who you can contact, please refer to: 
www.gs.com/privacy-notices


Re: Can we have different Segmentation Policy in same cluster grid?

2018-11-30 Thread Ilya Kasnacheev
Hello!

Have you tried it? What was the result?

Regards,
-- 
Ilya Kasnacheev


On Tue, Nov 27, 2018 at 09:03, Hemasundara Rao <
hemasundara@travelcentrictechnology.com>:

> Hi,
> Can we have a different Segmentation Policy in the same cluster grid?
> Let's say I have a grid with a 3-node server cluster.
> Can I configure the segmentationPolicy value to 'STOP' on one server,
> 'RESTART_JVM' on another server and 'NOOP' on the other server?
> What is the behavior if this is allowed, and what client configuration should
> be used in that case?
>
> Thanks and regards,
> Hemasundar.
>
>


Re: Invalid property 'statisticsEnabled' is not writable

2018-11-30 Thread Ilya Kasnacheev
Hello!

I was able to start a stand-alone node with such configuration. What are
the steps to reproduce this failure?
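
If it reproduces, it may also help to compare against the same flag set
programmatically; a minimal Java sketch (assuming the 2.6 API and the cache
template name from your XML):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StatisticsEnabledSketch {
    // Sketch only: mirrors the relevant XML properties in Java.
    public static IgniteConfiguration configuration() {
        CacheConfiguration<Object, Object> template =
            new CacheConfiguration<>("ccwrCacheTemplate*");

        template.setStatisticsEnabled(true);
        template.setCacheMode(CacheMode.PARTITIONED);
        template.setBackups(1);

        return new IgniteConfiguration().setCacheConfiguration(template);
    }
}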

Regards,
-- 
Ilya Kasnacheev


On Tue, Nov 27, 2018 at 21:23, ApacheUser:

> Hi Team,
>
> We have a 6-node Ignite cluster and load data with Spark. Recently we added a
> "cacheConfiguration" section and now get the error below when we try to
> recreate the "cache" using the Spark data load.
>
> Any hints would be appreciated, please.
>
> The error:
>
>
> Caused by: org.springframework.beans.factory.BeanCreationException: Error
> creating bean with name
> 'org.apache.ignite.configuration.CacheConfiguration#18d63996' defined in
> URL
>
> [file:/apps/ignitedata/apache-ignite-fabric-2.6.0-bin/config/default-config.xml]:
> Error setting property values; nested exception is
> org.springframework.beans.NotWritablePropertyException: Invalid property
> 'statisticsEnabled' of bean class
> [org.apache.ignite.configuration.CacheConfiguration]: Bean property
> 'statisticsEnabled' is not writable or has an invalid setter method. Does
> the parameter type of the setter match the return type of the getter?
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1570)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1280)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
>
>
> my config:
> http://www.springframework.org/schema/beans;
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>xsi:schemaLocation="
>http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd;>
>
> class="org.apache.ignite.configuration.IgniteConfiguration">
>  value="RESTART_JVM"/>
>  value="1"/>
>  value="5"/>
>  />
>  value="3000"/>
>
> 
>  class="org.apache.ignite.configuration.DataStorageConfiguration">
> 
>  />
>
>  value="/apps/ignitedata/data/wal/archive" />
>
>  value="true"/>
>   name="defaultDataRegionConfiguration">
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
> 
> 
>
>  value="#{1024L * 1024 *
> 1024}"/>
> 
> 
> 
> 
> 
> 
>
>class="org.apache.ignite.configuration.DataRegionConfiguration">
>
>  name="name" value="q_major"/>
>
>
>  name="initialSize" value="#{10L * 1024 * 1024 * 1024}"/>
>
>
>  name="maxSize" value="#{50L * 1024 * 1024 * 1024}"/>
>
>
>  name="persistenceEnabled" value="true"/>
>   
> class="org.apache.ignite.configuration.DataRegionConfiguration">
>
>  name="name" value="q_minor"/>
>
>
>  name="initialSize" value="#{10L * 1024 * 1024 * 1024}"/>
>
>
>  name="maxSize" value="#{60L * 1024 * 1024 * 1024}"/>
>
>
>  name="persistenceEnabled" value="true"/>
>   
>   
>
> 
> 
> 
>
>
> 
>
> 
>   
> class="org.apache.ignite.configuration.CacheConfiguration">
>
>  value="ccwrCacheTemplate*"/>
>value="PARTITIONED" />
>value="1"
> />
>
> name="partitionLossPolicy" value="IGNORE"/>
>  name="atomicityMode" value="ATOMIC"/>
>  

Re: “Binary type has different field types” Error when using Date type field in Key or Value

2018-11-30 Thread Ilya Kasnacheev
Hello!

Can you post a small reproducer project which will demonstrate this
behavior?

Regards,
-- 
Ilya Kasnacheev


On Wed, Nov 28, 2018 at 00:03, rishi007bansod:

> I am trying to use a java.util.Date type field in my Ignite key and value
> objects. But when I start caching data in the same Ignite cache using Java
> code, I get the following error.
>
> *[12:43:01,485][SEVERE][pool-8-thread-1][] Message is ignored due to an
> error [msg=MessageAndMetadata(test1,2,Message(magic = 1, attributes = 0,
> CreateTime = -1, crc = 3705259101, key = java.nio.HeapByteBuffer[pos=0
> lim=4
> cap=3288], payload = java.nio.HeapByteBuffer[pos=0 lim=3280
> cap=3280]),302,kafka.serializer.DefaultDecoder@2d50c6a2
> ,kafka.serializer.DefaultDecoder@1ff7596c,-1,CreateTime)]
> class org.apache.ignite.binary.BinaryObjectException: Binary type has
> different field types [typeName=test.demo.DataKey, fieldName=tstamp,
> fieldTypeName1=String, fieldTypeName2=Date]
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.mergeMetadata(BinaryUtils.java:1027)
> at
>
> org.apache.ignite.internal.processors.cache.binary.BinaryMetadataTransport$MetadataUpdateProposedListener.onCustomEvent(BinaryMetadataTransport.java:293)
> at
>
> org.apache.ignite.internal.processors.cache.binary.BinaryMetadataTransport$MetadataUpdateProposedListener.onCustomEvent(BinaryMetadataTransport.java:258)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:707)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery(GridDiscoveryManager.java:589)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.notifyDiscoveryListener(ServerImpl.java:5479)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processCustomMessage(ServerImpl.java:5305)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2765)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)*
>
>
>
> Where DataKey is the Ignite cache key, which is defined as follows:
>
>
> *package test.demo;
> import java.util.Date;
>
> public class DataKey{
>
> private Long sess_id ;
>
> private Long   s_id;
>
> private Long   version;
>
> private Date tstamp;
>
>
> public DataKey(Long sess_id, Long s_id, Long version,
> Date tstamp) {
> super();
> this.sess_id = sess_id;
> this.s_id = s_id;
> this.version = version;
> this.tstamp = tstamp;
> }
>
>
> @Override
> public int hashCode() {
> final int prime = 31;
> int result = 1;
> result = prime * result
> + ((s_id == null) ? 0 : s_id.hashCode());
> result = prime * result
> + ((sess_id == null) ? 0 : sess_id.hashCode());
> result = prime * result
> + ((tstamp == null) ? 0 : tstamp.hashCode());
> result = prime * result + ((version == null) ? 0 :
> version.hashCode());
> return result;
> }
>
>
> @Override
> public boolean equals(Object obj) {
> if (this == obj)
> return true;
> if (obj == null)
> return false;
> if (getClass() != obj.getClass())
> return false;
> DataKey other = (DataKey) obj;
> if (s_id == null) {
> if (other.s_id != null)
> return false;
> } else if (!s_id.equals(other.s_id))
> return false;
> if (sess_id == null) {
> if (other.sess_id != null)
> return false;
> } else if (!sess_id.equals(other.sess_id))
> return false;
> if (tstamp == null) {
> if (other.tstamp != null)
> return false;
> } else if (!tstamp.equals(other.tstamp))
> return false;
> if (version == null) {
> if (other.version != null)
> return false;
> } else if (!version.equals(other.version))
> return false;
> return true;
> }
> }*
>
> As mentioned in
> http://apache-ignite-users.70518.x6.nabble.com/Binary-type-has-different-fields-error-td21540.html,
> I even deleted the contents of the $IGNITE_HOME/work/ directory and restarted
> the node, but the error is still there. What is causing this error? The same
> error also occurs if the java.util.Date type field is only used in the cache
> value (not in the key).
>
>
>
> --
> Sent from: 
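
A minimal Java sketch of the kind of reproducer that triggers the same metadata
merge conflict (assuming a bare local node; the type and field names are taken
from the error above):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteBinary;
import org.apache.ignite.Ignition;

public class BinaryTypeConflictSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteBinary binary = ignite.binary();

            // Registers test.demo.DataKey with tstamp as String.
            binary.builder("test.demo.DataKey")
                .setField("tstamp", "2018-11-30")
                .build();

            // Writing the same type with tstamp as Date now fails with
            // "Binary type has different field types".
            binary.builder("test.demo.DataKey")
                .setField("tstamp", new java.util.Date())
                .build();
        }
    }
}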

Re: Cache Structure

2018-11-30 Thread Ilya Kasnacheev
Hello!

You can implement Binarylizable, which lets you serialize your objects so that
the relevant k1 and k2 are stored as top-level fields in the BinaryObject.
You can then add a QueryEntity to the cache to be able to index them properly.
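
A minimal Java sketch of that approach (hypothetical value class that keeps k1
and k2 in an internal map; Ignite 2.6 API):

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class BinarylizableSketch {
    // Value type: k1/k2 live in a map but are written as top-level binary fields.
    public static class MyValue implements Binarylizable {
        private final Map<String, Object> attrs = new HashMap<>();

        @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
            writer.writeString("k1", (String)attrs.get("k1"));
            writer.writeString("k2", (String)attrs.get("k2"));
        }

        @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
            attrs.put("k1", reader.readString("k1"));
            attrs.put("k2", reader.readString("k2"));
        }
    }

    // The QueryEntity makes both fields visible to SQL and indexes each of them.
    public static CacheConfiguration<Object, MyValue> cacheConfiguration() {
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("k1", String.class.getName());
        fields.put("k2", String.class.getName());

        QueryEntity entity = new QueryEntity(Object.class, MyValue.class)
            .setFields(fields)
            .setIndexes(Arrays.asList(new QueryIndex("k1"), new QueryIndex("k2")));

        return new CacheConfiguration<Object, MyValue>("myCache")
            .setQueryEntities(Collections.singleton(entity));
    }
}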

Regards,
-- 
Ilya Kasnacheev


On Thu, Nov 15, 2018 at 01:33, Ramin Farajollah:

> Thanks for your informative reply.
>
> I looked at CacheQueryExample to be able to query by both unique (k1) and
> non-unique (k2) keys.
>
> I use an /AffinityKey/. However, k1 (unique key) and k2
> (non-unique key) are not member variables, where I would have been able to
> annotate them with /@QuerySqlField(index = true)/ (with true and false,
> respectively). They are entries of a map inside of T. There are getter
> methods for both.
>
> Having said that:
>
> 1. How do I identify the k1 to be an index key so I can execute a query
> with
> a predicate like this: "k1 = ?"
> I saw this post:  [Q1]
> <
> http://apache-ignite-users.70518.x6.nabble.com/Querying-HashMap-stored-as-value-in-IgniteCache-td3507.html#a3550>
>
>
> 2. Will the same strategy work for a query based on a non-unique key (k2)
> in
> the same map within T?
>
>
>
> Andrew Mashenkov wrote
> > If k2 <- k1 relation is one-to-many, there is another way to achieve the
> > same with using SQL [2].
> > With this approach adding new instance will be a single operation on one
> > node and Ignite will need just to update local index in addition, but
> > query
> > for k2 will be a broadcast unless the data is collocated [3].
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Why GridDiscoveryManager onSegmentation use StopNodeFailureHandler?

2018-11-30 Thread Ilya Kasnacheev
Hello!

It will use IgniteConfiguration.segmentationPolicy.

You can just try setting it to RESTART_JVM.
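
A minimal Java sketch (note that RESTART_JVM relies on the node being started
via the standard ignite.sh/ignite.bat launcher, so that the process can actually
be restarted):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.plugin.segmentation.SegmentationPolicy;

public class SegmentationPolicySketch {
    public static IgniteConfiguration configuration() {
        return new IgniteConfiguration()
            .setSegmentationPolicy(SegmentationPolicy.RESTART_JVM);
    }
}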

Regards,
-- 
Ilya Kasnacheev


On Tue, Nov 27, 2018 at 17:18, wangsan:

> Can I use LifecycleEventType.AFTER_NODE_STOP to stop the JVM if a
> segmentation event happens?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Slow select distinct query on primary key

2018-11-30 Thread Ilya Kasnacheev
Hello!

> only 1 out of the 16 available cores get spiked to 100%, while the rest
remain idle

This is to be expected unless you crank query parallelism up:
https://apacheignite.readme.io/docs/sql-performance-and-debugging#query-parallelism

Unfortunately, the setting is only available via cache configuration/cache
template.
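
A minimal Java sketch of such a cache template (the template name and the
parallelism degree are placeholders to tune, not recommendations):

import org.apache.ignite.Ignite;
import org.apache.ignite.configuration.CacheConfiguration;

public class QueryParallelismSketch {
    // Sketch only: tables created with WITH "template=parallelTpl" then inherit
    // queryParallelism=8, so a single SQL query can use 8 threads per node.
    public static void registerTemplate(Ignite ignite) {
        CacheConfiguration<Object, Object> tpl =
            new CacheConfiguration<Object, Object>("parallelTpl")
                .setQueryParallelism(8);

        ignite.addCacheConfiguration(tpl);
    }
}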

Regards,
-- 
Ilya Kasnacheev


On Fri, Nov 30, 2018 at 17:43, yongjec:

> Here is the explain plan.
>
> 0: jdbc:ignite:thin://127.0.0.1/> EXPLAIN SELECT DISTINCT ACCOUNT_ID FROM
> PERF_POSITIONS;
> 'PLAN'
> 'SELECT DISTINCT
> __Z0.ACCOUNT_ID AS __C0_0
> FROM PUBLIC.PERF_POSITIONS __Z0
> /* PUBLIC."_key_PK" */'
> 'SELECT DISTINCT
> __C0_0 AS ACCOUNT_ID
> FROM PUBLIC.__T0
> /* PUBLIC."merge_scan" */'
> 2 rows selected (0.026 seconds)
>
>
> Based on your suggestion, I tested below changes, but none of them made a
> difference. In all cases, the query took consistently 56-60 seconds.
>
>
> 1. Having the index with inline size 60.
>
> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID) INLINE_SIZE
> 60;
>
>
> 2. Re-creating the table with VARCHAR size 4. (all the values in this
> particular dataset are 4 chars).
>
> CREATE TABLE PERF_POSITIONS (
> ACCOUNT_ID VARCHAR(4) NOT NULL,
> ...
>
>
> 3. Using index hint.
>
> SELECT DISTINCT ACCOUNT_ID FROM PERF_POSITIONS USE
> INDEX(PERF_POSITIONS_IDX);
>
>
> On a side note, I noticed that while the query is running, only 1 out of
> the
> 16 available cores get spiked to 100%, while the rest remain idle. Not sure
> whether this is expected.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Slow select distinct query on primary key

2018-11-30 Thread yongjec
Here is the explain plan.

0: jdbc:ignite:thin://127.0.0.1/> EXPLAIN SELECT DISTINCT ACCOUNT_ID FROM
PERF_POSITIONS;
'PLAN'
'SELECT DISTINCT
__Z0.ACCOUNT_ID AS __C0_0
FROM PUBLIC.PERF_POSITIONS __Z0
/* PUBLIC."_key_PK" */'
'SELECT DISTINCT
__C0_0 AS ACCOUNT_ID
FROM PUBLIC.__T0
/* PUBLIC."merge_scan" */'
2 rows selected (0.026 seconds)


Based on your suggestion, I tested below changes, but none of them made a
difference. In all cases, the query took consistently 56-60 seconds.


1. Having the index with inline size 60.

CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID) INLINE_SIZE
60;


2. Re-creating the table with VARCHAR size 4. (all the values in this
particular dataset are 4 chars).

CREATE TABLE PERF_POSITIONS (
ACCOUNT_ID VARCHAR(4) NOT NULL,
...


3. Using index hint.

SELECT DISTINCT ACCOUNT_ID FROM PERF_POSITIONS USE
INDEX(PERF_POSITIONS_IDX);


On a side note, I noticed that while the query is running, only 1 out of the
16 available cores get spiked to 100%, while the rest remain idle. Not sure
whether this is expected.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Configuration of ignite

2018-11-30 Thread Ilya Kasnacheev
Hello!

You should keep pageSize at its default, and checkpointBufferSize should be
increased until you only see checkpoints with reason 'timeout', i.e. until
you no longer run out of it.

You should be aiming for an optimal maxSize/checkpointBufferSize mix.
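
A minimal Java sketch of the knobs involved (the sizes below are placeholders to
tune against your data set, not recommendations):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;

public class CheckpointTuningSketch {
    public static DataStorageConfiguration storage() {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        // pageSize is deliberately left at the default.

        DataRegionConfiguration region = storage.getDefaultDataRegionConfiguration();
        region.setPersistenceEnabled(true);
        region.setMaxSize(8L * 1024 * 1024 * 1024);                  // placeholder: 8 GB off-heap
        region.setCheckpointPageBufferSize(2L * 1024 * 1024 * 1024); // grow until only 'timeout' checkpoints remain

        return storage;
    }
}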

Regards,
-- 
Ilya Kasnacheev


On Thu, Nov 29, 2018 at 15:58, Viraj Rathod:

> Actual operations involve a high number of writes and fewer reads.
>
> Okay distributed storage noted. Thank you.
>
> On Thu, 29 Nov 2018 at 6:16 PM, Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> What are the actual operations that you want the best TPS for? For different
>> operations the answer will be different. With regards to maxSize, you
>> should try to load your data and see how much memory it will take.
>>
>> Note that Apache Ignite will distribute cache between all four nodes by
>> default.
>>
>> Regards,
>>
>> --
>> Ilya Kasnacheev
>>
>>
>> On Thu, Nov 29, 2018 at 15:32, Viraj Rathod:
>>
>>> I am setting up a cluster of 4 nodes. 2 are partitioned and 2 are
>>> backups of the respective nodes.
>>> The dataset contains 1 million rows of 150 columns having VARCHAR
>>> values.
>>> What configuration is needed for this, in terms of pageSize, maxSize and
>>> checkpointBufferSize? What would be ideal values of these parameters for
>>> fast and efficient TPS?
>>> --
>>> Regards,
>>> Viraj Rathod
>>>
>> --
> Regards,
> Viraj Rathod
>


Re: Is there any way to speed up delete data or drop table?

2018-11-30 Thread Ilya Kasnacheev
Hello!

Maybe dropping a cache will be somewhat faster when this cache is a part of
existing cache group:
https://apacheignite.readme.io/docs/cache-groups
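
A minimal Java sketch of putting such caches into one group (hypothetical
names):

import org.apache.ignite.configuration.CacheConfiguration;

public class CacheGroupSketch {
    // Sketch only: caches in the same group share partition files and internal
    // structures, which is why dropping one of them can be cheaper, as
    // suggested above.
    public static CacheConfiguration<Object, Object> snapshotCache(String name) {
        return new CacheConfiguration<Object, Object>(name).setGroupName("mysqlSyncGroup");
    }
}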

Regards,
-- 
Ilya Kasnacheev


On Thu, Nov 29, 2018 at 16:44, yangjiajun <1371549...@qq.com>:

> Hello.
>
> We need to fully sync data from MySQL to Ignite periodically, so we need to
> clear the data in Ignite first. Ignite does not support the TRUNCATE statement,
> and its DROP TABLE statement sometimes takes a long time. It's also very slow
> to delete all data in Ignite's tables or caches. Is there any way to speed up
> deleting data or dropping tables? Or is there something like TRUNCATE?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Fair queue polling policy?

2018-11-30 Thread Peter
Hello,

I have found a discussion about the same topic, and indeed the example there
works and the queues poll fairly.

And when I tweak the sleep after put and take so that the queue stays mostly
empty all the time, I can reproduce the unfair behaviour!
https://github.com/karussell/igniteexample/blob/master/src/main/java/test/IgniteTest.java

I'm not sure if this is a bug as it should be the responsibility of the
client to avoid overloading itself. E.g. in my case this happened
because I allowed too many threads for the tasks on the polling side,
leading to too frequent polling, which leads to this mostly empty queue.

But IMO it should be clarified in the documentation, as one expects
round-robin behaviour even for empty queues. And e.g. in low-latency
environments and/or environments with many clients this could cause
problems. I have created an issue about it here:
https://issues.apache.org/jira/browse/IGNITE-10496

Kind Regards
Peter

Am 30.11.18 um 01:44 schrieb Peter:
>
> Hello,
>
> My aim is a queue for load balancing that is described in the
> documentation
> :
> create an "ideally balanced system where every node only takes the
> number of jobs it can process, and not more."
>
> I'm using jdk8 and ignite 2.6.0. I have successfully set up a two node
> ignite cluster where node1 has same CPU count (8) and same RAM as
> node2 but slightly slower CPU (virtual vs. dedicated). I created one
> unbounded queue in this system (no collection configuration, also no
> config for cluster except TcpDiscoveryVmIpFinder).
>
> I call queue.put on both nodes at an equal rate and have one
> non-ignite-thread per node that does "queue.take()" and what I expect
> is that both machines go equally fast into the 100% CPU usage as both
> machines poll at their best frequency. But what I observe is that the
> slower node (node1) gets approx. 5 times more items via queue.take
> than node2. This leads to 10% CPU usage on node2 and 100% CPU usage on
> node1 and I never had the case where it was equal.
>
> What could be the reason? Is there a fair polling configuration or
> some anti-affine? Or is it required to do queue.take() inside a
> Runnable submitted via ignite.compute().something?
>
> I also played with CollectionConfiguration.setCacheMode but the
> problem persists. Any pointers are appreciated.
>
> Kind Regards
> Peter
>



Re: JDBC Streaming

2018-11-30 Thread Ilya Kasnacheev
Hello!

Can you create a small reproducer project which will exhibit this behavior?
Put it on e.g. Github.

Note that 'streaming mode' in Client driver and Client driver itself are
near-deprecated. So there may be some rough edges.
However, there are streaming mode tests in Apache Ignite and they pass.
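
One thing worth double-checking (an assumption on my side, not a confirmed
diagnosis): in streaming mode the rows go through a data streamer buffer, so
they may only become visible once that buffer is flushed, e.g. when the
connection is closed. A minimal Java sketch with the client driver (hypothetical
table and config path):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class JdbcStreamingSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        // Hypothetical Spring config path; 'streaming=true' as in the linked docs.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:cfg://streaming=true@file:///path/to/ignite-jdbc.xml");
             PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO CITY (ID, NAME) VALUES (?, ?)")) {
            for (int i = 0; i < 100_000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "name-" + i);
                ps.executeUpdate();
            }
        } // Closing the connection flushes the underlying streamer.
    }
}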

Regards,
-- 
Ilya Kasnacheev


On Thu, Nov 29, 2018 at 19:45, joseheitor:

> Hi Ilya,
>
> Yes - I am using JDBC Client driver to INSERT data into the SQL table. It
> works correctly (but slow) without setting 'streaming=true'.
>
> When I set 'streaming=true' in the connection string, as per the Ignite
> docs
> (
>
> https://apacheignite-sql.readme.io/docs/jdbc-client-driver#section-streaming-mode
> <
> https://apacheignite-sql.readme.io/docs/jdbc-client-driver#section-streaming-mode>
>
> ), then my data-insert code runs much faster, and without errors - but the
> table remains empty.
>
> I have followed the instructions on the docs carefully and reviewed several
> times over the last couple of days.
>
> There is either something else that needs to be done and is undocumented -
> or there is a bug in this feature.
>
> Please can you verify this functionality on your end and confirm...?
>
> Thanks,
> Jose
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: GridProcessorAdapter fails to start due to failure to initialise WAL segment on Ignite startup

2018-11-30 Thread Ilya Kasnacheev
Hello!

"WAL segment size change is not supported"

Is there a chance that you have changed WAL segment size setting between
launches?

Regards,
-- 
Ilya Kasnacheev


On Thu, Nov 29, 2018 at 02:39, Raymond Wilson:

> I'm using Ignite 2.6 with the C# client.
>
> I have a running cluster that I was debugging. All requests were read only
> (there were no state-mutating operations running in the cluster).
>
> I terminated the one server node in the grid (running in the debugger) to
> make a small code change and re-run it (I do this frequently). The node may
> have been stopped for longer than the partitioning timeout.
>
> On re-running the server node it failed to start. On re-running the
> complete cluster it still failed to start, and all other nodes report
> failure to connect to an inactive grid.
>
> Looking at the log for the server node that is failing I get the following
> log showing an exception while initializing a WAL segment. This failure
> seems permanent and is unexpected, as we are using the strict WAL atomicity
> mode (WalMode.Fsync) for all persisted regions. Is this a recoverable error,
> or does this imply data loss? [NB: This is a dev system so no prod data is
> affected]
>
>
> 2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer
> >>>    __________  ________________
> >>>   /  _/ ___/ |/ /  _/_  __/ __/
> >>>  _/ // (7 7    // /  / / / _/
> >>> /___/\___/_/|_/___/ /_/ /___/
> >>>
> >>> ver. 2.6.0#20180710-sha1:669feacc
> >>> 2018 Copyright(C) Apache Software Foundation
> >>>
> >>> Ignite documentation: http://ignite.apache.org
> 2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer Config URL:
> n/a
> 2018-11-29 12:26:09,948 [1] INFO  ImmutableCacheComputeServer
> IgniteConfiguration [igniteInstanceName=TRex-Immutable, pubPoolSize=50,
> svcPoolSize=12, callbackPoolSize=12, stripedPoolSize=12, sysPoolSize=12,
> mgmtPoolSize=4, igfsPoolSize=12, dataStreamerPoolSize=12,
> utilityCachePoolSize=12, utilityCacheKeepAliveTime=6, p2pPoolSize=2,
> qryPoolSize=12, igniteHome=null,
> igniteWorkDir=C:\Users\rwilson\AppData\Local\Temp\TRexIgniteData\Immutable,
> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6e4784bc,
> nodeId=8f32d0a6-539c-40dd-bc42-d044f28bac73,
> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@e4487af,
> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000,
> ackTimeout=5000, marsh=null, reconCnt=10, reconDelay=2000,
> maxAckTimeout=60, forceSrvMode=false, clientReconnectDisabled=false,
> internalLsnr=null], segPlc=STOP, segResolveAttempts=2,
> waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1,
> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@10d68fcd,
> locAddr=127.0.0.1, locHost=null, locPort=47100, locPortRange=100,
> shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=3,
> connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
> sockRcvBuf=32768, msgQueueLimit=1024, slowClientQueueLimit=0, nioSrvr=null,
> shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
> tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=16,
> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
> boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
> ctxInitLatch=java.util.concurrent.CountDownLatch@117e949d[Count = 1],
> stopping=false,
> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@6db9f5a4],
> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@5f8edcc5,
> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7a675056,
> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
> txCfg=org.apache.ignite.configuration.TransactionConfiguration@d21a74c,
> cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED,
> p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
> timeSrvPortRange=100, failureDetectionTimeout=1,
> clientFailureDetectionTimeout=3, metricsLogFreq=1, hadoopCfg=null,
> connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@6e509ffa,
> odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
> [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
> grpName=null], classLdr=null, sslCtxFactory=null,
> platformCfg=PlatformDotNetConfiguration [binaryCfg=null],
> binaryCfg=BinaryConfiguration [idMapper=null, nameMapper=null,
> serializer=null, compactFooter=true], memCfg=null, pstCfg=null,
> dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040,
> sysCacheMaxSize=104857600, pageSize=16384, 

Re: Continuous queries and duplicates

2018-11-30 Thread Ilya Kasnacheev
Hello!

There should be isolation in AI 2.7 as an experimental feature.

Regards,
-- 
Ilya Kasnacheev


On Mon, Nov 26, 2018 at 12:53, Sobolewski, Krzysztof <
krzysztof.sobolew...@gs.com>:

> Thanks. This is a little disappointing. ScanQuery would probably work, but
> it’s not as efficient (can’t use indexes etc.). Are there any plans to
> enable isolation on SqlQuery?
>
> -KS
>
>
>
> *From:* Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
> *Sent:* 26 listopada 2018 08:01
> *To:* user@ignite.apache.org
> *Subject:* Re: Continuous queries and duplicates
>
>
>
> Hello!
>
>
>
> SQL queries have no isolation currently so it is not possible to avoid the
> problem that you described. You could try switching to ScanQuery, see if it
> helps; or learning to deal with duplicates.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> On Fri, Nov 23, 2018 at 19:25, Sobolewski, Krzysztof <
> krzysztof.sobolew...@gs.com>:
>
> Hi,
>
> I want to use a ContinuousQuery and there is a slight issue with how
> it transitions from the initial query to the notifications phase. It turns
> out that if there are additions to the cache happening while the continuous
> query runs, an entry may be reported twice - once by the initial query and
> once by the listener. This is confirmed by experiment, BTW :) The initial
> query in this case is an SqlQuery.
>
> So my question is: is this intentional? Or is it a bug? Is there something
> I can do to mitigate this? Is this an issue of isolation level?
>
> Thanks a lot for any pointers :)
> -KS
>
> 
>
> Your Personal Data: We may collect and process information about you that
> may be subject to data protection laws. For more information about how we
> use and disclose your personal data, how we protect your information, our
> legal basis to use your information, your rights and who you can contact,
> please refer to: www.gs.com/privacy-notices
>


Re: Ignite benchmarking with YCSB

2018-11-30 Thread Ilya Kasnacheev
Hello!

I'm afraid I have no idea anymore. Will it help if you decrease number of
YCSB threads? Is it possible that YCSB uses the same key over and over
again?

Regards,
-- 
Ilya Kasnacheev


On Fri, Nov 30, 2018 at 09:05, summasumma:

> Hi Ilya,
>
> I have tried to set the following and rerun the same workload.
>
> 
>
> But this time the performance got degraded from 56k Ops per YCSB to 47k
> Ops.
>
> PFA the thread dump with pairedconnection enabled:
>
> ycsb-1:
> 
> ycsb1_dump1.ycsb1_dump1
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ycsb1_dump1.ycsb1_dump1>
>
> ycsb1_dump2.ycsb1_dump2
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ycsb1_dump2.ycsb1_dump2>
>
> ycsb-2:
> 
> ycsb2_dump1.ycsb2_dump1
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ycsb2_dump1.ycsb2_dump1>
>
> ycsb2_dump2.ycsb2_dump2
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ycsb2_dump2.ycsb2_dump2>
>
>
>
> Ignite-1:
> ==
> ignite1_connpair_dump1.ignite1_connpair_dump1
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ignite1_connpair_dump1.ignite1_connpair_dump1>
>
> ignite1_connpair_dump2.ignite1_connpair_dump2
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ignite1_connpair_dump2.ignite1_connpair_dump2>
>
>
> Ignite-2:
> ==
> ignite2_connpair_dump1.ignite2_connpair_dump1
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ignite2_connpair_dump1.ignite2_connpair_dump1>
>
> ignite2_connpair_dump2.ignite2_connpair_dump2
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ignite2_connpair_dump2.ignite2_connpair_dump2>
>
>
> Ignite-3:
> =
> ignite3_connpair_dump1.ignite3_connpair_dump1
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ignite3_connpair_dump1.ignite3_connpair_dump1>
>
> ignite3_connpair_dump2.ignite3_connpair_dump2
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2137/ignite3_connpair_dump2.ignite3_connpair_dump2>
>
>
> Please clarify.
>
> Thanks,
> ...summa
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite cluster going down frequently

2018-11-30 Thread Ilya Kasnacheev
Hello!

[04:45:53,179][WARNING][tcp-disco-msg-worker-#2%StaticGrid_NG_Dev%][TcpDiscoverySpi]
Timed out waiting for message delivery receipt (most probably, the reason
is in long GC pauses on remote
 node; consider tuning GC and increasing 'ackTimeout' configuration
property). Will retry to send message with increased timeout
[currentTimeout=1, rmtAddr=/10.201.30.64:47603, rmtPort=
47603]
[04:45:53,180][WARNING][tcp-disco-msg-worker-#2%StaticGrid_NG_Dev%][TcpDiscoverySpi]
Failed to send message to next node [msg=TcpDiscoveryJoinRequestMessage
[node=TcpDiscoveryNode [id=47aa2
976-0a02-4ffe-9c8d-3f0fbfcc532b, addrs=[10.201.30.173], sockAddrs=[/
10.201.30.173:0], discPort=0, order=0, intOrder=0,
lastExchangeTime=1542861943131, loc=false, ver=2.4.0#20180305-sha1:aa3
42270, isClient=true],
dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataPacket@6ce6ae2,
super=TcpDiscoveryAbstractMessage
[sndNodeId=8a825790-a987-42c3-acb0-b3ea270143e1, id=5e14ec5
3761-47aa2976-0a02-4ffe-9c8d-3f0fbfcc532b, verifierNodeId=null, topVer=0,
pendingIdx=0, failedNodes=null, isClient=true]], next=TcpDiscoveryNode
[id=d7782a2e-4cfc-4427-8ba7-a9af3954ae3f, ad
drs=[10.201.30.64], sockAddrs=[/10.201.30.64:47603], discPort=47603,
order=53, intOrder=32, lastExchangeTime=1542272829304, loc=false,
ver=2.4.0#20180305-sha1:aa342270, isClient=false], err
Msg=Failed to send message to next node [msg=TcpDiscoveryJoinRequestMessage
[node=TcpDiscoveryNode [id=47aa2976-0a02-4ffe-9c8d-3f0fbfcc532b,
addrs=[10.201.30.173], sockAddrs=[/10.201.30.173
:0], discPort=0, order=0, intOrder=0, lastExchangeTime=1542861943131,
loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=true],
dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataP
acket@6ce6ae2, super=TcpDiscoveryAbstractMessage
[sndNodeId=8a825790-a987-42c3-acb0-b3ea270143e1,
id=5e14ec53761-47aa2976-0a02-4ffe-9c8d-3f0fbfcc532b, verifierNodeId=null,
topVer=0, pending
Idx=0, failedNodes=null, isClient=true]], next=ClusterNode
[id=d7782a2e-4cfc-4427-8ba7-a9af3954ae3f, order=53, addr=[10.201.30.64],
daemon=true]]]
[04:45:53,190][WARNING][tcp-disco-msg-worker-#2%StaticGrid_NG_Dev%][TcpDiscoverySpi]
Local node has detected failed nodes and started cluster-wide procedure. To
speed up failure detection p
lease see 'Failure Detection' section under javadoc for 'TcpDiscoverySpi'

and then, on another node:
[04:45:58,335][WARNING][disco-event-worker-#41%StaticGrid_NG_Dev%][GridDiscoveryManager]
Local node SEGMENTED: TcpDiscoveryNode
[id=8a825790-a987-42c3-acb0-b3ea270143e1, addrs=[10.201.30.63], sockAddrs=[/
10.201.30.63:47600], discPort=47600, order=42, intOrder=23,
lastExchangeTime=1542861958327, loc=true, ver=2.4.0#20180305-sha1:aa342270,
isClient=false]

I think that you either have long GC pauses or flaky network (or system
goes into swapping and such).

Consider increasing 'ackTimeout' and/or 'failureDetectionTimeout'. Also
consider collecting GC logs for your nodes, looking into them for a root
cause.
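
A minimal Java sketch of where those knobs live (the values are examples to
tune, not recommendations):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class DiscoveryTimeoutSketch {
    public static IgniteConfiguration configuration() {
        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setAckTimeout(10_000);          // default is 5000 ms

        return new IgniteConfiguration()
            .setDiscoverySpi(discovery)
            .setFailureDetectionTimeout(30_000);  // default is 10000 ms
    }
}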

Regards,
-- 
Ilya Kasnacheev


On Fri, Nov 30, 2018 at 14:01, Hemasundara Rao <
hemasundara@travelcentrictechnology.com>:

> Hi Ilya Kasnacheev,
>
>  I am attaching all logs from second server (10.201.30.64).
> Please let me know if you need any other details.
>
> Thanks and Regards,
> Hemasundar.
>
> On Fri, 30 Nov 2018 at 09:40, Hemasundara Rao <
> hemasundara@travelcentrictechnology.com> wrote:
>
>> Hi Ilya Kasnacheev,
>>
>>   We are running one cluster node (10.201.30.63). I am attaching all logs
>> from this server.
>> Please let me know if you need any other details.
>>
>> Thanks and Regards,
>> Hemasundar.
>>
>>
>> On Thu, 29 Nov 2018 at 20:07, Ilya Kasnacheev 
>> wrote:
>>
>>> Hello!
>>>
>>> It is not clear from this log alone why this node became segmented. Do
>>> you have log from other server node in the topology? It was coordinator so
>>> maybe it was the one experiencing problems.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> On Wed, Nov 28, 2018 at 13:56, Hemasundara Rao <
>>> hemasundara@travelcentrictechnology.com>:
>>>
 Hi  Ilya Kasnacheev,

  Did you get a chance to go through the attached log?
 This is one of the critical issues we are facing in our dev environment.
 Your input would be of great help to us in finding what is causing this
 issue and a probable solution to it.

 Thanks and Regards,
 Hemasundar.

 On Mon, 26 Nov 2018 at 16:54, Hemasundara Rao <
 hemasundara@travelcentrictechnology.com> wrote:

> Hi  Ilya Kasnacheev,
>   I have attached the log file.
>
> Regards,
> Hemasundar.
>
> On Mon, 26 Nov 2018 at 16:50, Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Maybe you have some data in your caches which causes runaway heap
>> usage in your own code. Previously you did not have such data or code 
>> which
>> would react in such fashion.
>>
>> It's hard to say, can you provide more logs from the node before 

Re: Slow select distinct query on primary key

2018-11-30 Thread Юрий
Andrew, 60 will also be enough :) It was just a quickly calculated value with
rounding.

The real inline size for this case is 53: 1 /* byte, type code */ + 2 /* short,
length of array */ + 50 /* data size for ANSI chars */

On Fri, Nov 30, 2018 at 14:09, Andrey Mashenkov:

> Yuri, how did you get inline size 60?
> I'd think 55 should be enough to inline Account_ID. 55 = 1 /* byte, type
> code */ + 4 /* int, array length */ + 50 /* data size for ANSI chars */
>
> On Fri, Nov 30, 2018 at 1:25 PM Юрий  wrote:
>
>> Please provide the explain plan of the query to check that the index is
>> used: *EXPLAIN {your select statement}*
>>
>> Also I noticed ACCOUNT_ID has length 50. You need to increase the inline
>> index size for the index.
>>
>> Try creating the index with the following command: *CREATE INDEX
>> PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID) INLINE_SIZE 60;*
>>
>> On Thu, Nov 29, 2018 at 16:47, yongjec:
>>
>>> Hi,
>>>
>>> I tried the additional index as you suggested, but it did not improve the
>>> query time. The query still takes 58-61 seconds.
>>>
>>> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID);
>>> CREATE INDEX PERF_POSITIONS_IDX2 ON PERF_POSITIONS (ACCOUNT_ID,
>>> EFFECTIVE_DATE, FREQUENCY, SOURCE_ID, SECURITY_ALIAS, POSITION_TYPE);
>>>
>>>
>>> I also tried the single column index only without the composite index.
>>> That
>>> did not make any difference in query time, either.
>>>
>>> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID);
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>>
>> --
>> Live with a smile! :D
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


-- 
Live with a smile! :D


Re: HELLO WORLD GA EXAMPLE

2018-11-30 Thread Ilya Kasnacheev
Hello!

GA examples are standalone as far as my understanding goes. You don't need
to launch an Ignite node explicitly. Can you try killing it and restarting the
example?

Otherwise, as it tells you, enable peer-class-loading or add ML jars to
libs/
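
For reference, a minimal Java sketch of the first option (the XML equivalent is
the peerClassLoadingEnabled property on IgniteConfiguration):

import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingSketch {
    public static IgniteConfiguration configuration() {
        return new IgniteConfiguration().setPeerClassLoadingEnabled(true);
    }
}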

Regards,
-- 
Ilya Kasnacheev


On Fri, Nov 30, 2018 at 10:24, AlphaMufasaOmega:

> In the terminal where I executed:
>
> user@PTFAssaultMachine:~/apache-ignite-fabric-2.6.0-bin/examples$ sudo mvn
> exec:java
>
> -Dexec.mainClass="org.apache.ignite.examples.ml.genetic.helloworld.HelloWorldGAExample"
>
> The loop displays this strange recurring output:
>
> ]
> [2018-11-30 02:00:23,494][INFO ][grid-timeout-worker-#23][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=99fe1608, uptime=00:11:00.704]
> ^-- H/N/C [hosts=1, nodes=2, CPUs=2]
> ^-- CPU [cur=0.83%, avg=1.3%, GC=0%]
> ^-- PageMemory [pages=3286]
> ^-- Heap [used=107MB, free=83.81%, comm=218MB]
> ^-- Non heap [used=70MB, free=-1%, comm=71MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> [2018-11-30 02:01:23,493][INFO ][grid-timeout-worker-#23][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=99fe1608, uptime=00:12:00.710]
> ^-- H/N/C [hosts=1, nodes=2, CPUs=2]
> ^-- CPU [cur=0.67%, avg=1.27%, GC=0%]
> ^-- PageMemory [pages=3286]
> ^-- Heap [used=114MB, free=82.84%, comm=218MB]
> ^-- Non heap [used=70MB, free=-1%, comm=71MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> [2018-11-30 02:02:24,490][INFO ][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP
> discovery accepted incoming connection [rmtAddr=/127.0.0.1, rmtPort=51869]
> [2018-11-30 02:02:24,824][INFO ][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP
> discovery spawning a new thread for connection [rmtAddr=/127.0.0.1,
> rmtPort=51869]
> [2018-11-30 02:02:24,664][INFO ][tcp-disco-sock-reader-#5][TcpDiscoverySpi]
> Finished serving remote node connection [rmtAddr=/0:0:0:0:0:0:0:1:36991,
> rmtPort=36991
> [2018-11-30 02:02:24,829][INFO ][tcp-disco-sock-reader-#7][TcpDiscoverySpi]
> Started serving remote node connection [rmtAddr=/127.0.0.1:51869,
> rmtPort=51869]
> [2018-11-30 02:02:25,154][INFO ][grid-timeout-worker-#23][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=99fe1608, uptime=00:13:02.368]
> ^-- H/N/C [hosts=1, nodes=2, CPUs=2]
> ^-- CPU [cur=1.17%, avg=1.25%, GC=0%]
> ^-- PageMemory [pages=3286]
> ^-- Heap [used=122MB, free=81.69%, comm=218MB]
> ^-- Non heap [used=70MB, free=-1%, comm=71MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=4, qSize=0]
> Nov 30, 2018 2:02:25 AM java.util.logging.LogManager$RootLogger log
> WARNING: Possible too long JVM pause: 3807 milliseconds.
> Nov 30, 2018 2:02:25 AM java.util.logging.LogManager$RootLogger log
> WARNING: Possible too long JVM pause: 723 milliseconds.
> Nov 30, 2018 2:02:29 AM java.util.logging.LogManager$RootLogger log
> WARNING: Possible too long JVM pause: 1166 milliseconds.
> [2018-11-30 02:03:25,151][INFO ][grid-timeout-worker-#23][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=99fe1608, uptime=00:14:02.368]
> ^-- H/N/C [hosts=1, nodes=2, CPUs=2]
> ^-- CPU [cur=1.17%, avg=1.22%, GC=0%]
> ^-- PageMemory [pages=3286]
> ^-- Heap [used=128MB, free=80.72%, comm=218MB]
> ^-- Non heap [used=70MB, free=-1%, comm=71MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> [2018-11-30 02:04:25,212][INFO ][grid-timeout-worker-#23][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=99fe1608, uptime=00:15:02.429]
> ^-- H/N/C [hosts=1, nodes=2, CPUs=2]
> ^-- CPU [cur=0.67%, avg=1.19%, GC=0%]
> ^-- PageMemory [pages=3286]
> ^-- Heap [used=134MB, free=79.79%, comm=218MB]
> ^-- Non heap [used=70MB, free=-1%, comm=71MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> [2018-11-30 02:05:25,355][INFO ][grid-timeout-worker-#23][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=99fe1608, uptime=00:16:02.571]
> ^-- H/N/C [hosts=1, nodes=2, CPUs=2]
> ^-- CPU [cur=0.67%, avg=1.16%, GC=0%]
> ^-- PageMemory [pages=3286]
> ^-- Heap [used=140MB, free=78.91%, comm=218MB]
> ^-- Non heap [used=70MB, free=-1%, comm=71MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public 

Re: Slow select distinct query on primary key

2018-11-30 Thread Andrey Mashenkov
Yuri, how did you get inline size 60?
I'd think 55 should be enough to inline Account_ID. 55 = 1 /* byte, type
code */ + 4 /* int, array length */ + 50 /* data size for ANSI chars */

On Fri, Nov 30, 2018 at 1:25 PM Юрий  wrote:

> Please provide the explain plan of the query to check that the index is
> used: *EXPLAIN {your select statement}*
>
> Also I noticed ACCOUNT_ID has length 50. You need to increase the inline index
> size for the index.
>
> Try creating the index with the following command: *CREATE INDEX PERF_POSITIONS_IDX
> ON PERF_POSITIONS (ACCOUNT_ID) INLINE_SIZE 60;*
>
> On Thu, Nov 29, 2018 at 16:47, yongjec:
>
>> Hi,
>>
>> I tried the additional index as you suggested, but it did not improve the
>> query time. The query still takes 58-61 seconds.
>>
>> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID);
>> CREATE INDEX PERF_POSITIONS_IDX2 ON PERF_POSITIONS (ACCOUNT_ID,
>> EFFECTIVE_DATE, FREQUENCY, SOURCE_ID, SECURITY_ALIAS, POSITION_TYPE);
>>
>>
>> I also tried the single column index only without the composite index.
>> That
>> did not make any difference in query time, either.
>>
>> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID);
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
> --
> Live with a smile! :D
>


-- 
Best regards,
Andrey V. Mashenkov


Re: ODBC driver build error

2018-11-30 Thread Ilya Kasnacheev
Hello!

You will have a much easier time just taking a pre-built installer out of
the nightly build:
https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful=Releases_NightlyRelease_RunApacheIgniteNightlyRelease=artifacts=1#!-oll7z3tvqsdk,-1fllqn9a26ew6,1e6erfmy67dj4,enpgjwyte3hs

Regards,
-- 
Ilya Kasnacheev


On Fri, Nov 30, 2018 at 05:53, Ray:

> Thank you for the reply Igor,
>
> After adding legacy_stdio_definitions.lib in the linker's input, I built
> the
> ODBC driver successfully.
>
> I want to build the ODBC driver myself because I want to apply this ticket to
> Ignite 2.6, since 2.7 is not yet released.
>
> https://issues.apache.org/jira/browse/IGNITE-8930
>
> By the way, I followed the instructions in the
> modules/platforms/cpp/DEVNOTES.txt, I think this document should be
> updated.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Slow select distinct query on primary key

2018-11-30 Thread Юрий
Please provide the explain plan of the query to check that the index is used:
*EXPLAIN {your select statement}*

Also I noticed ACCOUNT_ID has length 50. You need to increase the inline index
size for the index.

Try creating the index with the following command: *CREATE INDEX PERF_POSITIONS_IDX
ON PERF_POSITIONS (ACCOUNT_ID) INLINE_SIZE 60;*

On Thu, Nov 29, 2018 at 16:47, yongjec:

> Hi,
>
> I tried the additional index as you suggested, but it did not improve the
> query time. The query still takes 58-61 seconds.
>
> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID);
> CREATE INDEX PERF_POSITIONS_IDX2 ON PERF_POSITIONS (ACCOUNT_ID,
> EFFECTIVE_DATE, FREQUENCY, SOURCE_ID, SECURITY_ALIAS, POSITION_TYPE);
>
>
> I also tried the single column index only without the composite index. That
> did not make any difference in query time, either.
>
> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID);
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Live with a smile! :D


Re: Failed to fetch SQL query result

2018-11-30 Thread Ilya Kasnacheev
Hello!

Does this happen every time? If so, on which row number does it happen?

Regards,
-- 
Ilya Kasnacheev


On Tue, Nov 27, 2018 at 10:09, yangjiajun <1371549...@qq.com>:

> Hello.
>
> I did a scan query on a table which has 80,000 records and tried to go through
> all records in the result set, but got the following exception:
>
> [13:53:31,523][SEVERE][client-connector-#77][JdbcRequestHandler] Failed to
> fetch SQL query result [reqId=0, req=JdbcQueryFetchRequest
> [queryId=38106237, pageSize=1024]]
> class org.apache.ignite.internal.processors.query.IgniteSQLException: The
> object is already closed [90007-195]
> at
>
> org.apache.ignite.internal.processors.query.h2.H2ResultSetIterator.fetchNext(H2ResultSetIterator.java:136)
> at
>
> org.apache.ignite.internal.processors.query.h2.H2ResultSetIterator.onHasNext(H2ResultSetIterator.java:142)
> at
>
> org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
> at
>
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryCacheObjectsIterator.hasNext(GridQueryCacheObjectsIterator.java:61)
> at
>
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryCursor.fetchRows(JdbcQueryCursor.java:72)
> at
>
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.fetchQuery(JdbcRequestHandler.java:587)
> at
>
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:206)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:160)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:44)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at
>
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at
>
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.h2.jdbc.JdbcSQLException: The object is already closed
> [90007-195]
> at
> org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
> at org.h2.message.DbException.get(DbException.java:179)
> at org.h2.message.DbException.get(DbException.java:155)
> at org.h2.message.DbException.get(DbException.java:144)
> at org.h2.jdbc.JdbcResultSet.checkClosed(JdbcResultSet.java:3208)
> at org.h2.jdbc.JdbcResultSet.next(JdbcResultSet.java:130)
> at
>
> org.apache.ignite.internal.processors.query.h2.H2ResultSetIterator.fetchNext(H2ResultSetIterator.java:110)
> ... 17 more
>
> My Ignite version is 2.6 and I only started one node. I did not call any
> close methods. Why did Ignite close my result set?
>
> Here is my test code:
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.ResultSetMetaData;
> import java.sql.Statement;
> import java.util.Properties;
>
> public class StatementTest {
>
> private static Connection conn;
>
>
> public static void main(String[] args) throws Exception {
>
> long t1 = System.currentTimeMillis();
> try {
> initialize();
>
> String selectSql = "SELECT * FROM
> table_6932_r_1_1";
> testQuery(selectSql);
> } catch (Exception e) {
> throw e;
> } finally {
> if (conn != null)
> conn.close();
> }
> long t2 = System.currentTimeMillis();
> System.out.println("operation costs " + (t2 - t1) + " ms");
> }
>
> public static void close() throws Exception {
> conn.close();
> }
>
> public static void initialize() throws Exception {
> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
> String dbUrl =
>
> "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
> conn = DriverManager.getConnection(dbUrl, props);
> }
>
> public static void testUpdate(String sql) throws Exception {
> try 

Re: ODBC driver build error

2018-11-30 Thread Igor Sapego
You are right, this document needs update. I'll file
a ticket for this task.

Best Regards,
Igor


On Fri, Nov 30, 2018 at 5:53 AM Ray  wrote:

> Thank you for the reply Igor,
>
> After adding legacy_stdio_definitions.lib in the linker's input, I built
> the
> ODBC driver successfully.
>
> I want to build ODBC driver myself because I want to apply this ticket to
> Ignite 2.6 because 2.7 is not yet released.
>
> https://issues.apache.org/jira/browse/IGNITE-8930
>
> By the way, I followed the instructions in the
> modules/platforms/cpp/DEVNOTES.txt, I think this document should be
> updated.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>