Can Ignite native persistence be used with 3rd party persistence?

2017-11-30 Thread Ray
Ignite native persistence provides on-disk SQL queries and quick cluster
startup without data loading, so we definitely want to use it.
But we have a legacy HBase cluster serving as our persistence layer, and some
business processes rely on it.
So can Ignite native persistence be used with 3rd party persistence?

Basically, we want data to be persisted both in native persistence and in
HBase when new entries go into Ignite.
And when a user queries data that is not in memory, Ignite will query its
native persistence.

Is this design supported by Ignite?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


the balance between read and write.

2017-11-30 Thread Marco
This use case is about using Ignite for instant sales analytics. The data
size is less than 1M entries, but the data state changes frequently; on the
other hand, the query processor initiates parallel financial aggregations and
publishes the outputs to a dashboard.
Months ago, we tried a partitioned cache (v1.7, 4 nodes, affinity key); it
showed very good performance for data processing, but when concurrent
queries were applied at the same time, the whole cluster became
slow and unsteady.

To work around this, we turned to a replicated cache and realized
the dynamic queries through a REST API. This way we gained much better query
performance, but data processing became the new bottleneck: the data
processing and response became slow.

So, my question is how to deal with the balance between reads and writes.

Should we separate them physically and chain them via some messaging solution
like the Ignite streamer or Kafka?
What's the best practice for this scenario?






Out of memory error

2017-11-30 Thread consultant_dmitry
Hi Ignite team, 

My cluster is a Windows server with 32 GB RAM (24 free). I built the project in
gridgain.console and use the default properties for my project (I only changed
the query parallelism parameter). When I run my project in IDEA I get the
following error log:

[18:13:20] Ignite node started OK (id=7598c95e, instance name=Project)
[18:13:20] Topology snapshot [ver=2, servers=1, clients=1, CPUs=8,
heap=14.0GB]
>>> Loading caches...
>>> Loading cache: ProjectCache
[18:20:43,868][SEVERE][mgmt-#44%Project%][GridTaskWorker] Failed to obtain
remote job result policy for result from ComputeTask.result(..) method (will
fail the whole task): GridJobResultImpl [job=C2 [c=LoadCacheJobV2
[keepBinary=false]], sib=GridJobSiblingImpl
[sesId=fd92d7d0061-7598c95e-e614-49ff-bf47-eb59bf66003d,
jobId=0e92d7d0061-7598c95e-e614-49ff-bf47-eb59bf66003d,
nodeId=8f968d9c-10f7-4318-8ae1-271a9992e440, isJobDone=false],
jobCtx=GridJobContextImpl
[jobId=0e92d7d0061-7598c95e-e614-49ff-bf47-eb59bf66003d, timeoutObj=null,
attrs={}], node=TcpDiscoveryNode [id=8f968d9c-10f7-4318-8ae1-271a9992e440,
addrs=[0:0:0:0:0:0:0:1, 10.50.124.107, 127.0.0.1],
sockAddrs=[RU-LEDTEST07.cee.ema.ad.pwcinternal.com/10.50.124.107:47500,
/0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1512054799051, loc=false,
ver=2.1.6#20171011-sha1:b6e23b35, isClient=false], ex=class
o.a.i.IgniteException: Failed to load cache: ProjectCache, hasRes=true,
isCancelled=false, isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw user exception
(override or implement ComputeTask.result(..) method if you would like to
have automatic failover for this exception).
at
org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
at
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1047)
at
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1040)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6661)
at
org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1040)
at
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:858)
at
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1066)
at
org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1301)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1562)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1190)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1097)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to load cache:
ProjectCache
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1858)
at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629)
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1115)
at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1908)
... 7 more
Caused by: class org.apache.ignite.IgniteException: Failed to load cache:
ProjectCache
at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:966)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJob.localExecute(GridCacheAdapter.java:5485)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJobV2.localExecute(GridCacheAdapter.java:5529)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$TopologyVersionAwareJob.execute(GridCacheAdapter.java:6144)
at
org.apache.ignite.compute.ComputeJobAdapter.call(ComputeJobAdapter.java:132)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1855)
... 14 more
Caused by: class 

Re: Number of open file handles keeps on increasing on Ignite node on adding a new cache

2017-11-30 Thread aMark

Thanks for reply.

Yes, I use a persistent store. For each cache there are close to 1.2 million
entries, and it creates close to 500 files (part*.bin) of 1 MB each in the
persistent store directory. But the question is why Ignite keeps file handles
open even after there is no active reader/writer for that cache.

Rather than keeping file handles open, it could keep information about the
files and open file handles only when a read/write happens on the cache.
Keeping file handles open doesn't seem right.

Thanks,





Re: Number of open file handles keeps on increasing on Ignite node on adding a new cache

2017-11-30 Thread aMark
Thanks Denis.

In my case there is only one type of cache. If I put all the caches in a single
group, will it help?

Thanks,





Re: read/persist a huge cache by its affinity key

2017-11-30 Thread Marco
Is there a straightforward API to identify row entries?
for example: 
batch 1: from table1 where id>0 and id<100
batch 2: from table1 where id>=100 and id<200
batch 3: from table1 where id>=200 and id<300
..
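As far as I know there is no single built-in call that hands you these slices; client-side, cutting an id range into such batches is plain arithmetic. A sketch in plain Java (table and column names taken from the example above; the helper name is invented):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchRanges {
    /** Splits [minId, maxId) into fixed-size slices and renders a WHERE clause per slice. */
    static List<String> idRangePredicates(long minId, long maxId, long batchSize) {
        List<String> batches = new ArrayList<>();
        for (long lo = minId; lo < maxId; lo += batchSize) {
            long hi = Math.min(lo + batchSize, maxId);
            batches.add("FROM table1 WHERE id >= " + lo + " AND id < " + hi);
        }
        return batches;
    }

    public static void main(String[] args) {
        // Prints the three batches from the example above.
        for (String b : idRangePredicates(0, 300, 100))
            System.out.println(b);
    }
}
```

Each resulting predicate could then be fed to a separate SqlFieldsQuery, one batch at a time.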





read/persist a huge cache by its affinity key

2017-11-30 Thread Marco
Hi,
I'm currently working on a traffic project. A traffic cache has been
assigned to incrementally save user cookies. Now I'm thinking about how to
persist it to disk and then restart the cluster, but I would like to know
how to do this manually and continuously.

Ignite version: 2.1
Cluster: 6 Nodes
Cache type: partitioned, based on the affinity key affinityKey(cookie.hashCode,
cookieString)
Cache size: 30M+

The persisting app currently connects to the cluster as a client node and does
the data copy.
Any help is appreciated.

Marco





RE: Did the default page size change between Ignite 2.2 and 2.3

2017-11-30 Thread Raymond Wilson
Thanks Denis.



I did give that a try in the meantime and it sorted it out.



Do you know how much of a performance delta it made?



Thanks,

Raymond.



*From:* Denis Magda [mailto:dma...@apache.org]
*Sent:* Friday, December 1, 2017 4:15 PM
*To:* user@ignite.apache.org
*Subject:* Re: Did the default page size change between Ignite 2.2 and 2.3



The default page size was changed to 4KB in 2.3 because of performance
considerations:

https://apacheignite.readme.io/docs/durable-memory-tuning#section-page-size



Roll the size back to 2KB manually in your cluster configuration to get rid
of this exception.



—

Denis



On Nov 30, 2017, at 6:16 PM, Raymond Wilson 
wrote:



I am upgrading from Ignite 2.2 to 2.3. I use persistent storage and have an
existing data store.



When activating the cluster I get the following error in the C# client:



class org.apache.ignite.IgniteCheckedException: Failed to verify store file
(invalid page size) [expectedPageSize=4096, filePageSize=2048]



I didn’t set a specific page size in 2.2 or 2.3.



Did this change?



Thanks,

Raymond.
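Denis's fix above is a one-line configuration change. A sketch against the Ignite 2.3 Java API (assuming you build IgniteConfiguration in code; the XML config has an equivalent pageSize property):

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Pin the page size to the pre-2.3 default (2KB) so a persistent store
// created by Ignite 2.2 can still be opened after upgrading to 2.3.
public class PageSizeConfig {
    static IgniteConfiguration configure() {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setPageSize(2048); // the 2.3 default is 4096
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        return new IgniteConfiguration().setDataStorageConfiguration(storageCfg);
    }
}
```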


Re: Did the default page size change between Ignite 2.2 and 2.3

2017-11-30 Thread Denis Magda
The default page size was changed to 4KB in 2.3 because of performance
considerations:
https://apacheignite.readme.io/docs/durable-memory-tuning#section-page-size 


Roll the size back to 2KB manually in your cluster configuration to get rid of
this exception.

—
Denis

> On Nov 30, 2017, at 6:16 PM, Raymond Wilson  
> wrote:
> 
> I am upgrading from Ignite 2.2 to 2.3. I use persistent storage and have an 
> existing data store.
>  
> When activating the cluster I get the following error in the C# client:
>  
> class org.apache.ignite.IgniteCheckedException: Failed to verify store file 
> (invalid page size) [expectedPageSize=4096, filePageSize=2048]
>  
> I didn’t set a specific page size in 2.2 or 2.3.
>  
> Did this change?
>  
> Thanks,
> Raymond.



RE: Does the Ignite C# client support distributed queues?

2017-11-30 Thread Raymond Wilson
Looking at it I see it's blocked by 2701 (which has additional
dependencies, all of which say they are blocked by 2701).

I understand there is an intention to bring the C# client up to par with
the Java client. Is there a ticket/schedule yet for this?

Raymond.

-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: Friday, December 1, 2017 1:30 PM
To: user@ignite.apache.org
Subject: RE: Does the Ignite C# client support distributed queues?

Oops, I read wrong! This is not supported. There is a ticket, but it
doesn't seem to be active at the moment:
https://issues.apache.org/jira/browse/IGNITE-1417

-Val





Did the default page size change between Ignite 2.2 and 2.3

2017-11-30 Thread Raymond Wilson
I am upgrading from Ignite 2.2 to 2.3. I use persistent storage and have an
existing data store.



When activating the cluster I get the following error in the C# client:



class org.apache.ignite.IgniteCheckedException: Failed to verify store file
(invalid page size) [expectedPageSize=4096, filePageSize=2048]



I didn’t set a specific page size in 2.2 or 2.3.



Did this change?



Thanks,

Raymond.


Re: Number of open file handles keeps on increasing on Ignite node on adding a new cache

2017-11-30 Thread Denis Magda
Hi,

You can leverage Cache Groups if the total number of open files is an
issue for your environment:
https://apacheignite.readme.io/docs/cache-groups 


—
Denis

> On Nov 28, 2017, at 11:34 PM, aMark  wrote:
> 
> Hi,
> 
> I am running an Ignite cluster in replicated mode with 2 nodes on different
> machines.
> Whenever I create a cache and write to it through an Ignite client, the number
> of open file handles increases by close to 1000.
> 
> But these open file handles do not decrease even if I stop writing and
> disconnect my client from the cluster. This creates a problem: if I create
> many caches (around 30+), my Unix box (configured with max 32K file handles)
> runs out of file handles.
> 
> Please suggest how I can overcome the problem of the ever-increasing number
> of file handles.
> 
> PS: I am using Ignite 2.1 version. I am using "lsof | wc -l" command to
> check open file handles on my unix box.
> 
> Thanks,
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
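Denis's cache-groups suggestion boils down to one configuration property. A minimal sketch (the cache and group names here are placeholders):

```java
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Caches that share a group also share partition files on disk, so the
// number of open part-*.bin handles grows per group rather than per cache.
public class CacheGroupConfig {
    static IgniteConfiguration configure() {
        CacheConfiguration<Long, String> cache1 = new CacheConfiguration<>("cache1");
        cache1.setGroupName("sharedGroup");

        CacheConfiguration<Long, String> cache2 = new CacheConfiguration<>("cache2");
        cache2.setGroupName("sharedGroup");

        return new IgniteConfiguration().setCacheConfiguration(cache1, cache2);
    }
}
```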



Re: Ignite DataStream vs cache putAll

2017-11-30 Thread daniels
Thank you, dear Denis. I will follow your suggestions.





Re: Facing issue Ignite 2.3 and Vertex-ignite 3.5.0

2017-11-30 Thread Denis Magda
Automatic cluster activation with persistence enabled will become possible
once we release the baseline topology:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-4+Baseline+topology+for+caches
 


It should become available in the next Ignite release.

—
Denis

> On Nov 28, 2017, at 9:10 PM, Tejashwa Kumar Verma  
> wrote:
> 
> Hi Alexey,
> 
> Thanks for the quick response and for explaining the reason the cluster is
> inactive by default.
> But in our use case we are OK with activating the cluster before start. So
> is there any workaround or other way to activate the cluster through XML
> configs?
> 
> 
> Thanks & Regards
> Tejas
> 
> 
> On Wed, Nov 29, 2017 at 12:14 AM, Alexey Kukushkin  > wrote:
> Hi,
> 
> You can activate a cluster with Ignite persistence only explicitly, using a
> command line tool or in code.
> There is a reason why enabling persistence makes the cluster inactive upon
> startup: the data is already in Ignite. Suppose you do a
> "get(key)" and Ignite does not find the key. How can Ignite know whether the
> key does not exist or the node that has the key is not started yet? This is
> why you have to explicitly say "now the full cluster is started" by
> activating it.
> 
> The community is developing a Baseline Topology feature where you would say
> that your cluster consists of specific nodes, and it will become active
> automatically once the specified topology is formed. But this is in the next
> release.
>  
> 
> 



Re: Ignite DataStream vs cache putAll

2017-11-30 Thread Denis Magda
I would suggest the data streamer for initial data preloading or when you 
stream data continuously into the cluster.

Personally, I’ve never seen putAll beat the data streamer. Check that you
have allowOverwrite set to true. Plus, you can tweak other parameters.

—
Denis

> On Nov 28, 2017, at 5:37 AM, daniels  wrote:
> 
> Thank you for response.
> 
> So in my case (50,000 max size and no need for server-side processing), is
> it better to use putAll?
> In other words, is 50,000 a very big amount of data?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
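Denis's streamer settings in code — a sketch, assuming an already-started Ignite instance `ignite`, an existing cache named "myCache", and invented key/value data (this needs a running cluster to execute):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

// Bulk-load through the data streamer; allowOverwrite(true) makes it behave
// like put() for existing keys, at some cost in throughput.
public class StreamerLoad {
    static void load(Ignite ignite) {
        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
            streamer.allowOverwrite(true);
            streamer.perNodeBufferSize(1024); // entries buffered per node before a flush

            for (long i = 0; i < 50_000; i++)
                streamer.addData(i, "value-" + i);
        } // close() flushes any remaining buffered entries
    }
}
```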



RE: Does the Ignite C# client support distributed queues?

2017-11-30 Thread vkulichenko
Oops, I read wrong! This is not supported. There is a ticket, but it doesn't
seem to be active at the moment:
https://issues.apache.org/jira/browse/IGNITE-1417

-Val





RE: Does the Ignite C# client support distributed queues?

2017-11-30 Thread Raymond Wilson
Hi Val,

This seems to be related to queries, not queues as described here:
https://apacheignite.readme.io/docs/queue-and-set

Thanks,
Raymond.

-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: Friday, December 1, 2017 11:49 AM
To: user@ignite.apache.org
Subject: Re: Does the Ignite C# client support distributed queues?

Raymond,

It is supported, see here:
https://apacheignite-sql.readme.io/docs/net-sql-api

-Val





Re: Does the Ignite C# client support distributed queues?

2017-11-30 Thread vkulichenko
Raymond,

It is supported, see here:
https://apacheignite-sql.readme.io/docs/net-sql-api

-Val





Does the Ignite C# client support distributed queues?

2017-11-30 Thread Raymond Wilson
I’ve been reading up on the distributed queue support in Ignite which may
be a good fit for a use case we have.



Looking at the Ignite C# client this does not seem to be supported.



Is this supported in the C# client? If not, when is it planned to be
included?



Thanks,

Raymond.


Re: get api takes more time when adding more nodes to cluster

2017-11-30 Thread vkulichenko
Biren,

I meant that you can have a standalone cluster and embed a client node into
the application instead of a server node. Making these caches replicated can
also be an option - in this case all reads will be local and fast.

-Val





Re: SQL Count(*) returns incorrect count

2017-11-30 Thread vkulichenko
Hi,

Your assumptions are correct. There is also an issue [1] that is likely
causing this behavior. As a workaround, you can try to force IgniteContext to
start everything in client mode. In order to achieve this, use
setClientMode(true) in the closure that creates the IgniteConfiguration:

IgniteOutClosure<IgniteConfiguration> ioc = new IgniteOutClosure<IgniteConfiguration>() {
    @Override public IgniteConfiguration apply() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);
        cfg.setCacheConfiguration(cacheCfg);
        return cfg;
    }
};

[1] https://issues.apache.org/jira/browse/IGNITE-5981

-Val





Web Console on Kubernetes Cluster

2017-11-30 Thread lukaszbyjos
Hi. Quick question: how do I connect the Web Console to my Kubernetes cluster
pods (server and client)?
I have a cluster with one pod as a server node and a few clients. I thought I
just needed to deploy the web console image and it would connect to the
nodes, but later I found that I need a web agent.
How do I set them up? Should they be separate deployments, or something else?





Re: SQL Count(*) returns incorrect count

2017-11-30 Thread soroka21
Thank you for the suggestions.
I have a 5-node standalone Ignite cluster, and the main goal is to load data
into it and store it for a long time for future use. I can't keep the Spark
workers in memory, and I assume my data ends up in a cache distributed
across the 5 standalone Ignite nodes.

The Spark process starts 5 additional Ignite servers (I can see it from the
topology snapshot).
With standalone=true or standalone=false, the same issue happens when I'm
running my Spark application using YARN.

However, I found that if Spark works in local, non-distributed mode
(e.g. spark.master=local[2]), records are not lost.

So it looks like the issue occurs when the Spark workers processing the
JavaIgniteRDD go down.





Re: Index not getting created

2017-11-30 Thread Naveen Kumar
Hi


Here is the node log captured with the -v option.


[22:56:41,291][SEVERE][client-connector-#618%IgnitePOC%][JdbcRequestHandler]
Failed to execute SQL query [reqId=0, req=JdbcQueryExecuteRequest
[schemaName=PUBLIC, pageSize=1024, maxRows=0, sqlQry=CREATE INDEX
idx_customer_accountId ON "Customer".CUSTOMER (ACCOUNT_ID_LIST),
args=[], stmtType=ANY_STATEMENT_TYPE]]

class org.apache.ignite.internal.processors.query.IgniteSQLException:
Cache doesn't exist: Customer

at 
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.convert(DdlStatementsProcessor.java:343)

at 
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.runDdlStatement(DdlStatementsProcessor.java:287)

at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1466)

at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1966)

at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1962)

at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)

at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2445)

at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1971)

at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:305)

at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:164)

at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:137)

at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:39)

at 
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)

at 
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)

at 
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)

at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)

at 
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)



The SELECT query works fine:

0: jdbc:ignite:thin://127.0.0.1> select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';

+-----------------+
| ACCOUNT_ID_LIST |
+-----------------+
| A10001          |
+-----------------+

1 row selected (1.342 seconds)


The CREATE INDEX query failed with the below error:

0: jdbc:ignite:thin://127.0.0.1> CREATE INDEX idx_customer_accountId
ON "Customer".CUSTOMER (ACCOUNT_ID_LIST);

Error: Cache doesn't exist: Customer (state=5,code=0)

java.sql.SQLException: Cache doesn't exist: Customer

at 
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)

at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)

at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)

at sqlline.Commands.execute(Commands.java:823)

at sqlline.Commands.sql(Commands.java:733)

at sqlline.SqlLine.dispatch(SqlLine.java:795)

at sqlline.SqlLine.begin(SqlLine.java:668)

at sqlline.SqlLine.start(SqlLine.java:373)

at sqlline.SqlLine.main(SqlLine.java:265)


The SELECT query works fine even after issuing the CREATE INDEX query
that failed:

0: jdbc:ignite:thin://127.0.0.1> select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';

+-----------------+
| ACCOUNT_ID_LIST |
+-----------------+
| A10001          |
+-----------------+

1 row selected (1.641 seconds)

0: jdbc:ignite:thin://127.0.0.1>

On Thu, Nov 30, 2017 at 9:04 PM, Taras Ledkov  wrote:
> Hi,
>
> I cannot reproduce the issue with described steps.
> Please check that the cache wasn't destroyed on the server.
>
> i.e. please execute SELECT query again after failed CREATE INDEX.
>
>
>
> On 30.11.2017 11:45, Naveen wrote:
>>
>> Has anyone got a chance to look into into this issue where I am trying to
>> create an index, but its throwing an error saying cache does not exist
>>
>> 0: jdbc:ignite:thin://127.0.0.1>  select ACCOUNT_ID_LIST from
>> "Customer".CUSTOMER 

Re: sql insert with composite primary key

2017-11-30 Thread vkulichenko
Then the insert syntax I provided should work.

-Val





Re: MarshallerContextImpl: Failed to write class name to file

2017-11-30 Thread jpmoore40
Thanks Ilya!





Re: Lucene query syntaxt support in TextQuery ?

2017-11-30 Thread zbyszek
Hi Andrew,

Indeed, this is related to field names:
1. In ver. 2.3 the solution is to use uppercase names for the query to work,
although all my fields are lowercase (as you can see in the attached example).
I would still consider this a bug, though.
2. In ver. 2.0 it works as expected - it works with the casing used to name
the fields (lowercase in this case).


regards,
zbyszek





Re: Index not getting created

2017-11-30 Thread Taras Ledkov

Hi,

I cannot reproduce the issue with described steps.
Please check that the cache wasn't destroyed on the server.

i.e. please execute SELECT query again after failed CREATE INDEX.


On 30.11.2017 11:45, Naveen wrote:

Has anyone got a chance to look into into this issue where I am trying to
create an index, but its throwing an error saying cache does not exist

0: jdbc:ignite:thin://127.0.0.1>  select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';
+-----------------+
| ACCOUNT_ID_LIST |
+-----------------+
| A10001          |
+-----------------+
1 row selected (2.078 seconds)

0: jdbc:ignite:thin://127.0.0.1> CREATE INDEX idx_customer_accountId ON
"Customer".CUSTOMER (ACCOUNT_ID_LIST);
Error: Cache doesn't exist: Customer (state=5,code=0)
java.sql.SQLException: Cache doesn't exist: Customer
 at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
 at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
 at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
 at sqlline.Commands.execute(Commands.java:823)
 at sqlline.Commands.sql(Commands.java:733)
 at sqlline.SqlLine.dispatch(SqlLine.java:795)
 at sqlline.SqlLine.begin(SqlLine.java:668)
 at sqlline.SqlLine.start(SqlLine.java:373)
 at sqlline.SqlLine.main(SqlLine.java:265)
0: jdbc:ignite:thin://127.0.0.1>






--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Java's DelayQueue as distributed data structure?

2017-11-30 Thread th76
Hi Slava,

Thanks for your response. 

Hopefully the DelayQueue is added in the future. Can I make a feature
request for the DelayQueue somewhere? 

Greetings,

Teun Hoogendoorn







Re: Two persistent data stores for a single Ignite cluster - RDBMS and Ignite native

2017-11-30 Thread slava.koptilin
Hi Naveen,

At first glance, you can try using a Continuous Query in order to listen to
all modifications (an entry being inserted, updated or deleted) of a cache.
Please see the following page for detailed information [1].

Another approach that may be used here is to use two caches.
The first one is configured with PDS, and the second one is configured
with a CacheStore and used to propagate changes to the
underlying database (Oracle DB) [2].

[1]
https://apacheignite.readme.io/docs/continuous-queries#section-local-listener
[2] https://apacheignite.readme.io/docs/3rd-party-store#overview

Thanks.
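A sketch of the continuous-query approach from [1], assuming a running cluster and an existing PDS-backed cache; the listener body that actually forwards each update to the external store (HBase/Oracle) is left as a stub:

```java
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ChangePropagator {
    /** Registers a listener that fires on every insert/update/remove of 'cache'. */
    static QueryCursor<Cache.Entry<Long, String>> listen(IgniteCache<Long, String> cache) {
        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

        // Replace the println with a write to the external store.
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Long, ? extends String> e : events)
                System.out.println("Propagate " + e.getEventType() + ": " + e.getKey());
        });

        // The listener keeps receiving updates for as long as the cursor is open.
        return cache.query(qry);
    }
}
```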





Re: Java's DelayQueue as distributed data structure?

2017-11-30 Thread slava.koptilin
Hi Teun,

Apache Ignite provides support for the following data structures:
 - distributed blocking queue (IgniteQueue) and distributed set (IgniteSet)
 - atomic types (IgniteAtomicLong, IgniteAtomicReference)
 - synchronization objects (CountDownLatch, Semaphore)
 - distributed atomic sequence (IgniteAtomicSequence)

The comprehensive description can be found here [1]

Unfortunately, DelayQueue is not supported for now.

[1] https://apacheignite.readme.io/docs/data-structures





Deploy Cluster Singleton blocks and never returns

2017-11-30 Thread jeansafar
Hi, 

I am deploying a few cluster singletons in my prototype. The first one is
deployed successfully, yet the second one blocks and never returns due to a
lock in GridFutureAdapter:

private R get0(boolean ignoreInterrupts) throws IgniteCheckedException {
if (isDone() || !registerWaiter(Thread.currentThread()))
return resolve();

boolean interrupted = false;

try {
while (true) {
LockSupport.park();  // <--- block point

Any pointers would be appreciated, as I can't understand the pattern
here.

Java 8u144 / service deployment done within Spring Boot service
instantiation.

best regards-
jean





Re: Ignite node crashes after one query fetches many entries from cache

2017-11-30 Thread Igor Sapego
Ray,

In the upcoming version you can do it the standard way for ODBC -
using SQL_ATTR_QUERY_TIMEOUT.

It will only be available starting with version 2.4 [1], but
it is already in master, and you can try it out using a nightly
build [2] or by building the driver yourself from
master.

[1] - https://issues.apache.org/jira/browse/IGNITE-6836
[2] -
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/

Best Regards,
Igor

On Thu, Nov 30, 2017 at 4:55 PM, Ray  wrote:

> Anton, thanks for the heads up.
> Any idea on how to set the timeout if I'm using ODBC to do the query?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Local node and remote node have different version numbers

2017-11-30 Thread Evgenii Zhuravlev
Also, as you saw, the example Ignite XML has the multicast IP finder
configured, so I think that's the root cause. Please make sure that you
really started Ignite with a config file that doesn't have the multicast IP
finder configured; it's possible that you ran the node with the default
example XML config file.

Evgenii

2017-11-30 16:30 GMT+03:00 Evgenii Zhuravlev :

> So, don't use multi cast ip finder at all.
>
> 2017-11-30 15:27 GMT+03:00 Rajarshi Pain :
>
>> The code is on the client machine so I can not share it; the config is
>> in this mail chain. Yes, we are not using the multicast IP finder. But as
>> we couldn't fix it, my colleague is using the multicast IP given on the
>> Ignite website.
>>
>> On Thu 30 Nov, 2017, 15:44 Evgenii Zhuravlev, 
>> wrote:
>>
>>> Are you sure that you are using not Multicast IP finder? Please share
>>> configuration for all nodes that find each other and log files
>>>
>>> 2017-11-30 9:32 GMT+03:00 Rajarshi Pain :
>>>
 Hi Evgenii,

 We both tried to specify the port, but we are still getting the same error
 if we are both running at the same time on our systems.

 On Wed 29 Nov, 2017, 21:04 Evgenii Zhuravlev, 
 wrote:

> Hi,
>
> In this case, you should specify ports for addresses, for example:
>
> 10.23.153.56:47500
>
>
> Evgenii
>
>
> 2017-11-29 18:20 GMT+03:00 Rajarshi Pain :
>
>> Hi,
>>
>> I was working on Ignite 2.3, and my colleague is working on 1.8. We
>> are running different applications on our local machines using the example
>> ignite xml, but it is failing with the below exception:
>>
>> We are using our local system IP under TcpDiscoverySpi, but the nodes are
>> still detecting each other. How can I stop this and not let them discover
>> each other?
>>
>> <property name="ipFinder">
>>   <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>     <property name="addresses">
>>       <list>
>>         <value>10.23.153.56</value>
>>       </list>
>>     </property>
>>   </bean>
>> </property>
>>
>> Exception:-
>>
>>  Local node and remote node have different version numbers (node
>> will not join, Ignite does not support rolling updates, so versions must 
>> be
>> exactly the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0,
>> locNodeAddrs=[01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /
>> 10.23.209.152, /127.0.0.1], rmtNodeAddrs=[01Hw1083774.Indi
>> a.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149, /127.0.0.1],
>> locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
>> rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]
>>
>> --
>> Regards,
>> Rajarshi Pain
>>
>
> --

 Thanks
 Rajarshi

>>>
>>> --
>>
>> Thanks
>> Rajarshi
>>
>
>


Java's DelayQueue as distributed data structure?

2017-11-30 Thread th76
Hi,

Is it possible to use java.util.concurrent.DelayQueue
(https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/DelayQueue.html)
as distributed data structure?

Thanks!

Teun Hoogendoorn



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error running ignite in YARN

2017-11-30 Thread ilya.kasnacheev
Hello again!

After a close examination I have discovered a problem between Ignite YARN
and Hadoop code:

Ignite YARN collects all existing environment variables to pass them to
container, including variables with incorrect names, such as Bash functions,
which have extra characters at the end, and are ignored by most shells but
not the JVM.

When you tell Bash to export functions, it puts
BASH_FUNC_your_function_name%% variable into env. This is what is causing
problems: Ignite YARN picks this variable up and tells Hadoop to pass it
to the container, which leads to incorrectly written startup scripts.

I have created https://issues.apache.org/jira/browse/IGNITE-7080 about the
issue.
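The fix described above amounts to dropping environment variables whose names are not valid shell identifiers before passing them to the container. A minimal sketch of that idea (the class and method names here are hypothetical, not Ignite's actual code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class EnvFilter {
    // Names must be valid shell identifiers; Bash-exported functions such as
    // "BASH_FUNC_run_prestart%%" fail this check.
    private static final Pattern VALID_NAME = Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");

    static Map<String, String> sanitize(Map<String, String> env) {
        return env.entrySet().stream()
                .filter(e -> VALID_NAME.matcher(e.getKey()).matches())
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("JAVA_HOME", "/usr/lib/jvm/java");
        env.put("BASH_FUNC_run_prestart%%", "() { ... }"); // exported Bash function
        System.out.println(sanitize(env).keySet()); // prints [JAVA_HOME]
    }
}
```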

I will try my own examples on AWS EMR and see if I can find a workaround. A
good workaround would be to unexport the function before creating nodes with

unset -f run_prestart

if you are able to do that before running the master.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Index not getting created

2017-11-30 Thread Alexey Kukushkin
Naveen,

Your "CREATE INDEX" syntax seems valid to me. Can you start server nodes
with debugging enabled, reproduce the problem and share the debug log
output from the servers?


Re: Ignite node crashes after one query fetches many entries from cache

2017-11-30 Thread Ray
Anton, thanks for the heads up.
Any idea on how to set the timeout if I'm using ODBC to do the query?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node and remote node have different version numbers

2017-11-30 Thread Evgenii Zhuravlev
So, don't use the multicast IP finder at all.

2017-11-30 15:27 GMT+03:00 Rajarshi Pain :

> The code is on a client machine so I cannot share it; the config is in this
> mail chain. We are not using the multicast IP finder, but as we couldn't
> fix the issue, my colleague is using the multicast IP given on the Ignite
> website.
>
> On Thu 30 Nov, 2017, 15:44 Evgenii Zhuravlev, 
> wrote:
>
>> Are you sure that you are not using the Multicast IP finder? Please share
>> the configuration for all nodes that find each other, and the log files.
>>
>> 2017-11-30 9:32 GMT+03:00 Rajarshi Pain :
>>
>>> Hi Evgenii,
>>>
>>> We both tried to specify the port, but we are still getting the same error
>>> when we both run it at the same time on our systems.
>>>
>>> On Wed 29 Nov, 2017, 21:04 Evgenii Zhuravlev, 
>>> wrote:
>>>
 Hi,

 In this case, you should specify ports for addresses, for example:

 10.23.153.56:47500


 Evgenii


 2017-11-29 18:20 GMT+03:00 Rajarshi Pain :

> Hi,
>
> I was working on Ignite 2.3, and my colleague is working on 1.8. We are
> running different applications on our local machines using the example
> ignite xml, but they are failing with the exception below:
>
> We are using our local system IPs under TcpDiscoverySpi, but the nodes are
> still detecting each other. How can I stop them from discovering each
> other?
>
> 
> 
>   
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>   
> 10.23.153.56
>
>   
>
>
> Exception:-
>
>  Local node and remote node have different version numbers (node will
> not join, Ignite does not support rolling updates, so versions must be
> exactly the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0,
> locNodeAddrs=[01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /
> 10.23.209.152, /127.0.0.1], rmtNodeAddrs=[01Hw1083774.
> India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149, /127.0.0.1],
> locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
> rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]
>
> --
> Regards,
> Rajarshi Pain
>

 --
>>>
>>> Thanks
>>> Rajarshi
>>>
>>
>> --
>
> Thanks
> Rajarshi
>


Re: MarshallerContextImpl: Failed to write class name to file

2017-11-30 Thread ilya.kasnacheev
Hello!

Yes, specifying workDirectory should work OK.

For log files, it would depend on which logger you are using (e.g. Log4j2),
and it should be configured using that logger's configuration.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node and remote node have different version numbers

2017-11-30 Thread Rajarshi Pain
The code is on a client machine so I cannot share it; the config is in this
mail chain. We are not using the multicast IP finder, but as we couldn't fix
the issue, my colleague is using the multicast IP given on the Ignite website.

On Thu 30 Nov, 2017, 15:44 Evgenii Zhuravlev, 
wrote:

> Are you sure that you are not using the Multicast IP finder? Please share
> the configuration for all nodes that find each other, and the log files.
>
> 2017-11-30 9:32 GMT+03:00 Rajarshi Pain :
>
>> Hi Evgenii,
>>
>> We both tried to specify the port, but we are still getting the same error
>> when we both run it at the same time on our systems.
>>
>> On Wed 29 Nov, 2017, 21:04 Evgenii Zhuravlev, 
>> wrote:
>>
>>> Hi,
>>>
>>> In this case, you should specify ports for addresses, for example:
>>>
>>> 10.23.153.56:47500
>>>
>>>
>>> Evgenii
>>>
>>>
>>> 2017-11-29 18:20 GMT+03:00 Rajarshi Pain :
>>>
 Hi,

I was working on Ignite 2.3, and my colleague is working on 1.8. We are
running different applications on our local machines using the example ignite
xml, but they are failing with the exception below:

We are using our local system IPs under TcpDiscoverySpi, but the nodes are
still detecting each other. How can I stop them from discovering each
other?

 
 
   
 >>> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
   
 10.23.153.56

   


 Exception:-

  Local node and remote node have different version numbers (node will
 not join, Ignite does not support rolling updates, so versions must be
 exactly the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0, locNodeAddrs=[
 01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.152, /127.0.0.1],
 rmtNodeAddrs=[01Hw1083774.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149,
 /127.0.0.1], locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
 rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]

 --
 Regards,
 Rajarshi Pain

>>>
>>> --
>>
>> Thanks
>> Rajarshi
>>
>
> --

Thanks
Rajarshi


Re: Lucene query syntaxt support in TextQuery ?

2017-11-30 Thread Andrey Mashenkov
Hi,

Have you tried using lowercase or uppercase for the field name?
AFAIK the field name's character case may be the reason.

Would you please let us know whether it helps?
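For reference, a minimal TextQuery sketch (a sketch only: it assumes ignite-core on the classpath, and the Person cache, its name field, and the search term are hypothetical):

```java
// Hypothetical cache and value type; the searched field on the Person class
// would carry a @QueryTextField annotation so Lucene indexes it.
IgniteCache<Long, Person> cache = ignite.cache("persons");

// Note: the class and field names must match the indexed names, including case.
TextQuery<Long, Person> qry = new TextQuery<>(Person.class, "john");

try (QueryCursor<Cache.Entry<Long, Person>> cur = cache.query(qry)) {
    for (Cache.Entry<Long, Person> e : cur)
        System.out.println(e.getValue());
}
```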

On Wed, Nov 29, 2017 at 3:24 PM, zbyszek  wrote:

> Val, thank you for confirmation.
>
> zbyszek
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Sorting or ranking TextQuery?

2017-11-30 Thread Andrey Mashenkov
Hi,

1. Ignite (since version 2.1) depends on Lucene 5.5.2, so the documentation
you reference may be outdated.
2. Ignite doesn't support configuring the underlying Lucene. Feel free to
create a ticket if you need something.
3. Ignite has separate dependencies on H2 and Lucene underneath. H2 is used
for SQL queries and Lucene is used for TextQueries.
Ignite supports SqlQuery and TextQuery as different kinds of queries for
different engines, which is why they can't be combined within the same query.

It looks like the latest versions of H2 support integration with the Lucene
engine for full-text search queries, but this will not work with Ignite
without additional fixes, due to dependency conflicts and other H2 issues:

"To use the Apache Lucene full text search, you need the Lucene library in
the classpath. Currently, Apache Lucene 3.6.2 is used for testing. Newer
versions may work, however they are not tested."
"The Lucene fulltext search supports searching in specific column only. "
"The Lucene fulltext search implementation is not synchronized internally.
If you update the database and query the fulltext search concurrently
(directly using the Java API of H2 or Lucene itself), you need to ensure
operations are properly synchronized"

Also, I see no example where SQL and Lucene indices are used within the same
query. So, it looks like this integration has a very limited scope of use.

On Thu, Nov 23, 2017 at 7:59 PM, zbyszek  wrote:

> Hello All, I am looking at the TextQuery feature of Ignite (2.2).
>
> I have read (https://manikandansubbu.wordpress.com/tag/lucene/) that Lucene
> "returns results ranked by either the relevance to the query or sorted by
> an arbitrary field such as a document’s last modified date". I am also
> aware that in Ignite I cannot combine SQL and text queries
> (http://apache-ignite-users.70518.x6.nabble.com/Combine-SQL-and-Text-Query-td7455.html),
> thus cannot apply SQL 'order by' to searches based on the Lucene index.
>
> I would like to ask (in fact, to confirm the lack of the below) whether
> Ignite supports:
> - sorting of TextQuery results
> - controlling/implementing my own scoring (by means of, say, Query classes
>   as mentioned here: https://lucene.apache.org/core/2_9_4/scoring.html)
>
> These questions are in the context of a situation where I have multiple
> exactly the same text values, but from different sources, and I would like
> to rank returned entities by the source (or another arbitrary field in
> general).
>
> Thank you for the help, zbyszek
> --
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Number of open file handles keeps on increasing on Ignite node on adding a new cache

2017-11-30 Thread afedotov
Hi,

Do you use Ignite Persistent Store?
If that's the case, then for each cache partition there is a corresponding
file created.

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node and remote node have different version numbers

2017-11-30 Thread Evgenii Zhuravlev
Are you sure that you are not using the Multicast IP finder? Please share
the configuration for all nodes that find each other, and the log files.

2017-11-30 9:32 GMT+03:00 Rajarshi Pain :

> Hi Evgenii,
>
> We both tried to specify the port, but we are still getting the same error
> when we both run it at the same time on our systems.
>
> On Wed 29 Nov, 2017, 21:04 Evgenii Zhuravlev, 
> wrote:
>
>> Hi,
>>
>> In this case, you should specify ports for addresses, for example:
>>
>> 10.23.153.56:47500
>>
>>
>> Evgenii
>>
>>
>> 2017-11-29 18:20 GMT+03:00 Rajarshi Pain :
>>
>>> Hi,
>>>
>>> I was working on Ignite 2.3, and my colleague is working on 1.8. We are
>>> running different applications on our local machines using the example
>>> ignite xml, but they are failing with the exception below:
>>>
>>> We are using our local system IPs under TcpDiscoverySpi, but the nodes are
>>> still detecting each other. How can I stop them from discovering each
>>> other?
>>>
>>> <property name="ipFinder">
>>>   <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>>     <property name="addresses">
>>>       <list>
>>>         <value>10.23.153.56</value>
>>>       </list>
>>>     </property>
>>>   </bean>
>>> </property>
>>>
>>> Exception:-
>>>
>>>  Local node and remote node have different version numbers (node will
>>> not join, Ignite does not support rolling updates, so versions must be
>>> exactly the same) [locBuildVer=2.3.0, rmtBuildVer=1.8.0, locNodeAddrs=[
>>> 01HW1083820.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.152, /127.0.0.1],
>>> rmtNodeAddrs=[01Hw1083774.India.TCS.com/0:0:0:0:0:0:0:1, /10.23.209.149,
>>> /127.0.0.1], locNodeId=c3e4f1ff-f6d7-4c26-9baf-e04f99a8eaac,
>>> rmtNodeId=75e5e5bc-5de2-484a-9a91-a3a9f2ec48d3]
>>>
>>> --
>>> Regards,
>>> Rajarshi Pain
>>>
>>
>> --
>
> Thanks
> Rajarshi
>
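The advice in this thread — a static IP finder with explicit ports — can also be expressed through Ignite's Java API. A sketch, assuming ignite-core on the classpath; the address mirrors the one discussed above, and the ports shown are Ignite's defaults:

```java
import java.util.Collections;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Pinning both the IP finder addresses and the local discovery port range
// keeps unrelated clusters on the same machines from finding each other.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Collections.singletonList("10.23.153.56:47500..47502"));

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);
spi.setLocalPort(47500);   // bind discovery starting at this port...
spi.setLocalPortRange(2);  // ...scanning at most 47500..47502

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(spi);
// Ignition.start(cfg);
```

Two colleagues on one network would then give each node a different address list (or port range), so the clusters stay isolated.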


Re: MarshallerContextImpl: Failed to write class name to file

2017-11-30 Thread jpmoore40
Having done a bit more digging, it seems the best option is not to set
IGNITE_HOME at all but to use IgniteConfiguration.setWorkDirectory(). I'm
potentially going to be running multiple nodes on the same server and would
prefer not to create individual start-up scripts for each (as I may want to
add new nodes on the fly). Seems I could programmatically set the directory
using the process id to guarantee I don't get a clash. Does this seem like a
reasonable approach? Is there any way to do something similar for log files?
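The per-process directory idea above could be sketched as follows (a sketch, assuming Java 9+ for ProcessHandle; the Ignite call itself is left as a comment since it needs ignite-core on the classpath):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PerProcessWorkDir {
    // Derive a work directory unique to this JVM, so multiple nodes
    // on the same server never clash on the marshaller/work files.
    static Path create() throws IOException {
        long pid = ProcessHandle.current().pid();
        Path workDir = Paths.get(System.getProperty("java.io.tmpdir"), "ignite-work-" + pid);
        Files.createDirectories(workDir);
        // cfg.setWorkDirectory(workDir.toString()); // on the real IgniteConfiguration
        return workDir;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(create()); // e.g. /tmp/ignite-work-12345
    }
}
```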

As a bit of extra information - I'm writing a server that uses a native
library that is not thread safe, and due to native libraries being
statically loaded can only have one instance per process. As I need to allow
the server to process concurrent requests I will have many nodes running to
process them but they will all (initially at least) be on the same physical
server.

Thanks
Jon




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error running ignite in YARN

2017-11-30 Thread ilya.kasnacheev
Hello Juan!

We are looking into the Yarn portion of this case.

Meanwhile, could you tell us what happens if you fix the launch_container.sh
script?

I believe it should be 

export BASH_FUNC_run_prestart="() {  su -s /bin/bash $SVC_USER -c "cd
$WORKING_DIR && $EXEC_PATH --config '$CONF_DIR' start $DAEMON_FLAGS"

without the first pair of parens.

If this script is auto-generated, I imagine there's a variable somewhere
with the name BASH_FUNC_run_prestart; please ensure it doesn't have an extra
set of parens in the name.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Retrieving keys very slow from a partitioned, no backup, no replicated cache

2017-11-30 Thread Alexey Popov
Hi Anand,

Ignite will collect a batch of updates for multiple operations if you enable
write-behind.
So, in your case, this will be done for entry.setValue() within Cache.invoke.

Ignite will then make a writeAll() call for the batch.

If your own CacheStore implementation does not override writeAll(), the
default implementation will be used:

for (Cache.Entry<K, V> entry : batch)
    write(entry);

So, please implement writeAll() with respect to your legacy DB to get a
performance boost for batch updates.
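To illustrate why overriding writeAll() matters, here is a self-contained sketch with a stand-in store (a HashMap rather than a real DB; the class is hypothetical, not Ignite's CacheStore API) that counts simulated round trips to the backing database:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class BatchingStore {
    int roundTrips = 0;                         // simulated trips to the backing DB
    final Map<Integer, String> db = new HashMap<>();

    // Default behavior: one round trip per entry.
    void write(Map.Entry<Integer, String> e) {
        roundTrips++;
        db.put(e.getKey(), e.getValue());
    }

    // Overridden writeAll(): one batched round trip for the whole collection,
    // analogous to JDBC addBatch()/executeBatch() against a legacy database.
    void writeAll(Collection<Map.Entry<Integer, String>> entries) {
        roundTrips++;
        for (Map.Entry<Integer, String> e : entries)
            db.put(e.getKey(), e.getValue());
    }

    public static void main(String[] args) {
        Map<Integer, String> batch = new LinkedHashMap<>();
        for (int i = 0; i < 100; i++)
            batch.put(i, "v" + i);

        BatchingStore perEntry = new BatchingStore();
        for (Map.Entry<Integer, String> e : batch.entrySet())
            perEntry.write(e);

        BatchingStore batched = new BatchingStore();
        batched.writeAll(batch.entrySet());

        System.out.println(perEntry.roundTrips + " vs " + batched.roundTrips); // prints "100 vs 1"
    }
}
```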

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: Re: Re: Re: JdbcQueryExecuteResult cast error

2017-11-30 Thread Denis Mekhanikov
> add additional synchronization?
> But how?

You should make sure that one JDBC connection is not used from multiple
threads simultaneously. You can, for example, acquire a lock before
using it.
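The locking idea can be sketched as follows (a minimal Java sketch with a stand-in resource rather than a real JDBC connection; the counter only exists to demonstrate that the lock serializes access):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class GuardedConnection {
    private final ReentrantLock lock = new ReentrantLock();
    private final AtomicInteger inUse = new AtomicInteger();
    int overlaps = 0; // counts concurrent uses of the "connection" (should stay 0)

    // Serialize all access to the shared connection-like resource.
    void execute(Runnable query) {
        lock.lock(); // only one thread may use the connection at a time
        try {
            if (inUse.incrementAndGet() > 1)
                overlaps++;
            query.run(); // a real statement, e.g. stmt.executeQuery(...), would go here
            inUse.decrementAndGet();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedConnection conn = new GuardedConnection();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++)
            pool.submit(() -> conn.execute(() -> { }));
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("overlapping uses: " + conn.overlaps); // prints "overlapping uses: 0"
    }
}
```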

> Another thing, the JDBC connection can be closed automatically?

Do you mean closing by timeout or something like that? There is nothing
like that, AFAIK. A JDBC connection can only be closed if corrupted
messages are encountered, as in your case.

Denis

Thu, Nov 30, 2017 at 11:35, Lucky:

> Denis,
>
>  Another thing, the JDBC connection can be closed automatically?
>
>
>
>
>


Re:Re: Re: Re: Re: JdbcQueryExecuteResult cast error

2017-11-30 Thread Lucky
Denis,


 Another thing, the JDBC connection can be closed automatically?


Re:Re: Re: Re: Re: JdbcQueryExecuteResult cast error

2017-11-30 Thread Lucky
Denis,


   add additional synchronization?
   But how?


   Thanks.






At 2017-11-30 14:52:05, "Denis Mekhanikov"  wrote:


Lucky,

If it's possible that this code is executed concurrently, then you need to add
additional synchronization. Otherwise correct operation of the JDBC driver is
not guaranteed.

Denis




Re:Re: Re:Poor performance select query with jdbc thin mode

2017-11-30 Thread Lucky
Denis,
  It's worse!
  Only a few SQL queries got faster; the others took more time.
  Thanks.






On 2017-11-29 21:31:05, "Denis Mekhanikov" wrote:

Lucky,


Try enabling the enforceJoinOrder parameter in the JDBC connection string and
let us know the result.
Your JDBC connection string should look like this: 
jdbc:ignite:thin://127.0.0.1?enforceJoinOrder=true


Denis