Re: same cache cannot update twice in one transaction

2019-02-26 Thread xmw45688
It seems that this enhancement has not yet been implemented for the following
case:
trx.start() {
1. update t1 set col1='a' where col2='c';
2. update the same table t1 with the cache API.

}
trx.end();

Can someone confirm?  
many thanks, Xinmin



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Fwd: Re: On Multiple Endpoints Mode of JDBC Driver

2019-02-26 Thread 李玉珏

Hi,

Since JDBC can't achieve multi-endpoint load balancing, we want to use the
affinityCall(...) mechanism to achieve load balancing, that is, to
obtain and use a JDBC Connection inside an IgniteCallable implementation.

How can we efficiently obtain and use the JDBC Connection?
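
For what it's worth, the pattern I have in mind looks roughly like this (a sketch only: the cache name, table name, and local thin-driver URL are assumptions, and in practice the Connection should be pooled per node rather than opened per call):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteCallable;

public class AffinityJdbcSketch {
    /** Runs a query on the node that owns affKey, over a node-local thin connection. */
    public static Object queryNearData(Ignite ignite, Object affKey) {
        return ignite.compute().affinityCall("myCache", affKey, (IgniteCallable<Object>)() -> {
            // The callable executes on the primary node for affKey, so a
            // connection to 127.0.0.1 talks to the node holding the data.
            try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
                 ResultSet rs = conn.createStatement().executeQuery("SELECT count(*) FROM MyTable")) {
                return rs.next() ? rs.getObject(1) : null;
            }
        });
    }
}
```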

 Forwarded message 
Subject: Re: On Multiple Endpoints Mode of JDBC Driver
Date: Tue, 26 Feb 2019 14:53:17 -0800
From: Denis Magda 
Reply-To: d...@ignite.apache.org
To: dev 



Hello,

You provide a list of IP addresses for the sake of high availability: if
one of the servers goes down, the client reconnects to the next IP
automatically. There is no load balancing in place presently. But in
the next Ignite version we're planning to roll out partition-awareness
support: the client will send a request to the nodes that hold the data
needed for that request.

-
Denis


On Tue, Feb 26, 2019 at 2:48 PM 李玉珏  wrote:


Hi,

Does the JDBC driver have a load balancing function in Multiple Endpoints
mode? For example, "jdbc:ignite:thin://192.168.0.50:101,
192.188.5.40:101, 192.168.10.230:101".
If not, will one node become the bottleneck of the whole system?





Re: Ignite Data streamer optimization

2019-02-26 Thread ashishb888
Sure. But in my case I cannot do so. Are there any other options for single threads?





Re: web-console not displaying the latest version of ignite

2019-02-26 Thread Andrey Novikov
Hi,
Looks like you deployed an old image of apacheignite/web-console-frontend.
What version do you see in the web console footer?






Re: Ignite can not read the date type

2019-02-26 Thread gn01887818
Thank you.





Access a cache loaded by DataStreamer with SQL

2019-02-26 Thread Mike Needham
Hi All,

I have a cache that I loaded using the DataStreamer, and I can confirm the
cache was created by using the ignitevisor utility with the cache
command. However, I cannot query it from any JDBC tools and am not sure why.
Do I need to use CREATE TABLE syntax for this to work instead of
GetOrCreateCache<>(CacheName)? Or is there some other thing on the config
side that I am missing?

Any help is appreciated, as I am just starting to evaluate this for a project.
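
For what it's worth, here is roughly what I would expect to need (a sketch with made-up cache and type names, assuming that SQL visibility requires declaring indexed types, a QueryEntity, or CREATE TABLE, which a bare GetOrCreateCache call does not do):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.configuration.CacheConfiguration;

public class StreamThenQuery {
    public static void load(Ignite ignite) {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("MyCache");
        // Without indexed types (or QueryEntity / CREATE TABLE) the cache
        // has no SQL schema and is invisible to JDBC tools.
        cfg.setIndexedTypes(Integer.class, String.class);
        cfg.setSqlSchema("PUBLIC");

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);

        // Stream data into the now SQL-visible cache.
        try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer(cache.getName())) {
            for (int i = 0; i < 1_000; i++)
                streamer.addData(i, "value-" + i);
        }
        // JDBC tools should now see a STRING table in the PUBLIC schema.
    }
}
```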

-- 
*Some days it just not worth chewing through the restraints*


Re: Performance degradation in case of high volumes

2019-02-26 Thread Antonio Conforti
Hello,


I recap the benchmark scenario:

1) Constant submission of 4000 entries per second, where every entry is an
add (the key contains an updatetime field and changes for every entry).
2) The benchmark starts with no data in cache; the entries are submitted
from an Ignite client node in the cluster (located on HOST1) using the
StreamVisitor.addData() method.
3) The cluster is composed of a total of 8 Ignite server nodes: server nodes
with consistent IDs 1, 3, 5, 7 on HOST1 and server nodes with consistent IDs
2, 4, 6, 8 on HOST2.


Below is the general configuration:

Cache configuration:

1. TRANSACTIONAL
2. partitioned
3. with 1 backup (and affinity with exclude-neighbors enabled)
4. write synchronization mode FULL_ASYNC
5. indexed on key and value (and enabled for SQL queries)


We have also configured:
1. failureDetectionTimeout to 12msec
2. Data region (only 1):
   a. persistence enabled
   b. max size 8 GB
   c. checkpointPageBufferSize 2 GB
3. WAL mode LOG_ONLY
4. WAL archiving disabled (WAL path and WAL archive path set to the same
   value)
5. Pages Write Throttling enabled


I also ran a second scenario with the checkpoint frequency set to 60
(10 minutes), as suggested, and with direct I/O enabled and WAL mode set to
NONE.


Due to space constraints, you can find the logs and configuration file of
only server node 1 attached, for both scenarios, as described below:

1) folder 20190222: log and config for scenario 1)
2) folder 20190225: log and config for scenario 2)
3) IGN_DF_CMF_QUOTE_PK.java: key entity inserted in cache
4) IGN_DF_CMF_QUOTE.java: entity data inserted in cache

case-Ignite.zip
  

If you need another benchmark with a specific configuration, just let me know.

Thanks.





Re: OutOfMemoryError in ClusterProcessor.updateNodeMetrics

2019-02-26 Thread Ilya Kasnacheev
Hello!

Unfortunately, it seems that you will have to disable metrics for the
duration.

Regards,
-- 
Ilya Kasnacheev


Tue, 19 Feb 2019 at 20:51, Ruslan Kamashev :

> Probably it's related to
> https://issues.apache.org/jira/browse/IGNITE-10925
>
> On 18 Feb 2019, at 20:42, Ruslan Kamashev  wrote:
>
> Hi,
>
> Could you tell me something about my problem?
> Before the OOM error I was getting warning messages such as "Failed to
> unmarshal node metrics:".
>
> Caches:
> 1 cache - Persistence, FSYNC, PRIMARY_SYNC, 3 backups by Availability
> Zones (AZ) using ClusterNodeAttributeAffinityBackupFilter
>
> Topology:
> 12 nodes (4 nodes per AZ)
>
> Version:
> Apache Ignite 2.7
>
> Stacktrace:
> java.lang.OutOfMemoryError: Java heap space
> at java.util.HashMap.resize(HashMap.java:703)
> at java.util.HashMap.putVal(HashMap.java:662)
> at java.util.HashMap.put(HashMap.java:611)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readLinkedHashMap(OptimizedObjectInputStream.java:741)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readLinkedHashSet(OptimizedObjectInputStream.java:762)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:316)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:416)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readLinkedList(OptimizedObjectInputStream.java:714)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:310)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:416)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readLinkedList(OptimizedObjectInputStream.java:714)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:310)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:416)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readFields(OptimizedObjectInputStream.java:519)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readSerializable(OptimizedObjectInputStream.java:611)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.read(OptimizedClassDescriptor.java:954)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:346)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:416)
> at
> org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:228)
> at
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
> at
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1762)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
> at
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:121)
> at
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
> at
> org.apache.ignite.internal.util.IgniteUtils.unmarshalZip(IgniteUtils.java:10111)
> at
> org.apache.ignite.internal.processors.cluster.ClusterProcessor.updateNodeMetrics(ClusterProcessor.java:435)
>
>
>


introspection of ignite data

2019-02-26 Thread Scott Cote
I am troubleshooting a SQL problem where I'm issuing a "select" statement and
the parser is not finding my table.

IgniteSqlException: Failed to parse query.  Table "FOOBOO" not found; SQL 
statement:\nselect * from FOOBOO [42102-197]

What API can I call against either an instance of IgniteCache or Ignite to
find the names of the tables that are present, if any?

I want to be able to troubleshoot from inside a Java debugger, where I have
the instances present, and/or later call an API for diagnostics.

TIA.

SCott
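
For the record, the two approaches I am considering (a sketch only; the host is a guess for a local node): calling ignite.cacheNames() from the debugger to list caches, and standard JDBC metadata through the thin driver:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListIgniteTables {
    public static void main(String[] args) throws Exception {
        // Standard JDBC metadata goes through the Ignite thin driver;
        // 127.0.0.1 assumes a node running locally.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) {
            DatabaseMetaData md = conn.getMetaData();
            // List every table visible to SQL, schema-qualified.
            try (ResultSet rs = md.getTables(null, null, "%", new String[] {"TABLE"})) {
                while (rs.next())
                    System.out.println(rs.getString("TABLE_SCHEM") + "." + rs.getString("TABLE_NAME"));
            }
        }
    }
}
```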



User authentication and persistence

2019-02-26 Thread Mikael

Hi!

The docs say that I need to have persistence enabled to use user
authentication. I assume this means that I need persistence enabled on the
default data region; can I still have other regions without persistence
enabled?


Mikael
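
For reference, the kind of configuration I mean would look roughly like this (a sketch, not a verified answer; region names are my own): the default region persistent, plus an extra pure in-memory region:

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Persistent default region (as the docs require for authentication). -->
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="name" value="Default_Region"/>
                <property name="persistenceEnabled" value="true"/>
            </bean>
        </property>
        <!-- Additional region without persistence. -->
        <property name="dataRegionConfigurations">
            <list>
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="In_Memory_Region"/>
                    <property name="persistenceEnabled" value="false"/>
                </bean>
            </list>
        </property>
    </bean>
</property>
```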




Re: Question about ignite data storage

2019-02-26 Thread Ilya Kasnacheev
Hello!

1) Roughly yes: you need to account for checkpoints; pure RAM does not need
wal/, only db/, and persistence also needs a checkpoint page buffer, whose
memory you can add to the region when going pure RAM.

2) I don't think per-cache metrics exist.

Regards,
-- 
Ilya Kasnacheev


Mon, 25 Feb 2019 at 15:02, newigniter :

> Yes of course.
> All of my data is imported through SQL. I have it in the warehouse and
> then just insert it into Ignite.
> I was trying to get the size in bytes for a single row through a SQL query
> and then, based on that, predict the resources needed. That's why I was
> confused to see so many more resources needed in Ignite when I imported a
> small amount of data. I'm not sure whether I was going in the right
> direction with this?
>
> Btw, 2 additional questions:
>
> 1.) after I import all of my data to ignite, and when I see how much disk
> space was needed for it, is it safe to presume that the same amount of RAM
> is needed if I want to have all of the data in memory?
> 2.) is it possible to see how much each cache takes memory? For disk, I can
> get it by checking cache directory sizes. Can I do something similar to
> check memory per cache?
>
> Tnx.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance degradation in case of high volumes

2019-02-26 Thread Ilya Kasnacheev
Hello!

Can you please provide full log? It's hard to say what is going on here.
Can you please also share your Ignite/cache configuration and describe your
work load?

Regards,
-- 
Ilya Kasnacheev


Mon, 25 Feb 2019 at 17:32, Dodong Juan :

> I did observe the same thing with ignite and was not able to resolve it at
> all.
> So I am very interested on a resolution for this too.
>
> > On Feb 25, 2019, at 9:30 AM, Antonio Conforti 
> wrote:
> >
> > Hello,
> >
> > first of all thanks for your advice.
> >
> > Before reading your post, I ran another test session (at about 15:45)
> > with the same rate, before changing the checkpoint frequency you
> > suggested, but with data already loaded.
> >
> >
> > I observed that performance degrades suddenly, and not after a while
> > (about 45 minutes) as observed in my first session when the cache was
> > empty.
> > Below you can see the statistics:
> >
> > 2019-02-22 15:49:03.992  INFO 5271 --- [oint-thread-#67]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=ab04c732-a72c-4398-8a09-31f05243a7ad, pages=129721,
> > markPos=FileWALPointer [idx=470, fileOff=35838781, len=79426],
> > walSegmentsCleared=11, walSegmentsCovered=[460 - 469],
> markDuration=387ms,
> > pagesWrite=1149ms, fsync=227229ms, total=228765ms]
> > 2019-02-22 15:53:25.681  INFO 5271 --- [oint-thread-#67]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=cf6e6d68-ac4d-4445-becb-b7bcbb5aee32, pages=178943,
> > markPos=FileWALPointer [idx=484, fileOff=61358243, len=79426],
> > walSegmentsCleared=14, walSegmentsCovered=[470 - 483],
> markDuration=1316ms,
> > pagesWrite=1043ms, fsync=259329ms, total=261688ms]
> > 2019-02-22 15:56:07.878  INFO 5271 --- [oint-thread-#67]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=86154f6f-907e-4972-b0f1-f86be06c3a40, pages=44514,
> > markPos=FileWALPointer [idx=487, fileOff=66267301, len=79426],
> > walSegmentsCleared=0, walSegmentsCovered=[484 - 486], markDuration=888ms,
> > pagesWrite=239ms, fsync=161070ms, total=162197ms]
> >
> > What looks strange in this second session is that I see a delay of
> > between 228 and 260 sec to write 129721 / 178943 pages.
> > I don't observe this behaviour when I start from scratch (with no data in
> > cache): it usually takes about 45 minutes before this performance
> > degradation appears.
> > Any clues on what the cause might be?
> >
> >
> > I started another test session today with the suggested frequency
> > (checkpointingFrequency=60) and also configured direct I/O and set the
> > WAL mode to NONE, as suggested in the performance tuning docs, but the
> > observed result is quite the same.
> >
> >
> >
> > 2019-02-25 09:34:02.284  INFO 1208 --- [oint-thread-#66]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=a9423ba9-9d93-40a3-bedb-7379a617f716, pages=78,
> markPos=FileWALPointer
> > [idx=0, fileOff=0, len=0], walSegmentsCleared=0, walSegmentsCovered=[],
> > markDuration=13ms, pagesWrite=15ms, fsync=11ms, total=39ms]
> > 2019-02-25 10:04:04.082  INFO 1208 --- [oint-thread-#66]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=bd76a704-e125-42e1-b2a8-d285cf15246d, pages=52252,
> > markPos=FileWALPointer [idx=0, fileOff=0, len=0], walSegmentsCleared=0,
> > walSegmentsCovered=[], markDuration=144ms, pagesWrite=1560ms,
> fsync=123ms,
> > total=1827ms]
> > 2019-02-25 10:14:27.275  INFO 1208 --- [oint-thread-#66]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=a10f5483-050c-4678-98bd-deca7695e782, pages=368810,
> > markPos=FileWALPointer [idx=0, fileOff=0, len=0], walSegmentsCleared=0,
> > walSegmentsCovered=[], markDuration=323ms, pagesWrite=24669ms,
> fsync=21ms,
> > total=25013ms]
> > 2019-02-25 10:24:55.492  INFO 1208 --- [oint-thread-#66]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=bfab59c6-a1a0-4391-92e5-a23b6c416625, pages=382555,
> > markPos=FileWALPointer [idx=0, fileOff=0, len=0], walSegmentsCleared=0,
> > walSegmentsCovered=[], markDuration=225ms, pagesWrite=52912ms,
> fsync=93ms,
> > total=53230ms]
> > 2019-02-25 10:35:22.512  INFO 1208 --- [oint-thread-#66]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=feff29e3-db80-4002-82f8-2011e4eca12e, pages=436129,
> > markPos=FileWALPointer [idx=0, fileOff=0, len=0], walSegmentsCleared=0,
> > walSegmentsCovered=[], markDuration=70ms, pagesWrite=80164ms, fsync=7ms,
> > total=80241ms]
> > 2019-02-25 10:45:34.524  INFO 1208 --- [oint-thread-#66]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Checkpoint finished
> > [cpId=7e3f181b-21d8-4dd1-a018-09711142caf4, pages=384415,
> > markPos=FileWALPointer [idx=0, fileOff=0, len=0], walSegmentsCleared=0,
> > walSegmentsCovered=[], markDuration=56ms, pagesWrite=92186ms, fsync=3ms,
> > total=92245ms]
> > 2019-02-25 10:55:43.139  INFO 1208 --- [oint-thread-#66]
> > i.i.p.c.p.GridCacheDatabaseSharedManager : Che

Re: Ignite can not read the date type

2019-02-26 Thread Ilya Kasnacheev
Hello!

What do you expect to happen here? If it's a Date then use getDate().

Regards,
-- 
Ilya Kasnacheev


Tue, 26 Feb 2019 at 12:52, gn01887818 :

> Ignite defines a date type in a field.
> JdbcResultSet uses the getBytes function (org.apache.ignite.internal.jdbc2).
> Because the field is of java.sql.Date type, execution falls through to the
> last else and throws an exception.
> How should this be dealt with?
>
>  @Override public byte[] getBytes(int colIdx) throws SQLException {
> Object val = getValue(colIdx);
>
> if (val == null)
> return null;
>
> Class cls = val.getClass();
>
> if (cls == byte[].class)
> return (byte[])val;
> else if (cls == Byte.class)
> return new byte[] {(byte)val};
> else if (cls == Short.class) {
> short x = (short)val;
>
> return new byte[] {(byte)(x >> 8), (byte)x};
> }
> else if (cls == Integer.class) {
> int x = (int)val;
>
> return new byte[] { (byte) (x >> 24), (byte) (x >> 16), (byte)
> (x >> 8), (byte) x};
> }
> else if (cls == Long.class) {
> long x = (long)val;
>
> return new byte[] {(byte) (x >> 56), (byte) (x >> 48), (byte)
> (x
> >> 40), (byte) (x >> 32),
> (byte) (x >> 24), (byte) (x >> 16), (byte) (x >> 8), (byte)
> x};
> }
> else if (cls == String.class)
> return ((String)val).getBytes();
> else
> throw new SQLException("Cannot convert to byte[]: " + val,
> SqlStateCode.CONVERSION_FAILED);
> }
>
>
> Can refer to line 462
>
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/jdbc2/JdbcResultSet.java
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
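
As a stdlib-only illustration (hypothetical, not Ignite code): if one did want a byte[] for a java.sql.Date, it could be encoded through its epoch-millis long, reusing the Long layout from the snippet above. In most cases, though, simply calling getDate() instead of getBytes() is the right fix.

```java
import java.nio.ByteBuffer;
import java.sql.Date;

public class DateBytes {
    // Hypothetical workaround: encode a java.sql.Date as its 8-byte
    // big-endian epoch-millis value, mirroring the Long branch of
    // JdbcResultSet.getBytes(). Illustration only, not Ignite's behavior.
    static byte[] toBytes(Date d) {
        return ByteBuffer.allocate(8).putLong(d.getTime()).array();
    }

    static Date fromBytes(byte[] b) {
        return new Date(ByteBuffer.wrap(b).getLong());
    }

    public static void main(String[] args) {
        Date d = Date.valueOf("2019-02-26");
        byte[] b = toBytes(d);
        System.out.println(b.length);               // 8
        System.out.println(fromBytes(b).equals(d)); // true
    }
}
```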


IgniteSpiException: Conflicts during configuration merge for cache

2019-02-26 Thread newigniter
I have a two-node cluster, each node on its own separate machine.
Replicated mode is turned on, as is native persistence.
While inserting data into the cluster, one of the nodes failed (still checking
why).
When I try to bring it back up (or start a new one with the same
configuration), it does not join the cluster but throws an exception. I
added the output from the logs. I'm not sure why I get the errors *Conflicts
during configuration merge for cache*.

Can someone please help with this?


[11:18:01,426][SEVERE][main][IgniteKernal] Failed to start manager:
GridManagerAdapter [enabled=true,
name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI:
TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000,
marsh=JdkMarshaller
[clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@2accaec2],
reconCnt=10, reconDelay=2000, maxAckTimeout=60, forceSrvMode=false,
clientReconnectDisabled=false, internalLsnr=null]
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:861)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:731)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:700)
at org.apache.ignite.Ignition.start(Ignition.java:348)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Conflicts during
configuration merge for cache 'SQL_PUBLIC_FEEDDM_STG' : 
FEEDDM_STG conflict: 
keyType is different:
local=SQL_PUBLIC_FEEDDM_STG_0a684069_17b3_4d23_a843_89ae3037a5b0_KEY,
received=SQL_PUBLIC_FEEDDM_STG_05dcf649_36b0_41fd_a315_8e869505e506_KEY
valType is different:
local=SQL_PUBLIC_FEEDDM_STG_0a684069_17b3_4d23_a843_89ae3037a5b0,
received=SQL_PUBLIC_FEEDDM_STG_05dcf649_36b0_41fd_a315_8e869505e506

Conflicts during configuration merge for cache 'SQL_PUBLIC_FEEDDM' : 
FEEDDM conflict: 
keyType is different:
local=SQL_PUBLIC_FEEDDM_a424a569_d572_40d1_861d_8dda6562d377_KEY,
received=SQL_PUBLIC_FEEDDM_7aa81a6d_d6ab_4df5_8013_a722995d55a0_KEY
valType is different:
local=SQL_PUBLIC_FEEDDM_a424a569_d572_40d1_861d_8dda6562d377,
received=SQL_PUBLIC_FEEDDM_7aa81a6d_d6ab_4df5_8013_a722995d55a0

at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1946)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:969)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
... 13 more
[11:18:01,429][SEVERE][main][IgniteKernal] Got exception while starting
(will rollback startup routine).
class org.apache.ignite.IgniteCheckedException: Failed to start manager:
GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1687)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:861)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:731)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:700)
at org.apache.ignite.Ignition.start(Ignition.java:348)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackT

Ignite can not read the date type

2019-02-26 Thread gn01887818
Ignite defines a date type in a field.
JdbcResultSet uses the getBytes function (org.apache.ignite.internal.jdbc2).
Because the field is of java.sql.Date type, execution falls through to the
last else and throws an exception.
How should this be dealt with?

@Override public byte[] getBytes(int colIdx) throws SQLException {
    Object val = getValue(colIdx);

    if (val == null)
        return null;

    Class cls = val.getClass();

    if (cls == byte[].class)
        return (byte[])val;
    else if (cls == Byte.class)
        return new byte[] {(byte)val};
    else if (cls == Short.class) {
        short x = (short)val;

        return new byte[] {(byte)(x >> 8), (byte)x};
    }
    else if (cls == Integer.class) {
        int x = (int)val;

        return new byte[] {(byte)(x >> 24), (byte)(x >> 16), (byte)(x >> 8), (byte)x};
    }
    else if (cls == Long.class) {
        long x = (long)val;

        return new byte[] {(byte)(x >> 56), (byte)(x >> 48), (byte)(x >> 40), (byte)(x >> 32),
            (byte)(x >> 24), (byte)(x >> 16), (byte)(x >> 8), (byte)x};
    }
    else if (cls == String.class)
        return ((String)val).getBytes();
    else
        throw new SQLException("Cannot convert to byte[]: " + val,
            SqlStateCode.CONVERSION_FAILED);
}


See line 462:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/jdbc2/JdbcResultSet.java





Re: C++ compute delay

2019-02-26 Thread Ilya Kasnacheev
Hello!

Yes, this should do it. Maybe the JVM is not started at the time of the dump,
so only C++ threads are output? Can you dump the C++ threads as well? (Since
in this dump we only see the top of the stack as opposed to the whole stack.)

Regards,
-- 
Ilya Kasnacheev


Mon, 25 Feb 2019 at 16:20, F.D. :

> Hi,
>
> This is the command that I launched:
>
> cmd /c  "" -m -l 7600
> >> C:\temp\$(hostname).log
>
>
> where 7600 is the PID of the JVM. Is it correct?
>
> Thanks,
>   F.D.
>
> On Mon, Feb 25, 2019 at 10:52 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Regardless of extension, your thread dumps contain almost nothing
>> relevant.
>>
>> There is one guess that can be made:
>>
>> 0x07fa7fe069c1ignite_dll!?save_object_ptr@
>> ?$pointer_oserializer@Vportable_binary_oarchive
>> @@VVolatilityCubeModel@credit@@@detail@archive@boost@
>> @EEBAXAEAVbasic_oarchive@234@PEBX@Z + 0x4c901
>>
>> Maybe it's something about taking a lot of time to serialize your
>> functions.
>>
>> Still it's hard to say without Java thread dump.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
Mon, 25 Feb 2019 at 11:39, F.D. :
>>
>>> Hi,
>>> in my previous message there was the dump of the 4 servers. Maybe the
>>> extension of the files is misleading.
>>>
>>> Thanks,
>>>F.D.
>>>
>>> On Mon, Feb 25, 2019 at 7:48 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 Please post Java stack traces collected with jstack utility. Otherwise
 there's nothing there for us to see, unfortunately.

 Regards,
 --
 Ilya Kasnacheev


Thu, 21 Feb 2019 at 18:33, F.D. :

> Hi,
> I reduced the grid to only 4 servers, so it was easier to collect
> dumps. In the logs you'll find 3 attempts: the first and the third are
> dumps of the problem; the second is a dump with the server idle.
>
> Thanks for your support!
>
> On Thu, Feb 21, 2019 at 10:58 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Preferably all nodes in the cluster.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
Thu, 21 Feb 2019 at 12:39, F.D. :
>>
>>> Hi,
>>>
>>> thread log of the client?
>>>
>>> On Thu, Feb 21, 2019 at 10:29 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 Can you collect some thread dumps (Java ones, using jstack) during
 those 10 seconds freezes?

 Regards,
 --
 Ilya Kasnacheev


Wed, 20 Feb 2019 at 20:18, F.D. :

> Hi igniters,
> I have a problem when I launch a sequence of about 350 compute
> functions in my cluster of 10 nodes. I observe a delay of about 10 sec on
> the first launch; after that, it works smoothly.
> If I launch the same number of tasks on a single node, there's no delay.
> It seems like a problem in the discovery process of the whole grid.
>
> Do you have any ideas?
>
> Thanks,
>F. D.
>