I am also getting the below error:
Downloading from central:
https://repo.maven.apache.org/maven2/classworlds/classworlds/1.1/classworlds-1.1.jar
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] Unresolveable build extension: Plugin
org.apache.felix:maven-bundle-plu
Hi Naveen,
FYI, there is an issue in JIRA: IGNITE-6917 "SQL: implement COPY command for
efficient data loading"
https://issues.apache.org/jira/browse/IGNITE-6917
It is already merged to master and will be part of Ignite 2.5.
--
Alexey Kuznetsov
Naveen,
Ignite provides an out-of-the-box implementation for RDBMS. The easiest way to
integrate would be to use Web Console to generate all the required POJO
classes and configurations:
https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration
-Val
--
Sent from: http://apache-ignite-us
Hi Naveen,
I had a similar situation. Two things you can do:
1. Decouple file reading from cache streaming, so that both can be handled
in separate threads asynchronously.
2. Once you have the data from the CSV in a collection, use parallelStream()
to add data to the streamer from multiple threads.
Thanks,
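The decoupling in step 1 above can be sketched with only the JDK: a producer thread enqueues CSV lines while consumer threads parse and load them. This is a sketch, not Ignite code — the `cache` map is a stand-in for `IgniteDataStreamer.addData()`, and all names here are illustrative.

```java
import java.util.*;
import java.util.concurrent.*;

public class DecoupledLoader {
    // In-memory stand-in for the IgniteDataStreamer sink (hypothetical).
    static final Map<Integer, String> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        // Stand-in for the CSV file contents.
        List<String> csvLines = Arrays.asList("1,a", "2,b", "3,c", "4,d");

        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        int consumerCount = 2;
        ExecutorService consumers = Executors.newFixedThreadPool(consumerCount);
        String poison = "__EOF__"; // shutdown marker

        // Consumers: parse lines and "stream" them into the sink.
        for (int i = 0; i < consumerCount; i++) {
            consumers.submit(() -> {
                try {
                    for (String line; !(line = queue.take()).equals(poison); ) {
                        String[] parts = line.split(",");
                        // Real code would call streamer.addData(key, pojo) here.
                        cache.put(Integer.parseInt(parts[0]), parts[1]);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Producer: read lines (here, from the list) and enqueue them.
        for (String line : csvLines)
            queue.put(line);
        for (int i = 0; i < consumerCount; i++)
            queue.put(poison); // one marker per consumer

        consumers.shutdown();
        consumers.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(cache.size()); // prints 4
    }
}
```

The same shape applies with a real streamer: keep file I/O on the producer side and let multiple consumers call `addData()`, which is thread-safe.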
I suspect this happens because you have a repository mirror defined in your
.m2/settings.xml that matches all repos. For example:

<mirror>
  <id>my-mirror</id>
  <name>my-repo</name>
  <url>http://acme.com/my-repo</url>
  <mirrorOf>*</mirrorOf>
</mirror>
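If a catch-all mirror is indeed intercepting the plugin download, Maven's mirrorOf syntax supports exclusions. A sketch of a narrower mirror entry — the repository id `some-repo` is a placeholder for whichever repository should bypass the mirror:

```xml
<mirror>
  <id>my-mirror</id>
  <name>my-repo</name>
  <url>http://acme.com/my-repo</url>
  <!-- Mirror everything except the repository that hosts the
       missing plugin; "some-repo" is a placeholder id. -->
  <mirrorOf>*,!some-repo</mirrorOf>
</mirror>
```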
From: vkulichenko
Sent: Thursday, March 8, 2018
Hi Stanislav,
Thanks for info and the note on the terminology.
So in my setup I'm partitioning by chunks of time. If I turn eagerTtl to false,
how would the cleanup look? Would I periodically scan the partitions that I
want to be expired? Are there any best practices to that end?
Additionally ar
Hi Stan
I do not want to use Oracle with native persistence, I only want to use Oracle
as the persistence layer.
Are you sure we need to implement a CacheStore for each table we have in the
cluster?
If that is the case, we need to have a separate code base for Oracle as the
persistence layer and another versio
Hi Naveen,
Please refer to this page https://apacheignite.readme.io/docs/3rd-party-store.
In short, you need to implement a CacheStore or use one of the standard
implementations (CacheJdbcBlobStore, CacheJdbcPojoStore)
and add it to your cache configuration.
Also, you can set DataRegionConfigurat
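Wiring a `CacheJdbcPojoStore` into the XML configuration might look like the sketch below. The cache name `personCache` and the data source bean name `oracleDataSource` are placeholders, and a real setup would also declare the key/value type mappings for each table:

```xml
<property name="cacheConfiguration">
  <bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- "personCache" is a placeholder name. -->
    <property name="name" value="personCache"/>
    <!-- Load missing entries from, and propagate updates to, Oracle. -->
    <property name="readThrough" value="true"/>
    <property name="writeThrough" value="true"/>
    <property name="cacheStoreFactory">
      <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
        <!-- Name of a javax.sql.DataSource bean pointing at Oracle. -->
        <property name="dataSourceBean" value="oracleDataSource"/>
      </bean>
    </property>
  </bean>
</property>
```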
Hi Jose
I was asking how I can configure Oracle DB as the persistence layer.
At the moment I am using Ignite native persistence, but I would like to use
Oracle DB as the persistence layer instead.
How can I do this, and what changes should I make to the config file?
My config file looks like this
Hi DH
I am not using any custom StreamReceiver; my requirement is very simple.
I have huge data in a CSV file: I read it line by line, parse each line,
populate the POJO, and use the DataStreamer to load the data into the cache.
while (sc.hasNextLine()) {
ct++;
Hi Stan
Yeah, it seems to be working
Thanks
Naveen
Hi guys - I have an Ignite cluster with persistence enabled that has 200
million events in it. Right now read throughput is around 3000 events per
second. I have increased the IOPS to 1 and even then I have the same
performance. Am I doing something really wrong or is this how it performs
with la
Hi Naveen,
The native persistence doesn’t require you to reload data to memory, the tables
should be available after the activation.
How do you start the nodes? Do you use ignite.sh/ignite.bat?
Have you downloaded Ignite as .zip archive or via Maven?
It’s possible that your persistence files are
Hi,
A terminology nitpick: it seems that you're talking about expiry; eviction
is a similar mechanism, but it is based on data size, not time.
Have you tried setting CacheConfiguration.eagerTtl = false
(https://apacheignite.readme.io/v2.3/docs/expiry-policies#section-eager-ttl)?
With that se
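For reference, disabling eager TTL in the XML configuration might look like the sketch below. The cache name `eventCache` and the one-hour expiry are illustrative, not from the thread:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- "eventCache" is a placeholder name. -->
  <property name="name" value="eventCache"/>
  <!-- Disable the background cleanup thread; expired entries are
       then removed lazily when they are touched. -->
  <property name="eagerTtl" value="false"/>
  <!-- Entries expire one hour after creation. -->
  <property name="expiryPolicyFactory">
    <bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
      <constructor-arg>
        <bean class="javax.cache.expiry.Duration">
          <constructor-arg value="HOURS"/>
          <constructor-arg value="1"/>
        </bean>
      </constructor-arg>
    </bean>
  </property>
</bean>
```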
Hi Mikhail,
Unfortunately, the problem has repeated itself on ignite-core-2.3.3
27.02.18 00:27:55 ERROR GridCacheIoManager - Failed to process message
[senderId=8f99c887-cd4b-4c38-a649-ca430040d535, messageType=class
o.a.i.i.processors.cache.distributed.dht.atomic.
GridNearAtomicUpdateResponse]