Hi,
Our use case is mainly caching, as below:
1. Write-intensive: ~100k put requests/s
2. Short TTL, about 10-15 mins
So we plan to run with native persistence disabled and no backups.
The data is not critical, but if possible, we don't want to just throw it
away with
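For reference, a cache configuration matching those requirements (no backups, ~10 minute TTL; the cache name and key/value types here are hypothetical) might be sketched as:

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, byte[]> cfg = new CacheConfiguration<>("shortTtlCache");
// No redundant copies: data is lost on node failure, acceptable for a cache.
cfg.setBackups(0);
// Entries expire 10 minutes after creation.
cfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10)));
// Remove expired entries eagerly in the background instead of on access.
cfg.setEagerTtl(true);
```

Native persistence is controlled per data region, and the default region has it disabled unless explicitly enabled, so nothing extra is needed on that side.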
Hi Mathias,
I use Microsoft Visual Studio Community 2017 (version 15.6.4) and your
project starts without any exceptions.
I think you can try removing the work directory
(C:\Users\\AppData\Local\Temp\ignite\work by default) before starting the
app.
If the issue is still reproducible, please attach ignite
Hi,
I'm wrestling with Continuous Queries. I'm successfully writing data into
Ignite via JDBC; now I want to do a Continuous Query from a client app as
I'm writing that data. I got past several issues by setting
'peerClassLoadingEnabled', using binary objects, and implementing my local
listener
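For anyone following along, a minimal client-side setup of that shape might look like the following sketch; the cache name is hypothetical and 'ignite' is an already-started client node:

```java
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.query.ContinuousQuery;

// Work with binary objects so the client does not need the value classes.
IgniteCache<Long, BinaryObject> cache = ignite.<Long, BinaryObject>cache("PERSON_CACHE").withKeepBinary();

ContinuousQuery<Long, BinaryObject> qry = new ContinuousQuery<>();
// Local listener: invoked on this client for every matching cache update.
qry.setLocalListener(events -> {
    for (CacheEntryEvent<? extends Long, ? extends BinaryObject> e : events)
        System.out.println("key=" + e.getKey() + ", value=" + e.getValue());
});
// Keep the returned cursor open for as long as updates should be received.
cache.query(qry);
```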
Hi All,
I am wondering if Ignite has a Spark Structured Streaming sink? If I can use
any JDBC, then my question really is: does Spark Structured Streaming have a
JDBC sink, or do I need to use ForeachWriter? I see the following code in
this link
Hi,
I was able to reproduce the issue you mentioned.
I will debug that use-case and create a JIRA ticket in order to track this.
As a workaround, please try to replace the following line:
personCache.withKeepBinary().invoke(id, new
CacheEntryProcessor<> ...
with:
IgniteCache
Thank you for your reply! I have another question: if the Person class has a
member of another class, say Address, and Address has two members, int
streetNo and String streetName, how can I write the CREATE TABLE statement to
create columns for the two members of Address in table Person? Do I
Hi,
> How should I use BinaryObject in the CREATE TABLE statement?
If you want to use binary objects, then there is no need to specify
'VALUE_TYPE'.
Please use the following code:
String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG,
firstName VARCHAR, lastName VARCHAR,
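For readers hitting the same question, a complete statement of that shape (without VALUE_TYPE) might look like the following sketch; the WITH clause and the 'cache' variable are only illustrations:

```java
import org.apache.ignite.cache.query.SqlFieldsQuery;

// 'cache' is any existing cache usable for running DDL statements.
String createTableSQL =
    "CREATE TABLE Persons (" +
    "  id LONG PRIMARY KEY, " +
    "  orgId LONG, " +
    "  firstName VARCHAR, " +
    "  lastName VARCHAR" +
    ") WITH \"template=partitioned\"";
cache.query(new SqlFieldsQuery(createTableSQL)).getAll();
```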
Hello,
> Can I add fields without restarting the cluster?
Yes, it can be done via a DDL command, as Ilya Kasnacheev mentioned.
Let's assume that you created a cache:
CacheConfiguration<Long, Person> cfg = new CacheConfiguration<Long, Person>(PERSON_CACHE_NAME)
    .setIndexedTypes(Long.class, Person.class);
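The DDL route might then be sketched like this; the new column name is hypothetical:

```java
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Add a new field to the Person type at runtime, without restarting the cluster.
ignite.cache(PERSON_CACHE_NAME)
      .query(new SqlFieldsQuery("ALTER TABLE Person ADD COLUMN age INT"))
      .getAll();
```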
Bhaskar,
It's still the same version - 2.2.0.
-Val
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi Roman,
Thanks for the response. I followed the steps you mentioned and also the
below link
https://stackoverflow.com/questions/50817940/failed-to-retrieve-ignite-pods-ip-addresses/50818842#50818842
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: default
---
The above solution works fine with Ignite JDBC; however, I am now trying the
Ignite REST API:
http://localhost:8080/ignite?cmd=qryfldexe&pageSize=100&cacheName=CustomerCache&qry=select+id+from+Customer+c+join+AccountCache.account+a+on+c.id=a.id
I am getting:
"error": "Schema \"ACCOUNTCACHE\" not found; SQL
Hi,
Question was answered in
https://stackoverflow.com/questions/50950480/join-on-apache-ignite-rest-api/50950777#50950777.
BR,
Andrei
Hi,
Please read the documentation more carefully. The lazy flag should be set
on the SqlFieldsQuery object, on the node where you are going to run the
query.
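For example (the table name here is hypothetical):

```java
import org.apache.ignite.cache.query.SqlFieldsQuery;

SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM Person")
    // Fetch the result set page by page instead of materializing it all at once.
    .setLazy(true);
```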
BR,
Andrei
That explains a lot. Thanks, Stan!
Hello,
I have two objects, i.e. account and customer, loaded into the Apache Ignite
server. Both are loaded with data, each stored in its own cache: the account
object/table in accountcache and the customer object/table in customercache.
I am trying to access both the tables
Thanks Andrew,
I want to control this on the server node; it's hard to do from the client
side. Is there any way I can set lazy on the server node?
Thanks
Bhaskar
So to conclude, if I have enabled on-heap storage for a cache (using
CacheConfiguration.setOnheapCacheEnabled(true)), then:
1. Data will still be stored off-heap, but will also be loaded onto the heap.
To avoid out-of-memory errors, I have to set eviction policies.
2. Off-heap entries will be written to disk based on data page
DataRegionConfiguration.setMaxSize() should be used to limit off-heap memory
usage.
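Putting both points together, a configuration sketch along those lines (cache and region names, sizes, and the LRU limit are arbitrary illustrations):

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;

// 1. On-heap layer: cap it with an eviction policy to avoid OutOfMemoryError.
CacheConfiguration<Long, Object> cacheCfg = new CacheConfiguration<>("myCache");
cacheCfg.setOnheapCacheEnabled(true);
cacheCfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));

// 2. Off-heap layer: cap the data region size (512 MB here).
DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("myRegion");
regionCfg.setMaxSize(512L * 1024 * 1024);

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setDataRegionConfigurations(regionCfg);
```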
On Tue, Jun 19, 2018 at 2:35 PM the_palakkaran wrote:
> So how do I limit cache size if ignite native persistence is enabled using
> dataRegionCfg.setPersistenceEnabled(true)? I don't want it to keep a lot of
>
Hi,
This is clearly a usability issue in Ignite. I've created a ticket for it:
https://issues.apache.org/jira/browse/IGNITE-8839.
On Tue, Jun 19, 2018 at 6:17 PM, aealexsandrov
wrote:
> As a workaround, you can try to add execution rights (like in your example)
> to all files under work
Data region and data storage metrics are always local.
There are no cluster-wide versions for them.
See an issue for the documentation improvement:
https://issues.apache.org/jira/browse/IGNITE-8726
There is a code snippet that can help to collect metrics from a client via
Compute.
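One possible shape for such a snippet, broadcast from a client (a sketch, not necessarily the exact code referred to; the region name is an assumption):

```java
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

// Run on every server node and bring back each node's local metric value.
Collection<Long> allocatedPages = ignite.compute().broadcast((IgniteCallable<Long>) () -> {
    Ignite local = Ignition.localIgnite();
    return local.dataRegionMetrics("default").getTotalAllocatedPages();
});
```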
Stan
I suppose that is an issue with updating timestamps rather than with WAL
writes. Try running a load test and compare hash sums of the files before
and after. Also check whether the WAL history grows.
Thanks!
-Dmitry
Hi,
Just because:
1) not all users build their apps from scratch; they might have legacy
code built over a Cassandra DB;
2) native persistence appeared much later than the Cassandra module, and
there is no point in removing it now;
3) it's always better to offer more choices to users.
Anyway,
got it, thanks
On Wed, Jun 20, 2018 at 11:57 AM, dkarachentsev
wrote:
> Hi Oleksandr,
>
> It's OK for discovery, and this message is printed only in debug mode:
>
> if (log.isDebugEnabled())
>     log.error("Exception on direct send: " + e.getMessage(), e);
>
> Just turn off
WAL mode is the default one; all caches are FULL_ASYNC.
...and now I see one of my Ignite servers updated its WAL files, but the
other one still has a last-modification time of the 19th.
It looks like the WAL files are updated once per day.
Hi,
Try setting the lazy flag for the SQL query [1].
[1]
https://apacheignite-sql.readme.io/docs/performance-and-debugging#result-set-lazy-load
On Tue, Jun 19, 2018 at 8:06 PM bhaskar
wrote:
> Hi,
> I have Ignite 5 node cluster with more than dozen tables(Cache) . Our
> client
> are using SQL and
Hi Oleksandr,
It's OK for discovery, and this message is printed only in debug mode:
if (log.isDebugEnabled())
    log.error("Exception on direct send: " + e.getMessage(), e);
Just turn off debug logging for discovery package:
org.apache.ignite.spi.discovery.tcp.
Thanks!
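If you are on Log4j2, the corresponding configuration fragment might look like this (an assumption about the logging setup; the appender name is hypothetical):

```xml
<!-- Silence debug output from discovery only; keep other loggers as configured. -->
<Loggers>
    <Logger name="org.apache.ignite.spi.discovery.tcp" level="INFO"/>
    <Root level="INFO">
        <AppenderRef ref="CONSOLE"/>
    </Root>
</Loggers>
```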
Hi,
What is your configuration? Check WAL mode and path to persistence.
Thanks!
-Dmitry
Hi Ignite team,
I noticed that nothing is written into WAL files even on Ignite restart.
My testing steps:
1) bounce application and ignite cluster
2) perform load testing
3) bounce application and ignite cluster
4) check ignite files:
data files have recent modification time - OK
latest WAL
>> What's the point of scaling persistence manually over allowing Ignite to
scale both RAM and disk layers for you?
Why does an official plugin for Cassandra integration exist, then?