Hi Alex,
Did you get a chance to look at it?
Regards,
Ankit Singhai
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Caused-by-org-h2-jdbc-JdbcSQLException-General-error-java-lang-IllegalMonitorStateException-Attemptet-tp15684p15984.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Thank you for the response.
If I insert data from Hive directly using the query below, the select query
works fine.
insert into table stocks PARTITION (years=2004,months=12,days=3)
values('AAPL',1501236980,120.34);
I think the issue here is that when we insert data using the IGFS API (append
method),
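For context, the IGFS append path being described might look like the following sketch (the IGFS name, file path, and row contents are hypothetical; it assumes a running Ignite node with IGFS configured):

```java
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteFileSystem;
import org.apache.ignite.igfs.IgfsPath;

class IgfsAppendSketch {
    static void appendRow(Ignite ignite) throws Exception {
        // "igfs" and the path below are placeholders for illustration.
        IgniteFileSystem fs = ignite.fileSystem("igfs");

        // Second argument: create the file if it doesn't exist yet.
        try (OutputStream out = fs.append(new IgfsPath("/stocks/part-0001"), true)) {
            out.write("AAPL,1501236980,120.34\n".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```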
I had not seen that page yet – very useful.
There are a few moving parts to getting it working, so I'm not sure I will get
time to really dig into it, but I will have a look for sure.
I did pull a static copy of the source (after git refused to clone the repo
in GitExtensions) and started looking at
Sorry, adding a question:
4. This raises a more general coordination question: you can't create or
locate a cache in the grid until the grid is active (when using Ignite
persistence). This means you need to stall any logic in the server nodes
that wants to do this until the grid is marked as
Hi Michael,
Some more questions:
1. Can I set the grid active once all primaries are active, but
before all backups are active? Or do I need to have the entire cluster
running before setting active to true?
2. Is there an assumption that the cluster size is static and known
at
Hi Folks,
I understand from the docs (https://apacheignite.readme.io/docs/cache-modes)
that replicated caches are implemented using partitioned caches, where every
key has a primary copy and is also backed up on all other nodes in the
cluster, and that when data is queried, lookups would be made from
1 & 2. Starting with 2.0, all data is stored in off-heap memory, regardless of
the persistence configuration. The memory mode configuration has been removed.
3. Yes. Data is always stored in pages, which can transparently reside both
in memory and on disk. If persistence is not enabled, you have only the
in-memory
Czeslaw,
To achieve the best performance you should remodel your data: get rid of
collections, store different types of objects in different caches, and
reference them using foreign keys.
An L2 cache can also be an option, but performance-wise it will be less
effective.
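As a sketch of that remodeling (hypothetical Person/Organization classes and cache names; the embedded collection becomes a separate cache referenced by a foreign-key field):

```java
import org.apache.ignite.cache.query.annotations.QuerySqlField;

class Organization {
    @QuerySqlField(index = true) long id;
    @QuerySqlField String name;
}

class Person {
    @QuerySqlField(index = true) long id;
    // Foreign key to Organization instead of a nested collection of persons.
    @QuerySqlField(index = true) long orgId;
    @QuerySqlField String name;
}

// A cross-cache SQL join then replaces scanning an embedded collection, e.g.:
//   SELECT p.name FROM "persons".Person p
//   JOIN "orgs".Organization o ON p.orgId = o.id
//   WHERE o.name = ?
```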
-Val
A few quick questions:
1. Are the cache memory modes OFFHEAP_TIERED, OFFHEAP_VALUES, and
ONHEAP_TIERED deprecated in Ignite 2.1?
2. In version 2.1, if I don't enable the persistence store, do all data and
indexes get stored on heap, possibly causing OOM errors?
3. Is the virtual
Ignite doesn't have auto-increment functionality (at least for now), so you
need to generate IDs manually. I would load the data, query the existing data
to get the largest ID value, initialize an atomic sequence, and then use this
sequence to generate IDs.
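A sketch of that sequence-initialization approach (the cache, table, and sequence names are hypothetical; it assumes a running Ignite instance with SQL enabled on the cache):

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

class IdGenerationSketch {
    static IgniteAtomicSequence initSequence(Ignite ignite) {
        IgniteCache<Long, Object> cache = ignite.cache("Person");

        // Find the largest ID already loaded (hypothetical table/column names).
        List<List<?>> res = cache.query(
            new SqlFieldsQuery("SELECT MAX(id) FROM Person")).getAll();
        Object max = res.get(0).get(0);
        long maxId = max == null ? 0L : (Long) max;

        // Create (or get) a cluster-wide sequence starting past the max.
        return ignite.atomicSequence("personIdSeq", maxId, true);
    }

    static long nextId(IgniteAtomicSequence seq) {
        return seq.incrementAndGet(); // unique across all nodes
    }
}
```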
-Val
Hello,
Could someone please explain to me how loadCache() queries are distributed to
the Cassandra instances when using the Cassandra Cache Store module?
I used Ignite logging and Cassandra server tracing (system_traces.sessions) to
try to determine how the queries are distributed, but I can't
John,
There are multiple places, I believe. You can check the PageMemory interface
and its implementations, for example.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Persistence-Store-and-sun-misc-UNSAFE-tp15920p15967.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Any update on this?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/IGFS-Error-when-writing-a-file-tp15868p15965.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
I couldn't get it running with v1.9.0.
I bumped the version to the latest, v2.1.0, and could get the Ignite services
up. (Zeppelin only supports v1.9.0, btw.)
I am attaching my config file to this message.
default-config.xml
Thank you! I resolved it by moving the data load operation outside the server
cluster.
Thanks,
Yasser
On Thu, Aug 3, 2017 at 10:01 AM, slava.koptilin
wrote:
> Hi Yasser,
>
> Well, you are using IgniteDataStreamer in order to load data into cache.
> The IgniteDataStreamer
Hi Andrey,
The latest version worked for the connection. However, when I try to import
domain models from my database, it fails past the schema selection
page. Once I select my database and hit next, it doesn't show the two tables
that I have in it. Also, when I select all (instead of my specific
I'd appreciate it if anyone has thoughts on how to solve the above issue.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Data-loss-validation-TopologyValidator-tp15147p15958.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
I think that in your case the core issue may not be "SQL version
is not supported" but the other one, "Cannot establish connection
with the host". Is the node up and running?
Best Regards,
Igor
On Thu, Aug 3, 2017 at 6:43 PM, Igor Sapego wrote:
> Oh, from your video I can
Greetings Igor,
We have been using RazorSQL 2.0.0 running on Windows Server 2012 R2 with
Visual C++ installed.
Ignite 2.0.0 is running on CentOS 7.3.1611, kernel 3.10.0-514 x86_64.
Here is a video run-through of our attempt:
Hi Andrey,
thanks for your response. I actually do call getAll() 10 times every
iteration. If I call getAll() 1,000,000 times like you suggest, I do see the
I/O I would expect. Is it possible the results for the 10 SQL queries are
cached somehow?
Anyway, this solves my problem.
Thanks again.
Hi Pascal,
Looks like you do not actually run a query. Ignite doesn't fetch the full
result set by default; it uses paging for fetching.
Try calling *cursor.getAll()* on every iteration,
or (which is better) get an iterator and iterate over the rows to reduce heap
memory pressure.
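A sketch of the two options described above (the cache and query are hypothetical):

```java
import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

class QueryIterationSketch {
    static void run(IgniteCache<Long, Object> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery("SELECT name FROM Person");

        // Option 1: getAll() materializes the whole result set on the heap,
        // which also forces the query to actually execute.
        List<List<?>> rows = cache.query(qry).getAll();

        // Option 2 (preferred for large results): iterate the cursor so
        // Ignite fetches result pages lazily, reducing heap pressure.
        try (QueryCursor<List<?>> cur = cache.query(qry)) {
            for (List<?> row : cur)
                System.out.println(row.get(0));
        }
    }
}
```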
On Wed, Aug 2, 2017
Hi Yasser,
Well, you are using IgniteDataStreamer to load data into the cache.
IgniteDataStreamer actually buffers entries before sending data to
remote nodes, and, obviously, this requires Java heap.
You can try increasing the Xmx option and adjusting the following properties [1]:
-
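The streamer properties being referred to can be tuned roughly like this (the cache name and the values are illustrative only, not recommendations):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

class StreamerTuningSketch {
    static void load(Ignite ignite) {
        try (IgniteDataStreamer<Long, String> st = ignite.dataStreamer("myCache")) {
            st.perNodeBufferSize(512);        // entries buffered per remote node
            st.perNodeParallelOperations(4);  // concurrent in-flight batches per node
            st.autoFlushFrequency(1_000);     // flush buffered data at least every second

            for (long i = 0; i < 1_000_000; i++)
                st.addData(i, "value-" + i);
        } // close() flushes any remaining buffered entries
    }
}
```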
Hi again,
so it's correct that Hadoop's core-site.xml should have "hdfs://localhost:9000".
Hadoop works with the disk. So you write to IGFS, and IGFS writes to the
secondary file system, which is Hadoop;
that means Hadoop itself shouldn't know anything about IGFS,
otherwise it will write to IGFS, which will
Hi,
I am getting the following error when I enable the persistent store and use
the service grid (when more than one node is connected to the grid):
[18:47:55] (Ignite ASCII-art startup banner)
Hi Rajeev,
There are no special events for distributed data structures like
Queue/Set.
Thanks.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Queue-Reliability-Failure-Events-tp15927p15945.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
How many CPUs does your client have?
As you can see, you set
# Threads count.
t=36
which means the client will put to the cache from 36 threads. Do you have
36 cores on the client?
Low CPU utilization on the servers means your client doesn't produce enough
work for the servers; try increasing the number of
Hello again,
I've started working on the RazorSQL support; I downloaded the
latest version from the official site, and there is no issue with the version
you have mentioned (there are some other issues, though). Can
you point out which version of RazorSQL you were using,
so that I can test with
Hi Pradeep,
I think you've already fixed this problem. Could you please share your
solution with us?
Thanks,
Mikhail.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/IGFS-With-HDFS-Configuration-tp15830p15941.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hello,
Looks like you have nodes on different hosts with the same ports. Do
you have other exceptions in the log? Even if it failed to connect to this node
through the localhost address, it should try to connect to other addresses of
node fc2bcbe5-2a4b-406b-b60d-69a445f52fd9. Also, on which addresses
Raymond,
ICacheStore is not supposed to work when PersistentStoreConfiguration is
present.
Certainly some startup checks are missing, and the documentation needs to
be updated.
Thanks,
Pavel
On Thu, Aug 3, 2017 at 3:55 AM, Raymond Wilson
wrote:
> Hi Pavel,
>
>
>
>
Hi Mikhail,
Thanks for your response.
I changed hdfs://localhost:9000 to igfs://localhost:9000 in the
core-site.xml of the Hive home directory, but the core-site.xml of the Hadoop
home was still pointing to hdfs://localhost:9000. The issue persists.
If I create a partition using the insert command on Hive, I am
Hi again,
Check out the email on the dev list: "Cluster auto activation design proposal"
https://issues.apache.org/jira/browse/IGNITE-5851
As you can see, this feature is targeted for 2.2.
Thanks,
Mikhail.
2017-08-03 6:58 GMT+03:00 Raymond Wilson :
> Michael,
>
>
>
> Is
Hi Raymond,
Unfortunately, right now there's no auto-activation; restarting a cluster is
considered a rare event that should be controlled
manually. However, you can listen for the EVT_NODE_JOINED event; when all
nodes are in place, you can activate the cluster.
And you only need this if you have Ignite persistence
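A sketch of that listener-based activation (the expected server count is an assumption, and EVT_NODE_JOINED must be enabled via includeEventTypes in the node configuration for the listener to fire):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;

class ActivationSketch {
    static final int EXPECTED_SERVERS = 4; // hypothetical cluster size

    static void install(Ignite ignite) {
        ignite.events().localListen((Event evt) -> {
            // Activate once all expected server nodes have joined.
            if (ignite.cluster().forServers().nodes().size() >= EXPECTED_SERVERS
                    && !ignite.active())
                ignite.active(true); // Ignite 2.x manual activation
            return true; // keep listening for further joins
        }, EventType.EVT_NODE_JOINED);
    }
}
```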
Hi Fabien,
You are posting to the right place :)
An eager cache refresh option does not yet exist, but it is an interesting
idea.
I've filed a ticket: https://issues.apache.org/jira/browse/IGNITE-5911
Thanks,
Pavel
On Wed, Aug 2, 2017 at 8:41 PM, Fabien Bourdon
wrote:
Great!
Here's .NET development page, in case you haven't seen it yet:
https://cwiki.apache.org/confluence/display/IGNITE/Ignite.NET+Development
Let me know if you need any assistance.
Pavel
On Thu, Aug 3, 2017 at 5:28 AM, Raymond Wilson
wrote:
> Hi Pavel,
>
>
>
>