I agree - with a new project today you should probably start with JDK 21 (LTS); it has matured for years now. I have also observed that a couple of third-party libraries (e.g. Spring) no longer support JDK 8, and thus security fixes etc. are not provided for them.
On 18.01.2024 at 11:14
Maybe you have an instance limit of 100 on the AWS side?
> On 31.10.2019 at 19:09, codeboyyong wrote:
>
> Hi Friends,
> I have a big Ignite cluster running in a private AWS-like cloud. I use
> TcpDiscoverySpi with TcpDiscoveryS3IpFinder and run with 96 nodes; it works
> fine.
> Today I redeploy it
I would not recommend it from a security perspective. Use separate keystores
per node. Regarding the trustStore - do you have your own CA? It is not
recommended to secure both with a self-signed certificate.
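The per-node idea can be sketched with the standard JSSE APIs: each node loads its own keystore, while the shared truststore contains only your CA certificate. This is a minimal sketch using plain `javax.net.ssl`, not Ignite's own SSL configuration; file names and passwords are placeholders.

```java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.security.KeyStore;

public class NodeSsl {
    // Builds an SSLContext from a node-specific keystore and a shared
    // truststore (which should hold only the CA certificate).
    public static SSLContext build(String keyStorePath, String trustStorePath,
                                   char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(keyStorePath)) {
            ks.load(in, password);
        }
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, password);

        KeyStore ts = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            ts.load(in, password);
        }
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ts);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

With one keystore per node, revoking a single compromised node does not force you to roll keys everywhere.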
> On 08.12.2018 at 06:48, Shesha Nanda wrote:
>
>
> Hi,
>
> I have enabled SSL
I think the more important question is why you need this. There are many
different ways of accelerating a warehouse, depending on what you want to achieve.
> On 23.11.2018 at 07:56, lk_hadoop wrote:
>
> hi all,
> I think using Hive as a DW and using Spark to do some OLAP on Hive is quite
> common.
I think you also need to look at the processes that are using the id in case of
a split-brain scenario.
A unique identifier always implies some centralized approach: either it is
generated by one central service, or by a central rule that is enforced in a
distributed fashion.
For instance, in your case you
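A common form of such a "central rule enforced in a distributed fashion" is to compose the id from a timestamp, a pre-assigned node id, and a per-node sequence, so no coordination is needed at generation time. A minimal sketch (bit widths, class and method names are illustrative assumptions, not a specific library's API):

```java
public class NodeLocalIdGenerator {
    private final long nodeId;        // must be unique per node (10 bits: 0..1023)
    private long lastTimestamp = -1L;
    private long sequence = 0L;       // 12 bits: up to 4096 ids per ms per node

    public NodeLocalIdGenerator(long nodeId) {
        if (nodeId < 0 || nodeId > 1023)
            throw new IllegalArgumentException("nodeId must fit in 10 bits");
        this.nodeId = nodeId;
    }

    // Note: this sketch does not handle a clock moving backwards.
    public synchronized long nextId() {
        long now = System.currentTimeMillis();
        if (now == lastTimestamp) {
            sequence = (sequence + 1) & 0xFFF;
            if (sequence == 0)        // sequence exhausted, wait for the next ms
                while ((now = System.currentTimeMillis()) <= lastTimestamp) { }
        } else {
            sequence = 0;
        }
        lastTimestamp = now;
        // layout: [timestamp | nodeId | sequence]
        return (now << 22) | (nodeId << 12) | sequence;
    }
}
```

The "central" part is reduced to assigning each node its id once, up front; a split brain then cannot produce colliding ids, though the processes consuming the ids still need to be examined, as noted above.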
This normally does not make sense, because most graph databases keep the graph
structure (not necessarily the vertex details, but vertices and edges)
in memory. As far as I know, Ignite does not provide graph data structures such
as an adjacency matrix/list.
If you have a very large graph of which
Maybe you can elaborate more on your use case, because usually it is not a
technical decision, but one driven by user requirements.
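For reference, an adjacency list - the structure Ignite does not provide out of the box - is simple to model on top of any key-value store. A plain-map sketch (the `HashMap` is just a stand-in for a cache, not an Ignite API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AdjacencyList {
    // vertex id -> list of neighbor vertex ids
    private final Map<Integer, List<Integer>> adj = new HashMap<>();

    public void addEdge(int from, int to) {
        adj.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    public List<Integer> neighbors(int v) {
        return adj.getOrDefault(v, Collections.emptyList());
    }
}
```

Whether this performs well depends entirely on the traversal patterns, which is why the use case matters more than the technology choice.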
> On 9. Jul 2018, at 10:01, Mahesh Talreja wrote:
>
> Hi Team,
> I am working on Dot Net project and trying to implement
> Ignite.Net.
> Being new to
> 7.0.85.
>
> We've been running Tomcat 7 on JDK9 for over 3 months now with no other
> issues.
>
> [1] http://tomcat.apache.org/tomcat-7.0-doc/changelog.html
>
>> On Fri, Mar 30, 2018 at 11:52 AM, Jörn Franke <jornfra...@gmail.com> wrote:
>> Tomcat 7 does not support JDK 9
Tomcat 7 does not support JDK 9
> On 30. Mar 2018, at 18:30, Eric Ham wrote:
>
> I'm running Tomcat 7 with Oracle JDK 9.0.4 and am attempting to use web
> session clustering based on the following pages [1] and [2] as I saw the
> 2.4.0 release notes say Java 9 is now
You should first do a performance test with your data and your calculation using
a standard VM.
Then use this as a baseline for non-standard VMs.
Do not rely on other benchmarks - different use cases and calculations.
In particular, do your own benchmark and do not listen to advertisements.
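A do-your-own-benchmark can start as small as a timing harness with a warm-up phase; the sketch below is illustrative only (for serious measurements, a dedicated harness such as JMH is the better tool, since simple loops are vulnerable to JIT effects):

```java
public class MiniBench {
    // Runs the task a few times to let the JIT settle, then reports the
    // average wall-clock time per run in milliseconds.
    public static double averageMillis(Runnable task, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) task.run();
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) task.run();
        return (System.nanoTime() - start) / 1_000_000.0 / runs;
    }
}
```

Run it once on the standard VM, record the number, then compare the same task on the non-standard VM - the comparison is only meaningful with your own data and calculation.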
Which database? Some databases can notify an application if they are updated.
You could read these updates with a Java application and insert them into the
Ignite cache.
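The shape of that refresher application can be sketched as follows. The `Map` here is only a stand-in for the Ignite cache, and the source of `changedRows` is hypothetical - in practice it would be a JDBC query against whatever change/notification mechanism the database offers:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheRefresher {
    // Stand-in for the Ignite cache; a real setup would use the Ignite API.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Called with the rows changed since the last poll/notification;
    // putAll acts as an upsert of every changed row.
    public void applyUpdates(Map<String, String> changedRows) {
        cache.putAll(changedRows);
    }

    public String get(String key) {
        return cache.get(key);
    }
}
```

Whether to poll or to subscribe to notifications depends on which database is in play, which is why the question above matters.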
> On 17. Jul 2017, at 16:45, luqmanahmad wrote:
>
> Hi, We have a legacy system, 15 years old
stored in hdfs with
> timestamp and make a graph with the prices of that stock over time.
>
>> On Mon, Jun 12, 2017 at 1:03 PM, Jörn Franke <jornfra...@gmail.com> wrote:
>> First you need the user requirements - without them answering your questions
>> will be difficult
First you need the user requirements - without them answering your questions
will be difficult
> On 12. Jun 2017, at 07:08, ishan-jain wrote:
>
> I am new to Big Data. I have just been working with it for a month.
> I have HDFS data of stock prices. I need to perform data
Access it via Hive (Tez+LLAP) - you can connect to Hive via any analytical tool.
Hive provides the data on IGFS as tables that can be accessed by analytical
tools.
> On 9. Jun 2017, at 08:43, ishan-jain wrote:
>
> I am using hdfs to store my data. I have to implement a
I would not expect any of the things that you mention. A cache is not supposed
to slow down writing; that does not make sense from my point of view. Splitting
a block into several smaller ones is also not feasible - the data has to go
somewhere before splitting.
I think what you refer to is
That being said, it is rather easy to include the Hadoop client libraries in
your application and use any of the available InputFormats. You do not need a
Hadoop cluster to read files; they can even be read from the local file system.
Spark and others do this as well.
> On 10. Apr 2017, at
Not sure I got the picture of your setup, but the Ignite cache should be
started independently of the application and not within the application.
Aside from that, can you please elaborate more on the problem you would like to
solve - maybe with pseudocode? I am not sure if the approach you have selected
Well, the data is in memory. If your concern is that another process on the
same machine as the Ignite daemon can read it, there might be better ways than
encryption to solve that. If you are concerned about swapping to disk, then try
to reduce that risk and/or encrypt the hard drive.
In the
As already said, it is not really a cache use case. Aside from that, performance
tests on single nodes simply do not make sense for a distributed system.
Maybe you can describe your real use case in more detail and we can help you.
There are many areas where you can tune, and the cache is only one possibility.
Hi,
For me that looks more like something suitable for stomp.js + a messaging bus
(e.g. RabbitMQ).
> On 21 Oct 2016, at 07:08, Alexandr Porunov wrote:
>
> Hello,
>
> I am developing a messaging system with notifications via WebSockets (When
> the user 'A' sends a
You have to understand what the database cache is good for: lookups of
single/few rows. This is due to the data structure of a cache. In this sense
you are using the cache wrongly. Aside from this, I think `select *` is really
the worst way to do a professional performance evaluation of your architecture.
You need to configure IGFS in the HDFS configuration file. Then you use the
standard APIs to access HDFS files, and access will automatically go through
the cache.
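For orientation, the HDFS-side wiring usually means registering IGFS as a file system implementation in `core-site.xml`, roughly along these lines (property names as documented for Ignite's Hadoop accelerator; verify the class names against your Ignite version):

```xml
<!-- core-site.xml fragment: register IGFS so standard HDFS APIs
     resolve igfs:// paths through the Ignite cache. -->
<property>
  <name>fs.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
</property>
```

After this, applications that use the standard Hadoop `FileSystem` API need no code changes; only the path scheme selects the cached file system.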
> On 4 Oct 2016, at 07:35, Sateesh Karuturi wrote:
>
> Hello experts,
> I am new to the Apache
I am not sure that this will be performant. What do you want to achieve here?
Fast lookups? Then the Cassandra Ignite store might be the right solution. If
you want to do more analytic-style queries, then you can put the data on
HDFS/Hive and use the Ignite HDFS cache to cache certain
You can also have the case that both nodes crash... The bottom line is that a
write loss can occur in any system. I am always surprised to hear even senior
consultants saying that in a high-reliability database no write loss can occur
or the risk is low (think about the human factor! E.g. an admin
Hmm, this would require more details. You can, for example, use Ignite as an
HDFS cache for Hive, and Hive (minimum 1.2) + Tez + ORC as the SQL layer. This
is probably one of the fastest ways currently available. However, this depends
on your use case.
> On 01 Aug 2016, at 14:45, Labard
try 2.0 they said that the speed was improved), also I looked that Ignite can
> be used as a Spark cache with Ignite RDD - maybe that could be another approach.
>
> Thanks
>
>> On Fri, Jun 17, 2016 at 2:29 AM, Jörn Franke <jornfra...@gmail.com> wrote:
>>
>>
This depends on the type of queries!
In any case: before you go in-memory, optimize your current data model and
exploit your current technology. In the past I have often seen poorly designed
data models that do not leverage the underlying technology well.
> On 16 Jun 2016, at 23:20, Andrés
In addition to that, you should make sure that you run JDK 8; it has a lot of
optimizations.
> On 21 Apr 2016, at 21:06, vkulichenko wrote:
>
> In most cases it's OK to have one node per machine, but you should not
> allocate more than 10-12G of heap memory,