Hi, Ishan-Jain,
can you please clarify what exactly you would like to visualize?
If you want to visualize file listings, the Hadoop API is quite enough: just
type "hadoop fs -ls igfs:// ... " on the console.
If you want a GUI to browse files, there is a wide choice: you can use any
tool that
Enabling the DEBUG logging level for the class
org.apache.ignite.internal.processors.igfs.IgfsImpl gives a trace of all
the IGFS operations performed -- this can help to track the issue down.
Many operations in IGFS are implemented using "retry" logic, like the
following:

while (true) {
    try {
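A minimal self-contained sketch of that retry pattern in plain Java. Note that flakyOperation(), retryUntilSuccess(), and the failure count are illustrative stand-ins, not Ignite API; the simulated operation fails twice and then succeeds:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RetryDemo {
    /** Counts attempts; the simulated operation fails twice, then succeeds. */
    static final AtomicInteger attempts = new AtomicInteger();

    /** Hypothetical stand-in for the real (retried) IGFS operation. */
    static String flakyOperation() {
        if (attempts.incrementAndGet() < 3)
            throw new IllegalStateException("transient failure");
        return "ok";
    }

    /** Loops until the operation succeeds, swallowing transient errors. */
    static String retryUntilSuccess() {
        while (true) {
            try {
                return flakyOperation(); // success: leave the loop
            }
            catch (IllegalStateException e) {
                // transient error: fall through and try again
            }
        }
    }

    public static void main(String[] args) {
        // prints "ok after 3 attempts"
        System.out.println(retryUntilSuccess() + " after " + attempts.get() + " attempts");
    }
}
```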
Hi, Joe,
how are you getting the cache commit rate in this measurement?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/igfs-meta-behavior-when-node-restarts-tp13155p13201.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi, Pranay,
The reason IGFS does not implement CacheStore is that it aims at
different goals.
Please see https://apacheignite-fs.readme.io/docs/in-memory-file-system for
general IGFS concepts: it has both "native" and Hadoop APIs, and both are
file-system-like APIs. As a persistent
1) Yes, unless some data have been evicted from Ignite cache.
2) Sure, this is expected: IGFS in dual modes always tries to reflect the
underlying file system contents.
3) This sounds strange. In DUAL_ASYNC mode it is possible that changes made
to IGFS appear in the underlying file system with some delay.
Hi, Pranay,
IGFS can run on top of any file system that can be represented in terms of
the Hadoop file system API (namely, as an org.apache.hadoop.fs.FileSystem).
The MapR file system also implements the Hadoop file system API; namely, it
has com.mapr.fs.MapRFileSystem.
So IGFS can successfully run on top of it.
Hi, Pranay,
> Does it mean that Namenode can be avoided when IGFS get deployed on top of
> HDFS ?
No. IGFS itself does not have a namenode; it is a distributed cache storing
file blocks. But when deployed on top of HDFS, it fetches the underlying
data using the ordinary namenode mechanism.
> Or is it
Alena,
as I understand it, the message "19988 Killed "$JAVA"" means that the Ignite
node process was killed by the operating system. Can you please check the
kernel log -- what does it say around the node crash time?
As a workaround for IGNITE-4862, the property
FileSystemConfiguration#perNodeParallelBatchCount can be set to 1.
Setting FileSystemConfiguration#prefetchBlocks to 0 should also help.
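For reference, a minimal sketch of applying both workaround settings through Ignite's Java configuration API (the IGFS name "igfs" is just an example, not taken from the thread):

```java
import org.apache.ignite.configuration.FileSystemConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Workaround settings for IGNITE-4862:
FileSystemConfiguration fsCfg = new FileSystemConfiguration();
fsCfg.setName("igfs");                  // example IGFS name
fsCfg.setPerNodeParallelBatchCount(1);  // send per-node batches sequentially
fsCfg.setPrefetchBlocks(0);             // disable block prefetching

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setFileSystemConfiguration(fsCfg);
```

The same two properties can equally be set in the Spring XML node configuration.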
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/HDP-Hive-Ignite-tp12195p12627.html
Alena, regarding the NPEs in the Ignite node logs: this seems to be
https://issues.apache.org/jira/browse/IGNITE-4862, which is fixed but not
yet merged.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/HDP-Hive-Ignite-tp12195p12594.html
Hi, Alena,
3. Looks like we have an answer to why the initial topology version is so
high: you possibly do not restart the Visor process, is that true? If so,
please start the next experiment with all nodes stopped, as well as the
Visor process. After that the initial topology version should start from 1,
Alena, I suppose the incorrect results in your environment may be a
consequence of topology troubles. In any case, to get stable and
reproducible results you need a stable Ignite cluster topology. To achieve
that I would recommend the following steps:
1) kill all the Ignite processes on
Regarding item 2: I cannot reproduce the issue yet. Each time I get correct data:
OK
2017-03-15 36564815
2017-03-16 36872463
2017-03-17 36900812
2017-03-18 36904198
2017-03-19 3630
2017-03-20 37029921
Time taken: 69.603 seconds, Fetched: 6 row(s)
, heap=10.0GB]
4. Can you please specify more exactly which of Evgeniy's comments you are
referring to?
Regards,
Ivan Veselovsky.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/HDP-Hive-Ignite-tp12195p12394.html
1. Please make sure IGFS is really used: e.g. you may explicitly locate some
table data on IGFS and run the queries on it. IGFS statistics can partially
be observed through Visor.
Also please note that upon node(s) start IGFS is empty. In case of dual
modes it caches the data upon file reading.
Yes, we did some experiments with Hive over Ignite on HDP distributions, in
basic experiments everything was working without critical issues.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/HDP-Hive-Ignite-tp12195p12308.html
1. The observed "java.lang.ClassNotFoundException: Class
org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem not found" suggests
that Tez does not have IgniteHadoopFileSystem on its classpath. Please check
how Tez composes the classpath and whether it adds all the libs from
Hi, zhangshuai.ustc,
is this problem solved? Can we help more on the subject?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-as-HDFS-Cache-tp11900p12297.html
Hi, Alena,
1. The logs you have attached show some errors, but, in fact, I cannot deal
with them until a way to reproduce the problem is known.
2. Here I mean that IGFS (a write-through cache built upon another file
system) and the Ignite map-reduce engine (the job tracker on port 11211) are 2
Hi, Alena, there are several different requests here; let's try to separate
them.
A1. Wrong Hive query results:
Is this use case easily reproducible? For now it appears it is not. Please
try to track it down as far as possible:
- Run the Ignite nodes with the -v option and watch the console logs of the
nodes: are there
Even more exactly, from the root cause stack trace it appears that the
replacement class
/** Hadoop class name: ShutdownHookManager replacement. */
public static final String CLS_SHUTDOWN_HOOK_MANAGER_REPLACE =
Hi, Prashant,
are you trying to append concurrently from several streams?
The situation you observe means that you are trying to append to a file that
is currently open for writing in another output stream. (When a file is open
for append, a special lock is placed on the file meta information in
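As an illustration, a hedged sketch of how this surfaces through the IGFS Java API (the configuration file name, IGFS name "igfs", and the path are examples, not taken from the thread):

```java
import java.io.OutputStream;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteFileSystem;
import org.apache.ignite.Ignition;
import org.apache.ignite.igfs.IgfsPath;

Ignite ignite = Ignition.start("example-igfs.xml"); // example config name
IgniteFileSystem fs = ignite.fileSystem("igfs");    // example IGFS name
IgfsPath path = new IgfsPath("/tmp/data.log");      // example path

OutputStream first = fs.append(path, true); // locks the file meta for writing
// While 'first' is still open, a concurrent append from another stream
// fails, because the lock is already placed on the file meta information:
// OutputStream second = fs.append(path, true); // expected to throw
first.close(); // releases the lock; a new append is possible again
```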
Hi,
the mail archive record
http://mail-archives.apache.org/mod_mbox/hadoop-user/201502.mbox/%3cd0f948da.33f4a%25xg...@hortonworks.com%3E
seems to describe a similar issue.
The advice there is to see
https://issues.apache.org/jira/browse/MAPREDUCE-6230 and
Hi, harishraj,
does the initial problem persist?
I tried the Ignite YARN module in a kerberized Hortonworks sandbox
environment, and it works (see the listing below), which means that the
module is able to run Ignite nodes in containers, and all the logs are
visible through the YARN web console
I will take the tickets
https://issues.apache.org/jira/browse/IGNITE-1922 and
https://issues.apache.org/jira/browse/IGNITE-2525 (they duplicate each
other).
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Exception-in-Kerberos-Yarn-cluster-tp1950p2847.html
The important thing here is that in case of a local underlying file system,
only one node of the cluster should be used to access the file system.
Otherwise incorrect behavior is possible if, e.g., local files with the same
path but different content exist on different cluster nodes.
We may think
Paolo Di Tommaso wrote
> What if you have multiple nodes in a cluster using
> org.apache.hadoop.fs.LocalFileSystem as a secondary file system? Each node
> saves the IGFS content locally?
In my understanding, yes: each node that was requested to do the file
operation will store the file locally.
Hi, Joe,
>I double checked the config I posted, the one (I think) I'm currently
running and the one I send with the logs and they all
> seem to be the same
Sorry, this is my mistake -- I picked up the wrong config for analysis.
Please disregard my previous suggestions.
>>Each ignite node (i.e.
Hi, Joe,
>I also haven't been seeing the expected performance using the hdfs api to
access ignite.
Regarding the slow IGFS, please create another topic for that problem, since
it may not be related to discovery issues in a large cluster. Can you please
send us the configs you used to test IGFS
Hi,
Please properly subscribe to the user list by sending an email to
"user-subscr...@ignite.apache.org" and following the instructions in the
reply. This way everyone in the community will get notified whenever you
post questions.
The answers to your questions are positive.
Please find the
Hi, Paolo,
please try to use
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder as
the IpFinder in TcpDiscoverySpi. This is "IP Finder which works only with
pre-configured list of IP addresses specified via {@link
#setAddresses(Collection)} method."
Also you may need to
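A minimal sketch of that suggestion in Java configuration code (the node addresses and port ranges are placeholders, not taken from the thread):

```java
import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Static IP finder: works only with the pre-configured address list.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList(
    "10.0.0.1:47500..47509",   // placeholder addresses
    "10.0.0.2:47500..47509"));

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoSpi);
```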
Is there a simple way to reproduce the problem on an independent environment?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Encountered-incompatible-class-loaders-for-cache-Error-in-Queries-tp1590p1602.html
Sent from the Apache Ignite Users mailing list
Hi, Xukun,
you have

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://clcluster/</value>
</property>

in core-site.xml. There is nothing wrong with that, but it forces you to
specify a full URI in Hadoop client commands.
When you're connecting to IGFS, the full URI should have the form
<scheme>://<igfs_name>@<host>:<port>/<path>, with the <igfs_name> part being
mandatory. In you
We submitted https://issues.apache.org/jira/browse/IGNITE-1566 on this.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-to-config-Ignite-on-hadoop-tp1306p1516.html