under custom-zoo.cfg
I'm going to check this also.
If you can send me the NN, JN and ZK logs, I'm more than happy to look into it.
Yes, I can; I just need time to anonymize everything.
Thanks again for your help.
Best regards.
T@le
On Thu, 24 Feb 2022 at 21:28, gurmukh singh
wrote:
Also, since you are using Hive/Beeline, you can fetch all the configs like this:
beeline -u "<JDBC URL to connect to HS2>" --outputformat=tsv2 -e 'set -v' > /tmp/BeelineSet.out
Please attach the BeelineSet.out
On Friday, 25 February, 2022, 07:15:51 am GMT+11, gurmukh singh
wrote:
have to put under custom-zoo.cfg
If you can send me the NN, JN and ZK logs, I'm more than happy to look into it.
On Friday, 25 February, 2022, 06:59:17 am GMT+11, gurmukh singh
wrote:
@Tale Hive, you provided the details in the first email; I missed it.
Can you provide me the output of the below:
netstat -an | grep -E "MYIP.(PORTS|IN|LISTEN)" | wc -l
What do you see in the JN logs? And what about the ZK logs? Any logs in NN, ZK on
the lines of "Slow sync"? What is the ZK heap?
On Friday, 25 February, 2022, 06:42:31 am GMT+11, gurmukh singh
wrote:
I checked the heap of the namenode and there is no problem (I have 75 GB of
max heap, and around 50 GB used).
Why a 75 GB heap size for the NN? Are you running a very large cluster? 50 GB
of heap used? Can you check whether you are talking about the NN heap itself
or about the total memory?
How is pre-emption configured?
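If you are on the Fair Scheduler, preemption is driven by settings along these
lines (a sketch; the values are examples, please check them against your version):
In yarn-site.xml:
yarn.scheduler.fair.preemption = true
yarn.scheduler.fair.preemption.cluster-utilization-threshold = 0.8
Per queue in fair-scheduler.xml:
<fairSharePreemptionTimeout>30</fairSharePreemptionTimeout>
<fairSharePreemptionThreshold>0.5</fairSharePreemptionThreshold>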
On 21/4/20 1:41 am, Ilya Karpov wrote:
Hi, all,
recently I’ve noticed strange behaviour of YARN Fair Scheduler: 2 jobs
(i.e. two simultaneously started oozie launchers) started in a queue
with a small weight, and were not able to launch spark jobs while ther
Hi Wenqi,
Which version of Hadoop are you using? (My initial guess is 3.x.)
Can you see anything in the datanode logs on the lines of
"org.apache.hadoop.hdfs.server.common.InconsistentFSStateException"
Also, you mention that it is a new cluster; make sure the ClusterID matches.
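A quick way to compare them (the paths are examples; use your own
dfs.namenode.name.dir and dfs.datanode.data.dir):
grep clusterID /data/nn/current/VERSION
grep clusterID /data/dn/current/VERSION
The clusterID in the VERSION file must be identical on the NN and every datanode.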
Your disk seems to be the issue, which is causing the JournalNode timeouts.
Run benchmarks on the disks for the namenode, ZK and the JournalNodes (QJM).
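A simple dd run per disk gives a first idea (the paths are examples; oflag=dsync
forces synchronous writes, which is close to the JN edit-log write pattern):
dd if=/dev/zero of=/data/jn/dd.test bs=1M count=1000 oflag=dsync
rm /data/jn/dd.test
If the sync-write throughput comes out very low, the JN quorum writes will
keep timing out.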
On 3/12/18 2:08 pm, 白 瑶瑶 wrote:
From: 白 瑶瑶 on behalf of 白 瑶瑶 <437633...@qq.com>
Sent: 18 October 2018, 10:3
Your core-site.xml is wrong.
It is "fs.defaultFS", not "fs.default.FS".
Also remove the "/" after the port:
fs.default.name = hdfs://master:9000/
fs.default.FS = hdfs://master:9000/
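The corrected property should look like the below (host and port as per your setup):
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>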
Also, you are running YARN, so you do not need the below:
mapre
Your observation is correct; the backup node will also download.
If you look at the evolution of Hadoop, we had the primary, backup-only
and checkpointing nodes, and then a generic secondary node.
The checkpointing node will do the merge of the fsimage and edits.
On 25/9/17 5:57 pm, Chang.Wu wrote:
From the
Hi
Can you explain the job to me a bit? There are a few RPC timeouts, e.g. at
the datanode level, mapper timeouts, etc.
On 28/9/17 1:47 pm, Demon King wrote:
Hi,
We have finished a YARN application and deployed it to a Hadoop 2.6.0
cluster. But if one machine in the cluster is down, our application will
h
Well, in an actual job the input will be a file.
So, instead of:
echo "bla ble bli bla" | python mapper.py | sort -k1,1 | python reducer.py
you will have:
cat file.txt | python mapper.py | sort -k1,1 | python reducer.py
The file has to be on HDFS (keeping it simple; it can be on other
filesystems), t
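On the cluster, the same pipeline is submitted through Hadoop Streaming,
roughly like this (the jar and HDFS paths are examples for a typical layout):
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
-input /user/me/file.txt \
-output /user/me/wordcount-out \
-mapper mapper.py \
-reducer reducer.py \
-file mapper.py -file reducer.py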
Hi Om,
Although you solved this issue by bumping up the IPC max length, which by
default is set to 64 MB:
$ hdfs getconf -confkey ipc.maximum.data.length
67108864
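If a bigger limit is genuinely needed, it goes into core-site.xml on the
namenode (134217728, i.e. 128 MB, below is just an example value):
<property>
<name>ipc.maximum.data.length</name>
<value>134217728</value>
</property>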
So, it means the disk you are using has more than 1 million blocks. Are
you using a very small HDFS block size?
If you
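To confirm the block counts, something on these lines helps (fsck prints the
cluster-wide total; recent hdfs dfsadmin -report builds also show a per-datanode
block count):
hdfs fsck / | grep -i 'Total blocks'
hdfs dfsadmin -report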
400 GB as heap space for the Namenode is a bit high; the GC pause times will
be very high.
For a cluster with about 6 PB, approx 20 GB is decent memory.
As you mentioned it is HA, it is safe to assume that the fsimage is
checkpointed at regular intervals, and we do not need to worry during a
manual
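For reference, the NN heap is usually set in hadoop-env.sh along these lines
(20g is only an example; on Hadoop 3.x, HADOOP_HEAPSIZE_MAX serves the same purpose):
export HADOOP_NAMENODE_OPTS="-Xms20g -Xmx20g ${HADOOP_NAMENODE_OPTS}"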
it is ".bashrc" not "*.barshrc"*
Is port 8030 listening?
netstat -tlpn | grep 8030
Can you give the output of:
$ hdfs getconf -confkey yarn.resourcemanager.scheduler.address
$ hdfs getconf -confkey fs.defaultFS
$ nslookup nn1.cluster1.com (if using DNS), else
$ getent hosts nn1.cluster1.com
--
Thanks and Regards
Gurmukh Singh
/192.168.23.206:8020
Rest of the things look fine. Please help me in this regard; what could
be the issue?
--
Thanks and Regards
Gurmukh Singh
meaning that all the metadata of the name node only stays in
memory?
Is there a way to fix it? Is there any configuration to control
the persistence?
Thanks!
--
Thanks and Regards
Gurmukh Singh
Batch Starting Soon: Advanced Hadoop: Performance Tuning and Security
Duration: 21 hours
Module 1: Hadoop High Availability for HDFS and Resource Manager.
Using both QJM and shared storage.
Module 2: Hadoop Queuing and pools details.
Fair and Capacity Scheduler details.
Dynamic pool configuration
friends.
HTH,
Youngwoo
On Fri, Jul 24, 2015 at 4:13 PM, sajid mohammed
<sajid.had...@gmail.com> wrote:
Hi all
I am changing my project role to Hadoop Administrator.
Please share useful material for the admin role with me.
Thanks
Sajid
--
Thanks and Regards
Gurmukh Singh
Any thoughts will be appreciated.
--
Bing Jiang
Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com
--
Thanks and Regards
Gurmukh Singh
Thanks
Gurmukh Singh
Founder - Netxillon Technologies