RE: CVE-2022-42889

2022-10-27 Thread Deepti Sharma S
Thank you for sharing the link. However, when is version 3.3.5, which
contains the fix for this CVE, planned for release?


Regards,
Deepti Sharma
PMP® & ITIL

From: Wei-Chiu Chuang 
Sent: 27 October 2022 21:21
Cc: user@hadoop.apache.org
Subject: Re: CVE-2022-42889


  1.  HADOOP-18497

On Thu, Oct 27, 2022 at 4:45 AM Deepti Sharma S
<deepti.s.sha...@ericsson.com.invalid> wrote:
Hello Team,

We have received a report of the vulnerability CVE-2022-42889. We are using
the Apache Hadoop Common 3pp, version 3.3.3, which has a transitive
dependency on Commons Text.

Do you have any plans to fix this vulnerability in the next Hadoop version,
and what is the timeline?


Regards,
Deepti Sharma
PMP® & ITIL
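
Until a release with the fix is available, a common interim mitigation is
to force the patched Commons Text version in your own build. Below is a
minimal sketch for a Maven-based project (hypothetical pom.xml fragment; it
assumes commons-text 1.10.0, the release that disables the dangerous
interpolation lookups by default, is compatible with the rest of your
dependency tree):

    <!-- pom.xml: pin the transitive commons-text to the fixed release -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.apache.commons</groupId>
          <artifactId>commons-text</artifactId>
          <version>1.10.0</version>
        </dependency>
      </dependencies>
    </dependencyManagement>

You can then verify which version actually lands on the classpath with:

    mvn dependency:tree -Dincludes=org.apache.commons:commons-text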



Re: CVE-2022-42889

2022-10-27 Thread Wei-Chiu Chuang
  1. HADOOP-18497


On Thu, Oct 27, 2022 at 4:45 AM Deepti Sharma S wrote:

> Hello Team,
>
>
>
> We have received a report of the vulnerability CVE-2022-42889. We are
> using the Apache Hadoop Common 3pp, version 3.3.3, which has a transitive
> dependency on Commons Text.
>
>
>
> Do you have any plans to fix this vulnerability in the next Hadoop
> version, and what is the timeline?
>
>
>
>
>
> Regards,
>
> Deepti Sharma
> PMP® & ITIL
>
>
>


CVE-2022-42889

2022-10-27 Thread Deepti Sharma S
Hello Team,

We have received a report of the vulnerability CVE-2022-42889. We are using
the Apache Hadoop Common 3pp, version 3.3.3, which has a transitive
dependency on Commons Text.

Do you have any plans to fix this vulnerability in the next Hadoop version,
and what is the timeline?


Regards,
Deepti Sharma
PMP® & ITIL
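
For context on why the transitive dependency matters: CVE-2022-42889
("Text4Shell") affects Commons Text 1.5 through 1.9, where the default
string interpolator enables the script, dns, and url lookups. A minimal
sketch of the risky pattern (hypothetical example code; it is only
dangerous when untrusted input reaches the substitutor):

    import org.apache.commons.text.StringSubstitutor;

    public class InterpolationExample {
        public static void main(String[] args) {
            // createInterpolator() wires up the default lookups; before
            // commons-text 1.10.0 these included "script", "dns", and
            // "url", so interpolating untrusted input could trigger
            // remote code execution.
            StringSubstitutor sub = StringSubstitutor.createInterpolator();
            String userControlled = args.length > 0 ? args[0] : "${date:yyyy-MM-dd}";
            System.out.println(sub.replace(userControlled));
        }
    }

Version 1.10.0 removes script, dns, and url from the default lookups, which
is why upgrading the dependency (tracked in HADOOP-18497) addresses the CVE.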



Re: HDFS DataNode unavailable

2022-10-27 Thread hehaore...@gmail.com
Hello Chris Nauroth,

Thank you for your advice. I just saw your email. I will check the last
entries in the log, and I will be thinking about upgrading the cluster in
the near future. Thank you very much.

He Hao

Sent from Mail for Windows

From: Chris Nauroth
Sent: October 26, 2022, 0:41
To: hehaore...@gmail.com
Cc: user@hadoop.apache.org
Subject: Re: HDFS DataNode unavailable

Hello,

I think broadly there could be 2 potential root cause explanations:

1. Logs are routed to a volume that is too small to hold the expected
logging. You can review the configuration settings in log4j.properties
related to the rolling file appender (a sketch of the relevant entries
follows this message). This determines how large logs can get and how many
of the old rolled files to retain. If the maximum would exceed the capacity
of the volume holding these logs, then you either need to configure smaller
retention or redirect the logs to a larger volume.

2. Some error condition caused abnormal log spam. If the log isn't there
anymore, then it's difficult to say what this could have been specifically.
You could keep an eye on the logs for the next few days after the restart
to see if there are a lot of unexpected errors.

On a separate note, version 2.7.2 is quite old, released in 2017. It's
missing numerous bug fixes and security patches. I recommend looking into
an upgrade to 2.10.2 in the short term, followed by a plan for getting onto
a currently supported 3.x release.

I hope this helps.

Chris Nauroth

On Mon, Oct 24, 2022 at 11:31 PM hehaore...@gmail.com wrote:

I have an HDFS cluster, version 2.7.2, with two namenodes and three
datanodes. While uploading a file, an exception appeared:
java.io.IOException: Got error, status message, ack with firstBadLink as
X:50010. I noticed that the datanode log had stopped: only datanode.log.1
was present, not datanode.log, but the logs of the other processes were
normal. The HDFS log directory was out of space. I did nothing but restart
all the datanodes, and HDFS was back to normal. What was the reason?

Sent from Mail for Windows
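
As a follow-up to point 1 above: in stock Hadoop 2.x, daemon logging is
governed by the RFA (RollingFileAppender) entries in log4j.properties. A
minimal sketch of the settings that bound total log size, assuming the
default property names shipped with Hadoop (adjust the values to fit the
capacity of your log volume):

    # Cap the size of each log file and the number of rolled backups.
    # Worst-case disk usage per daemon is roughly
    # maxfilesize * (maxbackupindex + 1).
    hadoop.log.maxfilesize=256MB
    hadoop.log.maxbackupindex=20

    log4j.appender.RFA=org.apache.log4j.RollingFileAppender
    log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
    log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
    log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}

If 20 backups of 256 MB would overflow the log volume, lower either value,
or point hadoop.log.dir at a larger filesystem.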

-
To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
For additional commands, e-mail: user-h...@hadoop.apache.org