+1
Thanks to Wangda for the proposal.
The Submarine project was born within Hadoop, but it is not limited to Hadoop. It
began as a trainer on YARN, but we quickly realized that a trainer alone is
not enough to meet AI platform requirements. Today there is no
user-friendly open-source solution that covers the
Mukul Kumar Singh created HDDS-2076:
---
Summary: Read fails because the block cannot be located in the
container
Key: HDDS-2076
URL: https://issues.apache.org/jira/browse/HDDS-2076
Project: Hadoop Distributed Data Store
+1
Hello everyone, I am a member of the Submarine development team.
I have been contributing to Submarine for more than a year,
and I have seen Submarine development progress very quickly.
In just over a year, nine long-term developers from different
companies have been contributing;
Submarine cumulat
+1
Thanks Wangda for the proposal.
I would like to participate in this project. Please add me to the
project as well.
Regards
Devaraj K
On Mon, Sep 2, 2019 at 8:50 PM zac yuan wrote:
> +1
>
> Submarine will be a complete solution for AI service development. It can
> take advantage of two best cl
+1,
I would also like to start participating in this project, and I hope to be
added to the project.
Thanks and Regards,
+ Naga
On Tue, Sep 3, 2019 at 8:35 AM Wangda Tan wrote:
> Hi Sree,
>
> I put it to the proposal, please let me know what you think:
>
> The traditional path at Apache would
For HBase, we purged all the protobuf related things from the public API,
and then upgraded to a shaded and relocated version of protobuf. We have
created a repo for this:
https://github.com/apache/hbase-thirdparty
But since the hadoop dependencies still pull in the protobuf 2.5 jars, our
coproce
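The hbase-thirdparty approach described above (shade protobuf and relocate it to a project-private package so it cannot clash with the protobuf 2.5 jars pulled in by the hadoop dependencies) can be sketched with the maven-shade-plugin. This is an illustrative fragment, not the exact hbase-thirdparty build configuration; the shaded package name is an assumption modeled on that repo:

```xml
<!-- Illustrative maven-shade-plugin relocation, modeled on the
     hbase-thirdparty approach; the shadedPattern is an assumption. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Rewrite protobuf classes into a private package so the
                 relocated copy coexists with protobuf 2.5 on the
                 classpath instead of conflicting with it. -->
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hbase.thirdparty.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a relocation like this, the project's own code and generated stubs compile against the relocated classes, while downstream dependencies that still expect protobuf 2.5 are unaffected.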
Hi Sree,
I put it to the proposal, please let me know what you think:
The traditional path at Apache would have been to create an incubator
> project, but the code is already being released by Apache and most of the
> developers are familiar with Apache rules and guidelines. In particular,
> the
+1 for the branch idea. Just FYI, your biggest problem is proving that
Hadoop and the downstream projects work correctly after you upgrade core
components like Protobuf.
So while branching and working on a branch is easy, merging back after you
upgrade some of these core components is insanely hard.
[
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Wei-Chiu Chuang resolved HDFS-14706.
Resolution: Fixed
Done. Reverted the commits and pushed the 08 patch to trunk, branch-3.2 and
b
[
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Wei-Chiu Chuang reopened HDFS-14706:
Reopen. For future reference, I committed the wrong patch.
I'm going to revert and re-apply th
xuzq created HDFS-14812:
---
Summary: RBF: MountTableRefresherService should load cache when
refresh
Key: HDFS-14812
URL: https://issues.apache.org/jira/browse/HDFS-14812
Project: Hadoop HDFS
Issue Type:
For more details, see
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/432/
No changes
-1 overall
The following subsystems voted -1:
asflicense compile findbugs hadolint mvnsite pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored
+1
> On Aug 24, 2019, at 10:05 AM, Wangda Tan wrote:
>
> Hi devs,
>
> This is a voting thread to move Submarine source code, documentation from
> Hadoop repo to a separate Apache Git repo. Which is based on discussions of
> https://lists.apache.org/thread.html/e49d60b2e0e021206e22bb2d430f4310019a8b29ee502
Elek, Marton created HDDS-2075:
--
Summary: Tracing in OzoneManager call is propagated with wrong
parent
Key: HDDS-2075
URL: https://issues.apache.org/jira/browse/HDDS-2075
Project: Hadoop Distributed Data Store
Elek, Marton created HDDS-2074:
--
Summary: Use annotations to define description/filter/required
filters of an InsightPoint
Key: HDDS-2074
URL: https://issues.apache.org/jira/browse/HDDS-2074
Project: Hadoop Distributed Data Store
Elek, Marton created HDDS-2073:
--
Summary: Make SCMSecurityProtocol message based
Key: HDDS-2073
URL: https://issues.apache.org/jira/browse/HDDS-2073
Project: Hadoop Distributed Data Store
Issue
Elek, Marton created HDDS-2072:
--
Summary: Make StorageContainerLocationProtocolService message based
Key: HDDS-2072
URL: https://issues.apache.org/jira/browse/HDDS-2072
Project: Hadoop Distributed Data Store
Elek, Marton created HDDS-2071:
--
Summary: Support filters in ozone insight point
Key: HDDS-2071
URL: https://issues.apache.org/jira/browse/HDDS-2071
Project: Hadoop Distributed Data Store
Issue
Elek, Marton created HDDS-2070:
--
Summary: Create insight point to debug one specific pipeline
Key: HDDS-2070
URL: https://issues.apache.org/jira/browse/HDDS-2070
Project: Hadoop Distributed Data Store
Sammi Chen created HDDS-2069:
Summary: Value of property
hdds.datanode.storage.utilization.critical.threshold and
hdds.datanode.storage.utilization.warning.threshold is not reasonable
Key: HDDS-2069
URL: https://issu
Elek, Marton created HDDS-2068:
--
Summary: Make StorageContainerDatanodeProtocolService message based
Key: HDDS-2068
URL: https://issues.apache.org/jira/browse/HDDS-2068
Project: Hadoop Distributed Data Store
Elek, Marton created HDDS-2067:
--
Summary: Create generic service facade with
tracing/metrics/logging support
Key: HDDS-2067
URL: https://issues.apache.org/jira/browse/HDDS-2067
Project: Hadoop Distributed Data Store
Elek, Marton created HDDS-2066:
--
Summary: Improve the observability inside Ozone
Key: HDDS-2066
URL: https://issues.apache.org/jira/browse/HDDS-2066
Project: Hadoop Distributed Data Store
Issue