Re: support of RCFile

2021-09-29 Thread yuan youjun
That’s exactly what we need.

> On Sep 30, 2021, at 9:58 AM, Jacques Nadeau wrote:
> 
> I actually wonder if file formats should be an extension api so someone can 
> implement a file format without any changes in Iceberg core (I don't 
> think this is possible today). Let's say one wanted to create a proprietary 
> format but use Iceberg semantics (not me). Could we make it such that one 
> could do so by building an extension and leveraging off-the-shelf Iceberg? 
> That seems the best option for something like RC file. For sure people are 
> going to have a desire to add new formats, given the pain of rewriting 
> large datasets, but I'd hate to see lots of partially implemented file formats 
> in Iceberg proper. Better for people to build against an extension api and 
> have them serve the purposes they need. Maybe go so far as having the 
> extension api only allow read, not write, so that people don't do crazy things...

Re: [DISCUSS] Spark version support strategy

2021-09-29 Thread Steven Wu
Wing, sorry, my earlier message probably misled you. I was expressing my
personal opinion on Flink version support.

On Tue, Sep 28, 2021 at 8:03 PM Wing Yew Poon 
wrote:

> Hi OpenInx,
> I'm sorry I misunderstood the thinking of the Flink community. Thanks for
> the clarification.
> - Wing Yew
>
>
> On Tue, Sep 28, 2021 at 7:15 PM OpenInx  wrote:
>
>> Hi Wing
>>
>> As we discussed above, the community prefers option 2 or
>> option 3.  So as we plan to upgrade the Flink version from
>> 1.12 to 1.13, we are doing our best to guarantee that the master Iceberg repo
>> works fine for both Flink 1.12 & Flink 1.13. For more context, please see
>> [1], [2], [3]
>>
>> [1] https://github.com/apache/iceberg/pull/3116
>> [2] https://github.com/apache/iceberg/issues/3183
>> [3]
>> https://lists.apache.org/x/thread.html/ra438e89eeec2d4623a32822e21739c8f2229505522d73d1034e34198@%3Cdev.flink.apache.org%3E
>>
>>
>> On Wed, Sep 29, 2021 at 5:27 AM Wing Yew Poon 
>> wrote:
>>
>>> In the last community sync, we spent a little time on this topic. For
>>> Spark support, there are currently two options under consideration:
>>>
>>> Option 2: Separate repo for the Spark support. Use branches for
>>> supporting different Spark versions. Main branch for the latest Spark
>>> version (3.2 to begin with).
>>> Tooling needs to be built for producing regular snapshots of core
>>> Iceberg in a consumable way for this repo. Unclear if commits to core
>>> Iceberg will be tested pre-commit against Spark support; my impression is
>>> that they will not be, and the Spark support build can be broken by changes
>>> to core.
>>>
>>> A variant of option 3 (which we will simply call Option 3 going
>>> forward): Single repo, separate module (subdirectory) for each Spark
>>> version to be supported. Code duplication in each Spark module (no attempt
>>> to refactor out common code). Each module built against the specific
>>> version of Spark to be supported, producing a runtime jar built against
>>> that version. CI will test all modules. Support can be provided for only
>>> building the modules a developer cares about.
>>>
>>> More input was sought and people are encouraged to voice their
>>> preference.
>>> I lean towards Option 3.
>>>
>>> - Wing Yew
>>>
>>> ps. In the sync, as Steven Wu wrote, the question was raised whether the same
>>> multi-version support strategy can be adopted across engines. Based on what
>>> Steven wrote, currently the Flink developer community's bandwidth makes
>>> supporting only a single Flink version (and focusing resources on
>>> developing new features on that version) the preferred choice. If so, then
>>> no multi-version support strategy for Flink is needed at this time.
>>>
>>>
>>> On Thu, Sep 23, 2021 at 5:26 PM Steven Wu  wrote:
>>>
 During the sync meeting, people talked about whether and how we can have the
 same version support model across engines like Flink and Spark. I can
 provide some input from the Flink side.

 Flink only supports two minor versions. E.g., right now Flink 1.13 is
 the latest released version. That means only Flink 1.12 and 1.13 are
 supported. Feature changes or bug fixes will only be backported to 1.12 and
 1.13, unless it is a serious bug (like security). With that context,
 personally I like option 1 (with one actively supported Flink version in
 master branch) for the iceberg-flink module.

 We discussed the idea of supporting multiple Flink versions via a shim
 layer and multiple modules. While it may be a little better to support
 multiple Flink versions, I don't know if there is enough support and
 resources from the community to pull it off. There is also the ongoing
 maintenance burden for each minor version release from Flink, which happens
 roughly every 4 months.


 On Thu, Sep 16, 2021 at 10:25 PM Peter Vary 
 wrote:

> Since you mentioned Hive, I chime in with what we do there. You might
> find it useful:
> - metastore module: only small differences; DynConstructor solves it
> for us
> - mr module: some bigger differences, but still manageable for Hive
> 2-3. Need some new classes, but most of the code is reused, plus an
> extra module for Hive 3
> - For Hive 4 we use a different repo, as we moved to the Hive codebase.
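>
> For illustration, the core of that reflective pattern looks roughly like
> this (a sketch in plain java.lang.reflect rather than the actual
> DynConstructor utility; the class and method names below are only examples):
>
>   import java.lang.reflect.Constructor;
>
>   public class MetastoreShim {
>     // Resolve the constructor at runtime so one binary can run against
>     // Hive 2 or Hive 3, whose constructor signatures differ slightly.
>     public static Object newClient(Object conf) throws Exception {
>       Class<?> client = Class.forName(
>           "org.apache.hadoop.hive.metastore.HiveMetaStoreClient");
>       for (Constructor<?> ctor : client.getConstructors()) {
>         if (ctor.getParameterCount() == 1
>             && ctor.getParameterTypes()[0].isInstance(conf)) {
>           return ctor.newInstance(conf);
>         }
>       }
>       throw new IllegalStateException("No compatible constructor found");
>     }
>   }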
>
> My thoughts based on the above experience:
> - Keeping Hive 4 and Hive 2-3 code in sync is a pain. We constantly
> have problems with backporting changes between repos, and we are lagging
> behind, which hurts both projects
> - The Hive 2-3 model is working better by forcing us to keep things in
> sync, but with the serious differences in the Hive project it still doesn't
> seem like a viable option.
>
> So I think the question is: how stable is the Spark code we are
> integrating with? If it is fairly stable then we are better off with a "one
> repo, multiple modules" approach and we should consider the multirepo

Re: support of RCFile

2021-09-29 Thread Jacques Nadeau
I actually wonder if file formats should be an extension api so someone can
implement a file format without any changes in Iceberg core (I don't
think this is possible today). Let's say one wanted to create a proprietary
format but use Iceberg semantics (not me). Could we make it such that one
could do so by building an extension and leveraging off-the-shelf Iceberg?
That seems the best option for something like RC file. For sure people
are going to have a desire to add new formats, given the pain of
rewriting large datasets, but I'd hate to see lots of partially implemented
file formats in Iceberg proper. Better for people to build against an
extension api and have them serve the purposes they need. Maybe go so far
as having the extension api only allow read, not write, so that people
don't do crazy things...
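
To make the idea concrete, here is a rough sketch of what a read-only
extension point could look like (every name below is invented for
illustration; no such interface exists in Iceberg today):

  import org.apache.iceberg.Schema;
  import org.apache.iceberg.data.Record;
  import org.apache.iceberg.io.CloseableIterable;
  import org.apache.iceberg.io.InputFile;

  // Hypothetical SPI: implementations could be discovered with
  // java.util.ServiceLoader, with no changes needed in Iceberg core.
  public interface FileFormatExtension {
    // Identifier recorded in table metadata, e.g. "rcfile".
    String formatName();

    // Deliberately read-only: a reader factory but no writer factory,
    // so extensions cannot write data files that Iceberg core cannot manage.
    CloseableIterable<Record> open(InputFile file, Schema projection);
  }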


On Wed, Sep 29, 2021 at 6:43 PM yuan youjun  wrote:

> Hi Ryan and Russell
>
> Thanks very much for your response.
>
> Well, I want the ACID and row-level update capability that Iceberg provides. I
> believe a data lake is a better way to manage our dataset than Hive.
> I also want our transition from Hive to the data lake to be as smooth as
> possible, which means:
> 1, the transition should be transparent to consumers (dashboards, data
> scientists, downstream pipelines). If we start a new Iceberg table with
> new data, then those consumers will NOT be able to query old data without
> splitting their queries into two and combining the results.
> 2, it should not impose significant infra cost. Converting historical data from
> RCFile into ORC or Parquet would be time-consuming and costly (though it's
> a one-time cost). I got your point that a new format would probably save us
> storage cost in the long term; that would be a separate interesting topic.
>
> Here is what is in my mind now:
> 1, if Iceberg supports (or will support) the legacy format, that would be ideal.
> 2, if not, is it possible for us to develop that feature (maybe in a fork)?
> 3, converting historical data into a new format should be our last resort; that
> path needs more evaluation.
>
>
> youjun

Re: support of RCFile

2021-09-29 Thread yuan youjun
Hi Ryan and Russell

Thanks very much for your response.

Well, I want the ACID and row-level update capability that Iceberg provides. I 
believe a data lake is a better way to manage our dataset than Hive.
I also want our transition from Hive to the data lake to be as smooth as possible, 
which means:
1, the transition should be transparent to consumers (dashboards, data 
scientists, downstream pipelines). If we start a new Iceberg table with new 
data, then those consumers will NOT be able to query old data without 
splitting their queries into two and combining the results (see the sketch 
after this list).
2, it should not impose significant infra cost. Converting historical data from RCFile 
into ORC or Parquet would be time-consuming and costly (though it's a one-time 
cost). I got your point that a new format would probably save us storage cost in the 
long term; that would be a separate interesting topic.
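
To be concrete about point 1: without legacy-format support, the best we
could probably offer consumers is a union view that hides the split. A rough
sketch (hypothetical table and column names, assuming an existing
SparkSession named spark):

  // One view over historical RCFile data and new Iceberg data, so that
  // dashboards keep issuing a single query against a single name.
  spark.sql(
      "CREATE OR REPLACE TEMPORARY VIEW events AS "
          + "SELECT * FROM hive_db.events_rcfile WHERE dt < '2021-10-01' "
          + "UNION ALL "
          + "SELECT * FROM iceberg_catalog.db.events WHERE dt >= '2021-10-01'");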

Here is what is in my mind now:
1, if Iceberg supports (or will support) the legacy format, that would be ideal.
2, if not, is it possible for us to develop that feature (maybe in a fork)?
3, converting historical data into a new format should be our last resort; that 
path needs more evaluation.


youjun

> On Sep 30, 2021, at 12:15 AM, Ryan Blue wrote:
> 
> Youjun, what are you trying to do?
> 
> If you have existing tables in an incompatible format, you may just want to 
> leave them as they are for historical data. It depends on why you want to use 
> Iceberg. If you want to be able to query larger ranges of that data because 
> you've clustered across files by filter columns, then you'd want to build the 
> Iceberg metadata. But if you have a lot of historical data that hasn't been 
> clustered and is unlikely to be rewritten, then keeping old tables in RCFile 
> and doing new work in Iceberg could be a better option.
> 
> You may also want to check how much savings you get out of using Iceberg with 
> Parquet files vs RCFile. If you find that you can cluster your data for 
> better queries and that ends up making your dataset considerably smaller then 
> maybe it's worth the conversion that Russell suggested. RCFile is pretty old 
> so I think there's a good chance you'd save a lot of space -- just updating 
> from an old compression codec to something more modern like snappy to lz4 or 
> gzip to zstd could be a big win.
> 
> Ryan
> -- 
> Ryan Blue
> Tabular



Re: support of RCFile

2021-09-29 Thread Ryan Blue
Youjun, what are you trying to do?

If you have existing tables in an incompatible format, you may just want to
leave them as they are for historical data. It depends on why you want to
use Iceberg. If you want to be able to query larger ranges of that data
because you've clustered across files by filter columns, then you'd want to
build the Iceberg metadata. But if you have a lot of historical data that
hasn't been clustered and is unlikely to be rewritten, then keeping old
tables in RCFile and doing new work in Iceberg could be a better option.

You may also want to check how much savings you get out of using Iceberg
with Parquet files vs RCFile. If you find that you can cluster your data
for better queries and that ends up making your dataset considerably
smaller then maybe it's worth the conversion that Russell suggested. RCFile
is pretty old so I think there's a good chance you'd save a lot of space --
just updating from an old compression codec to something more modern like
snappy to lz4 or gzip to zstd could be a big win.
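
For example, the codec is just a table property in Iceberg, so experimenting
is cheap. A sketch (assuming a SparkSession named spark and a made-up table
name; write.parquet.compression-codec is a standard Iceberg table property):

  // New Parquet data files are written with zstd from this point on;
  // existing files stay as-is until they are rewritten or compacted.
  spark.sql(
      "ALTER TABLE iceberg_catalog.db.events "
          + "SET TBLPROPERTIES ('write.parquet.compression-codec'='zstd')");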

Ryan

On Wed, Sep 29, 2021 at 8:49 AM Russell Spitzer 
wrote:

> Within Iceberg it would take a bit of effort; we would need custom readers
> at a minimum if we just wanted to make it read-only. I think the
> main complexity would be designing the specific readers for the platform
> you want to use, like "Spark" or "Flink"; the actual metadata handling and
> such would probably be pretty straightforward. I would definitely size it
> as at least a several-week project, and I'm not sure we would want to
> support it in OSS Iceberg.

-- 
Ryan Blue
Tabular


Re: support of RCFile

2021-09-29 Thread Russell Spitzer
Within Iceberg it would take a bit of effort; we would need custom readers
at a minimum if we just wanted to make it read-only. I think the
main complexity would be designing the specific readers for the platform
you want to use, like "Spark" or "Flink"; the actual metadata handling and
such would probably be pretty straightforward. I would definitely size it
as at least a several-week project, and I'm not sure we would want to
support it in OSS Iceberg.

On Wed, Sep 29, 2021 at 10:40 AM 袁尤军  wrote:

> Thanks for the suggestion. We need to evaluate the cost to convert the
> format, as those Hive tables have been there for many years, so PBs of data
> would need to be reformatted.
>
> Also, do you think it is possible to develop support for a new format?
> How costly would it be?
>
> Sent from my iPhone


Re: support of RCFile

2021-09-29 Thread 袁尤军
Thanks for the suggestion. We need to evaluate the cost to convert the format, 
as those Hive tables have been there for many years, so PBs of data would need 
to be reformatted.

Also, do you think it is possible to develop support for a new format? How 
costly would it be?

Sent from my iPhone

> On Sep 29, 2021, at 9:34 PM, Russell Spitzer wrote:
> 
> There is no plan I am aware of to use RCFiles directly in Iceberg. While we 
> could work to support other file formats, I don't think RCFile is very widely 
> used compared to ORC and Parquet (Iceberg has native support for these 
> formats).
> 
> My suggestion for conversion would be to do a CTAS statement in Spark and 
> have the table completely converted over to Parquet (or ORC). This is 
> probably the simplest way.




Re: support of RCFile

2021-09-29 Thread Russell Spitzer
There is no plan I am aware of to use RCFiles directly in Iceberg. While we 
could work to support other file formats, I don't think RCFile is very widely 
used compared to ORC and Parquet (Iceberg has native support for these formats).

My suggestion for conversion would be to do a CTAS statement in Spark and have 
the table completely converted over to Parquet (or ORC). This is probably the 
simplest way.
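
For example, a sketch assuming a SparkSession named spark, an Iceberg catalog
configured as iceberg_catalog, and made-up table names:

  // CTAS reads the RCFile-backed Hive table and writes an Iceberg table;
  // the data files are rewritten as Parquet (the default format) on the way.
  spark.sql(
      "CREATE TABLE iceberg_catalog.db.events "
          + "USING iceberg "
          + "AS SELECT * FROM hive_db.events_rcfile");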

> On Sep 29, 2021, at 7:01 AM, yuan youjun  wrote:
> 
> Hi community,
> 
> I am exploring ways to evolve existing Hive tables (RCFile) into a data lake. 
> However, I found out that Iceberg (or Hudi, Delta Lake) does not support 
> RCFile. So my questions are:
> 1, is there any plan (or is it possible) to support RCFile in the future? So 
> we can manage those existing data files without re-formatting.
> 2, if no such plan, do you have any suggestions for migrating RCFiles into 
> Iceberg?
> 
> Thanks
> Youjun



support of RCFile

2021-09-29 Thread yuan youjun
Hi community,

I am exploring ways to evolve existing Hive tables (RCFile) into a data lake. 
However, I found out that Iceberg (or Hudi, Delta Lake) does not support RCFile. 
So my questions are:
1, is there any plan (or is it possible) to support RCFile in the future? So we 
can manage those existing data files without re-formatting.
2, if no such plan, do you have any suggestions for migrating RCFiles into Iceberg?

Thanks
Youjun