Normally everything goes to `main`, but we are having some small issues with the release, so we don't want to add more elements to `main` until the release is done. That is why we decided to merge into `develop`, which is the branch we use for testing/unstable versions.
On Tue 28. Sep 2021 at 16:08, Gábor Gévay <[email protected]> wrote:
> Ohh, I was working with the `main` branch and not the `develop`
> branch. This is why I didn't find the object file reading/writing
> methods on the PlanBuilder and DataQuanta. Thank you Bertty!
>
> I would like to ask, what is the policy for the branches? E.g., how
> did you decide to merge #30 into `develop`, but #27 and #28 into
> `main`? Is it just that the code in #30 is not deemed stable enough
> yet for `main`?
>
> Best,
> Gábor
>
>
> Bertty Contreras <[email protected]> wrote (on Tue, 28 Sep 2021 at 15:54):
> >
> > In the case of Wayang you have this:
> >
> > Reader in the API:
> > https://github.com/apache/incubator-wayang/blob/develop/wayang-api/wayang-api-scala-java/src/main/scala/org/apache/wayang/api/PlanBuilder.scala#L115-L121
> >
> > Writer in the API:
> > https://github.com/apache/incubator-wayang/blob/develop/wayang-api/wayang-api-scala-java/src/main/scala/org/apache/wayang/api/DataQuanta.scala#L799-L825
> >
> > In both cases, the actual reading/writing is implemented by the platform;
> > in the case of Flink it is as follows.
> >
> > Writer part:
> > https://github.com/apache/incubator-wayang/blob/develop/wayang-platforms/wayang-flink/src/main/java/org/apache/wayang/flink/operators/FlinkObjectFileSink.java#L80-L82
> >
> > Reader part:
> > https://github.com/apache/incubator-wayang/blob/develop/wayang-platforms/wayang-flink/src/main/java/org/apache/wayang/flink/operators/FlinkObjectFileSource.java#L89-L103
> >
> > There are also implementations for Apache Spark and Java Streams.
> >
> > The code that you show at
> > https://github.com/emmalanguage/emma/blob/master/emma-flink/src/main/scala/org/emmalanguage/api/flink/FlinkOps.scala#L76-L92
> > is similar to the one implemented on the Apache Flink platform.
> >
> > But if you need to read/write in binary, it is also possible to create an
> > operator that performs that.
> >
> > We could also add more options to the operator to control its behavior
> > when working with binaries.
> >
> > Let us know what you think or if you need more explanation :D
> >
> > Best regards,
> > Bertty
> >
> >
> > On Tue, Sep 28, 2021 at 2:00 PM Gábor E. Gévay <[email protected]> wrote:
> > >
> > > Hello,
> > >
> > > I'm working on the Emma integration, and I would need to write a
> > > generic DataQuanta to a temporary file, and then read it back later.
> > > What would be the best way to do this? It's not trivial because I
> > > don't know the concrete type of the DataQuanta, i.e., I'm just working
> > > with DataQuanta[A]. (I have a ClassTag for A.)
> > >
> > > For example, the same functionality is achieved when Emma compiles to
> > > Flink by writing and reading the DataSet[A] in a binary format with
> > > the serializer that Flink has for A:
> > >
> > > https://github.com/emmalanguage/emma/blob/master/emma-flink/src/main/scala/org/emmalanguage/api/flink/FlinkOps.scala#L76-L92
> > >
> > > At first, I thought that the ObjectFileSinks would help, but now I'm
> > > not sure. The ObjectFileSinks seem to be available only at the backend
> > > level.
> > >
> > > Best,
> > > Gábor
> > >
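
As a concrete illustration of the spill-and-reload round trip discussed above, here is a minimal sketch. It assumes the object-file methods on the `develop` branch are named roughly `writeObjectFile` (on DataQuanta) and `readObjectFile` (on PlanBuilder); those names are my guess, so please check the linked PlanBuilder.scala and DataQuanta.scala for the exact signatures.

    import scala.reflect.ClassTag

    import org.apache.wayang.api.{DataQuanta, PlanBuilder}
    import org.apache.wayang.core.api.WayangContext
    import org.apache.wayang.java.Java

    object SpillExample {

      // Spill a generic DataQuanta[A] to a temporary object file and read it
      // back later; only a ClassTag for A is needed, not the concrete type.
      def spillAndReload[A: ClassTag](planBuilder: PlanBuilder,
                                      quanta: DataQuanta[A],
                                      tempUrl: String): DataQuanta[A] = {
        quanta.writeObjectFile(tempUrl)        // hypothetical writer on DataQuanta
        planBuilder.readObjectFile[A](tempUrl) // hypothetical reader on PlanBuilder
      }

      def main(args: Array[String]): Unit = {
        val wayangCtx = new WayangContext().withPlugin(Java.basicPlugin())
        val planBuilder = new PlanBuilder(wayangCtx)
        val numbers = planBuilder.loadCollection(Seq(1, 2, 3, 4))
        val reloaded = spillAndReload(planBuilder, numbers, "file:///tmp/quanta-spill")
        println(reloaded.collect())
      }
    }

The appeal of going through the object-file operators is that each platform (Flink, Spark, Java Streams) can use its own binary serialization under the hood, which seems similar to what the Emma FlinkOps code does with Flink's serializer.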
