As part of testing the RC, I ran into the following issue with a test case
that runs a job from a packaged jar on a MiniCluster. The test had to be
modified due to the client-side API changes in 1.10.

The issue is that the jar file containing the entry point isn't part of the
user classpath on the task manager. The entry point itself executes
successfully; when all user code is removed from the job graph, the test
passes.

If the jar isn't shipped to the task manager automatically, what do I need
to set to make that happen?

Thanks,
Thomas


  @ClassRule
  public static final MiniClusterResource MINI_CLUSTER_RESOURCE =
          new MiniClusterResource(
                  new MiniClusterResourceConfiguration.Builder().build());

  @Test(timeout = 30000)
  public void test() throws Exception {
    final URI restAddress =
        MINI_CLUSTER_RESOURCE.getMiniCluster().getRestAddress().get();
    Configuration config = new Configuration();
    config.setString(JobManagerOptions.ADDRESS, restAddress.getHost());
    config.setString(RestOptions.ADDRESS, restAddress.getHost());
    config.setInteger(RestOptions.PORT, restAddress.getPort());
    config.set(CoreOptions.DEFAULT_PARALLELISM, 1);
    config.setString(DeploymentOptions.TARGET, RemoteExecutor.NAME);

    String entryPoint = "my.TestFlinkJob";

    PackagedProgram.Builder program = PackagedProgram.newBuilder()
            .setJarFile(new File(JAR_PATH))
            .setEntryPointClassName(entryPoint);

    ClientUtils.executeProgram(DefaultExecutorServiceLoader.INSTANCE,
        config, program.build());
  }
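
In case it helps to pin down what I'm asking for, here is a configuration
fragment showing what I would have expected to need. The assumption is mine
and unverified: that the remote executor ships everything listed under
`PipelineOptions.JARS` ("pipeline.jars") to the task managers.

```java
// Sketch / config fragment, not verified: explicitly list the program jar
// in the configuration, assuming the remote executor ships the files in
// PipelineOptions.JARS ("pipeline.jars") to the task managers.
config.set(PipelineOptions.JARS,
        Collections.singletonList(new File(JAR_PATH).toURI().toString()));
```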

The user function deserialization error:

org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot instantiate user function.
    at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:269)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:115)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:433)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.StreamCorruptedException: unexpected block data
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1581)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2158)

With 1.9, the driver code would be:

    PackagedProgram program = new PackagedProgram(new File(JAR_PATH),
            entryPoint, new String[]{});
    RestClusterClient client = new RestClusterClient(config, "RemoteExecutor");
    client.run(program, 1);
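
For completeness, the closest 1.10 analogue of that 1.9 driver I can think
of would be building the JobGraph up front and submitting it with the REST
client. This is a sketch under assumptions I have not verified: that
`PackagedProgramUtils.createJobGraph` takes these arguments in 1.10, and
that explicitly attaching the jar via `JobGraph.addJars` puts it on the
task managers' user classpath.

```java
// Sketch, unverified: build the JobGraph from the packaged program, attach
// the program jar explicitly, then submit through the REST client.
PackagedProgram program = PackagedProgram.newBuilder()
        .setJarFile(new File(JAR_PATH))
        .setEntryPointClassName(entryPoint)
        .build();
JobGraph jobGraph = PackagedProgramUtils.createJobGraph(program, config, 1, false);
// Assumption: attaching the jar makes it part of the TM user classpath.
jobGraph.addJars(program.getJobJarAndDependencies());
RestClusterClient<StandaloneClusterId> client =
        new RestClusterClient<>(config, StandaloneClusterId.getInstance());
client.submitJob(jobGraph).get();
```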

On Fri, Jan 31, 2020 at 9:16 PM Jingsong Li <jingsongl...@gmail.com> wrote:

> Thanks Jincheng,
>
> FLINK-15840 [1] should be a blocker: it causes
> "TableEnvironment.from/scan(String path)" to fail for all temporary tables
> and catalog tables (except DataStreamTable). It can be worked around with
> "TableEnvironment.sqlQuery("select * from t")", but "from/scan" are core
> TableEnvironment APIs, and without them the pure Table API can't be used
> seriously.
>
> [1] https://issues.apache.org/jira/browse/FLINK-15840
>
> Best,
> Jingsong Lee
>
> On Sat, Feb 1, 2020 at 12:47 PM Benchao Li <libenc...@gmail.com> wrote:
>
> > Hi all,
> >
> > I also have an issue [1] that I think would be great to include in the
> > 1.10 release. The PR is already under review.
> >
> > [1] https://issues.apache.org/jira/projects/FLINK/issues/FLINK-15494
> >
> > jincheng sun <sunjincheng...@gmail.com> wrote on Sat, Feb 1, 2020 at 12:33 PM:
> >
> > > Hi folks,
> > >
> > > I found another issue related to the Blink planner: a ClassCastException
> > > is thrown when using a ConnectorDescriptor to register a source.
> > > I'm not sure whether it is a blocker. The issue is described in [1];
> > > either way, it would be better to fix it in a new RC.
> > >
> > > Best,
> > > Jincheng
> > >
> > > [1] https://issues.apache.org/jira/browse/FLINK-15840
> > >
> > >
> > >
> > > Till Rohrmann <trohrm...@apache.org> wrote on Fri, Jan 31, 2020 at 10:29 PM:
> > >
> > > > Hi everyone,
> > > >
> > > > -1, because flink-kubernetes does not have the correct NOTICE file.
> > > >
> > > > Here is the issue to track the problem [1].
> > > >
> > > > [1] https://issues.apache.org/jira/browse/FLINK-15837
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Fri, Jan 31, 2020 at 2:34 PM Xintong Song <tonysong...@gmail.com> wrote:
> > > >
> > > > > Thanks Gary & Yu,
> > > > >
> > > > > +1 (non-binding) from my side.
> > > > >
> > > > > - I'm not aware of any blockers (except unfinished documents)
> > > > > - Verified building from source (tests not skipped)
> > > > > - Verified running nightly e2e tests
> > > > > - Verified running example jobs in local/standalone/yarn setups.
> > > > > Everything seems to work fine.
> > > > > - Played around with memory configurations on local/standalone/yarn
> > > > > setups. Everything works as expected.
> > > > > - Checked the release notes, particularly the memory management part.
> > > > >
> > > > > Thank you~
> > > > >
> > > > > Xintong Song
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Jan 31, 2020 at 6:47 PM David Anderson <da...@ververica.com> wrote:
> > > > >
> > > > > > +1 (non-binding). No blockers, but I did run into a couple of things.
> > > > > >
> > > > > > I upgraded flink-training-exercises to 1.10; after minor adjustments,
> > > > > > all the tests pass.
> > > > > >
> > > > > > With Java 11 I ran into
> > > > > >
> > > > > > WARNING: An illegal reflective access operation has occurred
> > > > > > WARNING: Illegal reflective access by
> > > > > > org.apache.flink.core.memory.MemoryUtils
> > > > > > WARNING: Please consider reporting this to the maintainers of
> > > > > > org.apache.flink.core.memory.MemoryUtils
> > > > > >
> > > > > > which is the subject of https://issues.apache.org/jira/browse/FLINK-15094.
> > > > > > I suggest we add some mention of this to the release notes.
> > > > > >
> > > > > > I had been using the now-removed ExternalCatalog interface, and I
> > > > > > found the documentation for the new Catalog API insufficient for
> > > > > > figuring out how to adapt.
> > > > > >
> > > > > > David
> > > > > >
> > > > > > On Fri, Jan 31, 2020 at 10:00 AM Aljoscha Krettek <aljos...@apache.org> wrote:
> > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > > - I verified the checksum
> > > > > > > - I verified the signatures
> > > > > > > - I eyeballed the diff in pom files between 1.9 and 1.10 and
> > > > > > > checked any newly added dependencies. They are OK. If the 1.9
> > > > > > > licensing was correct, the licensing here should also be correct.
> > > > > > > - I manually installed Flink Python and ran the WordCount.py
> > > > > > > example on a (local) cluster and standalone
> > > > > > >
> > > > > > > Aljoscha
> > > > > > >
> > > > > > > On 30.01.20 10:49, Piotr Nowojski wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > Thanks for creating this RC Gary & Yu.
> > > > > > > >
> > > > > > > > +1 (non-binding) from my side
> > > > > > > >
> > > > > > > > Because of instabilities during the testing period, I’ve
> > > > > > > > manually tested some jobs (and streaming examples) on an EMR
> > > > > > > > cluster, writing to S3 using the newly unshaded/not-relocated S3
> > > > > > > > plugin. Everything seems to work fine. Also, I’m not aware of any
> > > > > > > > blocking issues for this RC.
> > > > > > > >
> > > > > > > > Piotrek
> > > > > > > >
> > > > > > > >> On 27 Jan 2020, at 22:06, Gary Yao <g...@apache.org> wrote:
> > > > > > > >>
> > > > > > > >> Hi everyone,
> > > > > > > >> Please review and vote on release candidate #1 for version
> > > > > > > >> 1.10.0, as follows:
> > > > > > > >> [ ] +1, Approve the release
> > > > > > > >> [ ] -1, Do not approve the release (please provide specific comments)
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> The complete staging area is available for your review, which includes:
> > > > > > > >> * JIRA release notes [1],
> > > > > > > >> * the official Apache source release and binary convenience
> > > > > > > >> releases to be deployed to dist.apache.org [2], which are signed
> > > > > > > >> with the key with fingerprint
> > > > > > > >> BB137807CEFBE7DD2616556710B12A1F89C115E8 [3],
> > > > > > > >> * all artifacts to be deployed to the Maven Central Repository [4],
> > > > > > > >> * source code tag "release-1.10.0-rc1" [5],
> > > > > > > >>
> > > > > > > >> The announcement blog post is in the works. I will update this
> > > > > > > >> voting thread with a link to the pull request soon.
> > > > > > > >>
> > > > > > > >> The vote will be open for at least 72 hours. It is adopted by
> > > > > > > >> majority approval, with at least 3 PMC affirmative votes.
> > > > > > > >>
> > > > > > > >> Thanks,
> > > > > > > >> Yu & Gary
> > > > > > > >>
> > > > > > > >> [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345845
> > > > > > > >> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.10.0-rc1/
> > > > > > > >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > > > > >> [4] https://repository.apache.org/content/repositories/orgapacheflink-1325
> > > > > > > >> [5] https://github.com/apache/flink/releases/tag/release-1.10.0-rc1
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> >
> > Benchao Li
> > School of Electronics Engineering and Computer Science, Peking University
> > Tel:+86-15650713730
> > Email: libenc...@gmail.com; libenc...@pku.edu.cn
> >
>
>
> --
> Best, Jingsong Lee
>
