Why does the window not fire when assignTimestampsAndWatermarks has the same parallelism as map?

2018-04-24 Thread 潘 功森
Hi all,


I use the same parallelism for map and assignTimestampsAndWatermarks, and the
window is not fired. I can see that extractTimestamp and generateWatermark are
both called and look fine, but the watermark never changes and stays at the
minimum long value.

When I changed the parallelism so that it differs from the map's, the windows fired.

I am using Flink 1.3.2.
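
The pipeline looks roughly like this (a minimal sketch; MyParser and
MyWindowFunction are placeholders, not my real code):

    // map and assignTimestampsAndWatermarks run with the same parallelism
    DataStream<Tuple2<String, Long>> parsed = source
        .map(new MyParser())  // placeholder MapFunction, f1 is the event time
        .assignTimestampsAndWatermarks(
            new BoundedOutOfOrdernessTimestampExtractor<Tuple2<String, Long>>(
                    Time.seconds(10)) {
                @Override
                public long extractTimestamp(Tuple2<String, Long> event) {
                    return event.f1;
                }
            });

    parsed.keyBy(0)
          .timeWindow(Time.minutes(1))
          .apply(new MyWindowFunction());  // placeholder window function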

Is this a Flink bug, or can someone explain why it does not fire? It has
troubled me the whole day.


Best regards,

September


Class loading issues when using Remote Execution Environment

2018-04-24 Thread kedar mhaswade
I am trying to get gradoop_demo (a Gradoop-based graph visualization app)
working on Flink with a *Remote* Execution Environment.

This app, which is based on Gradoop, submits a job to the *preconfigured*
execution environment, collects the results, and sends them to the UI for
rendering.

When the execution environment is configured to be a LocalEnvironment,
everything works fine. But when I start a cluster (using
<flink-install-path>/bin/start-cluster.sh), get the JobManager endpoint
(e.g. localhost:6123), configure a RemoteEnvironment, and use that
environment to run the job, I get exceptions [1].

Based on the class loading doc, I copied the Gradoop jars
(gradoop-flink-0.3.3-SNAPSHOT.jar, gradoop-common-0.3.3-SNAPSHOT.jar) to the
<flink-install-path>/lib folder (hoping that, that way, those classes would be
available to all the executors in the cluster). I have ensured that the
class that Flink fails to load is in fact available in the Gradoop jars
that I copied to the /lib folder.

I have also tried the RemoteEnvironment method with the jarFiles argument,
where the passed JAR file is a fat jar containing everything (in which case
there is no Gradoop JAR file in the /lib folder).

So, my questions are:
1) How can I get RemoteEnvironment to work?
2) Is there any other way of doing this *programmatically*? (That means I
can't do flink run, since I am interested in the job execution result as a
blocking call -- which means ideally I don't want to use the job submission
REST API either.) I just want RemoteEnvironment to work as well as
LocalEnvironment.
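
For reference, I create the environment roughly like this (a sketch; host,
port and jar path are placeholders):

    // createRemoteEnvironment ships the listed jars with each job submission
    ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
        "localhost",                       // JobManager host
        6123,                              // jobmanager.rpc.port
        "/path/to/gradoop-demo-fat.jar");  // fat jar with all dependencies

    // the Gradoop job is then built against 'env' and run via env.execute()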

Regards,
Kedar


[1]
2018-04-24 15:16:02,823 ERROR
org.apache.flink.runtime.jobmanager.JobManager  - Failed to
submit job 0c987c8704f8b7eb4d7d38efcb3d708d (Flink Java Job at Tue Apr 24
15:15:59 PDT 2018)
java.lang.NoClassDefFoundError: Could not initialize class
*org.gradoop.common.model.impl.id.GradoopId*
  at java.io.ObjectStreamClass.hasStaticInitializer(Native Method)
  at
java.io.ObjectStreamClass.computeDefaultSUID(ObjectStreamClass.java:1887)
  at java.io.ObjectStreamClass.access$100(ObjectStreamClass.java:79)
  at java.io.ObjectStreamClass$1.run(ObjectStreamClass.java:263)
  at java.io.ObjectStreamClass$1.run(ObjectStreamClass.java:261)
  at java.security.AccessController.doPrivileged(Native Method)
  at
java.io.ObjectStreamClass.getSerialVersionUID(ObjectStreamClass.java:260)
  at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:682)
  at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1876)
  at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1745)
  at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1710)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1550)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
  at java.util.HashSet.readObject(HashSet.java:341)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
  at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
  at
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
  at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
  at
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:290)


Testing Metrics

2018-04-24 Thread Julio Biason
Hey guys and gals,

Just wondering: does anyone have an idea how to test whether metrics are being
generated? I have an integration test, and I just added a processor to count
late-arriving elements (the general idea is to capture those, count them and
compute an average, so we can adjust the allowedLateness), but now I'm wondering
whether there is a way to integrate this into the test itself.
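
For context, the late-arrival counter is essentially this (a sketch; the Event
type and field names are made up):

    // Counts elements that arrive behind the current watermark.
    public class LateEventCounter extends ProcessFunction<Event, Event> {
        private transient Counter lateEvents;

        @Override
        public void open(Configuration parameters) {
            lateEvents = getRuntimeContext().getMetricGroup().counter("lateEvents");
        }

        @Override
        public void processElement(Event event, Context ctx, Collector<Event> out) {
            if (event.getTimestamp() < ctx.timerService().currentWatermark()) {
                lateEvents.inc();  // element is late w.r.t. the watermark
            }
            out.collect(event);
        }
    }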

Possible? Not possible? Ideas?

-- 
*Julio Biason*, Software Engineer
*AZION*  |  Deliver. Accelerate. Protect.
Office: +55 51 3083 8101   |  Mobile: +55 51
*99907 0554*


elasticsearch5: java.lang.NoClassDefFoundError on mesos

2018-04-24 Thread miki haiat
Hi,

I'm having a weird issue when running a streaming job that writes to ELK.

The job starts fine, but after a few hours I get the exception below and
the TM/JM crashes.

This is the config for the Elasticsearch sink. Could the 1-second flush
interval cause the deadlock?

config.put("bulk.flush.max.actions", "20");
config.put("bulk.flush.interval.ms", "1000");






>
> Exception in thread
> "elasticsearch[_client_][transport_client_worker][T#3]"
> java.lang.NoClassDefFoundError:
> org/apache/flink/streaming/connectors/elasticsearch5/shaded/org/jboss/netty/util/internal/ByteBufferUtil
> at
> org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.SocketSendBufferPool.releaseExternalResources(SocketSendBufferPool.java:380)
> at
> org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
> at
> org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> at
> org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> at
> org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)


Trigger state clear

2018-04-24 Thread miki haiat
Hi,
I have an issue, possibly a memory issue, that is causing the task manager to
crash.

Full code:
https://gist.github.com/miko-code/6d7010505c3cb95be122364b29057237

I defined FIRE_AND_PURGE on every element and also an evictor, so the state
should be very small...
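
The relevant part of the trigger is essentially this (a simplified sketch, not
the exact gist code):

    // Fires and purges the window content on every element.
    public class FireAndPurgeTrigger<T, W extends Window> extends Trigger<T, W> {
        @Override
        public TriggerResult onElement(T element, long timestamp, W window,
                                       TriggerContext ctx) {
            return TriggerResult.FIRE_AND_PURGE;  // emit and drop the window state
        }
        @Override
        public TriggerResult onProcessingTime(long time, W window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }
        @Override
        public TriggerResult onEventTime(long time, W window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }
        @Override
        public void clear(W window, TriggerContext ctx) {}
    }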

Any suggestions on how to figure out this issue?

Thanks,


Miki


Re: Flink State monitoring

2018-04-24 Thread Juho Autio
Anything to add? Is there a Jira ticket for this yet?

On Fri, Apr 20, 2018 at 1:03 PM, Stefan Richter  wrote:

> If estimates are good enough, we should be able to expose something. Would
> still like to double check the guarantees to see if the estimates of
> RocksDB are helpful or could be misleading.
>
>
> On 20.04.2018 at 11:59, Juho Autio wrote:
>
> Thanks. At least for us it doesn't matter how exact the number is. I would
> expect most users to be only interested in monitoring if the total state
> size keeps growing (rapidly), or remains about the same. I suppose all of
> the options that you suggested would satisfy this need?
>
> On Fri, Apr 20, 2018 at 12:53 PM, Stefan Richter <
> s.rich...@data-artisans.com> wrote:
>
>> Hi,
>>
>> for incremental checkpoints, it is only showing the size of the deltas.
>> It would probably also be possible to report the full size, but the current
>> reporting and UI is only supporting to deliver a single value. In general,
>> some things are rather hard to report. For example, for the heap based
>> backend, is the state size the size of the serialized data or the size of
>> the heap objects?
>> Another remark about key count: the key count is easy to determine for
>> the heap based backend, but there is no (efficient) method in RocksDB that
>> gives the exact key count (because of the way RocksDB works). In this case,
>> afaik, we have the (inefficient) option to iterate over all keys and count, or
>> use the (efficient) estimated key count that is supported by RocksDB.
>>
>> Best,
>> Stefan
>>
>>
>> On 04.01.2018 at 19:23, Steven Wu wrote:
>>
>> Aljoscha/Stefan,
>>
>> if incremental checkpoint is enabled, I assume the "checkpoint size" is
>> only the delta/incremental size (not the full state size), right?
>>
>> Thanks,
>> Steven
>>
>>
>> On Thu, Jan 4, 2018 at 5:18 AM, Aljoscha Krettek 
>> wrote:
>>
>>> Hi,
>>>
>>> I'm afraid there are currently no metrics around state. I see that it's
>>> very good to have, so I'm putting it on my list of stuff that we should have
>>> at some point.
>>>
>>> One thing that comes to mind is checking the size of checkpoints, which
>>> gives you an indirect way of figuring out how big state is but that's not
>>> very exact, i.e. doesn't give you "number of keys" or some such.
>>>
>>> Best,
>>> Aljoscha
>>>
>>> > On 20. Dec 2017, at 08:09, Netzer, Liron 
>>> wrote:
>>> >
>>> > Ufuk, Thanks for replying !
>>> >
>>> > Aljoscha, can you please assist with the questions below?
>>> >
>>> > Thanks,
>>> > Liron
>>> >
>>> > -Original Message-
>>> > From: Ufuk Celebi [mailto:u...@apache.org]
>>> > Sent: Friday, December 15, 2017 3:06 PM
>>> > To: Netzer, Liron [ICG-IT]
>>> > Cc: user@flink.apache.org
>>> > Subject: Re: Flink State monitoring
>>> >
>>> > Hey Liron,
>>> >
>>> > unfortunately, there are no built-in metrics related to state. In
>>> general, exposing the actual values as metrics is problematic, but exposing
>>> summary statistics would be a good idea. I'm not aware of a good work
>>> around at the moment that would work in the general case (taking into
>>> account state restore, etc.).
>>> >
>>> > Let me pull in Aljoscha (cc'd) who knows the state backend internals
>>> well.
>>> >
>>> > @Aljoscha:
>>> > 1) Are there any plans to expose keyed state related metrics (like
>>> number of keys)?
>>> > 2) Is there a way to work around the lack of these metrics in 1.3?
>>> >
>>> > – Ufuk
>>> >
>>> > On Thu, Dec 14, 2017 at 10:55 AM, Netzer, Liron 
>>> wrote:
>>> >> Hi group,
>>> >>
>>> >>
>>> >>
>>> >> We are using Flink keyed state in several operators.
>>> >>
>>> >> Is there an easy way to expose the data that is stored in the state,
>>> >> i.e. the key and the values?
>>> >>
>>> >> This is needed for both monitoring as well as debugging. We would like
>>> >> to understand how many key+values are stored in each state and also to
>>> >> view the data itself.
>>> >>
>>> >> I know that there is the "Queryable state" option, but this is still
>>> >> in Beta, and doesn't really give us what we want easily.
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> We are using Flink 1.3.2 with Java.
>>> >>
>>> >>
>>> >>
>>> >> Thanks,
>>> >>
>>> >> Liron
>>>
>>>
>>
>>
>
>


Flink + HDInsight Cluster Deployment

2018-04-24 Thread m@xi
Hello everyone! 

My task is to install Flink on an HDInsight cluster in Azure. More 
specifically, I have installed a Kafka cluster (with preconfigured Kafka) as 
I would like to also combine Flink and Kafka. 

Unfortunately, Azure does not provide a preconfigured cluster for Flink, so I
have to install it myself. First, I would like to run my algorithm in Flink only,
and as a further step I would like to combine it with Kafka.

If someone has done this before, please write down the key components or
difficulties you may have encountered. I am not that good at administrative
tasks such as cluster deployment, so this whole procedure is a heavy
burden!

Thanks in advance. 

Best, 
Max 




--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/


Re: Use gradle with flink

2018-04-24 Thread Fabian Hueske
You can certainly setup and build Flink applications with Gradle.

However, the bad news is that the Flink project does not provide a
pre-configured Gradle project/configuration yet.
The good news is that the community is working on it [1], and there's already
a PR [2] (opened 19 hours ago).

Btw. besides Maven, there's also an SBT configuration for Flink Scala
projects [3].

Best,
Fabian

[1] https://issues.apache.org/jira/browse/FLINK-9222
[2] https://github.com/apache/flink/pull/5900/files
[3]
https://ci.apache.org/projects/flink/flink-docs-release-1.4/quickstart/scala_api_quickstart.html

2018-04-24 12:46 GMT+02:00 Ted Yu :

> Currently only maven build is supported.
>
>  Original message 
> From: Georgi Stoyanov 
> Date: 4/24/18 2:17 AM (GMT-08:00)
> To: user@flink.apache.org
> Subject: Use gradle with flink
>
> Hi guys,
> I’m wondering, is it possible to set up my Java Flink application with
> Gradle? I’m confused because everywhere in StackOverflow/the documentation
> only Maven is used :/
>
> Kind Regards,
> Georgi Stoyanov
>
>
>


Re: Unsure how to further debug - operator threads stuck on java.lang.Thread.State: WAITING

2018-04-24 Thread Nico Kruber
Hi James,
it is unlikely that your issue is the same as the one Miguel is having.
His one https://issues.apache.org/jira/browse/FLINK-9242 is probably the
same as https://issues.apache.org/jira/browse/FLINK-9144 and happens
only in batch programs spilling data in Flink 1.5 and 1.6 versions
before last Friday.

From the information you provided, I suppose you are running a streaming
job on Flink 1.4, is that right? Your example looks like a simpler setup: can
you try to minimise it so that you can share the code and we can have a
look?


Regards
Nico

On 18/04/18 01:59, James Yu wrote:
> Miguel, my colleague and I ran into the same problem yesterday.
> We were expecting Flink to get 4 inputs from Kafka and write the inputs
> to Cassandra, but the operators got stuck after the 1st input was written
> into Cassandra.
> This is how DAG looks like:
> Source: Custom Source -> Map -> (Sink: Unnamed, Sink: Cassandra Sink)
> After we disable the auto chaining
> (https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/#task-chaining-and-resource-groups),
> all 4 inputs are read from Kafka and written into Cassandra.
> We are still figuring out why the chaining causes the blocking.
> 
> 
> This is a UTF-8 formatted mail
> ---
> James C.-C.Yu
> +886988713275
> 
> 2018-04-18 6:57 GMT+08:00 Miguel Coimbra  >:
> 
> Chesnay, following your suggestions I got access to the web
> interface and also took a closer look at the debugging logs.
> I have noticed one problem regarding the web interface port - it
> keeps changing port now and then during my Java program's execution.
> 
> Not sure if that is due to my program launching several job
> executions sequentially, but the fact is that it happened.
> Since I am accessing the web interface via tunneling, it becomes
> rather cumbersome to keep adapting it.
> 
> Another particular problem I'm noticing is that this exception
> frequently pops up (debugging with log4j):
> 
> 00:17:54,368 DEBUG
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool  -
> Releasing slot with slot request id 9055ef473251505dac04c99727106dc9.
> org.apache.flink.util.FlinkException: Slot is being returned to the
> SlotPool.
>     at
> 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool$ProviderAndOwner.returnAllocatedSlot(SlotPool.java:1521)
>     at
> 
> org.apache.flink.runtime.jobmaster.slotpool.SingleLogicalSlot.lambda$releaseSlot$0(SingleLogicalSlot.java:130)
>     at
> 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
>     at
> 
> java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:834)
>     at
> java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2155)
>     at
> 
> org.apache.flink.runtime.jobmaster.slotpool.SingleLogicalSlot.releaseSlot(SingleLogicalSlot.java:130)
>     at
> 
> org.apache.flink.runtime.executiongraph.Execution.releaseAssignedResource(Execution.java:1239)
>     at
> 
> org.apache.flink.runtime.executiongraph.Execution.markFinished(Execution.java:946)
>     at
> 
> org.apache.flink.runtime.executiongraph.ExecutionGraph.updateState(ExecutionGraph.java:1588)
>     at
> 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:593)
>     at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>     at
> 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at
> 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
>     at
> 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
>     at
> 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleMessage(FencedAkkaRpcActor.java:66)
>     at
> 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
>     at
> akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
>     at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>     at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>     at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>     at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at
> 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)

Re: Use gradle with flink

2018-04-24 Thread Ted Yu
Currently only maven build is supported.
 Original message From: Georgi Stoyanov  
Date: 4/24/18  2:17 AM  (GMT-08:00) To: user@flink.apache.org Subject: Use 
gradle with flink 




Hi guys,
I’m wondering, is it possible to set up my Java Flink application with Gradle?
I’m confused because everywhere in StackOverflow/the documentation only Maven
is used :/




Kind Regards,
Georgi Stoyanov










Re: KafkaJsonTableSource purpose

2018-04-24 Thread Fabian Hueske
Hi Sebastien,

I think you can do that with Flink's Table API / SQL and the
KafkaJsonTableSource.
Note that in Flink 1.4.2, the KafkaJsonTableSource does not support nested
JSON yet.
You'd also need a table-valued UDF (a TableFunction) for parsing the message and
joining the result with the original row. Depending on what you want to do,
you might need additional UDFs.
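
Such a parsing function could look roughly like this (a minimal sketch; the
regex, field names and registration call are made up for illustration):

    // Table function that extracts (action, service_id) from the raw message.
    public class ParseMessage extends TableFunction<Tuple2<String, String>> {
        private static final Pattern PATTERN =
            Pattern.compile("(\\w+) service_id: ([\\w-]+);.*");

        public void eval(String message) {
            Matcher m = PATTERN.matcher(message);
            if (m.matches()) {
                collect(Tuple2.of(m.group(1), m.group(2)));
            }
        }
    }

    // registered and applied with something like:
    // tableEnv.registerFunction("parseMsg", new ParseMessage());
    // table.join("parseMsg(message) as (action, service_id)");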

Best,
Fabian

2018-04-24 8:48 GMT+02:00 miki haiat :

> Hi,
> Assuming that you're looking for a streaming use case, I think this is a
> better approach:
>
>    1. Send Avro from Logstash (better performance).
>    2. Deserialize it to a POJO.
>    3. Do the logic...
>
>
>
>
> On Mon, Apr 23, 2018 at 4:03 PM, Lehuede sebastien 
> wrote:
>
>> Hi Guys,
>>
>> I'm actually trying to understand the purpose of Table and in particular
>> KafkaJsonTableSource. I am trying to see whether it can be useful for my use case.
>>
>> Here is my context :
>>
>> I send logs to Logstash, where I add some information (type, tags); Logstash
>> sends the logs to Kafka in JSON format, and finally I use flink-connector-kafka
>> to read from Kafka and parse the logs.
>>
>>
>> Before any processing events from Kafka to Flink look like this :
>>
>> *{"message":"accept service_id: domain-udp; src: 1.1.1.1; dst: 2.2.2.2;
>> proto: udp; product: VPN-1 & FireWall-1; service: 53; s_port:
>> 32769","@timestamp":"2018-04-20T14:47:35.285Z","host":"FW"}*
>>
>> Then i use "JSONDeserializationSchema" to deserialize events :
>>
>> *FlinkKafkaConsumer011 kafkaConsumer = new
>> FlinkKafkaConsumer011<>("Firewall",new
>> JSONDeserializationSchema(),properties);*
>>
>> I take the value of the key "message" :
>>
>> *public String map(ObjectNode value) throws Exception {*
>> *String message =
>> value.get("message").asText();*
>>
>> Then I parse it with a Java regex and put each match group into a
>> String/int/...:
>>
>> action : accept
>> service_id : domain-udp
>> src_ip : 1.1.1.1
>> dst_ip : 2.2.2.2
>> .
>>
>> Now I want to replace the "message" key with "rawMessage" and put each match
>> group in a JSON object to obtain the final result:
>>
>> *{"rawMessage":"accept service_id: domain-udp; src: 1.1.1.1; dst:
>> 2.2.2.2; proto: udp; product: VPN-1 & FireWall-1; service: 53; s_port:
>> 32769",*
>> *"@timestamp":"2018-04-20T14:47:35.285Z",*
>> *"host":"FW",*
>> *"type":"firewall",*
>> *"tags":["Checkpoint"],*
>> *"action":"accept",*
>> *"service_id":"domain-udp",*
>> *"src_ip":"1.1.1.1",*
>> *"dst_ip":"2.2.2.2",*
>> *...}*
>>
>> I'm a newbie with stream processing technologies and with Flink, and for
>> the moment I am still discovering how it works and what the different
>> functionalities are. But when I was looking for a solution to obtain my final
>> result, I came across KafkaJsonTableSource.
>>
>> Does anyone think this can be a good solution for my use case ?
>>
>> I think I will be able to store the JSON from Kafka, process the data, then
>> modify the table and send the data to another Kafka topic. Is that correct?
>>
>> Regards,
>> Sebastien
>>
>>
>>
>


Re: data enrichment with SQL use case

2018-04-24 Thread Fabian Hueske
Hi Alex,

An operator that has to join two input streams obviously requires two
inputs. In case of an enrichment join, the operator should first read the
meta-data stream and build up a data structure as state against which the
other input is joined. If the meta data is (infrequently) updated, these
updates should be integrated into the state.

The problem is that it is currently not possible to implement such an
operator with Flink because operators cannot decide from which input to
read, i.e., they have to process whatever data is given to them.
Hence, it is not possible to build up a data structure from the meta data
stream before consuming the other stream.

There are a few workarounds that work in special cases.
1) The meta data is rather small and never updated. You put the meta data
as a file into a (distributed) file system and read it from each function
instance when it is initialized, i.e., in open(), and put it into a hash map.
Each function instance will hold the complete meta data in memory (on the
heap). Since the meta data is broadcasted, the other stream does not need
to be partitioned to join against the meta data in the hash map. You can
implement this function as a FlatMapFunction or ProcessFunction (see the
sketch below).
2) The meta data is too large and/or is updated. In this case, you need a
function with two inputs. Both inputs are keyed (keyBy()) on a join
attribute. Since you cannot hold back the non-meta data stream, you need to
buffer it in (keyed) state until you've read the meta data stream up to a
point when you can start processing the other stream. If the meta data is
updated at some point, you can just add the new data to the state. The
benefit of this approach is that the state is shared across all operators
and can be updated. However, you might need to initially buffer quite a bit
of data in state if the non-meta data stream has a high volume.
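
As an illustration of approach 1), a minimal sketch (the file format, the
Event/EnrichedEvent types and the join key are made up):

    // Loads static meta data in open() and joins against it per record.
    public class EnrichFunction extends RichFlatMapFunction<Event, EnrichedEvent> {
        private transient Map<String, String> metaData;

        @Override
        public void open(Configuration parameters) throws Exception {
            metaData = new HashMap<>();
            // read the meta data file (e.g., from a distributed file system)
            try (BufferedReader reader = new BufferedReader(
                    new FileReader("/path/to/metadata.csv"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split(",");
                    metaData.put(parts[0], parts[1]);  // key -> meta attribute
                }
            }
        }

        @Override
        public void flatMap(Event event, Collector<EnrichedEvent> out) {
            String meta = metaData.get(event.getKey());
            if (meta != null) {
                out.collect(new EnrichedEvent(event, meta));
            }
        }
    }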

Hope that one of these approaches works for your use case.

Best, Fabian

2018-04-23 13:29 GMT+02:00 Alexander Smirnov :

> Hi Fabian,
>
> please share the workarounds, that must be helpful for my case as well
>
> Thank you,
> Alex
>
> On Mon, Apr 23, 2018 at 2:14 PM Fabian Hueske  wrote:
>
>> Hi Miki,
>>
>> Sorry for the late response.
>> There are basically two ways to implement an enrichment join as in your
>> use case.
>>
>> 1) Keep the meta data in the database and implement a job that reads the
>> stream from Kafka and queries the database in an ASyncIO operator for every
>> stream record. This should be the easier implementation but it will send
>> one query to the DB for each streamed record.
>> 2) Replicate the meta data into Flink state and join the streamed records
>> with the state. This solution is more complex because you need propagate
>> updates of the meta data (if there are any) into the Flink state. At the
>> moment, Flink lacks a few features to have a good implementation of this
>> approach, but there a some workarounds that help in certain cases.
>>
>> Note that Flink's SQL support does not add advantages for the either of
>> both approaches. You should use the DataStream API (and possible
>> ProcessFunctions).
>>
>> I'd go for the first approach if one query per record is feasible.
>> Let me know if you need to tackle the second approach and I can give some
>> details on the workarounds I mentioned.
>>
>> Best, Fabian
>>
>> 2018-04-16 20:38 GMT+02:00 Ken Krugler :
>>
>>> Hi Miki,
>>>
>>> I haven’t tried mixing AsyncFunctions with SQL queries.
>>>
>>> Normally I’d create a regular DataStream workflow that first reads from
>>> Kafka, then has an AsyncFunction to read from the SQL database.
>>>
>>> If there are often duplicate keys in the Kafka-based stream, you could
>>> keyBy(key) before the AsyncFunction, and then cache the result of the SQL
>>> query.
>>>
>>> — Ken
>>>
>>> On Apr 16, 2018, at 11:19 AM, miki haiat  wrote:
>>>
>>> Hi, thanks for the reply. I will try to break your reply down into the flow
>>> execution order.
>>>
>>> The first data stream will use AsyncIO and select from the table,
>>> the second stream will be Kafka, and then I can join the streams and map them?
>>>
>>> If that is the case, then I will select from the table only once, on load?
>>> How can I make sure that my stream table is "fresh"?
>>>
>>> I'm thinking to myself, is there a way to use the Flink state backend (RocksDB)
>>> and create a read/write-through mechanism?
>>>
>>> Thanks
>>>
>>> miki
>>>
>>>
>>>
>>> On Mon, Apr 16, 2018 at 2:45 AM, Ken Krugler <
>>> kkrugler_li...@transpac.com> wrote:
>>>
 If the SQL data is all (or mostly all) needed to join against the data
 from Kafka, then I might try a regular join.

 Otherwise it sounds like you want to use an AsyncFunction to do ad hoc
 queries (in parallel) against your SQL DB.

 https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/asyncio.html

 — Ken


 On Apr 15, 2018, at 12:15 PM, miki haiat  wrote:

 Hi,

 I have a case of meta data

Re: Checkpointing barriers

2018-04-24 Thread Fabian Hueske
Hi Alex,

That's correct. The n refers to the n-th checkpoint. The checkpoint ID is
important, because operators need to align the barriers to ensure that they
consumed all inputs up to the point, where the barriers were injected into
the stream.
Each operator checkpoints its own state. For sources, this could be the
reading offset in a Kafka topic, or path and the byte offset in a file, etc.
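
For completeness, barrier injection is controlled by the checkpoint interval
(a minimal sketch; the values are examples, not recommendations):

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(60_000L);  // inject one set of barriers per minute
    env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);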

Cheers, Fabian

2018-04-24 10:47 GMT+02:00 Alexander Smirnov :

> OK, I got it. Barrier-n is an indicator of the n-th checkpoint.
>
> My first impression was that barriers carry offset information, but
> that was wrong.
>
> Thanks for unblocking ;-)
>
> Alex
>


Use gradle with flink

2018-04-24 Thread Georgi Stoyanov
Hi guys,
I’m wondering, is it possible to set up my Java Flink application with Gradle?
I’m confused because everywhere in StackOverflow/the documentation only Maven
is used :/

Kind Regards,
Georgi Stoyanov




Install Flink on Microsoft Azure HDInsight

2018-04-24 Thread m@xi
Hello everyone!

My task is to install Flink on an HDInsight cluster in Azure. More
specifically, I have installed a Kafka cluster (with preconfigured Kafka) as
I would like to also combine Flink and Kafka.

Unfortunately, Azure does not provide a preconfigured cluster for Flink, so I
have to install it myself. First, I would like to run my algorithm in Flink only,
and as a further step I would like to combine it with Kafka.

If someone has done this before, please write down the key components or
difficulties you may have encountered. I am not that good at administrative
tasks such as cluster deployment, so this whole procedure is a heavy
burden!

Thanks in advance.

Best,
Max



--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/


Re: Checkpointing barriers

2018-04-24 Thread Alexander Smirnov
OK, I got it. Barrier-n is an indicator of the n-th checkpoint.

My first impression was that barriers carry offset information, but
that was wrong.

Thanks for unblocking ;-)

Alex


Re: How to run flip-6 on mesos

2018-04-24 Thread miki haiat
It's 1.4.2...
Any approximate date for the 1.5 release?

Thanks a lot for your help.



On Tue, Apr 24, 2018 at 10:39 AM, Gary Yao  wrote:

> Hi Miki,
>
> The stacktrace you posted looks familiar [1]. We have fixed the issue in
> Flink
> 1.5. What is the Flink version you are using? FLIP-6 before Flink 1.5 is
> very
> experimental, and I doubt that it is in a usable state. Since 1.5 is not
> out
> yet, you can either compile the release branch yourself, or use the RC1
> binaries [2]. If you are already on 1.5, please open a ticket.
>
> Best,
> Gary
>
> [1] https://issues.apache.org/jira/browse/FLINK-8176
> [2] http://people.apache.org/~trohrmann/flink-1.5.0-rc1/
>
>
> On Tue, Apr 24, 2018 at 9:27 AM, miki haiat  wrote:
>
>> The problem is that the Web UI hasn't started at all.
>> I'm using the same config file that I used for non-FLIP-6; is that OK?
>> Also I got this error in the logs:
>>
>>
>> 2018-04-24 10:16:05,466 ERROR 
>> org.apache.flink.runtime.dispatcher.StandaloneDispatcher
>>> - Could not recover the job graph for 4ac6ed0270bf6836941285ffcb9eb9
>>> c6.
>>> java.lang.IllegalStateException: Not running. Forgot to call start()?
>>> at org.apache.flink.util.Preconditions.checkState(Preconditions
>>> .java:195)
>>> at org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGra
>>> phStore.verifyIsRunning(ZooKeeperSubmittedJobGraphStore.java:411)
>>> at org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGra
>>> phStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:167)
>>> at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$recove
>>> rJobs$3(Dispatcher.java:445)
>>> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
>>> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.
>>> exec(AbstractDispatcher.scala:415)
>>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.j
>>> ava:260)
>>> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(For
>>> kJoinPool.java:1339)
>>> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPoo
>>> l.java:1979)
>>> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinW
>>> orkerThread.java:107)
>>> 2018-04-24 10:16:05,469 ERROR 
>>> org.apache.flink.runtime.dispatcher.StandaloneDispatcher
>>> - Could not recover the job graph for 700f37540fe95787510dfa2bc0cc5a
>>> c3.
>>> java.lang.IllegalStateException: Not running. Forgot to call start()?
>>> at org.apache.flink.util.Preconditions.checkState(Preconditions
>>> .java:195)
>>> at org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGra
>>> phStore.verifyIsRunning(ZooKeeperSubmittedJobGraphStore.java:411)
>>> at org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGra
>>> phStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:167)
>>> at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$recove
>>> rJobs$3(Dispatcher.java:445)
>>> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
>>> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.
>>> exec(AbstractDispatcher.scala:415)
>>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.j
>>> ava:260)
>>> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(For
>>> kJoinPool.java:1339)
>>> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPoo
>>> l.java:1979)
>>> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinW
>>> orkerThread.java:107)
>>
>>
>>
>> On Tue, Apr 24, 2018 at 10:06 AM, Gary Yao 
>> wrote:
>>
>>> Hi Miki,
>>>
>>> IIRC the port on which the Web UI is listening is not allocated
>>> dynamically when
>>> deploying on Mesos, and should be 8081 by default (you can override the
>>> default
>>> by setting rest.port in flink-conf.yaml). If you can find out the
>>> hostname/IP of
>>> the JobManager, you can submit as usual via the Web UI. Alternatively
>>> you can
>>> use the CLI, e.g.,
>>>
>>>   bin/flink run -m hostname:6123 examples/streaming/WordCount.jar
>>>
>>> where 6123 is the jobmanager.rpc.port.
>>>
>>> Let me know if any of these work for you
>>>
>>> Best,
>>> Gary
>>>
>>>
>>> On Tue, Apr 24, 2018 at 8:55 AM, miki haiat  wrote:
>>>
 No :) ...
 I usually use the web UI.
 Can you refer me to some example of how to submit a job?
 Using REST? To which port?

 thanks,

 miki

 On Tue, Apr 24, 2018 at 9:42 AM, Gary Yao 
 wrote:

> Hi Miki,
>
> Did you try to submit a job? With the introduction of FLIP-6,
> resources are
> allocated dynamically.
>
> Best,
> Gary
>
>
> On Tue, Apr 24, 2018 at 8:31 AM, miki haiat 
> wrote:
>
>>
>> Hi,
>> I'm trying to run FLIP-6 on Mesos, but it's not clear to me what the
>> correct way to do it is.
>>
>> I ran the session script and I can see that a new framework has been
>> created in Mesos

Re: How to run flip-6 on mesos

2018-04-24 Thread Gary Yao
Hi Miki,

The stacktrace you posted looks familiar [1]. We have fixed the issue in
Flink
1.5. What is the Flink version you are using? FLIP-6 before Flink 1.5 is
very
experimental, and I doubt that it is in a usable state. Since 1.5 is not out
yet, you can either compile the release branch yourself, or use the RC1
binaries [2]. If you are already on 1.5, please open a ticket.

Best,
Gary

[1] https://issues.apache.org/jira/browse/FLINK-8176
[2] http://people.apache.org/~trohrmann/flink-1.5.0-rc1/

On Tue, Apr 24, 2018 at 9:27 AM, miki haiat  wrote:

> The problem is that the Web UI hasn't started at all.
> I'm using the same config file that I used for non-FLIP-6; is that OK?
> Also I got this error in the logs:
>
>
> 2018-04-24 10:16:05,466 ERROR 
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher
>> - Could not recover the job graph for 4ac6ed0270bf6836941285ffcb9eb9
>> c6.
>> java.lang.IllegalStateException: Not running. Forgot to call start()?
>> at org.apache.flink.util.Preconditions.checkState(Preconditions
>> .java:195)
>> at org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGra
>> phStore.verifyIsRunning(ZooKeeperSubmittedJobGraphStore.java:411)
>> at org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGra
>> phStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:167)
>> at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$recove
>> rJobs$3(Dispatcher.java:445)
>> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
>> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.
>> exec(AbstractDispatcher.scala:415)
>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.
>> java:260)
>> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(
>> ForkJoinPool.java:1339)
>> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPoo
>> l.java:1979)
>> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinW
>> orkerThread.java:107)
>> 2018-04-24 10:16:05,469 ERROR 
>> org.apache.flink.runtime.dispatcher.StandaloneDispatcher
>> - Could not recover the job graph for 700f37540fe95787510dfa2bc0cc5a
>> c3.
>> java.lang.IllegalStateException: Not running. Forgot to call start()?
>> at org.apache.flink.util.Preconditions.checkState(Preconditions
>> .java:195)
>> at org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGra
>> phStore.verifyIsRunning(ZooKeeperSubmittedJobGraphStore.java:411)
>> at org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGra
>> phStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:167)
>> at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$recove
>> rJobs$3(Dispatcher.java:445)
>> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
>> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.
>> exec(AbstractDispatcher.scala:415)
>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.
>> java:260)
>> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(
>> ForkJoinPool.java:1339)
>> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPoo
>> l.java:1979)
>> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinW
>> orkerThread.java:107)
>
>
>
> On Tue, Apr 24, 2018 at 10:06 AM, Gary Yao  wrote:
>
>> Hi Miki,
>>
>> IIRC the port on which the Web UI is listening is not allocated
>> dynamically when
>> deploying on Mesos, and should be 8081 by default (you can override the
>> default
>> by setting rest.port in flink-conf.yaml). If you can find out the
>> hostname/IP of
>> the JobManager, you can submit as usual via the Web UI. Alternatively you
>> can
>> use the CLI, e.g.,
>>
>>   bin/flink run -m hostname:6123 examples/streaming/WordCount.jar
>>
>> where 6123 is the jobmanager.rpc.port.
>>
>> Let me know if any of these work for you
>>
>> Best,
>> Gary
>>
>>
>> On Tue, Apr 24, 2018 at 8:55 AM, miki haiat  wrote:
>>
>>> No :) ...
>>> I usually use the web UI.
>>> Can you refer me to some example of how to submit a job?
>>> Using REST? To which port?
>>>
>>> thanks,
>>>
>>> miki
>>>
>>> On Tue, Apr 24, 2018 at 9:42 AM, Gary Yao 
>>> wrote:
>>>
 Hi Miki,

 Did you try to submit a job? With the introduction of FLIP-6, resources
 are
 allocated dynamically.

 Best,
 Gary


 On Tue, Apr 24, 2018 at 8:31 AM, miki haiat  wrote:

>
> Hi,
> I'm trying to run FLIP-6 on Mesos, but it's not clear to me what the
> correct way to do it is.
>
> I ran the session script and I can see that a new framework has been
> created in Mesos, but the task manager hasn't been created;
> running taskmanager-flip6.sh throws a null pointer...
>
> What is the correct way to run FLIP-6?
>
>
> thanks,
>
> miki
>


>>>
>>
>


Re: Testing on Flink 1.5

2018-04-24 Thread Gary Yao
Hi Amit,

web.timeout should only affect RPC calls originating from the REST API. In
FLIP-6, the submission of the job graph happens via HTTP. The value under
akka.ask.timeout is still used as the default timeout for RPC calls [1][2].
Since you also had custom heartbeat settings, you should consider setting
heartbeat.interval and heartbeat.timeout when using FLIP-6 mode [3]. AFAIK
Akka's DeathWatch is not used anymore to detect TaskManager failures, hence
akka.watch.heartbeat.interval should have no effect.
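
For example, in flink-conf.yaml (example values; 10000 and 50000 are the
defaults for the heartbeat options):

    heartbeat.interval: 10000   # ms between heartbeat requests
    heartbeat.timeout: 50000    # ms until a missing heartbeat marks the TM as lost
    akka.ask.timeout: 120 s     # default timeout for RPC calls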

Best,
Gary

[1]
https://github.com/apache/flink/blob/fb254763c00df5d336c6defa1ae960e32c97b2ae/flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/ClusterEntrypoint.java#L389
[2]
https://github.com/apache/flink/blob/fb254763c00df5d336c6defa1ae960e32c97b2ae/flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala#L606
[3]
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#heartbeat-manager

On Fri, Apr 20, 2018 at 12:22 PM, Amit Jain  wrote:

> Hi Gary,
>
> This setting has resolved the issue. Does it increase the timeout for all
> RPCs or only for specific components?
>
> We had the following settings in Flink 1.3.2 and they did the job for us.
>
> akka.watch.heartbeat.pause: 600 s
> akka.client.timeout: 5 min
> akka.ask.timeout: 120 s
>
>
> --
> Thanks,
> Amit
>


Re: How to run flip-6 on mesos

2018-04-24 Thread miki haiat
The problem is that the Web UI hasn't started at all.
I'm using the same config file that I used for non-FLIP-6; is that OK?
Also I got this error in the logs:


2018-04-24 10:16:05,466 ERROR
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher  - Could not
> recover the job graph for 4ac6ed0270bf6836941285ffcb9eb9c6.
> java.lang.IllegalStateException: Not running. Forgot to call start()?
> at
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
> at
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.verifyIsRunning(ZooKeeperSubmittedJobGraphStore.java:411)
> at
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:167)
> at
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$recoverJobs$3(Dispatcher.java:445)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
> at
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
> at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2018-04-24 10:16:05,469 ERROR
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher  - Could not
> recover the job graph for 700f37540fe95787510dfa2bc0cc5ac3.
> java.lang.IllegalStateException: Not running. Forgot to call start()?
> at
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
> at
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.verifyIsRunning(ZooKeeperSubmittedJobGraphStore.java:411)
> at
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:167)
> at
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$recoverJobs$3(Dispatcher.java:445)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
> at
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
> at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)



On Tue, Apr 24, 2018 at 10:06 AM, Gary Yao  wrote:

> Hi Miki,
>
> IIRC the port on which the Web UI is listening is not allocated
> dynamically when
> deploying on Mesos, and should be 8081 by default (you can override the
> default
> by setting rest.port in flink-conf.yaml). If you can find out the
> hostname/IP of
> the JobManager, you can submit as usual via the Web UI. Alternatively you
> can
> use the CLI, e.g.,
>
>   bin/flink run -m hostname:6123 examples/streaming/WordCount.jar
>
> where 6123 is the jobmanager.rpc.port.
>
> Let me know if any of these work for you
>
> Best,
> Gary
>
>
> On Tue, Apr 24, 2018 at 8:55 AM, miki haiat  wrote:
>
>> No :) ...
>> I usually use the web UI.
>> Can you refer me to some example of how to submit a job?
>> Using REST? To which port?
>>
>> thanks,
>>
>> miki
>>
>> On Tue, Apr 24, 2018 at 9:42 AM, Gary Yao  wrote:
>>
>>> Hi Miki,
>>>
>>> Did you try to submit a job? With the introduction of FLIP-6, resources
>>> are
>>> allocated dynamically.
>>>
>>> Best,
>>> Gary
>>>
>>>
>>> On Tue, Apr 24, 2018 at 8:31 AM, miki haiat  wrote:
>>>

 HI,
 I'm trying to run FLIP-6 on Mesos, but it's not clear to me what the
 correct way to do it is.

 I ran the session script and I can see that a new framework has been
 created in Mesos, but the task manager hasn't been created;
 running taskmanager-flip6.sh throws a null pointer...

 What is the correct way to run FLIP-6?


 thanks,

 miki

>>>
>>>
>>
>


Re: How to run flip-6 on mesos

2018-04-24 Thread Gary Yao
Hi Miki,

IIRC the port on which the Web UI is listening is not allocated dynamically
when
deploying on Mesos, and should be 8081 by default (you can override the
default
by setting rest.port in flink-conf.yaml). If you can find out the
hostname/IP of
the JobManager, you can submit as usual via the Web UI. Alternatively you
can
use the CLI, e.g.,

  bin/flink run -m hostname:6123 examples/streaming/WordCount.jar

where 6123 is the jobmanager.rpc.port.

Let me know if any of these work for you

Best,
Gary

On Tue, Apr 24, 2018 at 8:55 AM, miki haiat  wrote:

> No :) ...
> I usually use the web UI.
> Can you refer me to some example of how to submit a job?
> Using REST? To which port?
>
> thanks,
>
> miki
>
> On Tue, Apr 24, 2018 at 9:42 AM, Gary Yao  wrote:
>
>> Hi Miki,
>>
>> Did you try to submit a job? With the introduction of FLIP-6, resources
>> are
>> allocated dynamically.
>>
>> Best,
>> Gary
>>
>>
>> On Tue, Apr 24, 2018 at 8:31 AM, miki haiat  wrote:
>>
>>>
>>> Hi,
>>> I'm trying to run FLIP-6 on Mesos, but it's not clear to me what the
>>> correct way to do it is.
>>>
>>> I ran the session script and I can see that a new framework has been
>>> created in Mesos, but the task manager hasn't been created;
>>> running taskmanager-flip6.sh throws a null pointer...
>>>
>>> What is the correct way to run FLIP-6?
>>>
>>>
>>> thanks,
>>>
>>> miki
>>>
>>
>>
>