[jira] [Created] (FLINK-9583) Wrong number of TaskManagers' slots after recovery.

2018-06-13 Thread Truong Duc Kien (JIRA)
Truong Duc Kien created FLINK-9583:
--

 Summary: Wrong number of TaskManagers' slots after recovery.
 Key: FLINK-9583
 URL: https://issues.apache.org/jira/browse/FLINK-9583
 Project: Flink
  Issue Type: Bug
  Components: ResourceManager
Affects Versions: 1.5.0
 Environment: Flink 1.5.0 on YARN with the default execution mode.
Reporter: Truong Duc Kien
 Attachments: jm.log

We started a job with 120 slots, using a FixedDelayRestart strategy with a 
delay of 1 minute.

During recovery, some but not all slots were released.

When the job restarts again, Flink requests a new batch of slots.

The total number of slots is now 193, larger than the configured amount, but 
the excess slots are never released.

 

This bug does not happen with the legacy mode. I've attached the JobManager log.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9582) dist assemblies access files outside of flink-dist

2018-06-13 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9582:
---

 Summary: dist assemblies access files outside of flink-dist
 Key: FLINK-9582
 URL: https://issues.apache.org/jira/browse/FLINK-9582
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.5.0, 1.6.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.6.0


The flink-dist assemblies access compiled jars outside of flink-dist, for 
example like this:
{code:java}
../flink-libraries/flink-cep/target/flink-cep_${scala.binary.version}-${project.version}.jar{code}
As usual, accessing files outside of the module that you're building is a 
terrible idea.

It's brittle, as it relies on paths that aren't guaranteed to be stable, and it 
requires these modules to be built beforehand. There's also an inherent 
potential for dependency conflicts when building flink-dist on its own, as 
Maven may download certain snapshot artifacts, but the assemblies ignore these 
and instead bundle the jars present in the Flink source tree.

We can use the maven-dependency-plugin to copy the required dependencies into the 
{{target}} directory of flink-dist, and point the assemblies to these jars.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9581) Redundant spaces for Collect at sql.md

2018-06-13 Thread Sergey Nuyanzin (JIRA)
Sergey Nuyanzin created FLINK-9581:
--

 Summary: Redundant spaces for Collect at sql.md
 Key: FLINK-9581
 URL: https://issues.apache.org/jira/browse/FLINK-9581
 Project: Flink
  Issue Type: Bug
  Components: Documentation, Table API & SQL
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin
 Attachments: collect.png

Can be seen at 
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql.html
(see also the attached screenshot).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9580) Potentially unclosed ByteBufInputStream in RestClient#readRawResponse

2018-06-13 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9580:
-

 Summary: Potentially unclosed ByteBufInputStream in 
RestClient#readRawResponse
 Key: FLINK-9580
 URL: https://issues.apache.org/jira/browse/FLINK-9580
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


Here is related code:
{code}
  ByteBufInputStream in = new ByteBufInputStream(content);
  byte[] data = new byte[in.available()];
  in.readFully(data);
{code}
In the catch block, ByteBufInputStream is not closed.
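
One possible fix - a sketch only, not necessarily how it will actually be 
addressed - is to wrap the read in try-with-resources so the stream is closed 
even when {{available()}} or {{readFully()}} throws:
{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufInputStream;

import java.io.IOException;

class ReadRawResponseSketch {

    // Sketch only: the ByteBufInputStream is closed automatically, even if
    // available() or readFully() throws. Flink uses a shaded Netty package,
    // so the imports here are only illustrative.
    static byte[] readAll(ByteBuf content) throws IOException {
        try (ByteBufInputStream in = new ByteBufInputStream(content)) {
            byte[] data = new byte[in.available()];
            in.readFully(data);
            return data;
        }
    }
}
{code}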



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: WELCOME to dev@flink.apache.org

2018-06-13 Thread Sandish Kumar HN
Hi Tzu-Li,

Yes, I have a JIRA task; here it is:
https://issues.apache.org/jira/browse/FLINK-9548

And account details
sanysand...@gmail.com
Name: Sandish Kumar HN


On Wed, Jun 13, 2018 at 2:32 AM Tzu-Li (Gordon) Tai 
wrote:

> Hi Sandish,
>
> Welcome to the Flink community!
>
> Do you mean contributor permissions on JIRA?
> The community usually only assigns contributor permissions when you find a
> specific JIRA ticket you would like to start working on.
>
> Once you do find one, let us know your JIRA account ID and the ticket, and
> then we can add you as a contributor on JIRA.
>
> Cheers,
> Gordon
>
> On 13 June 2018 at 6:17:07 AM, Sandish Kumar HN (sanysand...@gmail.com)
> wrote:
>
> Can someone add me as a contributor
> Mail:sanysand...@gmail.com
> FullName: Sandish Kumar HN
>
>
> On 12 June 2018 at 23:14,  wrote:
>
> > Hi! This is the ezmlm program. I'm managing the
> > dev@flink.apache.org mailing list.
> >
> > I'm working for my owner, who can be reached
> > at dev-ow...@flink.apache.org.
> >
> > Acknowledgment: I have added the address
> >
> > sanysand...@gmail.com
> >
> > to the dev mailing list.
> >
> > Welcome to dev@flink.apache.org!
> >
> > Please save this message so that you know the address you are
> > subscribed under, in case you later want to unsubscribe or change your
> > subscription address.
> >
> >
> > --- Administrative commands for the dev list ---
> >
> > I can handle administrative requests automatically. Please
> > do not send them to the list address! Instead, send
> > your message to the correct command address:
> >
> > To subscribe to the list, send a message to:
> > 
> >
> > To remove your address from the list, send a message to:
> > 
> >
> > Send mail to the following for info and FAQ for this list:
> > 
> > 
> >
> > Similar addresses exist for the digest list:
> > 
> > 
> >
> > To get messages 123 through 145 (a maximum of 100 per request), mail:
> > 
> >
> > To get an index with subject and author for messages 123-456 , mail:
> > 
> >
> > They are always returned as sets of 100, max 2000 per request,
> > so you'll actually get 100-499.
> >
> > To receive all messages with the same subject as message 12345,
> > send a short message to:
> > 
> >
> > The messages should contain one line or word of text to avoid being
> > treated as sp@m, but I will ignore their content.
> > Only the ADDRESS you send to is important.
> >
> > You can start a subscription for an alternate address,
> > for example "john@host.domain", just add a hyphen and your
> > address (with '=' instead of '@') after the command word:
> > 
> >
> > To stop subscription for this address, mail:
> > 
> >
> > In both cases, I'll send a confirmation message to that address. When
> > you receive it, simply reply to it to complete your subscription.
> >
> > If despite following these instructions, you do not get the
> > desired results, please contact my owner at
> > dev-ow...@flink.apache.org. Please be patient, my owner is a
> > lot slower than I am ;-)
> >

[jira] [Created] (FLINK-9578) Allow to define an auto watermark interval in SQL Client

2018-06-13 Thread Timo Walther (JIRA)
Timo Walther created FLINK-9578:
---

 Summary: Allow to define an auto watermark interval in SQL Client
 Key: FLINK-9578
 URL: https://issues.apache.org/jira/browse/FLINK-9578
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther
Assignee: Timo Walther


Currently it is not possible to define an auto watermark interval in a 
non-programmatic way for the SQL Client.
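
For reference, the interval can currently only be set programmatically, roughly 
like this (sketch only; the SQL Client would need an equivalent entry in its 
environment file):
{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class WatermarkIntervalExample {

    public static void main(String[] args) {
        // Sketch: the programmatic way of defining the auto watermark interval today.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.getConfig().setAutoWatermarkInterval(200L); // interval in milliseconds
    }
}
{code}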



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9576) Wrong contiguity documentation

2018-06-13 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9576:
---

 Summary: Wrong contiguity documentation
 Key: FLINK-9576
 URL: https://issues.apache.org/jira/browse/FLINK-9576
 Project: Flink
  Issue Type: Bug
  Components: CEP, Documentation
Reporter: Dawid Wysakowicz


The contiguity example is, first of all, wrong and, second of all, misleading:

 
{code:java}
To illustrate the above with an example, a pattern sequence "a+ b" (one or more 
"a"’s followed by a "b") with input "a1", "c", "a2", "b" will have the 
following results:
Strict Contiguity: {a2 b} – the "c" after "a1" causes "a1" to be discarded.
Relaxed Contiguity: {a1 b} and {a1 a2 b} – "c" is ignored.
Non-Deterministic Relaxed Contiguity: {a1 b}, {a2 b}, and {a1 a2 b}.
For looping patterns (e.g. oneOrMore() and times()) the default is relaxed 
contiguity. If you want strict contiguity, you have to explicitly specify it by 
using the consecutive() call, and if you want non-deterministic relaxed 
contiguity you can use the allowCombinations() call.
{code}
 

The results for relaxed contiguity are wrong, and they do not clearly explain 
the internal contiguity of the Kleene closure.
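
For reference, the contiguity of a looping pattern is selected through the 
Pattern API; a minimal sketch (simplified conditions, assumed String events):
{code:java}
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;

public class ContiguitySketch {

    // Sketch only: an "a+ b" pattern where the contiguity of the looping "a+" part
    // is chosen explicitly. By default, oneOrMore() uses relaxed contiguity.
    public static Pattern<String, String> pattern() {
        return Pattern.<String>begin("a")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String value) {
                        return value.startsWith("a");
                    }
                })
                .oneOrMore()
                .consecutive()           // strict contiguity within "a+"
                // .allowCombinations() // or: non-deterministic relaxed contiguity
                .next("b")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String value) {
                        return value.equals("b");
                    }
                });
    }
}
{code}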



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9577) Divide-by-zero in PageRank

2018-06-13 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9577:
---

 Summary: Divide-by-zero in PageRank
 Key: FLINK-9577
 URL: https://issues.apache.org/jira/browse/FLINK-9577
 Project: Flink
  Issue Type: Bug
  Components: Gelly
Affects Versions: 1.5.0, 1.4.0, 1.6.0
Reporter: Chesnay Schepler


{code}
// org.apache.flink.graph.library.linkanalysis.PageRank#AdjustScores#open

this.vertexCount = vertexCountIterator.hasNext()
    ? vertexCountIterator.next().getValue()
    : 0;

this.uniformlyDistributedScore =
    ((1 - dampingFactor) + dampingFactor * sumOfSinks) / this.vertexCount;
{code}
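
One possible guard (a sketch with assumed names, not the actual Gelly code) 
would be to fail fast on an empty graph instead of dividing by zero; with 
floating-point arithmetic the division otherwise silently yields Infinity/NaN 
scores rather than throwing:
{code:java}
public class PageRankGuardSketch {

    // Sketch only: reject an empty graph instead of dividing by a zero vertex count.
    static double uniformlyDistributedScore(long vertexCount, double dampingFactor, double sumOfSinks) {
        if (vertexCount == 0) {
            throw new IllegalStateException("PageRank requires a graph with at least one vertex");
        }
        return ((1 - dampingFactor) + dampingFactor * sumOfSinks) / vertexCount;
    }
}
{code}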



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Static code analysis for Flink project

2018-06-13 Thread Chesnay Schepler

We will need a significant amount of exclusions/suppressions.

For example, 300 of the 511 vulnerabilities are caused by our TupleX 
classes, which have public mutable fields.
The "unclosed resource" inspection is rather simplistic and only tracks 
the life-cycle within the scope of a single method.
As a result, it fails for any resource factory (e.g. any method that 
returns a stream) or for resources stored in fields, as is frequently 
done in wrappers.
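
A minimal illustration (hypothetical code, not from Flink) of the factory case:

    import java.io.BufferedInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    class StreamFactory {
        // The stream is intentionally returned open because the caller owns and
        // closes it, yet a method-scoped "unclosed resource" inspection flags
        // the FileInputStream created here as a leak.
        static InputStream open(File file) throws IOException {
            return new BufferedInputStream(new FileInputStream(file));
        }
    }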


On 13.06.2018 11:56, Piotr Nowojski wrote:

Hi,

Generally speaking I would be in favour of adding some reasonable static code 
analysis, however this requires a lot of effort and can not be done at once. 
Also if such checks are firing incorrectly too often it would be annoying to 
manually suppress such warnings/errors.


I don't really see a benefit in enabling it /continuously/.

On the other hand, I do not see a point of not running it as a part of travis 
CI and non-fast mvn build (`mvn clean install -DskipTests` should perform such 
static checks). If such rules are not enforced by a tool, then there is really 
no point in talking about them - they will be very quickly ignored and 
abandoned.

Piotrek


On 13 Jun 2018, at 10:38, Chesnay Schepler  wrote:

I don't really see a benefit in enabling it /continuously/.
This wouldn't be part of the build or CI processes, as we can't fail the builds 
since it happens too often that issues are improperly categorized.

Wading through these lists is time-consuming and I very much doubt that we will 
do that with a high or even regular frequency.
We would always require 1-2 committers to commit to this process.
Thus there's no benefit in running sonarcube beyond these irregular checks as 
we'd just be wasting processing time.

I suggest to keep it as a manual process.

On 13.06.2018 09:35, Till Rohrmann wrote:

Hi Alex,

thanks for bringing this topic up. So far the Flink project does not use a
static code analysis tool but I think it can strongly benefit from it
(simply by looking at the reported bugs). There was a previous discussion
about enabling the ASF Sonarcube integration for Flink [1] but it was never
put into reality. There is also an integration for Travis which might be
interesting to look into [2]. I would be in favour of enabling this.

[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Add-Sonarqube-analysis-td14556.html
[2] https://docs.travis-ci.com/user/sonarcloud/

Cheers,
Till

On Tue, Jun 12, 2018 at 11:12 PM Ted Yu  wrote:


I took a look at some of the blocker defects.
e.g.

https://sonarcloud.io/project/issues?id=org.apache.flink%3Aflink-parent=AWPxETxA3e-qcckj1Sl1=false=BLOCKER=BUG

For
./flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/PredefinedOptions.java
, the closing of DBOptions using try-with-resources is categorized as
blocker by the analysis.

I don't think that categorization is proper.

We can locate the high priority defects, according to consensus, and fix
those.

Cheers

On Tue, Jun 12, 2018 at 2:01 PM,  wrote:


Hello Flink community.

I am new in Flink project and probably don't understand it a lot. Could
you please clarify one question to me?

I download Flink sources and build it from scratch. I found checkstyle
guidelines that every Flink developer should follow which is very useful.
However, I didn't find anything about static analysis tools like

Sonarcube.

I have looked through mailing lists archive but without success. That
seemed very strange to me.

I have setup Sonarcube and run analysis on whole Flink project. After a
while I have got 442 bugs, 511 vulnerabilities and more than 13K Code
Smells issues. You can see them all here: https://sonarcloud.io/
dashboard?id=org.apache.flink%3Aflink-parent

I looked through some of bugs and vulnerabilities and there are many
important ones (in my opinions) like these:
- 'other' is dereferenced. A "NullPointerException" could be thrown;
"other" is nullable here.
- Either re-interrupt this method or rethrow the "InterruptedException".
- Move this call to "wait()" into a synchronized block to be sure the
monitor on "Object" is held.
- Refactor this code so that the Iterator supports multiple traversal
- Use try-with-resources or close this "JsonGenerator" in a "finally"
clause. Use try-with-resources or close this "JsonGenerator" in a

"finally"

clause.
- Cast one of the operands of this subtraction operation to a "long".
- Make "ZERO_CALENDAR" an instance variable.
- Add a "NoSuchElementException" for iteration beyond the end of the
collection.
- Replace the call to "Thread.sleep(...)" with a call to "wait(...)".
- Call "Optional#isPresent()" before accessing the value.
- Change this condition so that it does not always evaluate to "false".
Expression is always false.
- This class overrides "equals()" and should therefore also override
"hashCode()".
- "equals(Object obj)" should test argument type
- Not enough arguments in LOG.debug function. Not enough 

[jira] [Created] (FLINK-9575) Potential race condition when removing JobGraph in HA

2018-06-13 Thread Dominik Wosiński (JIRA)
Dominik Wosiński created FLINK-9575:
---

 Summary: Potential race condition when removing JobGraph in HA
 Key: FLINK-9575
 URL: https://issues.apache.org/jira/browse/FLINK-9575
 Project: Flink
  Issue Type: Bug
Reporter: Dominik Wosiński


When we are removing the _JobGraph_ from the _JobManager_, for example after 
invoking _cancel()_, the following code is executed:
{noformat}
 
val futureOption = currentJobs.get(jobID) match {
  case Some((eg, _)) =>
    val result = if (removeJobFromStateBackend) {
      val futureOption = Some(future {
        try {
          // ...otherwise, we can have lingering resources when there is a concurrent shutdown
          // and the ZooKeeper client is closed. Not removing the job immediately allow the
          // shutdown to release all resources.
          submittedJobGraphs.removeJobGraph(jobID)
        } catch {
          case t: Throwable =>
            log.warn(s"Could not remove submitted job graph $jobID.", t)
        }
      }(context.dispatcher))

      try {
        archive ! decorateMessage(
          ArchiveExecutionGraph(
            jobID,
            ArchivedExecutionGraph.createFrom(eg)))
      } catch {
        case t: Throwable =>
          log.warn(s"Could not archive the execution graph $eg.", t)
      }

      futureOption
    } else {
      None
    }

    currentJobs.remove(jobID)
    result

  case None => None
}

// remove all job-related BLOBs from local and HA store
libraryCacheManager.unregisterJob(jobID)
blobServer.cleanupJob(jobID, removeJobFromStateBackend)
jobManagerMetricGroup.removeJob(jobID)

futureOption
}
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9574) Add a dedicated documentation page for state evolution

2018-06-13 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-9574:
--

 Summary: Add a dedicated documentation page for state evolution
 Key: FLINK-9574
 URL: https://issues.apache.org/jira/browse/FLINK-9574
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, State Backends, Checkpointing, Type 
Serialization System
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai


Currently, the only bit of documentation about serializer upgrades / state 
evolution is 
[https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/stream/state/custom_serialization.html#handling-serializer-upgrades-and-compatibility.],
which only explains things at the API level.

State evolution has over time proved to be a rather complex topic that is 
often overlooked by users. Users would probably benefit from an actual 
full-grown dedicated page that covers both the API and some necessary internal 
details regarding the interplay of the outdated serializer, the restore 
serializer, and the newly registered serializer of a state.

I propose to add this documentation as a subpage under Streaming/State & 
Fault-Tolerance/.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Long-term goal of making flink-table Scala-free

2018-06-13 Thread Piotr Nowojski
Hi,

I do not have experience with how Scala and Java interact with each other, 
so I cannot fully validate your proposal, but generally speaking +1 from me.

Does it also mean that we should slowly migrate `flink-table-core` to Java? 
How would you envision it? It would be nice to be able to add new 
classes/features written in Java so that they can coexist with the old Scala 
code until we gradually switch from Scala to Java.

Piotrek

> On 13 Jun 2018, at 11:32, Timo Walther  wrote:
> 
> Hi everyone,
> 
> as you all know, currently the Table & SQL API is implemented in Scala. This 
> decision was made a long-time ago when the initital code base was created as 
> part of a master's thesis. The community kept Scala because of the nice 
> language features that enable a fluent Table API like 
> table.select('field.trim()) and because Scala allows for quick prototyping 
> (e.g. multi-line comments for code generation). The committers enforced not 
> splitting the code-base into two programming languages.
> 
> However, nowadays the flink-table module more and more becomes an important 
> part in the Flink ecosystem. Connectors, formats, and SQL client are actually 
> implemented in Java but need to interoperate with flink-table which makes 
> these modules dependent on Scala. As mentioned in an earlier mail thread, 
> using Scala for API classes also exposes member variables and methods in Java 
> that should not be exposed to users [1]. Java is still the most important API 
> language and right now we treat it as a second-class citizen. I just noticed 
> that you even need to add Scala if you just want to implement a 
> ScalarFunction because of method clashes between `public String toString()` 
> and `public scala.Predef.String toString()`.
> 
> Given the size of the current code base, reimplementing the entire 
> flink-table code in Java is a goal that we might never reach. However, we 
> should at least treat the symptoms and have this as a long-term goal in mind. 
> My suggestion would be to convert user-facing and runtime classes and split 
> the code base into multiple modules:
> 
> > flink-table-java {depends on flink-table-core}
> Implemented in Java. Java users can use this. This would require to convert 
> classes like TableEnvironment, Table.
> 
> > flink-table-scala {depends on flink-table-core}
> Implemented in Scala. Scala users can use this.
> 
> > flink-table-common
> Implemented in Java. Connectors, formats, and UDFs can use this. It contains 
> interface classes such as descriptors, table sink, table source.
> 
> > flink-table-core {depends on flink-table-common and flink-table-runtime}
> Implemented in Scala. Contains the current main code base.
> 
> > flink-table-runtime
> Implemented in Java. This would require to convert classes in 
> o.a.f.table.runtime but would improve the runtime potentially.
> 
> 
> What do you think?
> 
> 
> Regards,
> 
> Timo
> 
> [1] 
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Convert-main-Table-API-classes-into-traits-tp21335.html
> 



Re: Static code analysis for Flink project

2018-06-13 Thread Piotr Nowojski
Hi,

Generally speaking, I would be in favour of adding some reasonable static code 
analysis; however, this requires a lot of effort and cannot be done at once. 
Also, if such checks fire incorrectly too often, it would be annoying to 
manually suppress such warnings/errors.

> I don't really see a benefit in enabling it /continuously/.

On the other hand, I do not see a point in not running it as part of the Travis 
CI and the non-fast mvn build (`mvn clean install -DskipTests` should perform such 
static checks). If such rules are not enforced by a tool, then there is really 
no point in talking about them - they will very quickly be ignored and 
abandoned.

Piotrek

> On 13 Jun 2018, at 10:38, Chesnay Schepler  wrote:
> 
> I don't really see a benefit in enabling it /continuously/.
> This wouldn't be part of the build or CI processes, as we can't fail the 
> builds since it happens too often that issues are improperly categorized.
> 
> Wading through these lists is time-consuming and I very much doubt that we 
> will do that with a high or even regular frequency.
> We would always require 1-2 committers to commit to this process.
> Thus there's no benefit in running sonarcube beyond these irregular checks as 
> we'd just be wasting processing time.
> 
> I suggest to keep it as a manual process.
> 
> On 13.06.2018 09:35, Till Rohrmann wrote:
>> Hi Alex,
>> 
>> thanks for bringing this topic up. So far the Flink project does not use a
>> static code analysis tool but I think it can strongly benefit from it
>> (simply by looking at the reported bugs). There was a previous discussion
>> about enabling the ASF Sonarcube integration for Flink [1] but it was never
>> put into reality. There is also an integration for Travis which might be
>> interesting to look into [2]. I would be in favour of enabling this.
>> 
>> [1]
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Add-Sonarqube-analysis-td14556.html
>> [2] https://docs.travis-ci.com/user/sonarcloud/
>> 
>> Cheers,
>> Till
>> 
>> On Tue, Jun 12, 2018 at 11:12 PM Ted Yu  wrote:
>> 
>>> I took a look at some of the blocker defects.
>>> e.g.
>>> 
>>> https://sonarcloud.io/project/issues?id=org.apache.flink%3Aflink-parent=AWPxETxA3e-qcckj1Sl1=false=BLOCKER=BUG
>>> 
>>> For
>>> ./flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/PredefinedOptions.java
>>> , the closing of DBOptions using try-with-resources is categorized as
>>> blocker by the analysis.
>>> 
>>> I don't think that categorization is proper.
>>> 
>>> We can locate the high priority defects, according to consensus, and fix
>>> those.
>>> 
>>> Cheers
>>> 
>>> On Tue, Jun 12, 2018 at 2:01 PM,  wrote:
>>> 
 Hello Flink community.
 
 I am new in Flink project and probably don't understand it a lot. Could
 you please clarify one question to me?
 
 I download Flink sources and build it from scratch. I found checkstyle
 guidelines that every Flink developer should follow which is very useful.
 However, I didn't find anything about static analysis tools like
>>> Sonarcube.
 I have looked through mailing lists archive but without success. That
 seemed very strange to me.
 
 I have setup Sonarcube and run analysis on whole Flink project. After a
 while I have got 442 bugs, 511 vulnerabilities and more than 13K Code
 Smells issues. You can see them all here: https://sonarcloud.io/
 dashboard?id=org.apache.flink%3Aflink-parent
 
 I looked through some of bugs and vulnerabilities and there are many
 important ones (in my opinions) like these:
 - 'other' is dereferenced. A "NullPointerException" could be thrown;
 "other" is nullable here.
 - Either re-interrupt this method or rethrow the "InterruptedException".
 - Move this call to "wait()" into a synchronized block to be sure the
 monitor on "Object" is held.
 - Refactor this code so that the Iterator supports multiple traversal
 - Use try-with-resources or close this "JsonGenerator" in a "finally"
 clause. Use try-with-resources or close this "JsonGenerator" in a
>>> "finally"
 clause.
 - Cast one of the operands of this subtraction operation to a "long".
 - Make "ZERO_CALENDAR" an instance variable.
 - Add a "NoSuchElementException" for iteration beyond the end of the
 collection.
 - Replace the call to "Thread.sleep(...)" with a call to "wait(...)".
 - Call "Optional#isPresent()" before accessing the value.
 - Change this condition so that it does not always evaluate to "false".
 Expression is always false.
 - This class overrides "equals()" and should therefore also override
 "hashCode()".
 - "equals(Object obj)" should test argument type
 - Not enough arguments in LOG.debug function. Not enough arguments.
 - Remove this return statement from this finally block.
 - "notify" may not wake up the appropriate thread.
 - Remove 

Re: [TABLE][SQL] Unify UniqueKeyExtractor and DataStreamRetractionRules

2018-06-13 Thread Piotr Nowojski
Hi,

Maybe this boils down to how we envision plan modifications after setting 
up the initial upsert/retraction modes/traits. If we do some plan rewrite 
afterwards, do we want to rely on our current dynamic rules to “fix it”? Do we 
want to rerun the DataStreamRetractionRules shuttle again after rewriting the 
plan? Or do we want to guarantee that any plan rewriting rule that we run AFTER 
setting up the retraction/upsert traits does not break them, but must take them 
into account (for example, if we add a new node, we would expect it to have 
correctly and consistently set retraction traits with respect to its 
parents/children)?

I was thinking about the last approach - rules executed after adding traits 
should preserve the consistency of those traits. That’s why I didn’t mind setting 
up the retraction rules in a shuttle.

Piotrek

> On 6 Jun 2018, at 04:33, Hequn Cheng  wrote:
> 
> Hi, thanks for bringing up this discussion. 
> 
> I agree to unify the UniqueKeyExtractor and DataStreamRetractionRules, 
> however I am not sure if it is a good idea to implement it with RelShuttle. 
> Theoretically, retraction rules and other rules may depend on each other. So, 
> by using a RelShuttle instead of rules we might lose the flexibility to 
> perform further optimizations.
> 
> As for the join problem, we can solve it with the following two changes:
> 1. Implement the current UniqueKeyExtractor by adding a FlinkRelMdUniqueKeys 
> RelMetadataProvider in FlinkDefaultRelMetadataProvider, so that we can get the 
> unique keys of a RelNode during optimization.
> 2. Treat the needsUpdatesAsRetraction method in DataStreamRel as an edge attribute 
> instead of a node attribute. We can implement this with minor changes. The 
> new needsUpdatesAsRetraction in DataStreamJoin will look like `def 
> needsUpdatesAsRetraction(input: RelNode): Boolean`. In 
> needsUpdatesAsRetraction of the join, we can compare the join key and the unique keys 
> of the input RelNode and return false if the unique keys contain the join key. In 
> this way, the two input edges of the join can work in different modes.
> 
> Best, Hequn.
> 
> On Wed, Jun 6, 2018 at 12:00 AM, Rong Rong  > wrote:
> +1 on the refactoring.
> 
> I spent some time a while back trying to get a better understanding of the 
> several rules mentioned here.
> Correct me if I'm wrong, but I was under the impression that the reason the 
> rules are split was that AccMode and UpdateMode are the ones that we 
> care about and "NeedToRetract" was only the "intermediate" indicator. I 
> guess that's the part that confuses me the most.
> 
> Another thing that confuses me is whether we can mix the modes of operators 
> while traversing the plan and pick the "least restrictive" mode, like 
> @piotr mentioned, if operators can support both upserts and retractions, like 
> in [2b] (the 2nd [2a]). 
> 
> --
> Rong
> 
> 
> 
> On Tue, Jun 5, 2018 at 2:35 AM, Fabian Hueske  > wrote:
> Hi,
> 
> I think the proposed refactoring is a good idea.
> It should simplify the logic to determine which update mode to use.
> We could also try to make some of the method and field names more intuitive
> and extend the internal documentation a bit.
> 
> @Hequn, It would be good to get your thoughts on this issue as well. Thank
> you!
> 
> While thinking about this issue I noticed a severe bug in how filters
> handle upsert messages.
> I've opened FLINK-9528 [1] for that.
> 
> Best, Fabian
> 
> [1] https://issues.apache.org/jira/browse/FLINK-9528 
> 
> 
> 2018-06-04 10:23 GMT+02:00 Timo Walther  >:
> 
> > Hi Piotr,
> >
> > thanks for bringing up this discussion. I was not involved in the design
> > discussions at that time but I also find the logic about upserts and
> > retractions in multiple stages quite confusing. So in general +1 for
> > simplification, however, by using a RelShuttle instead of rules we might
> > lose the flexibility to perform further optimizations by introducing new
> > rules in the future. Users could not change the static logic in a
> > RelShuttle; right now they can influence the behaviour using CalciteConfig
> > and custom rules.
> >
> > Regards,
> > Timo
> >
> > Am 01.06.18 um 13:26 schrieb Piotr Nowojski:
> >
> > Hi,
> >>
> >> Recently I was looking into upserts and upserts sources in Flink and
> >> while doing so, I noticed some potential room for
> >> improvement/simplification.
> >>
> >> Currently there are 3 optimiser rules in DataStreamRetractionRules that
> >> work in three stages followed by UniqueKeyExtractor plan node visitor to
> >> set preferred updates mode, with validation for correct keys for upserts.
> >> First DataStreamRetractionRules setups UpdateAsRetractionTrait, next in
> >> another rule we use it setup AccModeTrait. AccModeTrait has only two values
> >> Acc (upserts) or AccRetract (retractions). This has some severe limitations
> >> and requires additional stage of 

[DISCUSS] Long-term goal of making flink-table Scala-free

2018-06-13 Thread Timo Walther

Hi everyone,

as you all know, currently the Table & SQL API is implemented in Scala. 
This decision was made a long time ago when the initial code base was 
created as part of a master's thesis. The community kept Scala because 
of the nice language features that enable a fluent Table API like 
table.select('field.trim()) and because Scala allows for quick 
prototyping (e.g. multi-line comments for code generation). The 
committers enforced not splitting the code-base into two programming 
languages.


However, nowadays the flink-table module is becoming a more and more 
important part of the Flink ecosystem. Connectors, formats, and the SQL 
Client are actually implemented in Java but need to interoperate with 
flink-table which makes these modules dependent on Scala. As mentioned 
in an earlier mail thread, using Scala for API classes also exposes 
member variables and methods in Java that should not be exposed to users 
[1]. Java is still the most important API language and right now we 
treat it as a second-class citizen. I just noticed that you even need to 
add Scala if you just want to implement a ScalarFunction because of 
method clashes between `public String toString()` and `public 
scala.Predef.String toString()`.
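
For illustration, this is roughly the kind of plain-Java UDF that today still 
pulls in the Scala-compiled flink-table classes (sketch only, untested):

    import org.apache.flink.table.functions.ScalarFunction;

    // Illustrative only: a minimal Java scalar UDF. The ScalarFunction base class
    // it extends currently lives in the Scala code base.
    public class TrimFunction extends ScalarFunction {
        public String eval(String s) {
            return s == null ? null : s.trim();
        }
    }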


Given the size of the current code base, reimplementing the entire 
flink-table code in Java is a goal that we might never reach. However, 
we should at least treat the symptoms and have this as a long-term goal 
in mind. My suggestion would be to convert user-facing and runtime 
classes and split the code base into multiple modules:


> flink-table-java {depends on flink-table-core}
Implemented in Java. Java users can use this. This would require to 
convert classes like TableEnvironment, Table.


> flink-table-scala {depends on flink-table-core}
Implemented in Scala. Scala users can use this.

> flink-table-common
Implemented in Java. Connectors, formats, and UDFs can use this. It 
contains interface classes such as descriptors, table sink, table source.


> flink-table-core {depends on flink-table-common and flink-table-runtime}
Implemented in Scala. Contains the current main code base.

> flink-table-runtime
Implemented in Java. This would require to convert classes in 
o.a.f.table.runtime but would improve the runtime potentially.



What do you think?


Regards,

Timo

[1] 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Convert-main-Table-API-classes-into-traits-tp21335.html




Re: Static code analysis for Flink project

2018-06-13 Thread Chesnay Schepler

I don't really see a benefit in enabling it /continuously/.
This wouldn't be part of the build or CI processes, as we can't fail the 
builds since it happens too often that issues are improperly categorized.


Wading through these lists is time-consuming and I very much doubt that 
we will do that with a high or even regular frequency.

We would always require 1-2 committers to commit to this process.
Thus there's no benefit in running SonarQube beyond these irregular 
checks, as we'd just be wasting processing time.


I suggest to keep it as a manual process.

On 13.06.2018 09:35, Till Rohrmann wrote:

Hi Alex,

thanks for bringing this topic up. So far the Flink project does not use a
static code analysis tool but I think it can strongly benefit from it
(simply by looking at the reported bugs). There was a previous discussion
about enabling the ASF Sonarcube integration for Flink [1] but it was never
put into reality. There is also an integration for Travis which might be
interesting to look into [2]. I would be in favour of enabling this.

[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Add-Sonarqube-analysis-td14556.html
[2] https://docs.travis-ci.com/user/sonarcloud/

Cheers,
Till

On Tue, Jun 12, 2018 at 11:12 PM Ted Yu  wrote:


I took a look at some of the blocker defects.
e.g.

https://sonarcloud.io/project/issues?id=org.apache.flink%3Aflink-parent=AWPxETxA3e-qcckj1Sl1=false=BLOCKER=BUG

For
./flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/PredefinedOptions.java
, the closing of DBOptions using try-with-resources is categorized as
blocker by the analysis.

I don't think that categorization is proper.

We can locate the high priority defects, according to consensus, and fix
those.

Cheers

On Tue, Jun 12, 2018 at 2:01 PM,  wrote:


Hello Flink community.

I am new in Flink project and probably don't understand it a lot. Could
you please clarify one question to me?

I download Flink sources and build it from scratch. I found checkstyle
guidelines that every Flink developer should follow which is very useful.
However, I didn't find anything about static analysis tools like

Sonarcube.

I have looked through mailing lists archive but without success. That
seemed very strange to me.

I have setup Sonarcube and run analysis on whole Flink project. After a
while I have got 442 bugs, 511 vulnerabilities and more than 13K Code
Smells issues. You can see them all here: https://sonarcloud.io/
dashboard?id=org.apache.flink%3Aflink-parent

I looked through some of bugs and vulnerabilities and there are many
important ones (in my opinions) like these:
- 'other' is dereferenced. A "NullPointerException" could be thrown;
"other" is nullable here.
- Either re-interrupt this method or rethrow the "InterruptedException".
- Move this call to "wait()" into a synchronized block to be sure the
monitor on "Object" is held.
- Refactor this code so that the Iterator supports multiple traversal
- Use try-with-resources or close this "JsonGenerator" in a "finally"
clause. Use try-with-resources or close this "JsonGenerator" in a

"finally"

clause.
- Cast one of the operands of this subtraction operation to a "long".
- Make "ZERO_CALENDAR" an instance variable.
- Add a "NoSuchElementException" for iteration beyond the end of the
collection.
- Replace the call to "Thread.sleep(...)" with a call to "wait(...)".
- Call "Optional#isPresent()" before accessing the value.
- Change this condition so that it does not always evaluate to "false".
Expression is always false.
- This class overrides "equals()" and should therefore also override
"hashCode()".
- "equals(Object obj)" should test argument type
- Not enough arguments in LOG.debug function. Not enough arguments.
- Remove this return statement from this finally block.
- "notify" may not wake up the appropriate thread.
- Remove the boxing to "Double".
- Classes should not be compared by name
- "buffers" is a method parameter, and should not be used for
synchronization.

Are there any plans to work on static analysis support for Flink project
or it was intentionally agreed do not use static analysis as time

consuming

and worthless?

Thank you in advance for you replies.

Best Regards,
---
Alex Arkhipov






Re: how to build the connectors and examples from the source

2018-06-13 Thread Chesnay Schepler
You can build a specific connector or example by going into the 
respective directory and executing mvn clean package -DskipTests.


Connector:
* cd flink-connectors
* cd 
* mvn clean package -DskipTests
* cd target
* pick jar

Example:
* cd flink-examples
* cd 
* mvn clean package -DskipTests
* cd target
* pick jar

On 13.06.2018 05:39, Ted Yu wrote:

Which connector from the following list are you trying to build ?

https://flink.apache.org/ecosystem.html#connectors

The available connectors from 1.5.0 are quite recent. Is there any
functionality missing in the 1.5.0 release ?

Thanks

On Tue, Jun 12, 2018 at 5:17 PM, Chris Kellogg  wrote:


How can one build a connectors jar from the source?

Also, is there a quick way to build the examples from the source without
having to do a mvn clean package -DskipTests?


Thanks.
Chris





Re: Static code analysis for Flink project

2018-06-13 Thread Till Rohrmann
Hi Alex,

thanks for bringing this topic up. So far the Flink project does not use a
static code analysis tool, but I think it can strongly benefit from one
(simply by looking at the reported bugs). There was a previous discussion
about enabling the ASF SonarQube integration for Flink [1], but it was never
put into practice. There is also an integration for Travis which might be
interesting to look into [2]. I would be in favour of enabling this.

[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Add-Sonarqube-analysis-td14556.html
[2] https://docs.travis-ci.com/user/sonarcloud/

Cheers,
Till

On Tue, Jun 12, 2018 at 11:12 PM Ted Yu  wrote:

> I took a look at some of the blocker defects.
> e.g.
>
> https://sonarcloud.io/project/issues?id=org.apache.flink%3Aflink-parent=AWPxETxA3e-qcckj1Sl1=false=BLOCKER=BUG
>
> For
> ./flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/PredefinedOptions.java
> , the closing of DBOptions using try-with-resources is categorized as
> blocker by the analysis.
>
> I don't think that categorization is proper.
>
> We can locate the high priority defects, according to consensus, and fix
> those.
>
> Cheers
>
> On Tue, Jun 12, 2018 at 2:01 PM,  wrote:
>
> > Hello Flink community.
> >
> > I am new in Flink project and probably don't understand it a lot. Could
> > you please clarify one question to me?
> >
> > I download Flink sources and build it from scratch. I found checkstyle
> > guidelines that every Flink developer should follow which is very useful.
> > However, I didn't find anything about static analysis tools like
> Sonarcube.
> > I have looked through mailing lists archive but without success. That
> > seemed very strange to me.
> >
> > I have setup Sonarcube and run analysis on whole Flink project. After a
> > while I have got 442 bugs, 511 vulnerabilities and more than 13K Code
> > Smells issues. You can see them all here: https://sonarcloud.io/
> > dashboard?id=org.apache.flink%3Aflink-parent
> >
> > I looked through some of bugs and vulnerabilities and there are many
> > important ones (in my opinions) like these:
> > - 'other' is dereferenced. A "NullPointerException" could be thrown;
> > "other" is nullable here.
> > - Either re-interrupt this method or rethrow the "InterruptedException".
> > - Move this call to "wait()" into a synchronized block to be sure the
> > monitor on "Object" is held.
> > - Refactor this code so that the Iterator supports multiple traversal
> > - Use try-with-resources or close this "JsonGenerator" in a "finally"
> > clause. Use try-with-resources or close this "JsonGenerator" in a
> "finally"
> > clause.
> > - Cast one of the operands of this subtraction operation to a "long".
> > - Make "ZERO_CALENDAR" an instance variable.
> > - Add a "NoSuchElementException" for iteration beyond the end of the
> > collection.
> > - Replace the call to "Thread.sleep(...)" with a call to "wait(...)".
> > - Call "Optional#isPresent()" before accessing the value.
> > - Change this condition so that it does not always evaluate to "false".
> > Expression is always false.
> > - This class overrides "equals()" and should therefore also override
> > "hashCode()".
> > - "equals(Object obj)" should test argument type
> > - Not enough arguments in LOG.debug function. Not enough arguments.
> > - Remove this return statement from this finally block.
> > - "notify" may not wake up the appropriate thread.
> > - Remove the boxing to "Double".
> > - Classes should not be compared by name
> > - "buffers" is a method parameter, and should not be used for
> > synchronization.
> >
> > Are there any plans to work on static analysis support for Flink project
> > or it was intentionally agreed do not use static analysis as time
> consuming
> > and worthless?
> >
> > Thank you in advance for you replies.
> >
> > Best Regards,
> > ---
> > Alex Arkhipov
> >
> >
>


Re: WELCOME to dev@flink.apache.org

2018-06-13 Thread Tzu-Li (Gordon) Tai
Hi Sandish,

Welcome to the Flink community!

Do you mean contributor permissions on JIRA?
The community usually only assigns contributor permissions when you find a 
specific JIRA ticket you would like to start working on.

Once you do find one, let us know your JIRA account ID and the ticket, and then 
we can add you as a contributor on JIRA.

Cheers,
Gordon
On 13 June 2018 at 6:17:07 AM, Sandish Kumar HN (sanysand...@gmail.com) wrote:

Can someone add me as a contributor 
Mail:sanysand...@gmail.com 
FullName: Sandish Kumar HN 


On 12 June 2018 at 23:14,  wrote: 

> Hi! This is the ezmlm program. I'm managing the 
> dev@flink.apache.org mailing list. 
> 
> I'm working for my owner, who can be reached 
> at dev-ow...@flink.apache.org. 
> 
> Acknowledgment: I have added the address 
> 
> sanysand...@gmail.com 
> 
> to the dev mailing list. 
> 
> Welcome to dev@flink.apache.org! 
> 
> Please save this message so that you know the address you are 
> subscribed under, in case you later want to unsubscribe or change your 
> subscription address. 
> 
> 
> --- Administrative commands for the dev list --- 
> 
> I can handle administrative requests automatically. Please 
> do not send them to the list address! Instead, send 
> your message to the correct command address: 
> 
> To subscribe to the list, send a message to: 
>  
> 
> To remove your address from the list, send a message to: 
>  
> 
> Send mail to the following for info and FAQ for this list: 
>  
>  
> 
> Similar addresses exist for the digest list: 
>  
>  
> 
> To get messages 123 through 145 (a maximum of 100 per request), mail: 
>  
> 
> To get an index with subject and author for messages 123-456 , mail: 
>  
> 
> They are always returned as sets of 100, max 2000 per request, 
> so you'll actually get 100-499. 
> 
> To receive all messages with the same subject as message 12345, 
> send a short message to: 
>  
> 
> The messages should contain one line or word of text to avoid being 
> treated as sp@m, but I will ignore their content. 
> Only the ADDRESS you send to is important. 
> 
> You can start a subscription for an alternate address, 
> for example "john@host.domain", just add a hyphen and your 
> address (with '=' instead of '@') after the command word: 
>  
> 
> To stop subscription for this address, mail: 
>  
> 
> In both cases, I'll send a confirmation message to that address. When 
> you receive it, simply reply to it to complete your subscription. 
> 
> If despite following these instructions, you do not get the 
> desired results, please contact my owner at 
> dev-ow...@flink.apache.org. Please be patient, my owner is a 
> lot slower than I am ;-) 
> 