Re: SIGSEGV (0xb) on TransactionCoordinator

2019-01-07 Thread Car Devops
 Hi folks :)

Wenxing, can you please confirm that removing G1GC really solved the problem?
Unfortunately I faced that issue last night.

And what do you mean by "remove the G1GC"? How did you do that?

Thanks!



-----Original Message-----
From: wenxing zheng [mailto:wenxing.zh...@gmail.com]
Sent: 28 December 2018 04:05
To: Peter Levart
Cc: users@kafka.apache.org
Subject: Re: SIGSEGV (0xb) on TransactionCoordinator

Hi Peter, we didn't upgrade JDK 8; we just removed G1GC. It now seems
stable, but we need to keep monitoring the status to finally confirm.
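
For context, "removing the G1GC" on a stock installation most likely means
overriding the JVM flags that kafka-run-class.sh sets by default. A sketch,
assuming the standard start scripts:

# kafka-run-class.sh enables G1 by default via KAFKA_JVM_PERFORMANCE_OPTS
# (-XX:+UseG1GC plus tuning flags). Exporting the variable without the G1
# flags before starting the broker falls back to the JVM's default collector.
export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true"
bin/kafka-server-start.sh config/server.properties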



Kind Regards, Wenxing



On Thu, Dec 27, 2018 at 9:43 PM Peter Levart wrote:

> Here's a report on the Jira with exactly the same crash, and the
> reporter was using the same JDK 8u92...
>
> https://issues.apache.org/jira/browse/KAFKA-7625
>
> ...the reporter upgraded to the latest JDK 8 and it seems to be stable
> since then.
>
> Regards, Peter
>
> On 12/27/18 11:29 AM, wenxing zheng wrote:
> > Thanks to Peter.
> >
> > We did a lot of tests today, and found that the issue will happen
> > after enabling G1GC. If we go with default settings, everything looks fine.
> >
> > On Thu, Dec 27, 2018 at 4:49 PM Peter Levart wrote:
> >
> >> Hi,
> >>
> >> It looks like a JVM bug. If I were you, the 1st thing I'd do is
> >> upgrade the JDK to the latest JDK8u192. You're using JDK8u92,
> >> which is quite old (2+ years)...
> >>
> >> Regards, Peter
> >>
> >> On 12/27/18 3:53 AM, wenxing zheng wrote:
> >>> Dear all,
> >>>
> >>> We got a coredump with the following info last night. On this
> >>> environment, we enabled transactions. Please kindly advise what
> >>> the problem might be here.
> >>>
> >>>> #
> >>>> # A fatal error has been detected by the Java Runtime Environment:
> >>>> #
> >>>> #  SIGSEGV (0xb) at pc=0x7f546a857d0d, pid=13288, tid=0x7f53701f9700
> >>>> #
> >>>> # JRE version: Java(TM) SE Runtime Environment (8.0_92-b14) (build 1.8.0_92-b14)
> >>>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.92-b14 mixed mode linux-amd64 compressed oops)
> >>>> # Problematic frame:
> >>>> # J 9563 C1 kafka.coordinator.transaction.TransactionCoordinator.$anonfun$handleEndTransaction$7(Lkafka/coordinator/transaction/TransactionCoordinator;Ljava/lang/String;JSLorg/apache/kafka/common/requests/TransactionResult;Lkafka/coordinator/transaction/TransactionMetadata;)Lscala/util/Either; (518 bytes) @ 0x7f546a857d0d [0x7f546a856b40+0x11cd]
> >>>> #
> >>>> # Failed to write core dump. Core dumps have been disabled. To enable core
> >>>> # dumping, try "ulimit -c unlimited" before starting Java again
> >>>> #
> >>>> # If you would like to submit a bug report, please visit:
> >>>> #   http://bugreport.java.com/bugreport/crash.jsp
> >>>> #
> >>>> ---------------  T H R E A D  ---------------
> >>>>
> >>>> Current thread (0x7f547a29e800):  JavaThread "kafka-request-handler-5"
> >>>> daemon [_thread_in_Java, id=13722, stack(0x7f53700f9000,0x7f53701fa000)]
> >>>>
> >>>> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0xdd310c13
> >>>>
> >>>> Registers:
> >>>> RAX=0x0001, RBX=0x0006e9072fc8, RCX=0x0688, RDX=0x00075e026fc0
> >>>> RSP=0x7f53701f7f00, RBP=0x0006e98861f8, RSI=0x7f53771a4238, RDI=0x0006e9886098
> >>>> R8 =0x132d, R9 =0xdd310c13, R10=0x0007c010bbb0, R11=0xdd310c13
> >>>> R12=0x, R13=0xdd310b3d, R14=0xdd310c0c, R15=0x7f547a29e800
> >>>> RIP=0x7f546a857d0d, EFLAGS=0x00010202, CSGSFS=0x002b0033, ERR=0x0004
> >>>> TRAPNO=0x000e
> >>>
> >>> Thanks,


Re: Kafka Streams 2.1.0, 3rd time data lose investigation

2019-01-07 Thread John Roesler
Hi Nitay,

> I will provide extra logs if it will happen again (I really really hope it
> won't hehe :))

Yeah, I hear you. Reproducing errors in production is a real double-edged
sword!

Thanks for the explanation. It makes sense now.

This may be grasping at straws, but it seems like your frequent rebalances
may be exposing you to this recently reported bug:
https://issues.apache.org/jira/browse/KAFKA-7672

The reporter mentioned one thing that can help identify it, which is that
it prints a message saying that it's going to re-initialize the state
store, followed immediately by a transition to "running". Perhaps you can
check your Streams logs to see if you see anything similar.

Thanks,
-John

On Sat, Jan 5, 2019 at 10:48 AM Nitay Kufert  wrote:

> Hey John,
> Thanks for the response!
>
> I will provide extra logs if it will happen again (I really really hope it
> won't hehe :))
>
> Some clarification regarding the previous mail:
> The only thing that shows the data loss is the messages from the compacted
> topic, which I consumed a couple of hours after I noticed the data loss.
> This compacted topic is an output of my stream application (basically, I am
> using reduce on the same key to SUM the values, and pushing it to the
> compacted topic using ".to")
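
A minimal sketch of the kind of topology described above, in the Java DSL
(topic names and serdes are hypothetical; the real application may differ):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;

// Hypothetical per-key running SUM pushed to a compacted output topic.
StreamsBuilder builder = new StreamsBuilder();
builder.stream("events", Consumed.with(Serdes.String(), Serdes.Double()))
       .groupByKey()
       .reduce(Double::sum, Materialized.with(Serdes.String(), Serdes.Double()))
       .toStream()
       .to("sums-compacted", Produced.with(Serdes.String(), Serdes.Double()));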
>
> The only correlation I have for those messages are the timestamps of the
> messages in the compacted topic.
> So I took logs from Spotinst & Kafka Stream instances around the same time
> and shared them here (I know correlation doesn't mean causation but that's
> the only thing I have :) )
>
> The messages showed an ever-increasing value for the specific key I was
> investigating, which was expected.
> The unexpected thing was that the value suddenly started aggregating
> from 0 again for some reason.
>
> In an effort to understand what's going on, I added logs to the function I
> use for reducing the stream, to try to log cases where this thing happens -
> but it didn't log anything, which makes me think the reduce function
> "initialized" itself, i.e. it acted as if it were seeing the first message
> (nothing to aggregate the value with - so we just put the value)
>
> In the example I have shared, I have keys in the format cdr_ with
> values which are BigDecimal numbers.
> I could have shared the thousands of messages I consumed from the topic
> before reaching the value 1621.72; it would have looked something like:
> cdr_44334 -> 1619.32
> cdr_44334 -> 1619.72
> cdr_44334 -> 1620.12
> cdr_44334 -> 1620.52
> cdr_44334 -> 1620.92
> cdr_44334 -> 1621.32
> cdr_44334 -> 1621.72
> cdr_44334 -> 0.27
> cdr_44334 -> 0.67
> cdr_44334 -> 1.07
>
> So basically, the only thing that shows the loss is the sudden decrease in
> value for a specific key (I had thousands of keys that lost their value - but
> many many more that didn't).
> (I am monitoring those changes using datadog, so I know which keys are
> affected and I can investigate them)
>
> Let me know if you need some more details or if you want me to escalate
> this situation to a jira
>
> Thanks again
>
>
>
> On Thu, Jan 3, 2019 at 11:36 PM John Roesler  wrote:
>
> > Hi Nitay,
> >
> > I'm sorry to hear of these troubles; it sounds frustrating.
> >
> > No worries about spamming the list, but it does sound like this might be
> > worth tracking as a bug report in Jira.
> > Obviously, we do not expect to lose data when instances come and go,
> > regardless of the frequency, and we do have tests in place to verify this.
> > Of course, you might be exercising something that our tests miss.
> >
> > Thanks for collating the logs. It really helps to understand what's going
> > on.
> >
> > Unfortunately, the red coloring didn't make it through the mailing list, so
> > I'm not sure which specific line you were referencing as demonstrating data
> > loss.
> >
> > Just in case you're concerned about the "Updating StandbyTasks failed"
> > warnings, they should be fine. It indicates that a thread was unable to
> > re-use a state store that it had previously been assigned in the past, so
> > instead it deletes the local data and recreates the whole thing from the
> > changelog.
> >
> > The Streams logs that would be really useful to capture are the lifecycle
> > ones, like
> >
> > [2018-12-14 17:34:30,326] INFO stream-thread
> > [kafka-streams-standby-tasks-75ca0cca-cc0b-4524-843c-2d9d1d555980-StreamThread-1]
> > State transition from RUNNING to PARTITIONS_REVOKED
> > (org.apache.kafka.streams.processor.internals.StreamThread)
> >
> > [2018-12-14 17:34:30,326] INFO stream-client
> > [kafka-streams-standby-tasks-75ca0cca-cc0b-4524-843c-2d9d1d555980] State
> > transition from RUNNING to REBALANCING
> > (org.apache.kafka.streams.KafkaStreams)
> >
> > Also, it would be helpful to see the assignment transitions in line with
> > the state transitions. Examples:
> >
> > [2018-12-14 17:34:31,863] DEBUG stream-thread

RE: Unable to Restrict Access to Consumer Group ACLs

2019-01-07 Thread Chapin, Ryan
Well, I feel a bit stupid on this one . . . .

Guess what happened when I discovered the following ACL rule was in place?

Current ACLs for resource `Group:*`:
User:* has Allow permission for operations: Read from hosts: *

And then guess what happened when I deleted that rule?

Yes, you guessed it . . . it works as you would expect and limits access to the
specifically defined group.
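
For reference, a rule like that can be removed with something along these
lines (a sketch; the zookeeper endpoint is the one used elsewhere in this
thread):

# kafka-acls.sh --authorizer-properties zookeeper.connect=zk2-01 --remove \
    --allow-principal User:* --operation Read --group '*'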


-----Original Message-----
From: Chapin, Ryan  
Sent: Monday, January 07, 2019 2:12 PM
To: users@kafka.apache.org
Subject: Unable to Restrict Access to Consumer Group ACLs


Re: KTable.suppress(Suppressed.untilWindowCloses) does not suppress some non-final results when the kafka streams process is restarted

2019-01-07 Thread John Roesler
Hi Peter,

Sorry, I just now have seen this thread.

You asked if this behavior is unexpected, and the answer is yes.
Suppress.untilWindowCloses is intended to emit only the final result,
regardless of restarts.

You also asked how the suppression buffer can resume after a restart, since
it's not persistent.
The answer is the same as for in-memory stores. The state of the store (or
buffer, in this case)
is persisted to a changelog topic, which is re-read on restart to re-create
the exact state prior to shutdown.
"Persistent" in the store nomenclature refers only to "persistent on the
local disk".

Just to confirm your response regarding the buffer size:
While it is better to use the public ("Suppressed.unbounded()") API, yes,
your buffer was already unbounded.
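
For reference, a minimal sketch of the DSL pattern under discussion (window
sizes, topic name, and serdes are hypothetical):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;

// Hypothetical windowed sum that should emit only one final result per window.
StreamsBuilder builder = new StreamsBuilder();
builder.stream("input", Consumed.with(Serdes.String(), Serdes.Long()))
       .groupByKey()
       .windowedBy(TimeWindows.of(Duration.ofSeconds(2)).grace(Duration.ofSeconds(1)))
       .reduce(Long::sum)
       .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
       .toStream()
       .foreach((window, sum) -> System.out.println(window + " -> " + sum));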

I looked at your custom transformer, and it looks almost correct to me.
The only flaw seems to be that it only looks
for closed windows for the key currently being processed, which means that
if you have key "A" buffered, but don't get another event for it for a
while after the window closes, you won't emit the final result. This might
actually take longer than the window retention period, in which case, the
data would be deleted without ever emitting the final result.

You said you think it should be possible to get the DSL version working,
and I agree, since this is exactly what it was designed for. Do you mind
filing a bug in the "KAFKA" Jira project (
https://issues.apache.org/jira/secure/Dashboard.jspa)? It will be easier to
keep the investigation organized that way.

In the meantime, I'll take another look at your logs above and try to
reason about what could be wrong.

Just one clarification... For example, you showed
> [pool-1-thread-4] APP Consumed: [c@1545398874000/1545398876000] -> [14,
> 272, 548, 172], sum: 138902
> [pool-1-thread-4] APP Consumed: [c@1545398874000/1545398876000] -> [14,
> 272, 548, 172, 596, 886, 780] INSTEAD OF [14, 272, 548, 172], sum: 141164

Am I correct in thinking that the first, shorter list is the "incremental"
version, and the second is the "final" version? I think so, but am confused
by "INSTEAD OF".

Thanks for the report,
-John



On Wed, Dec 26, 2018 at 3:21 AM Peter Levart  wrote:

>
>
> On 12/21/18 3:16 PM, Peter Levart wrote:
> > I also see some results that are actual non-final window aggregations
> > that precede the final aggregations. These non-final results are never
> > emitted out of order (for example, no such non-final result would ever
> > come after the final result for a particular key/window).
>
> Absence of proof is not the proof of absence... And I have later
> observed (using the DSL variant, not the custom Transformer) an
> occurrence of a non-final result that was emitted after a restart of
> the streams processor, while the final result for the same key/window had
> been emitted before the restart:
>
> [pool-1-thread-4] APP Consumed: [a@154581526/1545815262000] -> [550,
> 81, 18, 393, 968, 847, 452, 0, 0, 0], sum: 444856
> ...
> ... restart ...
> ...
> [pool-1-thread-4] APP Consumed: [a@154581526/1545815262000] -> [550]
> INSTEAD OF [550, 81, 18, 393, 968, 847, 452, 0, 0, 0], sum: 551648
>
>
> The app logic cannot even rely on a guarantee that results are ordered,
> then. This is really not usable until the bug is fixed.
>
> Regards, Peter
>
>


Automating topic cleanup

2019-01-07 Thread Diogo Vieira
Hello!

I’m using a Kafka cluster with dynamically created, temporary topics that
each map to a different consumer. After a consumer disconnects, the topic
isn’t cleaned up, which means the number of topics grows with the consumers.

As far as I know, Kafka won’t automatically clean those topics (I believe 
something like it was suggested as a feature when topic deletion was introduced 
but was never implemented).

Can anyone more knowledgeable than me please provide some insight into
tackling this problem? I’m thinking of automating the cleanup of these topics,
but I have no idea how to find out which topics should be deleted, since I
would like to have access to the timestamp of the last consumer offset commit
(or something like it) to clean up topics after an arbitrary amount of time.
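
One hedged sketch of such a cleanup, assuming an AdminClient new enough to
list consumer groups (Kafka 2.0+) and a hypothetical "tmp-<consumer-id>"
naming convention; last-commit timestamps aren't directly exposed, so this
checks group liveness instead:

import java.util.*;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.*;

// Hypothetical cleanup job: delete "tmp-" topics whose matching consumer
// group is no longer known to the cluster.
public class TempTopicCleanup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            Set<String> topics = admin.listTopics().names().get();
            Set<String> liveGroups = admin.listConsumerGroups().all().get().stream()
                    .map(ConsumerGroupListing::groupId)
                    .collect(Collectors.toSet());
            List<String> stale = topics.stream()
                    .filter(t -> t.startsWith("tmp-"))
                    .filter(t -> !liveGroups.contains(t.substring("tmp-".length())))
                    .collect(Collectors.toList());
            admin.deleteTopics(stale).all().get();
        }
    }
}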

Thank you in advance,
Diogo Vieira

Unable to Restrict Access to Consumer Group ACLs

2019-01-07 Thread Chapin, Ryan
I do not seem to be able to restrict access to specific users for specific 
consumer groups.

It seems as though I am doing something wrong, or that there is a bug, as I
imagine what I am trying to do is very straightforward.

We have an 11 node cluster running the 2.11-1.0.0 open source version of Kafka 
in conjunction with an open source Zookeeper distro (zookeeper-3.4.6-1.el7) 
from the bigtop repo.  The cluster is secured via Kerberos (AD) authentication, 
over TLS.  All nodes communicate with each other over authenticated TLS 
connections as well.

We are using the Zookeeper based authorization mechanism with the following 
configs from Kafka's server.properties (the comments also indicate the only 
Zookeeper config items specific to the ACL mechanism):

##
# This setting ensures that the acl settings for znodes that are
# created by kafka are not modifiable by other users that can connect
# to ZooKeeper.  This has no effect on the data that can be accessed
# in kafka, but ensures that outside actors cannot modify the znodes
# that enable kafka to maintain state on the topics.
#
zookeeper.set.acl=true

##
# Given that we have figured out that the two following ZooKeeper
# configs:
#
#   kerberos.removeHostFromPrincipal=true
#   kerberos.removeRealmFromPrincipal=true
#
# will take a principal in the form of
#
#   kafka/kafka01.prod.quasar.nadops@gtn.nadops.net
#
# and 'convert' it to:
#
#   kafka
#
# This means that all of the kafka brokers now 'share' a username and
# we can easily configure the super.users to grant all access to all
# znodes to all of the brokers without having to continually update
# this config and/or set it conditionally based on the environment
# and list of hosts.
#
super.users=User:kafka

##
# Authorization Configs:
#
# The following is a stepping stone using the Zookeeper based
# authorization mechanism

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# Since we were able to sort out the ability to define the kafka
# brokers as super.users, we can, by default ensure that only those
# users that have specifically configured acls can get access to
# kafka resources.
#
allow.everyone.if.no.acl.found=false

I would like to be able to provide self-service access to certain topics such 
that certain topics allow read access to any authenticated user from any host.

Further, I want to ensure that only certain users can use specifically defined 
consumer groups when reading from those topics.

For example:

There is a topic "rchapin_topic" configured with three partitions.  There is a 
production application that is reading from this topic and driving a customer 
facing dashboard.  The production application is authenticating as the 
"rchapin" user with the "rchapin_group" consumer id.

Any other authenticated user should be able to read from that topic, but SHOULD 
NOT be able to do so with the "rchapin_group" consumer group.  If any other 
consumer starts reading from the topic with that consumer group, it will start 
siphoning off data from the production application.

Following are the ACLs that we have attempted to put in place, which we figured
would allow us to configure such a use case.

Current ACLs for resource `Topic:rchapin_topic`:
User:rchapin has Allow permission for operations: Read from hosts: *
User:rchapin has Allow permission for operations: Write from hosts: *
User:rchapin has Allow permission for operations: Describe from hosts: *
User:* has Allow permission for operations: Describe from hosts: *
User:* has Allow permission for operations: Read from hosts: *

Current ACLs for resource `Group:rchapin_group`:
User:rchapin has Allow permission for operations: Read from hosts: *

With the aforementioned configurations I can connect as any other authenticated
user, read data from the rchapin topic using the rchapin_group consumer group,
and begin siphoning off data from the rchapin production application.
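
(As the follow-up reply elsewhere in this digest shows, a wildcard `Group:*`
rule turned out to be in place; listing all ACLs rather than a single
resource, sketched below with the same zookeeper endpoint, is one way to
surface such a rule.)

# kafka-acls.sh --authorizer-properties zookeeper.connect=zk2-01 --list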

If I add an ACL that specifically adds "deny-principal" for all users for the 
group as follows:

# kafka-acls.sh --authorizer-properties zookeeper.connect=zk2-01 --add 
--deny-principal User:* --group rchapin_group
Adding ACLs for resource `Group:rchapin_group`:
User:* has Deny permission for operations: All from hosts: *

Current ACLs for resource `Group:rchapin_group`:
User:rchapin has Allow permission for operations: Read from hosts: *
User:* has Deny permission for operations: All from hosts: *

Then no one is able to read from that topic with the rchapin_group consumer id, 
but any authenticated user can connect with another group id to consume from 
the topic.

Is it possible to restrict access to a consumer group to a specific user?  If so, 

Re: Is kafka support dynamic ACL rule

2019-01-07 Thread ilter P
Hi,

While creating the ACL you can do that; however, when authorizing, Kafka
does not support any regex for users.
You have to create a new Authorizer class by extending an existing one, e.g.
"SimpleAclAuthorizer.scala" ->
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/security/auth/SimpleAclAuthorizer.scala

Then you can tell Kafka to use your authorizer from the server.properties
as:

authorizer.class.name=com.example.CustomAclAuthorizer


Then you can do any kind of authorization yourself.
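
A hedged sketch of such a class against the pre-2.4 Authorizer API, assuming
the "<username>--" prefix convention from the question below (class and
package names match the server.properties example above):

package com.example;

import kafka.network.RequestChannel;
import kafka.security.auth.Operation;
import kafka.security.auth.Resource;
import kafka.security.auth.SimpleAclAuthorizer;

// Hypothetical authorizer: any authenticated user gets full access to topics
// named "<own-username>--..."; everything else falls back to the normal ACLs.
public class CustomAclAuthorizer extends SimpleAclAuthorizer {
    @Override
    public boolean authorize(RequestChannel.Session session, Operation operation,
                             Resource resource) {
        String user = session.principal().getName();
        if (resource.resourceType().name().equals("Topic")
                && resource.name().startsWith(user + "--")) {
            return true;
        }
        return super.authorize(session, operation, resource);
    }
}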

Regards



On Thu, 27 Dec 2018 at 02:18, hui happy wrote:

> Hi
>
> As I learned, kafka can use '--resource-pattern-type prefixed' to add a
> rule for prefixed topics.
> For example, for a user 'kafkaclient', we could define a rule letting the
> user access all topics starting with that user name plus '--', such
> as 'kafkaclient--topic1', 'kafkaclient--topic2', etc.
>
> /opt/kafka/bin/kafka-acls.sh \
>
>   --authorizer-properties zookeeper.connect=zookeeper:2181 \
>
>   --add \
>
>   --allow-principal User:"kafkaclient" \
>
>   --operation All \
>
>   --resource-pattern-type prefixed \
>
>   --topic "kafkaclient--" \
>
>
> But is it possible to define a dynamic user name?
> In the above case we know the username is 'kafkaclient', and if there are many
> other users, we have to add a rule for each user; these rules are similar,
> except for the user name.
>
> So I want to know if it's possible to just define a single rule, using a
> dynamic user name, so that each user could access the topics starting with its
> own username. Something like:
>
> /opt/kafka/bin/kafka-acls.sh \
>
>   --authorizer-properties zookeeper.connect=zookeeper:2181 \
>
>   --add \
>
>   --allow-principal User:"**" \
>
>   --operation All \
>
>   --resource-pattern-type prefixed \
>
>   --topic "**--" \
>
>
> Then, whenever we add a user or a topic later, we don't need to add any
> rules.
>
> Thanks.
> Hui
>


Re: One partitions segment files in different log-dirs

2019-01-07 Thread margusja
Hi

Thank you for the answer.

The answer I was looking for was: how are partition segment files distributed
over the kafka-logs directories?
For example, I have one broker with two log directories, kafka-logs1 and
kafka-logs2, each 100MB, and the partition segment file size is 90MB. If one
segment becomes full, can kafka start the second segment file in the other
kafka-logs directory?
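
For concreteness, the scenario above corresponds to a broker configuration
along these lines (paths and sizes are hypothetical):

# server.properties sketch for the scenario described (values made up)
log.dirs=/kafka-logs1,/kafka-logs2
log.segment.bytes=94371840    # roll to a new segment at ~90MB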

Br, Margus

On 2019/01/05 17:20:06, Jonathan Santilli  wrote: 
> Hello Margus,
> 
> Am not sure if I got your question correctly, but, assuming you have a
> topic called "*kafka-log*" with two partitions, each of them (kafka-log-1
> and kafka-log-2) will contain its own segments.
> Kafka Brokers will distribute/replicate (according to the Brokers config)
> the topics partitions among the available Brokers (once again, it depends
> on the configuration you have in place).
> 
> The segments within a topic partition belong to that particular partition
> and are not shared between partitions; that is, one particular segment
> sticks to the partition it belongs to and is not shared/split with other
> partitions.
> 
> Hope this helps or maybe you can provide more details about your doubt.
> 
> Cheers!
> --
> Jonathan
> 
> 
> On Fri, Jan 4, 2019 at 4:29 PM  wrote:
> 
> > Hi
> >
> > In example if I have /kafka-log1 and /kafka-log2
> >
> > Can kafka distribute one partitions segment files between different logs
> > directories?
> >
> > Br,
> > Margus Roo
> >
> 
> 
> -- 
> Santilli Jonathan
>