Re: How to scale livy servers?

2017-12-18 Thread amarouni


On 18/12/2017 04:03, Meisam Fathi wrote:
>  
>
> 1) I have a couple of Livy servers that are submitting jobs, and
> say one of them crashes; the session IDs again start from 0, which
> can coincide with the non-faulty running Livy servers. I think it
> would be nice to have session IDs as UUIDs, wouldn't it?
>
>  
> If you enable recovery, the session IDs won't restart from 0 after
> recovery.
+1 for changing the IDs; incremental IDs are not good practice from an
API design (and security) standpoint. UUIDs are easy to implement and
make it easier to avoid confusion.
>  
>
> 2) Is there a way to get job progress periodically or get notified
> if it dies and so on ?
>
>
>  In the REST API, not as far as I know. You have to poll session/job
> status.
>
> Thanks,
> Meisam
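
For reference, polling a session over the REST API boils down to a loop like
the sketch below. This is an illustration only: it assumes the standard
GET /sessions/{id}/state endpoint, a Livy server on localhost:8998 and a
guessed list of terminal states.

import scala.io.Source

object LivySessionPoller {
  def pollUntilDone(sessionId: Int, livyUrl: String = "http://localhost:8998"): Unit = {
    val terminal = Set("success", "error", "dead", "killed")  // assumed terminal states
    var state = ""
    while (!terminal.contains(state)) {
      // The endpoint returns a small JSON document such as {"id":0,"state":"running"};
      // a real client would use a JSON parser instead of this crude regex extraction.
      val body = Source.fromURL(s"$livyUrl/sessions/$sessionId/state").mkString
      state = "\"state\"\\s*:\\s*\"([a-z_]+)\"".r
        .findFirstMatchIn(body).map(_.group(1)).getOrElse("")
      println(s"session $sessionId state: $state")
      Thread.sleep(5000)
    }
  }
}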



[GitHub] bahir pull request #42: [BAHIR-116] Add spark streaming connector to Google ...

2017-05-03 Thread amarouni
Github user amarouni commented on a diff in the pull request:

https://github.com/apache/bahir/pull/42#discussion_r114502066
  
--- Diff: streaming-pubsub/examples/src/main/scala/org.apache.spark.examples.streaming.pubsub/PubsubWordCount.scala ---
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// scalastyle:off println
+package org.apache.spark.examples.streaming.pubsub
+
+import scala.collection.JavaConverters._
+import scala.util.Random
+
+import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport
+import com.google.api.client.json.jackson2.JacksonFactory
+import com.google.api.services.pubsub.Pubsub.Builder
+import com.google.api.services.pubsub.model.PublishRequest
+import com.google.api.services.pubsub.model.PubsubMessage
+import com.google.cloud.hadoop.util.RetryHttpInitializer
+
+import org.apache.spark.storage.StorageLevel
+import org.apache.spark.streaming.pubsub.ConnectionUtils
+import org.apache.spark.streaming.pubsub.PubsubTestUtils
+import org.apache.spark.streaming.pubsub.PubsubUtils
+import org.apache.spark.streaming.pubsub.SparkGCPCredentials
+import org.apache.spark.streaming.Milliseconds
+import org.apache.spark.streaming.StreamingContext
+import org.apache.spark.SparkConf
+
+/**
+ * Consumes messages from a Google Cloud Pub/Sub subscription and does wordcount.
+ * In this example it uses application default credentials, so you need to use the
+ * gcloud client to generate a token file before running the example.
+ *
+ * Usage: PubsubWordCount <projectId> <subscription>
+ *   <projectId> is the name of the Google cloud project
+ *   <subscription> is the subscription to a topic
+ *
+ * Example:
+ *  # use gcloud client to generate token file
+ *  $ gcloud init
+ *  $ gcloud auth application-default login
+ *
+ *  # run the example
+ *  $ bin/run-example \
+ *    org.apache.spark.examples.streaming.pubsub.PubsubWordCount project_1 subscription_1
+ *
+ */
+object PubsubWordCount {
+  def main(args: Array[String]): Unit = {
+    if (args.length != 2) {
+      System.err.println(
+        """
+          |Usage: PubsubWordCount <projectId> <subscription>
+          |
+          |  <projectId> is the name of the Google cloud project
+          |  <subscription> is the subscription to a topic
+          |
+        """.stripMargin)
+      System.exit(1)
+    }
+
+    val Seq(projectId, subscription) = args.toSeq
+
+    val sparkConf = new SparkConf().setAppName("PubsubWordCount")
+    val ssc = new StreamingContext(sparkConf, Milliseconds(2000))
+
+    val pubsubStream = PubsubUtils.createStream(ssc, projectId, subscription,
+      SparkGCPCredentials.builder.build(), StorageLevel.MEMORY_AND_DISK_SER_2)
+
+    val wordCounts =
+      pubsubStream.map(message => (new String(message.getData()), 1)).reduceByKey(_ + _)
+
+    wordCounts.print()
+
+    ssc.start()
+    ssc.awaitTermination()
+  }
+
+}
+
+/**
--- End diff --

@bchen-talend Can you add a little description for the above example?




Re: Beam spark 2.x runner status

2017-03-16 Thread amarouni
Yeah, maintaining two RDD branches (master + a 2.x branch) is doable, but it
will add more maintenance and merge work.

The maven profiles solution is worth investigating, with Spark 1.6 RDD
as the default profile and an additional Spark 2.x profile.

As JB mentioned CarbonData, I had a quick look and it looks like a good
solution:
https://github.com/apache/incubator-carbondata/blob/master/pom.xml#L347
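
For illustration, the profile approach usually goes together with a thin
adapter layer so that the translator code never touches version-specific APIs
directly (SparkContext vs SparkSession, Accumulator vs AccumulatorV2). The
sketch below is just that, a sketch with hypothetical names; each Maven
profile would ship its own concrete implementation.

// Hypothetical names, illustration only: the runner codes against these
// traits, and each profile (spark-1.x / spark-2.x) provides an implementation.
trait BeamAccumulator extends Serializable {
  def add(v: Long): Unit
  def value: Long
}

trait SparkEnvAdapter extends Serializable {
  // Backed by SparkContext.accumulator(...) in the 1.x profile and by
  // SparkContext.longAccumulator(...) (AccumulatorV2) in the 2.x profile.
  def longAccumulator(name: String): BeamAccumulator
  def defaultParallelism: Int
}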

What do you think ?

Abbass,

On 16/03/2017 07:00, Cody Innowhere wrote:
> I'm personally in favor of maintaining one single branch, e.g.,
> spark-runner, which supports both Spark 1.6 & 2.1.
> Since there's currently no DataFrame support in spark 1.x runner, there
> should be no conflicts if we put two versions of Spark into one runner.
>
> I'm also +1 for adding adapters in the branch to support both Spark
> versions.
>
> Also, we can have two translators, say, 1.x translator which translates
> into RDDs & DataStreams and 2.x translator which translates into DataSets.
>
> On Thu, Mar 16, 2017 at 9:33 AM, Jean-Baptiste Onofré <j...@nanthrax.net>
> wrote:
>
>> Hi guys,
>>
>> sorry, due to the time zone shift, I answer a bit late ;)
>>
>> I think we can have the same runner dealing with the two major Spark
>> version, introducing some adapters. For instance, in CarbonData, we created
>> some adapters to work with Spark 1.5, Spark 1.6 and Spark 2.1. The
>> dependencies come from Maven profiles. Of course, it's easier there as it's
>> more "user" code.
>>
>> My proposal is just it's worth to try ;)
>>
>> I just created a branch to experiment a bit and have more details.
>>
>> Regards
>> JB
>>
>>
>> On 03/16/2017 02:31 AM, Amit Sela wrote:
>>
>>> I answered inline to Abbass' comment, but I think he hit something - how
>>> about we have a branch with those adaptations ? same RDD implementation,
>>> but depending on the latest 2.x version with the minimal changes required.
>>> I'd be happy to do that, or guide anyone who wants to (I did most of it on
>>> my branch for Spark 2 anyway) but since it's a branch and not on master (I
>>> don't believe it "deserves" a place on master), it would always be a bit
>>> behind since we would have to rebase and merge once in a while.
>>>
>>> How does that sound ?
>>>
>>> On Wed, Mar 15, 2017 at 7:49 PM amarouni <amaro...@talend.com> wrote:
>>>
>>> +1 for Spark runners based on different APIs RDD/Dataset and keeping the
>>>> Spark versions as a deployment dependency.
>>>>
>>>> The RDD API is stable & mature enough so it makes sense to have it on
>>>> master, the Dataset API still have some work to do and from our own
>>>> experience it just reached a comparable RDD API performance. The
>>>> community is clearly heading in the Dataset API direction but the RDD
>>>> API is still a viable option for most use cases.
>>>>
>>>> Just one quick question, today on master can we swap Spark 1.x by Spark
>>>> 2.x  and compile and use the Spark Runner ?
>>>>
>>>> Good question!
>>> I think this is the root cause of this problem - Spark 2 not only
>>> introduced a new API, but also broke a few such as: context is now
>>> session,
>>> Accumulators are AccumulatorV2, and this is what I recall right now.
>>> I don't think it's too hard to adapt those, and anyone who wants to could
>>> see how I did it on my branch:
>>> https://github.com/amitsela/beam/commit/8a1cf889d14d2b47e9e35bae742d78a290cbbdc9
>>>
>>>
>>>
>>>> Thanks,
>>>>
>>>> Abbass,
>>>>
>>>>
>>>> On 15/03/2017 17:57, Amit Sela wrote:
>>>>
>>>>> So you're suggesting we copy-paste the current runner and adapt whatever is
>>>>> necessary so it runs with Spark 2 ?
>>>>> This also means any bug-fix / improvement would have to be maintained in
>>>>> two runners, and I wouldn't wanna do that.
>>>>>
>>>>> I don't like to think in terms of Spark1/2 but in terms of RDD/Dataset API.
>>>>> Since the RDD API is mature, it should be the runner in master (not
>>>>> preventing another runner once Dataset API is mature enough) and the
>>>>> version (1.6.3 or 2.x) should be determined by the common installation.
>>>>>
>>>>> That's why I believe we still need to leave things as they are, but start
>>>>> working on the Dataset API runner.

Re: Beam spark 2.x runner status

2017-03-15 Thread amarouni
+1 for Spark runners based on different APIs RDD/Dataset and keeping the
Spark versions as a deployment dependency.

The RDD API is stable & mature enough so it makes sense to have it on
master, the Dataset API still have some work to do and from our own
experience it just reached a comparable RDD API performance. The
community is clearly heading in the Dataset API direction but the RDD
API is still a viable option for most use cases.

Just one quick question, today on master can we swap Spark 1.x by Spark
2.x  and compile and use the Spark Runner ?

Thanks,

Abbass,


On 15/03/2017 17:57, Amit Sela wrote:
> So you're suggesting we copy-paste the current runner and adapt whatever is
> necessary so it runs with Spark 2 ?
> This also means any bug-fix / improvement would have to be maintained in
> two runners, and I wouldn't wanna do that.
>
> I don't like to think in terms of Spark1/2 but in terms of RDD/Dataset API.
> Since the RDD API is mature, it should be the runner in master (not
> preventing another runner once Dataset API is mature enough) and the
> version (1.6.3 or 2.x) should be determined by the common installation.
>
> That's why I believe we still need to leave things as they are, but start
> working on the Dataset API runner.
> Otherwise, we'll have the current runner, another RDD API runner with Spark
> 2, and a third one for the Dataset API. I don't want to maintain all of
> them. It's a mess.
>
> On Wed, Mar 15, 2017 at 6:39 PM Ismaël Mejía  wrote:
>
>>> However, I do feel that we should use the Dataset API, starting with batch
>>> support first. WDYT ?
>> Well, this is the exact current status quo, and it will take us some
>> time to have something as complete as what we have with the spark 1
>> runner for the spark 2.
>>
>> The other proposal has two advantages:
>>
>> One is that we can leverage on the existing implementation (with the
>> needed adjustments) to run Beam pipelines on Spark 2, in the end final
>> users don’t care so much if pipelines are translated via RDD/DStream
>> or Dataset, they just want to know that with Beam they can run their
>> code in their favorite data processing framework.
>>
>> The other advantage is that we can base the work on the latest spark
>> version and advance simultaneously in translators for both APIs, and
>> once we consider that the DataSet is mature enough we can stop
>> maintaining the RDD one and make it the official one.
>>
>> The only missing piece is backporting new developments on the RDD
>> based translator from the spark 2 version into the spark 1, but maybe
>> this won’t be so hard if we consider what you said, that at this point
>> we are getting closer to have streaming right (of course you are the
>> most appropriate person to decide if we are in a sufficient good shape
>> to make this, so backporting things won’t be so hard).
>>
>> Finally I agree with you, I would prefer a nice and full featured
>> translator based on the Structured Streaming API but the question is
>> how much time this will take to be in shape and the impact on final
>> users who are already requesting this. This is the reason why I think
>> the more conservative approach (keeping around the RDD translator) and
>> moving incrementally makes sense.
>>
>> On Wed, Mar 15, 2017 at 4:52 PM, Amit Sela  wrote:
>>> I feel that as we're getting closer to supporting streaming with Spark 1
>>> runner, and having Structured Streaming advance in Spark 2, we could start
>>> work on Spark 2 runner in a separate branch.
>>>
>>> However, I do feel that we should use the Dataset API, starting with batch
>>> support first. WDYT ?
>>>
>>> On Wed, Mar 15, 2017 at 5:47 PM Ismaël Mejía  wrote:
>>>
> So you propose to have the Spark 2 branch a clone of the current one with
> adaptations around Context->Session, Accumulator->AccumulatorV2 etc. while
> still using the RDD API ?
 Yes this is exactly what I have in mind.

> I think that having another Spark runner is great if it has value,
> otherwise, let's just bump the version.
 There is value because most people are already starting to move to
 spark 2 and all Big Data distribution providers support it now, as
 well as the Cloud-based distributions (Dataproc and EMR) not like the
 last time we had this discussion.

> We could think of starting to migrate the Spark 1 runner to Spark 2 and
> follow with Dataset API support feature-by-feature as it advances, but I
> think most Spark installations today still run 1.X, or am I wrong ?
 No, you are right, that’s why I didn’t even mentioned removing the
 spark 1 runner, I know that having to support things for both versions
 can add additional work for us, but maybe the best approach would be
 to continue the work only in the spark 2 runner (both refining the RDD
 based translator and starting to create the Dataset one there that

Re: Graduation!

2017-01-11 Thread amarouni
Congratulations to everyone on this important milestone.


On 11/01/2017 11:52, Neelesh Salian wrote:
> Congratulations to the community. :)
>
> On Jan 11, 2017 3:37 PM, "Stephan Ewen"  wrote:
>
>> Very nice :-)
>>
>> Good to see this happening!
>>
>> On Tue, Jan 10, 2017 at 11:58 PM, Tyler Akidau > wrote:
>>
>>> Congrats and thanks to everyone who helped make this happen! :-D
>>>
>>> -Tyler
>>>
>>> On Tue, Jan 10, 2017 at 2:20 PM Kenneth Knowles 
>>> wrote:
>>>
>>> This is really exciting. It is such a privilege to be involved with this
>>> project & community.
>>>
>>> Kenn
>>>
>>> On Tue, Jan 10, 2017 at 1:42 PM, JongYoon Lim 
>>> wrote:
>>>
 Congrats to everyone involved !

 Best Regards,
 JongYoon

 2017-01-11 7:18 GMT+13:00 Jean-Baptiste Onofré :

> Congrats to the team !!
>
> I'm proud and glad to humbly be part of it.
>
> Regards
> JB
>
> On Jan 10, 2017, 19:09, at 19:09, Raghu Angadi
 
> wrote:
>> Congrats to everyone involved.
>>
>> It has been a great experience following the rapid progress of Beam
>>> and
>> hard work of many. Well deserved promotion.
>>
>>
>> On Tue, Jan 10, 2017 at 3:07 AM, Davor Bonaci 
>>> wrote:
>>> The ASF has publicly announced our graduation!
>>>
>>>
>>> https://blogs.apache.org/foundation/entry/the-apache-
>>> software-foundation-announces
>>>
>>> https://beam.apache.org/blog/2017/01/10/beam-graduates.html
>>>
>>> Graduation is a recognition of the community that we have built
>> together. I
>>> am humbled to be part of this group and this project, and so
>> excited
>> for
>>> what we can accomplish together going forward.
>>>
>>> Davor
>>>



Re: DataFrame Sort gives Cannot allocate a page with more than 17179869176 bytes

2016-10-06 Thread amarouni
You can get some more insight by using the Spark history server
(http://spark.apache.org/docs/latest/monitoring.html); it can show you
which task is failing and some other information that might help you
debug the issue.
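
If event logging isn't already enabled for the job, the history server will
have nothing to show, so it's worth checking these settings first. A minimal
sketch (the log directory is an example; it has to match the history server's
spark.history.fs.logDirectory):

import org.apache.spark.SparkConf

// Illustration only: enable event logging so the history server can replay
// the application's UI after it finishes (or crashes).
val conf = new SparkConf()
  .setAppName("sort-job")                                 // example name
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "hdfs:///tmp/spark-events")  // example path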


On 05/10/2016 19:00, Babak Alipour wrote:
> The issue seems to lie in the RangePartitioner trying to create equal
> ranges. [1]
>
> [1]
> https://spark.apache.org/docs/2.0.0/api/java/org/apache/spark/RangePartitioner.html
> 
>  
>
>  The /Double/ values I'm trying to sort are mostly in the range [0,1]
> (~70% of the data which roughly equates 1 billion records), other
> numbers in the dataset are as high as 2000. With the RangePartitioner
> trying to create equal ranges, some tasks are becoming almost empty
> while others are extremely large, due to the heavily skewed distribution. 
>
> This is either a bug in Apache Spark or a major limitation of the
> framework. Has anyone else encountered this?
>
> */Babak Alipour ,/*
> */University of Florida/*
>
> On Sun, Oct 2, 2016 at 1:38 PM, Babak Alipour  > wrote:
>
> Thanks Vadim for sharing your experience, but I have tried
> multi-JVM setup (2 workers), various sizes for
> spark.executor.memory (8g, 16g, 20g, 32g, 64g) and
> spark.executor.core (2-4), same error all along.
>
> As for the files, these are all .snappy.parquet files, resulting
> from inserting some data from other tables. None of them actually
> exceeds 25MiB (I don't know why this number) Setting the DataFrame
> to persist using StorageLevel.MEMORY_ONLY shows size in memory at
> ~10g.  I still cannot understand why it is trying to create such a
> big page when sorting. The entire column (this df has only 1
> column) is not that big, neither are the original files. Any ideas?
>
>
> >Babak
>
>
>
> */Babak Alipour ,/*
> */University of Florida/*
>
> On Sun, Oct 2, 2016 at 1:45 AM, Vadim Semenov
> >
> wrote:
>
> oh, and try to run even smaller executors, i.e. with
> `spark.executor.memory` <= 16GiB. I wonder what result you're
> going to get.
>
> On Sun, Oct 2, 2016 at 1:24 AM, Vadim Semenov
>  > wrote:
>
> > Do you mean running a multi-JVM 'cluster' on the single
> machine? 
> Yes, that's what I suggested.
>
> You can get some information here: 
> 
> http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
> 
> 
>
> > How would that affect performance/memory-consumption? If
> a multi-JVM setup can handle such a large input, then why
> can't a single-JVM break down the job into smaller tasks?
> I don't have an answer to these questions, it requires
> understanding of Spark, JVM, and your setup internal.
>
> I ran into the same issue only once when I tried to read a
> gzipped file which size was >16GiB. That's the only time I
> had to meet
> this 
> https://github.com/apache/spark/blob/5d84c7fd83502aeb551d46a740502db4862508fe/core/src/main/java/org/apache/spark/memory/TaskMemoryManager.java#L238-L243
> 
> 
> In the end I had to recompress my file into bzip2 that is
> splittable to be able to read it with spark.
>
>
> I'd look into size of your files and if they're huge I'd
> try to connect the error you got to the size of the files
> (but it's strange to me as a block size of a Parquet file
> is 128MiB). I don't have any other suggestions, I'm sorry.
>
>
> On Sat, Oct 1, 2016 at 11:35 PM, Babak Alipour
> >
> wrote:
>
> Do you mean running a multi-JVM 'cluster' on the
> single machine? How would that affect
> performance/memory-consumption? If a multi-JVM setup
> can handle such a large input, then why can't a
> single-JVM break down the job into smaller tasks?
>
> I also found that SPARK-9411 mentions making the
> page_size configurable but it's hard-limited
> to ((1L<<31) -1) *8L [1]
>
> [1]
> 
> https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/memory/TaskMemoryManager.java
> 
> 

[GitHub] incubator-beam pull request: [BEAM-313] Enable the use of an existing spark ...

2016-05-31 Thread amarouni
GitHub user amarouni opened a pull request:

https://github.com/apache/incubator-beam/pull/401

[BEAM-313] Enable the use of an existing spark context with the SparkPipelineRunner

The general use case is that the SparkPipelineRunner creates its own Spark context
and uses it for the pipeline execution.
Another alternative is to provide the SparkPipelineRunner with an existing Spark
context. This can be interesting for a lot of use cases where the Spark context is
managed outside of Beam (context reuse, advanced context management, Spark job
server, ...).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amarouni/incubator-beam mycbeam313

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-beam/pull/401.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #401


commit fec04dee06cad9bb6d8914c6ba027a5574c73e41
Author: Abbass MAROUNI <amaro...@talend.com>
Date:   2016-05-31T12:45:32Z

[BEAM-313] Enable the use of an existing spark context with the SparkPipelineRunner






Spark ML Interaction

2016-03-08 Thread amarouni
Hi,

Did anyone here manage to write an example of the following ML feature
transformer?
http://spark.apache.org/docs/latest/api/java/org/apache/spark/ml/feature/Interaction.html
It's not documented on the official Spark ML features page, but it can
be found in the package API javadocs.
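
For what it's worth, this is the rough shape I would expect based on the
javadocs (a sketch only, untested; app name, column names and values are made
up; the output column should hold the products of one value taken from each
input column):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.ml.feature.{Interaction, VectorAssembler}
import org.apache.spark.sql.SQLContext

object InteractionSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("InteractionSketch"))
    val sqlContext = new SQLContext(sc)

    val df = sqlContext.createDataFrame(Seq(
      (1.0, 2.0, 3.0),
      (2.0, 4.0, 5.0)
    )).toDF("x", "f1", "f2")

    // Assemble f1/f2 into a single vector column, then interact it with x:
    // each output row holds x multiplied by every element of the vector.
    val assembler = new VectorAssembler()
      .setInputCols(Array("f1", "f2"))
      .setOutputCol("features")
    val interaction = new Interaction()
      .setInputCols(Array("x", "features"))
      .setOutputCol("interacted")

    interaction.transform(assembler.transform(df)).show()
    sc.stop()
  }
}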

Thanks,




Dynamic jar loading

2015-12-17 Thread amarouni
Hello guys,

Do you know if the method SparkContext.addJar("file:///...") can be used
on a running context (an already started spark-shell) ?
And if so, does it add the jar to the class-path of the Spark workers
(Yarn containers in case of yarn-client) ?
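
For concreteness, the scenario is the following, typed into an already-running
spark-shell (the jar path is just a placeholder):

// sc is the shell's existing SparkContext; the path below is a placeholder.
sc.addJar("file:///tmp/extra-lib.jar")

// My understanding is that jobs submitted after this call ship the jar to the
// executors so their tasks can load its classes, while the driver's own
// classpath stays untouched; that is exactly what I'd like to confirm.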

Thanks,




Re: Database does not exist: (Spark-SQL ===> Hive)

2015-12-15 Thread amarouni
Can you test with the latest version of Spark? I had the same issue with
1.3 and it was resolved in 1.5.
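
In the meantime, a quick way to check the metastore connection is the sketch
below (Spark 1.x HiveContext API; it assumes hive-site.xml is on the driver
classpath, e.g. under $SPARK_HOME/conf, otherwise Spark falls back to a local
metastore where test_db indeed does not exist):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object ShowTablesCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ShowTablesCheck"))
    val hiveContext = new HiveContext(sc)

    // With a correctly configured metastore these should list the real
    // databases/tables; against a local fallback metastore they won't.
    hiveContext.sql("SHOW DATABASES").show()
    hiveContext.sql("USE test_db")
    hiveContext.sql("SHOW TABLES").show()

    sc.stop()
  }
}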

On 15/12/2015 04:31, Jeff Zhang wrote:
> Do you put hive-site.xml on the classpath ?
>
> On Tue, Dec 15, 2015 at 11:14 AM, Gokula Krishnan D
> > wrote:
>
> Hello All - 
>
>
> I tried to execute a Spark-Scala Program in order to create a
> table in HIVE and faced couple of error so I just tried to execute
> the "show tables" and "show databases"
>
> And I have already created a database named "test_db".But I have
> encountered the error "Database does not exist"
>
> *Note: I do see couple of posts related to this error but nothing
> was helpful for me.*
>
> 
> =============================================================
> name := "ExploreSBT_V1"
>
> version := "1.0"
>
> scalaVersion := "2.11.5"
>
> libraryDependencies ++= Seq(
>   "org.apache.spark" %% "spark-core" % "1.3.0",
>   "org.apache.spark" %% "spark-sql"  % "1.3.0")
>
> libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.3.0"
> =============================================================
>
> Error: Encountered the following exceptions:
> org.apache.spark.sql.execution.QueryExecutionException: FAILED:
> Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: test_db
> 15/12/14 18:49:57 ERROR HiveContext:
> ======================
> HIVE FAILURE OUTPUT
> ======================
> OK
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: test_db
> ======================
> END HIVE FAILURE OUTPUT
> ======================
>
> Process finished with exit code 0
>
> Thanks & Regards, 
> Gokula Krishnan*(Gokul)*
>
>
>
>
> -- 
> Best Regards
>
> Jeff Zhang



Re: Save RandomForest Model from ML package

2015-10-23 Thread amarouni

It's an open issue: https://issues.apache.org/jira/browse/SPARK-4587

That being said, you can work around it by serializing the model (plain
Java serialization) and restoring it before running the prediction job.
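
A minimal sketch of that workaround (paths are placeholders; it relies on the
fitted model object being java-serializable):

import java.io.{FileInputStream, FileOutputStream, ObjectInputStream, ObjectOutputStream}

import org.apache.spark.ml.classification.RandomForestClassificationModel

object ModelIO {
  // Write the fitted model to a local file with plain Java serialization.
  def save(model: RandomForestClassificationModel, path: String): Unit = {
    val out = new ObjectOutputStream(new FileOutputStream(path))
    try out.writeObject(model) finally out.close()
  }

  // Read it back before the prediction job, then call transform() as usual.
  def load(path: String): RandomForestClassificationModel = {
    val in = new ObjectInputStream(new FileInputStream(path))
    try in.readObject().asInstanceOf[RandomForestClassificationModel] finally in.close()
  }
}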

Best Regards,

On 22/10/2015 14:33, Sebastian Kuepers wrote:
> Hey,
>
> I try to figure out the best practice on saving and loading models
> which have been fitted with the ML package - i.e. with the RandomForest
> classifier.
>
> There is PMML support in the MLlib package afaik but not in ML - is
> that correct?
>
> How do you approach this, so that you do not have to fit your model
> before every prediction job?
>
> Thanks,
> Sebastian
>
>
> Sebastian Küpers
> Account Director
>
> Publicis Pixelpark
> Leibnizstrasse 65, 10629 Berlin
> T +49 30 5058 1838
> M +49 172 389 28 52
> sebastian.kuep...@publicispixelpark.de
> Web: publicispixelpark.de, Twitter: @pubpxp
> Facebook: publicispixelpark.de/facebook
> Publicis Pixelpark - eine Marke der Pixelpark AG
> Vorstand: Horst Wagner (Vorsitzender), Dirk Kedrowitsch
> Aufsichtsratsvorsitzender: Pedro Simko
> Amtsgericht Charlottenburg: HRB 72163
>
>
>
>
>
> 
> Disclaimer The information in this email and any attachments may
> contain proprietary and confidential information that is intended for
> the addressee(s) only. If you are not the intended recipient, you are
> hereby notified that any disclosure, copying, distribution, retention
> or use of the contents of this information is prohibited. When
> addressed to our clients or vendors, any information contained in this
> e-mail or any attachments is subject to the terms and conditions in
> any governing contract. If you have received this e-mail in error,
> please immediately contact the sender and delete the e-mail.