Re: Install issue with CDH 5.7.0 & Spark 1.6.0

2016-04-13 Thread Felix Cheung
Hi Scott,
Vendor-repo would be the way to go. It is possible that in this case the CDH Spark 1.6
build has some incompatible API changes, though I couldn't find any yet. Do you have
more from the logs on that NoSuchMethodException?
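
If it helps, the interpreter-side stack trace usually lands in Zeppelin's log
directory rather than in the notebook output. A rough sketch for pulling it out,
assuming a default layout with ZEPPELIN_HOME pointing at the install directory
(the exact log file name varies by user and host):

  ls $ZEPPELIN_HOME/logs/
  grep -B 5 -A 30 NoSuchMethodException $ZEPPELIN_HOME/logs/zeppelin-interpreter-spark-*.log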

_
From: Scott Zelenka 
Sent: Wednesday, April 13, 2016 2:23 PM
Subject: Install issue with CDH 5.7.0 & Spark 1.6.0
To:  


Hi,
  
  I'm trying to build/install Zeppelin 0.6.0 (version 0.5.6 also has
the same symptoms) on a new CDH cluster running Hadoop 2.6.0-cdh5.7.0 and
Spark 1.6.0, but I'm getting this error when I use SPARK_HOME to point to
the /opt/cloudera/parcels/CDH/lib/spark directory in zeppelin-env.sh:

java.lang.NoSuchMethodException:
org.apache.spark.repl.SparkILoop$SparkILoopInterpreter.classServerUri()

  Which seems to imply that there are no Interpreters available
for Spark? Is there a way to get around this? I've tried deleting the
build folder and pulling a fresh copy, but end up at the same place.
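
  For reference, the SPARK_HOME override described above is normally just an
export in conf/zeppelin-env.sh; a minimal sketch (the HADOOP_CONF_DIR line is
an assumption for a typical CDH layout, adjust to the cluster):

  export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
  export HADOOP_CONF_DIR=/etc/hadoop/conf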
  
It built successfully on Ubuntu 14.04 LTS and Maven 3.3.3 using
this command:
  
  sudo mvn clean package -Dspark.version=1.6.0 -Pspark-1.6
-Dhadoop.version=2.6.0-cdh5.6.0 -Phadoop-2.6 -Ppyspark -Pvendor-repo
-DskipTests
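
  Worth double-checking: the cluster above is cdh5.7.0 while the build pins
-Dhadoop.version=2.6.0-cdh5.6.0. A build matching the cluster would look like
the sketch below (same profiles, only the Hadoop version changed; whether this
mismatch is actually related to the error is an assumption):

  sudo mvn clean package -Dspark.version=1.6.0 -Pspark-1.6 \
    -Dhadoop.version=2.6.0-cdh5.7.0 -Phadoop-2.6 -Ppyspark -Pvendor-repo -DskipTests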
  
  However, if I leave the configuration at its default level, when I
try to run the "Zeppelin Tutorial", it'll return this error:
  
  akka.ConfigurationException: Akka JAR version [2.2.3] does not match
the provided config version [2.3.11]
  
  Which makes sense, because CDH builds Spark against Akka version
2.2.3, but I'm not sure why the built-in Spark is attempting to use 2.2.3.
Shouldn't I be able to run Zeppelin without any dependencies on CDH, or
did the -Pvendor-repo profile mess up this build?
  
http://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_spark_ic.html
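
  A quick diagnostic sketch for this kind of mismatch is to see which Akka
artifacts each side actually ships (the paths are assumptions based on the
directories mentioned above; on CDH the Akka classes may sit inside the
spark-assembly jar rather than in standalone akka-*.jar files):

  find /opt/cloudera/parcels/CDH/lib/spark $ZEPPELIN_HOME -name '*akka*' 2>/dev/null
  unzip -l /opt/cloudera/parcels/CDH/lib/spark/lib/spark-assembly*.jar | grep -i akka | head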
  
  Any guidance is welcome!
  
  thx,
  z
-- 
  Scott Zelenka
Jabber Engineering - US
Phone: (+1) 919-392-1394
Email: szele...@cisco.com



  

Install issue with CDH 5.7.0 & Spark 1.6.0

2016-04-13 Thread Scott Zelenka

Hi,

I'm trying to build/install Zeppelin 0.6.0 (version 0.5.6 also has the 
same symptoms) on a new CDH cluster running Hadoop 2.6.0-cdh5.7.0 and 
Spark 1.6.0, but I'm getting this error when I use SPARK_HOME to point 
to the /opt/cloudera/parcels/CDH/lib/spark directory in zeppelin-env.sh:


java.lang.NoSuchMethodException: 
org.apache.spark.repl.SparkILoop$SparkILoopInterpreter.classServerUri()


Which seems to imply that there are no Interpreters available for Spark? 
Is there a way to get around this? I've tried deleting the build folder 
and pulling a fresh copy, but end up at the same place.


It built successfully on Ubuntu 14.04 LTS and Maven 3.3.3 using this 
command:


sudo mvn clean package -Dspark.version=1.6.0 -Pspark-1.6 
-Dhadoop.version=2.6.0-cdh5.6.0 -Phadoop-2.6 -Ppyspark -Pvendor-repo 
-DskipTests

However, if I leave the configuration at its default level, when I try 
to run the "Zeppelin Tutorial", it'll return this error:


akka.ConfigurationException: Akka JAR version [2.2.3] does not match the 
provided config version [2.3.11]


Which makes sense, because CDH builds Spark against Akka version 
2.2.3, but I'm not sure why the built-in Spark is attempting to use 
2.2.3. Shouldn't I be able to run Zeppelin without any dependencies on 
CDH, or did the -Pvendor-repo profile mess up this build?

http://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_spark_ic.html

Any guidance is welcome!

thx,
z
--
Scott Zelenka
Jabber Engineering - US
Phone: (+1) 919-392-1394
Email: szele...@cisco.com



Re: build r-interpreter

2016-04-13 Thread Eric Charles

Can you post the full stacktrace you have (look also at the log file)?
Did you install R on your machine?

SPARK_HOME is optional.
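
In case it helps, a quick way to confirm that R is actually present and callable
on the Zeppelin host (a sketch only, assuming R and Rscript are expected on the
PATH of the user running Zeppelin):

  which R Rscript
  Rscript -e 'R.version.string'
  Rscript -e 'rownames(installed.packages())' | head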


On 13/04/16 15:39, Patcharee Thongtra wrote:

Hi,

When I ran R notebook example, I got these errors in the logs:

- Caused by: org.apache.zeppelin.interpreter.InterpreterException:
sparkr is not responding

- Caused by: org.apache.thrift.transport.TTransportException

I have not configured SPARK_HOME so far, and intended to use the embedded
Spark for testing first.

BR,
Patcharee


On 04/13/2016 02:52 PM, Patcharee Thongtra wrote:

Hi,

I have been struggling with the R interpreter / SparkR interpreter. Is
the command below the right one to build Zeppelin with the R interpreter /
SparkR interpreter?

mvn clean package -Pspark-1.6 -Phadoop-2.6 -Pyarn -Ppyspark -Psparkr

BR,
Patcharee








Re: HA for Zeppelin

2016-04-13 Thread vincent gromakowski
It's a global decision on our SMACK stack platform, but maybe we will go
with Docker only for applications, for devops (Spark clients). For Zeppelin
I don't see the need (no devops).
On 13 Apr 2016 4:05 PM, "John Omernik"  wrote:

> Is this a specific Docker decision or a Zeppelin-on-Docker decision? I am
> curious about the amount of network traffic Zeppelin actually generates. I
> could be wrong, but I made the assumption that most of the network traffic
> with Zeppelin is results coming from the various endpoints (Spark, JDBC,
> Elasticsearch, etc.) and not heavy-lifting type activities.
>
>
> John
> On Apr 12, 2016 5:03 PM, "vincent gromakowski" <
> vincent.gromakow...@gmail.com> wrote:
>
>> We decided not to use Docker, for network performance in production
>> flows, not for deployment. Virtualisation of the network brings a 50%
>> decrease in perf. It may change with Calico, because it abstracts the
>> network with routing instead of virtualizing it like Flannel.
>> On 12 Apr 2016 2:22 PM, "John Omernik"  wrote:
>>
>>> On 2, I had some thoughts there. How "expensive" would it be for
>>> Zeppelin to run a timer of sorts that can be accessed via a specific URL?
>>> Basically, this URL would return the idle time. The thing that knows best
>>> whether Zeppelin has activity is Zeppelin. So, any actions within Zeppelin
>>> would reset this timer: changing notebooks, opening, closing,
>>> moving notes around, running notes, adding new notes, changing interpreter
>>> settings. Any requests that are handled by Zeppelin in the UI would reset
>>> said timer. A request to the "timer" URL obviously would NOT reset the
>>> timer, but basically, if nothing user-actionable (we'd have to
>>> separate user-actionable items from automated API requests) was run, the
>>> timer would not get reset. This would allow those of us using Zeppelin in a
>>> multi-user/multi-tenant environment to monitor for idle instances and take
>>> action when they occur. (Ideally, we could, through an authenticated API,
>>> issue a "save" of all notebooks before taking said action...)
>>>
>>> So, to summarize:
>>>
>>> API that provides seconds since last human action...
>>>
>>> Monitor that API; when seconds since last human action exceed the
>>> enterprise threshold, the API can issue the "Safe Save all" to Zeppelin,
>>> which will go ahead and do a save (additional point: the timer API could
>>> return seconds since last human use and a bool value of "all saved" or
>>> not... basically, if normal Zeppelin processes have saved all human
>>> interaction, the API could indicate that; then, when the timer check hits
>>> the API, it knows "the seconds are past the threshold, and Zeppelin reports
>>> all saved, we can issue a termination"; or if it's not all safe, it can
>>> issue the "save all" command and wait for it to be safe... if something is
>>> keeping Zeppelin from being in a safe condition for shutdown, the API would
>>> reflect this and prevent a shutdown).
>>>
>>> Then, once the API seconds exceed the enterprise threshold, we can safely
>>> shut down the instance of Zeppelin, returning resources to the cluster.
>>>
>>> Would love discussion here...
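
To make the proposal above concrete, the enterprise-side monitor might look
roughly like the sketch below. The /api/idle endpoint and the save-all call are
exactly what is being proposed in this thread, not an existing Zeppelin API, and
the host, port and threshold are placeholders:

  # hypothetical: poll a proposed idle endpoint and act once the threshold is crossed
  IDLE=$(curl -s http://zeppelin-host:8080/api/idle)         # proposed endpoint, returns idle seconds
  if [ "$IDLE" -gt 3600 ]; then
    curl -s -X POST http://zeppelin-host:8080/api/save-all   # proposed "Safe Save all" call
    # then ask the scheduler (Marathon, etc.) to stop this Zeppelin instance
  fi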
>>>
>>> On Tue, Apr 12, 2016 at 1:57 AM, vincent gromakowski <
>>> vincent.gromakow...@gmail.com> wrote:
>>>
 1. I am using Ansible to deploy Zeppelin on all slaves and to launch a
 Zeppelin instance for one user. So if the Zeppelin binaries are already
 deployed, the launch is very quick through Marathon (1 or 2 sec). Looking
 for a velocity solution (based on JFrog) on Mesos to manage binaries and
 artifacts with versioning, rights... No use of Docker because of network
 performance constraints.

 2. Same answer as John: still running. I will test dynamic resource
 allocation for the Spark interpreter, but the Zeppelin daemon will still be
 up and taking 4GB.

 3. I have a service discovery layer that authenticates the user and routes
 him to his instance (and only his instance). It's based right now on a simple
 shell script polling Marathon through its API and updating an Apache
 configuration file every 15s. The username is in the Marathon task. We will
 replace this with a fully industrialized solution (Consul? HAProxy?...).


 3.

 2016-04-12 2:37 GMT+02:00 Johnny W. :

> Thanks John for your insights.
>
> For 2., one solution we have experimented with is Spark dynamic resource
> allocation. We could define a timer to scale down. Hope that helps.
>
> J.
>
> On Mon, Apr 11, 2016 at 4:24 PM, John Omernik 
> wrote:
>
>> 1. Things launch pretty fast for me, however, it depends if the
>> docker container I am running Zeppelin in is cached on the node mesos 
>> wants
>> to run it on. If not, it pulls from a local docker registry, so worst 
>> case,
>> up to a minute to get things running if the image isn't cached.
>>
>> 2. No, if the user logs out 

Re: HA for Zeppelin

2016-04-13 Thread John Omernik
Is this a specific Docker decision or a Zeppelin-on-Docker decision? I am
curious about the amount of network traffic Zeppelin actually generates. I
could be wrong, but I made the assumption that most of the network traffic
with Zeppelin is results coming from the various endpoints (Spark, JDBC,
Elasticsearch, etc.) and not heavy-lifting type activities.


John
On Apr 12, 2016 5:03 PM, "vincent gromakowski" <
vincent.gromakow...@gmail.com> wrote:

> We decided not to use Docker, for network performance in production flows,
> not for deployment. Virtualisation of the network brings a 50% decrease in
> perf. It may change with Calico, because it abstracts the network with
> routing instead of virtualizing it like Flannel.
> On 12 Apr 2016 2:22 PM, "John Omernik"  wrote:
>
>> On 2, I had some thoughts there. How "expensive" would it be for
>> Zeppelin to run a timer of sorts that can be accessed via a specific URL?
>> Basically, this URL would return the idle time. The thing that knows best
>> whether Zeppelin has activity is Zeppelin. So, any actions within Zeppelin
>> would reset this timer: changing notebooks, opening, closing,
>> moving notes around, running notes, adding new notes, changing interpreter
>> settings. Any requests that are handled by Zeppelin in the UI would reset
>> said timer. A request to the "timer" URL obviously would NOT reset the
>> timer, but basically, if nothing user-actionable (we'd have to
>> separate user-actionable items from automated API requests) was run, the
>> timer would not get reset. This would allow those of us using Zeppelin in a
>> multi-user/multi-tenant environment to monitor for idle instances and take
>> action when they occur. (Ideally, we could, through an authenticated API,
>> issue a "save" of all notebooks before taking said action...)
>>
>> So, to summarize:
>>
>> API that provides seconds since last human action...
>>
>> Monitor that API; when seconds since last human action exceed the enterprise
>> threshold, the API can issue the "Safe Save all" to Zeppelin, which will
>> go ahead and do a save (additional point: the timer API could return seconds
>> since last human use and a bool value of "all saved" or not... basically,
>> if normal Zeppelin processes have saved all human interaction, the API
>> could indicate that; then, when the timer check hits the API, it knows
>> "the seconds are past the threshold, and Zeppelin reports all saved, we can
>> issue a termination"; or if it's not all safe, it can issue the "save all"
>> command and wait for it to be safe... if something is keeping Zeppelin
>> from being in a safe condition for shutdown, the API would reflect this and
>> prevent a shutdown).
>>
>> Then, once the API seconds exceed the enterprise threshold, we can safely
>> shut down the instance of Zeppelin, returning resources to the cluster.
>>
>> Would love discussion here...
>>
>> On Tue, Apr 12, 2016 at 1:57 AM, vincent gromakowski <
>> vincent.gromakow...@gmail.com> wrote:
>>
>>> 1. I am using Ansible to deploy Zeppelin on all slaves and to launch a
>>> Zeppelin instance for one user. So if the Zeppelin binaries are already
>>> deployed, the launch is very quick through Marathon (1 or 2 sec). Looking
>>> for a velocity solution (based on JFrog) on Mesos to manage binaries and
>>> artifacts with versioning, rights... No use of Docker because of network
>>> performance constraints.
>>>
>>> 2. Same answer as John: still running. I will test dynamic resource
>>> allocation for the Spark interpreter, but the Zeppelin daemon will still be
>>> up and taking 4GB.
>>>
>>> 3. I have a service discovery layer that authenticates the user and routes
>>> him to his instance (and only his instance). It's based right now on a simple
>>> shell script polling Marathon through its API and updating an Apache
>>> configuration file every 15s. The username is in the Marathon task. We will
>>> replace this with a fully industrialized solution (Consul? HAProxy?...).
>>>
>>>
>>> 3.
>>>
>>> 2016-04-12 2:37 GMT+02:00 Johnny W. :
>>>
 Thanks John for your insights.

 For 2., one solution we have experimented with is Spark dynamic resource
 allocation. We could define a timer to scale down. Hope that helps.

 J.

 On Mon, Apr 11, 2016 at 4:24 PM, John Omernik  wrote:

> 1. Things launch pretty fast for me, however, it depends if the docker
> container I am running Zeppelin in is cached on the node mesos wants to 
> run
> it on. If not, it pulls from a local docker registry, so worst case, up to
> a minute to get things running if the image isn't cached.
>
> 2. No, if the user logs out it stays running.  Ideally I would want to
> setup some sort of timer that could scale down an instance if left unused.
> I have some ideas here, but haven't put them into practice yet.   I wanted
> to play with Nginx to see if I could do something there (lack of activity
> causes Nginx to shutdown Zeppelin for example). With spark resources, one

Re: build r-interpreter

2016-04-13 Thread Patcharee Thongtra

Hi,

When I ran R notebook example, I got these errors in the logs:

- Caused by: org.apache.zeppelin.interpreter.InterpreterException: 
sparkr is not responding


- Caused by: org.apache.thrift.transport.TTransportException

I have not configured SPARK_HOME so far, and intended to use the embedded 
Spark for testing first.


BR,
Patcharee


On 04/13/2016 02:52 PM, Patcharee Thongtra wrote:

Hi,

I have been struggling with the R interpreter / SparkR interpreter. Is 
the command below the right one to build Zeppelin with the R interpreter / 
SparkR interpreter?


mvn clean package -Pspark-1.6 -Phadoop-2.6 -Pyarn -Ppyspark -Psparkr

BR,
Patcharee








Re: Guava 16.0 Cassandra Error using Zeppelin 0.60/Spark 1.6.1/Cassandra 3.4

2016-04-13 Thread DuyHai Doan
Ahh yes, I forgot that you're using the 0.6.0 build. The Guava jar was
missing in the 0.5.5 release.

On Wed, Apr 13, 2016 at 2:03 PM, Sanne de Roever 
wrote:

> Rocking! Vincent's suggestion worked.
>
> I tried a %dep in the notebook first, this did not work.
>
> The $ZEPPELIN_HOME/interpreter/cassandra directory does not have a lib folder, but
> is filled with jars itself, among others guava-16.0.1.jar. No changes necessary
> there, it seems.
>
> On Wed, Apr 13, 2016 at 1:37 PM, vincent gromakowski <
> vincent.gromakow...@gmail.com> wrote:
>
>> It's not a configuration error but a well-known conflict between Guava 12
>> in Spark and Guava 16 in the Spark Cassandra driver. You can find some
>> workarounds on the Spark Cassandra mailing list.
>>
>> My workaround in Zeppelin is to load the Guava 16 lib in the Zeppelin
>> dependency loader (Spark interpreter config web page). It's a big conflict
>> that will probably be resolved in Spark 2.0.
>>
>> 2016-04-13 13:32 GMT+02:00 Sanne de Roever :
>>
>>> Hi,
>>>
>>> My goal is to get Zeppelin 0.60 working with a remote Spark 1.6.1 and
>>> Cassandra 3.4.
>>>
>>> The connection between Zeppelin and Spark works. Currently I'm stuck on
>>> a Guava error, more specifically in the connection between Spark and
>>> Cassandra:
>>>
>>> Caused by: java.lang.IllegalStateException: Detected Guava issue #1635
>>> which indicates that a version of Guava less than 16.01 is in use. This
>>> introduces codec resolution issues and potentially other incompatibility
>>> issues in the driver. Please upgrade to Guava 16.01 or later.
>>> at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:62)
>>> at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
>>> at com.datastax.driver.core.Cluster.<init>(Cluster.java:67)
>>>
>>> A related issue has appeared earlier in Zeppelin:
>>> https://issues.apache.org/jira/browse/ZEPPELIN-620
>>>
>>> I'm configuring the Cassandra driver by setting the spark.jars property
>>> in spark-defaults.conf:
>>>
>>> spark.jars
>>> /u01/app/zeppelin/spark-cassandra-libs/spark-core_2.10-1.6.1.jar,/u01/app/zeppelin/spark-cassandra-libs/joda-convert-1.8.1.jar,/u01/app/zeppelin/spark-cassandra-libs/cassandra-thrift-3.4.jar,/u01/app/zeppelin/spark-cassandra-libs/joda-time-2.9.3.jar,/u01/app/zeppelin/spark-cassandra-libs/spark-cassandra-connector-java_2.10-1.6.0-M1.jar,/u01/app/zeppelin/spark-cassandra-libs/spark-cassandra-connector-1.6.0-M1-s_2.10.jar,/u01/app/zeppelin/spark-cassandra-libs/guava-19.0.jar,/u01/app/zeppelin/spark-cassandra-libs/cassandra-driver-core-3.0.0.jar
>>>
>>> (There are no external connections in the data center)
>>>
>>> Is this a configuration error?
>>>
>>> Cheers,
>>>
>>> Sanne
>>>
>>
>>
>


Re: Guava 16.0 Cassandra Error using Zeppelin 0.60/Spark 1.6.1/Cassandra 3.4

2016-04-13 Thread Sanne de Roever
Rocking! Vincent's suggestion worked.

I tried a %dep in the notebook first, this did not work.

The $ZEPPELIN_HOME/interpreter/cassandra directory does not have a lib folder, but is
filled with jars itself, among others guava-16.0.1.jar. No changes necessary there,
it seems.

On Wed, Apr 13, 2016 at 1:37 PM, vincent gromakowski <
vincent.gromakow...@gmail.com> wrote:

> It's not a configuration error but a well-known conflict between Guava 12
> in Spark and Guava 16 in the Spark Cassandra driver. You can find some
> workarounds on the Spark Cassandra mailing list.
>
> My workaround in Zeppelin is to load the Guava 16 lib in the Zeppelin
> dependency loader (Spark interpreter config web page). It's a big conflict
> that will probably be resolved in Spark 2.0.
>
> 2016-04-13 13:32 GMT+02:00 Sanne de Roever :
>
>> Hi,
>>
>> My goal is to get Zeppelin 0.60 working with a remote Spark 1.6.1 and
>> Cassandra 3.4.
>>
>> The connection between Zeppelin and Spark works. Currently I'm stuck on a
>> Guava error, more specifically in the connection between Spark and
>> Cassandra:
>>
>> Caused by: java.lang.IllegalStateException: Detected Guava issue #1635
>> which indicates that a version of Guava less than 16.01 is in use. This
>> introduces codec resolution issues and potentially other incompatibility
>> issues in the driver. Please upgrade to Guava 16.01 or later.
>> at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:62)
>> at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
>> at com.datastax.driver.core.Cluster.<init>(Cluster.java:67)
>>
>> A related issue has appeared earlier in Zeppelin:
>> https://issues.apache.org/jira/browse/ZEPPELIN-620
>>
>> I'm configuring the Cassandra driver by setting the spark.jars property
>> in spark-defaults.conf:
>>
>> spark.jars
>> /u01/app/zeppelin/spark-cassandra-libs/spark-core_2.10-1.6.1.jar,/u01/app/zeppelin/spark-cassandra-libs/joda-convert-1.8.1.jar,/u01/app/zeppelin/spark-cassandra-libs/cassandra-thrift-3.4.jar,/u01/app/zeppelin/spark-cassandra-libs/joda-time-2.9.3.jar,/u01/app/zeppelin/spark-cassandra-libs/spark-cassandra-connector-java_2.10-1.6.0-M1.jar,/u01/app/zeppelin/spark-cassandra-libs/spark-cassandra-connector-1.6.0-M1-s_2.10.jar,/u01/app/zeppelin/spark-cassandra-libs/guava-19.0.jar,/u01/app/zeppelin/spark-cassandra-libs/cassandra-driver-core-3.0.0.jar
>>
>> (There are no external connections in the data center)
>>
>> Is this a configuration error?
>>
>> Cheers,
>>
>> Sanne
>>
>
>


Re: Guava 16.0 Cassandra Error using Zeppelin 0.60/Spark 1.6.1/Cassandra 3.4

2016-04-13 Thread DuyHai Doan
Easy work-around: in the $ZEPPELIN_HOME/interpreter/cassandra/lib folder, add
the guava-16.0.1.jar file and it's done.
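
In shell terms that is roughly (a sketch, assuming a local copy of the jar, a
default Zeppelin layout, and that a restart is acceptable; note that in some
builds the jars sit directly under interpreter/cassandra/ rather than in a lib/
sub-folder, as mentioned elsewhere in this thread):

  cp guava-16.0.1.jar $ZEPPELIN_HOME/interpreter/cassandra/lib/
  $ZEPPELIN_HOME/bin/zeppelin-daemon.sh restart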

On Wed, Apr 13, 2016 at 1:37 PM, vincent gromakowski <
vincent.gromakow...@gmail.com> wrote:

> It's not a configuration error but a well-known conflict between Guava 12
> in Spark and Guava 16 in the Spark Cassandra driver. You can find some
> workarounds on the Spark Cassandra mailing list.
>
> My workaround in Zeppelin is to load the Guava 16 lib in the Zeppelin
> dependency loader (Spark interpreter config web page). It's a big conflict
> that will probably be resolved in Spark 2.0.
>
> 2016-04-13 13:32 GMT+02:00 Sanne de Roever :
>
>> Hi,
>>
>> My goal is to get Zeppelin 0.60 working with a remote Spark 1.6.1 and
>> Cassandra 3.4.
>>
>> The connection between Zeppelin and Spark works. Currently I'm stuck on a
>> Guava error, more specifically in the connection between Spark and
>> Cassandra:
>>
>> Caused by: java.lang.IllegalStateException: Detected Guava issue #1635
>> which indicates that a version of Guava less than 16.01 is in use. This
>> introduces codec resolution issues and potentially other incompatibility
>> issues in the driver. Please upgrade to Guava 16.01 or later.
>> at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:62)
>> at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
>> at com.datastax.driver.core.Cluster.<init>(Cluster.java:67)
>>
>> A related issue has appeared earlier in Zeppelin:
>> https://issues.apache.org/jira/browse/ZEPPELIN-620
>>
>> I'm configuring the Cassandra driver by setting the spark.jars property
>> in spark-defaults.conf:
>>
>> spark.jars
>> /u01/app/zeppelin/spark-cassandra-libs/spark-core_2.10-1.6.1.jar,/u01/app/zeppelin/spark-cassandra-libs/joda-convert-1.8.1.jar,/u01/app/zeppelin/spark-cassandra-libs/cassandra-thrift-3.4.jar,/u01/app/zeppelin/spark-cassandra-libs/joda-time-2.9.3.jar,/u01/app/zeppelin/spark-cassandra-libs/spark-cassandra-connector-java_2.10-1.6.0-M1.jar,/u01/app/zeppelin/spark-cassandra-libs/spark-cassandra-connector-1.6.0-M1-s_2.10.jar,/u01/app/zeppelin/spark-cassandra-libs/guava-19.0.jar,/u01/app/zeppelin/spark-cassandra-libs/cassandra-driver-core-3.0.0.jar
>>
>> (There are no external connections in the data center)
>>
>> Is this a configuration error?
>>
>> Cheers,
>>
>> Sanne
>>
>
>