I would recommend using JedisPool with the try-with-resources (AutoCloseable) pattern:

private JedisPool pool = new JedisPool(host, port);

// getResource() borrows a connection; try-with-resources returns it to the pool
try (Jedis jedis = pool.getResource()) {
    /* do magic to jedis */
}
// once Redis is no longer needed at all
pool.destroy();

We use this construct successfully in a foreachPartition action.
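Inside foreachPartition it looks roughly like this (a minimal sketch, assuming the
host/port constants and the TopData type from Michel's snippet, plus the usual
redis.clients.jedis and org.apache.spark.api.java.function imports):

rdd.foreachPartition(new VoidFunction<Iterator<TopData>>() {
    public void call(Iterator<TopData> it) throws Exception {
        // one pool per partition, created on the executor itself
        JedisPool pool = new JedisPool(host, port);
        try {
            while (it.hasNext()) {
                TopData t = it.next();
                // borrow a connection; try-with-resources returns it to the pool
                try (Jedis jedis = pool.getResource()) {
                    /* do magic to jedis with t */
                }
            }
        } finally {
            pool.destroy(); // close all pooled connections before the task ends
        }
    }
});

That way nothing Redis-related outlives the partition, unlike a ThreadLocal that
keeps a connection alive for the lifetime of the worker thread.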

On 24.03.2016 at 15:20, Michel Hubert wrote:
>
> No.
>
>  
>
> But I may be on to something.
>
> I use Jedis to send data to Redis.
>
>  
>
> I used a ThreadLocal construct:
>
>  
>
> private static final ThreadLocal<Jedis> jedis = new
> ThreadLocal<Jedis>() {
>     @Override
>     protected Jedis initialValue()
>     {
>         return new Jedis("10.101.41.19", 6379);
>     }
> };
>
>  
>
> and then
>
>  
>
> .foreachRDD(new VoidFunction<JavaRDD<TopData>>() {
>     public void call(JavaRDD<TopData> rdd) throws Exception {
>         for (TopData t : rdd.take(top)) {
>             jedis …
>         }
>
>  
>
> Could this have resulted in a memory leak?
>
>  
>
> *From:* Ted Yu [mailto:yuzhih...@gmail.com]
> *Sent:* Thursday, 24 March 2016 15:15
> *To:* Michel Hubert <mich...@phact.nl>
> *CC:* user@spark.apache.org
> *Subject:* Re: apache spark errors
>
>  
>
> Do you have the history server enabled?
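> For reference, enabling it is roughly this (a minimal sketch; the event log
> directory is a placeholder for your own path):
>
> # in conf/spark-defaults.conf
> spark.eventLog.enabled           true
> spark.eventLog.dir               hdfs:///spark-events
> spark.history.fs.logDirectory    hdfs:///spark-events
>
> # then start the server
> ./sbin/start-history-server.sh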
>
>  
>
> Posting your code snippet would help us understand your use case (and
> reproduce the leak).
>
>  
>
> Thanks
>
>  
>
> On Thu, Mar 24, 2016 at 6:40 AM, Michel Hubert <mich...@phact.nl> wrote:
>
>     <dependencies>
>         <dependency> <!-- Spark dependency -->
>             <groupId>org.apache.spark</groupId>
>             <artifactId>spark-core_2.10</artifactId>
>             <version>1.6.1</version>
>         </dependency>
>         <dependency>
>             <groupId>org.apache.spark</groupId>
>             <artifactId>spark-streaming_2.10</artifactId>
>             <version>1.6.1</version>
>         </dependency>
>         <dependency>
>             <groupId>org.apache.spark</groupId>
>             <artifactId>spark-streaming-kafka_2.10</artifactId>
>             <version>1.6.1</version>
>         </dependency>
>
>         <dependency>
>             <groupId>org.elasticsearch</groupId>
>             <artifactId>elasticsearch</artifactId>
>             <version>2.2.0</version>
>         </dependency>
>
>         <dependency>
>             <groupId>org.apache.kafka</groupId>
>             <artifactId>kafka_2.10</artifactId>
>             <version>0.8.2.2</version>
>         </dependency>
>
>
>         <dependency>
>             <groupId>org.elasticsearch</groupId>
>             <artifactId>elasticsearch-spark_2.10</artifactId>
>             <version>2.2.0</version>
>         </dependency>
>         <dependency>
>             <groupId>redis.clients</groupId>
>             <artifactId>jedis</artifactId>
>             <version>2.8.0</version>
>             <type>jar</type>
>             <scope>compile</scope>
>         </dependency>
>     </dependencies>
>
>      
>
>      
>
>     How can I look at those tasks?
>
>      
>
>     *From:* Ted Yu [mailto:yuzhih...@gmail.com]
>     *Sent:* Thursday, 24 March 2016 14:33
>     *To:* Michel Hubert <mich...@phact.nl>
>     *CC:* user@spark.apache.org
>     *Subject:* Re: apache spark errors
>
>      
>
>     Which release of Spark are you using?
>
>      
>
>     Have you looked at the tasks whose IDs were printed to see if there
>     was more of a clue?
>
>      
>
>     Thanks
>
>      
>
>     On Thu, Mar 24, 2016 at 6:12 AM, Michel Hubert <mich...@phact.nl> wrote:
>
>         Hi,
>
>          
>
>         I constantly get these errors:
>
>          
>
>         0    [Executor task launch worker-15] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 6564500 bytes, TID = 38969
>
>         310002 [Executor task launch worker-12] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 5523550 bytes, TID = 43270
>
>         318445 [Executor task launch worker-12] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 6879566 bytes, TID = 43408
>
>         388893 [Executor task launch worker-12] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 5572546 bytes, TID = 44382
>
>         418186 [Executor task launch worker-13] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 5289222 bytes, TID = 44795
>
>         488421 [Executor task launch worker-4] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 8738142 bytes, TID = 45769
>
>         619276 [Executor task launch worker-4] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 5759312 bytes, TID = 47571
>
>         632275 [Executor task launch worker-12] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 5602240 bytes, TID = 47709
>
>         644989 [Executor task launch worker-13] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 5326260 bytes, TID = 47863
>
>         720701 [Executor task launch worker-12] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 5399578 bytes, TID = 48959
>
>         1147961 [Executor task launch worker-16] ERROR
>         org.apache.spark.executor.Executor  - Managed memory leak
>         detected; size = 5251872 bytes, TID = 54922
>
>          
>
>          
>
>         How can I fix this?
>
>          
>
>         With kind regards,
>
>          
>
>         Michel
>
>      
>
>  
>

-- 
*Max Schmidt, Senior Java Developer* | m...@datapath.io | LinkedIn
<https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/>
Datapath.io
 
Decreasing AWS latency.
Your traffic optimized.

Datapath.io GmbH
Mainz | HRB Nr. 46222
Sebastian Spies, CEO
