[jira] [Commented] (SPARK-4631) Add real unit test for MQTT

2015-01-29 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14296811#comment-14296811
 ] 

Ye Xianjin commented on SPARK-4631:
---

[~dragos], Thread.sleep(50) does pass the test on my machine. 

 Add real unit test for MQTT 
 

 Key: SPARK-4631
 URL: https://issues.apache.org/jira/browse/SPARK-4631
 Project: Spark
  Issue Type: Test
  Components: Streaming
Reporter: Tathagata Das
Priority: Critical
 Fix For: 1.3.0


 A real unit test that actually transfers data to ensure that the MQTTUtil is 
 functional






[jira] [Issue Comment Deleted] (SPARK-4631) Add real unit test for MQTT

2015-01-29 Thread Ye Xianjin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Xianjin updated SPARK-4631:
--
Comment: was deleted

(was: [~dragos], Thread.sleep(50) does pass the test on my machine. )

 Add real unit test for MQTT 
 

 Key: SPARK-4631
 URL: https://issues.apache.org/jira/browse/SPARK-4631
 Project: Spark
  Issue Type: Test
  Components: Streaming
Reporter: Tathagata Das
Priority: Critical
 Fix For: 1.3.0


 A real unit test that actually transfers data to ensure that the MQTTUtil is 
 functional






[jira] [Commented] (SPARK-4631) Add real unit test for MQTT

2015-01-29 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14296812#comment-14296812
 ] 

Ye Xianjin commented on SPARK-4631:
---

[~dragos], Thread.sleep(50) does pass the test on my machine. 

 Add real unit test for MQTT 
 

 Key: SPARK-4631
 URL: https://issues.apache.org/jira/browse/SPARK-4631
 Project: Spark
  Issue Type: Test
  Components: Streaming
Reporter: Tathagata Das
Priority: Critical
 Fix For: 1.3.0


 A real unit test that actually transfers data to ensure that the MQTTUtil is 
 functional






[jira] [Commented] (SPARK-4631) Add real unit test for MQTT

2015-01-28 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14296232#comment-14296232
 ] 

Ye Xianjin commented on SPARK-4631:
---

Hi [~dragos], I have the same issue here. I'd like to copy the email I sent to 
Sean here, which may help. 

{quote}
Hi Sean:

I enabled the debug flag in log4j. I believe the MQTTStreamSuite failure is 
more likely due to some weird network issue. However, I cannot understand why 
this exception is thrown.

What I saw in unit-tests.log is below:
15/01/28 23:41:37.390 ActiveMQ Transport: tcp:///127.0.0.1:53845@23456 DEBUG Transport: Transport Connection to: tcp://127.0.0.1:53845 failed: java.net.ProtocolException: Invalid CONNECT encoding
java.net.ProtocolException: Invalid CONNECT encoding
  at org.fusesource.mqtt.codec.CONNECT.decode(CONNECT.java:77)
  at org.apache.activemq.transport.mqtt.MQTTProtocolConverter.onMQTTCommand(MQTTProtocolConverter.java:118)
  at org.apache.activemq.transport.mqtt.MQTTTransportFilter.onCommand(MQTTTransportFilter.java:74)
  at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
  at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:222)
  at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:204)
  at java.lang.Thread.run(Thread.java:695)

However, when I looked at the code at 
http://grepcode.com/file/repo1.maven.org/maven2/org.fusesource.mqtt-client/mqtt-client/1.3/org/fusesource/mqtt/codec/CONNECT.java#76
, I don't quite understand why that would happen.
I am not familiar with ActiveMQ; maybe you can look at this and figure out 
what really happened.
{quote}

From a quick look at the paho mqtt-client code, a possible cause for that 
failure is that org.eclipse.paho.mqtt-client doesn't write PROTOCOL_NAME in 
the MQTT frame. But that doesn't make sense, as Jenkins runs the test 
successfully, so I am not sure.
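
For illustration only, here is roughly the framing assumption involved (a 
sketch, not the fusesource/ActiveMQ code): an MQTT 3.1 CONNECT packet's 
variable header starts with the length-prefixed protocol name "MQIsdp", and a 
broker that doesn't find it there would reject the frame much like this.

{code}
// Illustrative sketch only -- not the actual decoder.
// MQTT 3.1 CONNECT variable header: 2-byte big-endian length, then "MQIsdp".
def looksLikeMqtt31Connect(body: Array[Byte]): Boolean = {
  val name = "MQIsdp".getBytes("UTF-8")
  body.length >= 2 + name.length &&
    (((body(0) & 0xff) << 8) | (body(1) & 0xff)) == name.length &&
    body.slice(2, 2 + name.length).sameElements(name)
}
{code}

If the client really skipped PROTOCOL_NAME, a check of this shape would fail 
with exactly this kind of "invalid CONNECT" error.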

 Add real unit test for MQTT 
 

 Key: SPARK-4631
 URL: https://issues.apache.org/jira/browse/SPARK-4631
 Project: Spark
  Issue Type: Test
  Components: Streaming
Reporter: Tathagata Das
Priority: Critical
 Fix For: 1.3.0


 A real unit test that actually transfers data to ensure that the MQTTUtil is 
 functional






[jira] [Created] (SPARK-5201) ParallelCollectionRDD.slice(seq, numSlices) has int overflow when dealing with inclusive range

2015-01-11 Thread Ye Xianjin (JIRA)
Ye Xianjin created SPARK-5201:
-

 Summary: ParallelCollectionRDD.slice(seq, numSlices) has int 
overflow when dealing with inclusive range
 Key: SPARK-5201
 URL: https://issues.apache.org/jira/browse/SPARK-5201
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Ye Xianjin
 Fix For: 1.2.1


{code}
 sc.makeRDD(1 to (Int.MaxValue)).count       // result = 0
 sc.makeRDD(1 to (Int.MaxValue - 1)).count   // result = 2147483646 = Int.MaxValue - 1
 sc.makeRDD(1 until (Int.MaxValue)).count    // result = 2147483646 = Int.MaxValue - 1
{code}
More details on the discussion https://github.com/apache/spark/pull/2874
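
The overflow is easy to see in isolation. A minimal sketch, assuming slice 
converts an inclusive Range into exclusive sub-Ranges via start + length * step 
(my simplification, not the exact Spark code):

{code}
val r = 1 to Int.MaxValue                  // r.length == Int.MaxValue
val exclusiveEnd = r.start + r.length * r.step
// 1 + Int.MaxValue wraps around to Int.MinValue, so Range(1, Int.MinValue, 1)
// is empty and every slice contributes 0 elements -- hence count == 0.
println(exclusiveEnd)                      // -2147483648
{code}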






[jira] [Commented] (SPARK-5201) ParallelCollectionRDD.slice(seq, numSlices) has int overflow when dealing with inclusive range

2015-01-11 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273277#comment-14273277
 ] 

Ye Xianjin commented on SPARK-5201:
---

I will send a pr for this.

 ParallelCollectionRDD.slice(seq, numSlices) has int overflow when dealing 
 with inclusive range
 --

 Key: SPARK-5201
 URL: https://issues.apache.org/jira/browse/SPARK-5201
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Ye Xianjin
  Labels: rdd
 Fix For: 1.2.1

   Original Estimate: 2h
  Remaining Estimate: 2h

 {code}
  sc.makeRDD(1 to (Int.MaxValue)).count       // result = 0
  sc.makeRDD(1 to (Int.MaxValue - 1)).count   // result = 2147483646 = Int.MaxValue - 1
  sc.makeRDD(1 until (Int.MaxValue)).count    // result = 2147483646 = Int.MaxValue - 1
 {code}
 More details on the discussion https://github.com/apache/spark/pull/2874






[jira] [Commented] (FLUME-2385) Flume spams log file with "Spooling Directory Source runner has shutdown" messages at INFO level

2014-11-10 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205891#comment-14205891
 ] 

Ye Xianjin commented on FLUME-2385:
---

Hi [~scaph01], I think (according to my colleague) the more reasonable change 
is to set the log level to DEBUG. 
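
In the meantime, a user-side workaround sketch (assuming a log4j.properties 
setup; the logger name matches the class shown in the message):

{code}
# Workaround sketch, not the patch: raise the threshold for the spooling
# source so the twice-a-second INFO message is dropped.
log4j.logger.org.apache.flume.source.SpoolDirectorySource=WARN
{code}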

 Flume spams log file with "Spooling Directory Source runner has shutdown" 
 messages at INFO level
 

 Key: FLUME-2385
 URL: https://issues.apache.org/jira/browse/FLUME-2385
 Project: Flume
  Issue Type: Improvement
Affects Versions: v1.4.0
Reporter: Justin Hayes
Assignee: Phil Scala
Priority: Minor
 Fix For: v1.6.0

 Attachments: FLUME-2385-0.patch


 When I start an agent with the following config, the spooling directory 
 source emits "14/05/14 22:36:12 INFO source.SpoolDirectorySource: Spooling 
 Directory Source runner has shutdown." messages twice a second. Pretty 
 innocuous, but it will fill up the file system needlessly and get in the way 
 of other INFO messages.
 cis.sources = httpd
 cis.sinks = loggerSink
 cis.channels = mem2logger
 cis.sources.httpd.type = spooldir
 cis.sources.httpd.spoolDir = /var/log/httpd
 cis.sources.httpd.trackerDir = /var/lib/flume-ng/tracker/httpd
 cis.sources.httpd.channels = mem2logger
 cis.sinks.loggerSink.type = logger
 cis.sinks.loggerSink.channel = mem2logger
 cis.channels.mem2logger.type = memory
 cis.channels.mem2logger.capacity = 1
 cis.channels.mem2logger.transactionCapacity = 1000 





[jira] [Commented] (SPARK-4002) JavaKafkaStreamSuite.testKafkaStream fails on OSX

2014-10-22 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14179753#comment-14179753
 ] 

Ye Xianjin commented on SPARK-4002:
---

Hi [~rdub], what's your Mac OS X hostname? Mine was advancedxy's-pro; notice 
the illegal ['] in the hostname. That was causing Kafka to fail, and it's what 
I saw in a Kafka-related test failure a couple of weeks ago. Hopefully it's 
related.
The details are in unit-tests.log. So, as [~jerryshao] said, it's better if 
you post your unit test log here, and then we may get at the real cause.
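
For illustration, this is roughly the shape of check that trips on such a 
hostname (a hypothetical pattern, not Kafka's actual validation code):

{code}
// Hypothetical sketch of hostname validation -- not Kafka's code.
val legalHost = "[A-Za-z0-9][A-Za-z0-9.-]*".r
def isLegal(h: String): Boolean = legalHost.pattern.matcher(h).matches()

isLegal("advancedxys-pro")   // true
isLegal("advancedxy's-pro")  // false: the ['] is rejected
{code}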

 JavaKafkaStreamSuite.testKafkaStream fails on OSX
 -

 Key: SPARK-4002
 URL: https://issues.apache.org/jira/browse/SPARK-4002
 Project: Spark
  Issue Type: Bug
  Components: Streaming
 Environment: Mac OSX 10.9.5.
Reporter: Ryan Williams

 [~sowen] mentioned this on spark-dev 
 [here|http://mail-archives.apache.org/mod_mbox/spark-dev/201409.mbox/%3ccamassdjs0fmsdc-k-4orgbhbfz2vvrmm0hfyifeeal-spft...@mail.gmail.com%3E]
  and I just reproduced it on {{master}} 
 ([7e63bb4|https://github.com/apache/spark/commit/7e63bb49c526c3f872619ae14e4b5273f4c535e9]).
 The relevant output I get when running {{./dev/run-tests}} is:
 {code}
 [info] KafkaStreamSuite:
 [info] - Kafka input stream
 [info] Test run started
 [info] Test org.apache.spark.streaming.kafka.JavaKafkaStreamSuite.testKafkaStream started
 [error] Test org.apache.spark.streaming.kafka.JavaKafkaStreamSuite.testKafkaStream failed: junit.framework.AssertionFailedError: expected:<3> but was:<0>
 [error] at junit.framework.Assert.fail(Assert.java:50)
 [error] at junit.framework.Assert.failNotEquals(Assert.java:287)
 [error] at junit.framework.Assert.assertEquals(Assert.java:67)
 [error] at junit.framework.Assert.assertEquals(Assert.java:199)
 [error] at junit.framework.Assert.assertEquals(Assert.java:205)
 [error] at org.apache.spark.streaming.kafka.JavaKafkaStreamSuite.testKafkaStream(JavaKafkaStreamSuite.java:129)
 [error] ...
 [info] Test run finished: 1 failed, 0 ignored, 1 total, 19.798s
 {code}
 Seems like this test should be {{@Ignore}}'d, or some note about this made in 
 the {{README.md}}.






[jira] [Commented] (SPARK-3098) In some cases, the zipWithIndex operation gets wrong results

2014-09-01 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14117558#comment-14117558
 ] 

Ye Xianjin commented on SPARK-3098:
---

Hi [~srowen] and [~gq], I think what [~matei] wants to say is that because the 
ordering of elements in distinct() is not guaranteed, the result of 
zipWithIndex is not deterministic. If you recompute the RDD with the distinct 
transformation, you are not guaranteed to get the same result. That explains 
the behavior here.

But as [~srowen] said, it's surprising to see different results from the same 
RDD. [~matei], what do you think about this behavior?
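
One way to see (and sidestep) this is sketched below: cache the distinct() 
output before zipWithIndex, so a recomputation cannot reshuffle the ordering. 
The numbers are illustrative; it assumes a spark-shell session (so the 
pair-RDD implicits are in scope) and that nothing is evicted from the cache.

{code}
// Sketch: pin one ordering of distinct()'s output before indexing it.
val base = sc.parallelize(1 to 100000).distinct().cache()
base.count()    // materialize the cached ordering
val z = base.zipWithIndex()
z.join(z).filter { case (_, (i, j)) => i != j }.count()  // 0: indices stable
{code}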

  In some cases, the zipWithIndex operation gets wrong results
 --

 Key: SPARK-3098
 URL: https://issues.apache.org/jira/browse/SPARK-3098
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.0.1
Reporter: Guoqiang Li
Priority: Critical

 The code to reproduce:
 {code}
  val c = sc.parallelize(1 to 7899).flatMap { i =>
   (1 to 1).toSeq.map(p => i * 6000 + p)
 }.distinct().zipWithIndex() 
 c.join(c).filter(t => t._2._1 != t._2._2).take(3)
 {code}
  => 
 {code}
  Array[(Int, (Long, Long))] = Array((1732608,(11,12)), (45515264,(12,13)), 
 (36579712,(13,14)))
 {code}






[jira] [Created] (SPARK-3040) pick up a more proper local ip address for Utils.findLocalIpAddress method

2014-08-14 Thread Ye Xianjin (JIRA)
Ye Xianjin created SPARK-3040:
-

 Summary: pick up a more proper local ip address for 
Utils.findLocalIpAddress method
 Key: SPARK-3040
 URL: https://issues.apache.org/jira/browse/SPARK-3040
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.0.2
 Environment: Mac OS X, a bunch of network interfaces: eth0, wlan0, 
vnic0, vnic1, tun0, lo
Reporter: Ye Xianjin
Priority: Trivial


I noticed this inconvenience when I ran spark-shell with my virtual machines 
and a VPN service running.

There are a lot of network interfaces on my laptop (inactive devices omitted):
{quote}
lo0: inet 127.0.0.1
en1: inet 192.168.0.102
vnic0: inet 10.211.55.2 (virtual if for vm1)
vnic1: inet 10.37.129.3 (virtual if for vm2)
tun0: inet 172.16.100.191 --> 172.16.100.191 (tun device for VPN)
{quote}

In Spark core, Utils.findLocalIpAddress() uses 
NetworkInterface.getNetworkInterfaces to get all active network interfaces, 
but unfortunately, this method returns network interfaces in reverse order 
compared to the ifconfig output (both use the ioctl syscall). I dug into the 
OpenJDK 6 and 7 source code and confirmed this behavior (it only happens on 
Unix-like systems; Windows handles it and returns interfaces in index order). 
So the findLocalIpAddress method will pick the IP address associated with 
tun0 rather than en1.
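
A quick way to see the order the JVM reports (a shell sketch, not the Utils 
code):

{code}
// Sketch: print interfaces in the order NetworkInterface returns them.
// On a setup like the above, tun0 shows up before en1.
import java.net.NetworkInterface
import scala.collection.JavaConverters._

for (ni <- NetworkInterface.getNetworkInterfaces.asScala if ni.isUp) {
  val addrs = ni.getInetAddresses.asScala.map(_.getHostAddress).mkString(", ")
  println(s"${ni.getName}: $addrs")
}
{code}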







[jira] [Created] (SPARK-2557) createTaskScheduler should be consistent between local and local-n-failures

2014-07-17 Thread Ye Xianjin (JIRA)
Ye Xianjin created SPARK-2557:
-

 Summary: createTaskScheduler should be consistent between local 
and local-n-failures 
 Key: SPARK-2557
 URL: https://issues.apache.org/jira/browse/SPARK-2557
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.0.0
Reporter: Ye Xianjin
Priority: Minor


In SparkContext.createTaskScheduler, we can use {code}local[*]{code} to 
estimate the number of cores on the machine. I think we should also be able to 
use * in the local-n-failures mode.

And according to the LOCAL_N_REGEX pattern matching code, I believe the 
regular expression of LOCAL_N_REGEX is wrong. LOCAL_N_REGEX should be 
{code}
local\[([0-9]+|\*)\].r
{code} 
rather than
{code}
 local\[([0-9\*]+)\].r
{code}
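
The difference is easy to check in a REPL (a quick sketch, not the 
SparkContext code):

{code}
// Sketch: the old pattern accepts garbage like local[1*2]; the fixed one
// accepts only digits or a single *.
val oldRegex = """local\[([0-9\*]+)\]""".r
val newRegex = """local\[([0-9]+|\*)\]""".r
def matches(r: scala.util.matching.Regex, s: String) = r.unapplySeq(s).isDefined

matches(oldRegex, "local[1*2]")  // true -- a bogus master string slips through
matches(newRegex, "local[1*2]")  // false
matches(newRegex, "local[*]")    // true
matches(newRegex, "local[4]")    // true
{code}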





[jira] [Commented] (SPARK-2557) createTaskScheduler should be consistent between local and local-n-failures

2014-07-17 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14065001#comment-14065001
 ] 

Ye Xianjin commented on SPARK-2557:
---

I will send a pr for this.

 createTaskScheduler should be consistent between local and local-n-failures 
 

 Key: SPARK-2557
 URL: https://issues.apache.org/jira/browse/SPARK-2557
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.0.0
Reporter: Ye Xianjin
Priority: Minor
  Labels: starter
   Original Estimate: 2h
  Remaining Estimate: 2h

 In SparkContext.createTaskScheduler, we can use {code}local[*]{code} to 
 estimate the number of cores on the machine. I think we should also be able 
 to use * in the local-n-failures mode.
 And according to the LOCAL_N_REGEX pattern matching code, I believe the 
 regular expression of LOCAL_N_REGEX is wrong. LOCAL_N_REGEX 
 should be 
 {code}
 local\[([0-9]+|\*)\].r
 {code} 
 rather than
 {code}
  local\[([0-9\*]+)\].r
 {code}





[jira] [Commented] (SPARK-2557) createTaskScheduler should be consistent between local and local-n-failures

2014-07-17 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14065029#comment-14065029
 ] 

Ye Xianjin commented on SPARK-2557:
---

Github pr: https://github.com/apache/spark/pull/1464

 createTaskScheduler should be consistent between local and local-n-failures 
 

 Key: SPARK-2557
 URL: https://issues.apache.org/jira/browse/SPARK-2557
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.0.0
Reporter: Ye Xianjin
Priority: Minor
  Labels: starter
   Original Estimate: 2h
  Remaining Estimate: 2h

 In SparkContext.createTaskScheduler, we can use {code}local[*]{code} to 
 estimate the number of cores on the machine. I think we should also be able 
 to use * in the local-n-failures mode.
 And according to the LOCAL_N_REGEX pattern matching code, I believe the 
 regular expression of LOCAL_N_REGEX is wrong. LOCAL_N_REGEX 
 should be 
 {code}
 local\[([0-9]+|\*)\].r
 {code} 
 rather than
 {code}
  local\[([0-9\*]+)\].r
 {code}





[jira] [Closed] (SPARK-1511) Update TestUtils.createCompiledClass() API to work with creating class file on different filesystem

2014-04-17 Thread Ye Xianjin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Xianjin closed SPARK-1511.
-

   Resolution: Fixed
Fix Version/s: 1.0.0

 Update TestUtils.createCompiledClass() API to work with creating class file 
 on different filesystem
 ---

 Key: SPARK-1511
 URL: https://issues.apache.org/jira/browse/SPARK-1511
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 0.8.1, 0.9.0, 1.0.0
 Environment: Mac OS X, two disks. 
Reporter: Ye Xianjin
Priority: Minor
  Labels: starter
 Fix For: 1.0.0

   Original Estimate: 24h
  Remaining Estimate: 24h

 The createCompiledClass method uses the java.io.File.renameTo method to 
 rename the source file to the destination file, which will fail if the source 
 and destination files are on different disks (or partitions).
 See 
 http://apache-spark-developers-list.1001551.n3.nabble.com/Tests-failed-after-assembling-the-latest-code-from-github-td6315.html
  for more details.
 Using com.google.common.io.Files.move instead of renameTo will solve this issue.
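
A sketch of the suggested change, with hypothetical paths on two different 
disks (Guava's Files.move falls back to copy-and-delete when a rename across 
filesystems isn't possible):

{code}
import java.io.File
import com.google.common.io.Files

val src = new File("/tmp/compiled/Foo.class")   // hypothetical paths
val dst = new File("/Volumes/other/Foo.class")

// java.io.File.renameTo just returns false across filesystems;
// Guava's Files.move copies then deletes when a rename isn't possible.
Files.move(src, dst)
{code}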





[jira] [Created] (SPARK-1527) rootDirs in DiskBlockManagerSuite doesn't get full path from rootDir0, rootDir1

2014-04-17 Thread Ye Xianjin (JIRA)
Ye Xianjin created SPARK-1527:
-

 Summary: rootDirs in DiskBlockManagerSuite doesn't get full path 
from rootDir0, rootDir1
 Key: SPARK-1527
 URL: https://issues.apache.org/jira/browse/SPARK-1527
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 0.9.0
Reporter: Ye Xianjin
Priority: Minor


In core/src/test/scala/org/apache/spark/storage/DiskBlockManagerSuite.scala:

  val rootDir0 = Files.createTempDir()
  rootDir0.deleteOnExit()
  val rootDir1 = Files.createTempDir()
  rootDir1.deleteOnExit()
  val rootDirs = rootDir0.getName + "," + rootDir1.getName

rootDir0 and rootDir1 are in the system's temporary directory. 
rootDir0.getName will not get the full path of the directory, only its last 
component. When passed to the DiskBlockManager constructor, DiskBlockManager 
creates directories in the pwd, not the temporary directory.

rootDir0.toString will fix this issue.
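
For reference, the difference between the accessors, as a quick sketch:

{code}
// Sketch: what each accessor returns for a Guava temp dir.
import com.google.common.io.Files

val rootDir0 = Files.createTempDir()
rootDir0.getName          // last path component only, e.g. "139..."
rootDir0.toString         // the path as constructed (may be relative if
                          // java.io.tmpdir is a relative path)
rootDir0.getAbsolutePath  // always the full path
{code}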





[jira] [Commented] (SPARK-1527) rootDirs in DiskBlockManagerSuite doesn't get full path from rootDir0, rootDir1

2014-04-17 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13973087#comment-13973087
 ] 

Ye Xianjin commented on SPARK-1527:
---

Yes, you are right: toString() may give a relative path, since it's determined 
by the java.io.tmpdir system property (see 
https://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/io/Files.java
 line 591). It's possible that DiskBlockManager will create different 
directories than the original temp dir when java.io.tmpdir is a relative path. 

So should we use getAbsolutePath, since I used this method in my last PR?

But I saw toString() called in other places! Should we do something about 
that?

 rootDirs in DiskBlockManagerSuite doesn't get full path from rootDir0, 
 rootDir1
 ---

 Key: SPARK-1527
 URL: https://issues.apache.org/jira/browse/SPARK-1527
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 0.9.0
Reporter: Ye Xianjin
Priority: Minor
  Labels: starter
   Original Estimate: 24h
  Remaining Estimate: 24h

 In core/src/test/scala/org/apache/spark/storage/DiskBlockManagerSuite.scala:
   val rootDir0 = Files.createTempDir()
   rootDir0.deleteOnExit()
   val rootDir1 = Files.createTempDir()
   rootDir1.deleteOnExit()
   val rootDirs = rootDir0.getName + "," + rootDir1.getName
 rootDir0 and rootDir1 are in the system's temporary directory. 
 rootDir0.getName will not get the full path of the directory, only its last 
 component. When passed to the DiskBlockManager constructor, DiskBlockManager 
 creates directories in the pwd, not the temporary directory.
 rootDir0.toString will fix this issue.





[jira] [Commented] (SPARK-1527) rootDirs in DiskBlockManagerSuite doesn't get full path from rootDir0, rootDir1

2014-04-17 Thread Ye Xianjin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13973096#comment-13973096
 ] 

Ye Xianjin commented on SPARK-1527:
---

Yes, of course: sometimes we want an absolute path, sometimes we want to pass 
a relative path; it depends on the logic. 
But I think maybe we should review these usages so that we can make sure 
absolute and relative paths are used appropriately.

I may have time to review it after I finish another JIRA issue. If you want to 
take it over, please do!

Anyway, thanks for your comments and help.


 rootDirs in DiskBlockManagerSuite doesn't get full path from rootDir0, 
 rootDir1
 ---

 Key: SPARK-1527
 URL: https://issues.apache.org/jira/browse/SPARK-1527
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 0.9.0
Reporter: Ye Xianjin
Priority: Minor
  Labels: starter
   Original Estimate: 24h
  Remaining Estimate: 24h

 In core/src/test/scala/org/apache/spark/storage/DiskBlockManagerSuite.scala:
   val rootDir0 = Files.createTempDir()
   rootDir0.deleteOnExit()
   val rootDir1 = Files.createTempDir()
   rootDir1.deleteOnExit()
   val rootDirs = rootDir0.getName + "," + rootDir1.getName
 rootDir0 and rootDir1 are in the system's temporary directory. 
 rootDir0.getName will not get the full path of the directory, only its last 
 component. When passed to the DiskBlockManager constructor, DiskBlockManager 
 creates directories in the pwd, not the temporary directory.
 rootDir0.toString will fix this issue.





[jira] [Updated] (SPARK-1511) Update TestUtils.createCompiledClass() API to work with creating class file on different filesystem

2014-04-16 Thread Ye Xianjin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Xianjin updated SPARK-1511:
--

Affects Version/s: 0.8.1
   0.9.0

 Update TestUtils.createCompiledClass() API to work with creating class file 
 on different filesystem
 ---

 Key: SPARK-1511
 URL: https://issues.apache.org/jira/browse/SPARK-1511
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 0.8.1, 0.9.0, 1.0.0
 Environment: Mac OS X, two disks. 
Reporter: Ye Xianjin
Priority: Minor
  Labels: starter
   Original Estimate: 24h
  Remaining Estimate: 24h

 The createCompiledClass method uses the java.io.File.renameTo method to 
 rename the source file to the destination file, which will fail if the source 
 and destination files are on different disks (or partitions).
 See 
 http://apache-spark-developers-list.1001551.n3.nabble.com/Tests-failed-after-assembling-the-latest-code-from-github-td6315.html
  for more details.
 Using com.google.common.io.Files.move instead of renameTo will solve this issue.


