[GitHub] spark pull request: [WIP][SPARK-3517]mapPartitions is not correct ...

2014-09-28 Thread witgo
Github user witgo closed the pull request at:

https://github.com/apache/spark/pull/2376





[GitHub] spark pull request: [WIP][SPARK-3517]mapPartitions is not correct ...

2014-09-28 Thread witgo
Github user witgo commented on the pull request:

https://github.com/apache/spark/pull/2376#issuecomment-57078975
  
I can't reproduce it at the moment, so I'm closing this PR.





[GitHub] spark pull request: [WIP][SPARK-3517]mapPartitions is not correct ...

2014-09-17 Thread witgo
Github user witgo commented on the pull request:

https://github.com/apache/spark/pull/2376#issuecomment-55913971
  
I tried to reproduce it the way the test case does, but was unsuccessful. Code like the one below does exhibit the problem, though; I don't know why.
```scala
// Imports assumed by this snippet:
import java.util.Random

import breeze.linalg.{DenseVector => BDV}

import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

def runGibbsSampling(
    data: RDD[Document], initModel: TopicModel,
    totalIter: Int, burnInIter: Int): (TopicModel, RDD[Document]) = {
  require(totalIter > burnInIter, "totalIter is less than burnInIter")
  require(totalIter > 0, "totalIter is less than 0")
  require(burnInIter > 0, "burnInIter is less than 0")

  val (numTopics, numTerms, alpha, beta) = (initModel.topicCounts_.size,
    initModel.topicTermCounts_.head.size,
    initModel.alpha, initModel.beta)
  val probModel = TopicModel(numTopics, numTerms, alpha, beta)

  logInfo("Start initialization")
  var (topicModel, corpus) = sampleTermAssignment(data, initModel)

  for (iter <- 1 to totalIter) {
    logInfo("Start Gibbs sampling (Iteration %d/%d)".format(iter, totalIter))
    val broadcastModel = data.context.broadcast(topicModel)
    val previousCorpus = corpus
    corpus = corpus.mapPartitions { docs =>
      val rand = new Random
      // Only the broadcast handle should be captured by this closure;
      // the model itself is fetched from the executor's block manager.
      val topicModel = broadcastModel.value
      val topicThisTerm = BDV.zeros[Double](numTopics)
      docs.map { doc =>
        val content = doc.content
        val topics = doc.topics
        val topicsDist = doc.topicsDist
        for (i <- 0 until content.length) {
          val term = content(i)
          val topic = topics(i)
          val chosenTopic = topicModel.dropOneDistSampler(topicsDist,
            topicThisTerm, rand, term, topic)
          if (topic != chosenTopic) {
            topics(i) = chosenTopic
            topicsDist(topic) += -1
            topicsDist(chosenTopic) += 1
            topicModel.update(term, topic, -1)
            topicModel.update(term, chosenTopic, 1)
          }
        }
        doc
      }
    }.setName(s"LDA-$iter").persist(StorageLevel.MEMORY_AND_DISK)

    if (iter % 5 == 0 && data.context.getCheckpointDir.isDefined) {
      corpus.checkpoint()
    }
    topicModel = collectTopicCounters(corpus, numTerms, numTopics)
    if (iter > burnInIter) {
      probModel.merge(topicModel)
    }
    previousCorpus.unpersist()
    broadcastModel.unpersist()
  }
  val burnIn = (totalIter - burnInIter).toDouble
  probModel.topicCounts_ :/= burnIn
  probModel.topicTermCounts_.foreach(_ :/= burnIn)
  (probModel, corpus)
}
```
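
For anyone skimming the snippet above, the suspicious shape distills to a few lines. This is my own minimal sketch, not code from the PR (`sc` and `data` are placeholder names): a large driver-side object, a broadcast of it, and a `mapPartitions` whose body touches only the broadcast handle. SPARK-3517 is the report that the task closure nonetheless serializes to roughly the size of the model.

```scala
// Minimal sketch of the pattern under suspicion (illustrative names).
val bigModel = new Array[Double](25 * 1000 * 1000) // ~200 MB on the driver
val bcModel = sc.broadcast(bigModel)               // ship it once via broadcast

val out = data.mapPartitions { iter =>
  // Only the small Broadcast handle should be captured by this closure;
  // the array itself is fetched from the block manager on the executor.
  val model = bcModel.value
  iter.map(identity)
}
```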





[GitHub] spark pull request: [WIP][SPARK-3517]mapPartitions is not correct ...

2014-09-13 Thread JoshRosen
Github user JoshRosen commented on the pull request:

https://github.com/apache/spark/pull/2376#issuecomment-55509430
  
It seems like the issue here is that unnecessary objects are being included 
in the closure, since presumably this bug would also manifest itself through 
serialization errors if the extra objects weren't serializable.

Maybe we can test this directly with a mock object and a check that its 
`writeObject` method isn't called.
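
One way to realize that check, sketched here with my own made-up names rather than anything from the PR: a `Serializable` class whose private `writeObject` hook throws, kept in scope next to the closure but never referenced by it. If the job still fails with the exception, something dragged the probe (and everything reachable from it) into the task.

```scala
import java.io.{IOException, ObjectOutputStream}

// Fails loudly if Java serialization ever touches an instance.
class NeverSerialized extends Serializable {
  private def writeObject(out: ObjectOutputStream): Unit =
    throw new IOException("NeverSerialized was pulled into a task closure")
}

val probe = new NeverSerialized // in scope, but unused below
val result = sc.parallelize(1 to 100, 4).mapPartitions { iter =>
  iter.map(_ + 1) // does not reference `probe`
}.collect() // succeeds only if the closure stays clean
```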





[GitHub] spark pull request: [WIP][SPARK-3517]mapPartitions is not correct ...

2014-09-12 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/2376#issuecomment-55480695
  
[QA tests have finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20248/consoleFull) for PR 2376 at commit [`9d841bb`](https://github.com/apache/spark/commit/9d841bb3cc228f1c5b7a120d959b19c3b929d274).
 * This patch **passes** unit tests.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
   * `class Dummy(object):`






[GitHub] spark pull request: [WIP][SPARK-3517]mapPartitions is not correct ...

2014-09-12 Thread rxin
Github user rxin commented on the pull request:

https://github.com/apache/spark/pull/2376#issuecomment-55482040
  
What is the problem here? Can you give an example or test case?





[GitHub] spark pull request: [WIP][SPARK-3517]mapPartitions is not correct ...

2014-09-12 Thread witgo
Github user witgo commented on the pull request:

https://github.com/apache/spark/pull/2376#issuecomment-55482720
  
@rxin Code like this:
```scala
val topicModel = <Big object>
val broadcastModel = data.context.broadcast(topicModel)
corpus = corpus.mapPartitions { docs =>
  val topicModel = broadcastModel.value
  ......
}
```
The serialized corpus RDD and the serialized topicModel broadcast end up almost the same size. Running `cat spark.log | grep 'stored as values in memory'` shows:
```
14/09/13 00:49:21 INFO MemoryStore: Block broadcast_11 stored as values in memory (estimated size 197.5 MB, free 2.6 GB)
14/09/13 00:49:24 INFO MemoryStore: Block broadcast_12 stored as values in memory (estimated size 197.7 MB, free 2.3 GB)
```
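
A common way to rule this kind of capture out, offered as my own suggestion rather than anything proposed in the PR, is to move the per-partition function into a top-level object, so its closure can only carry its explicit arguments:

```scala
import org.apache.spark.broadcast.Broadcast

// Defined at the top level: this function cannot accidentally capture
// fields of an enclosing class, only the Broadcast handle passed to it.
// TopicModel and Document are the types from the snippet above.
object GibbsStep extends Serializable {
  def apply(model: Broadcast[TopicModel])(docs: Iterator[Document]): Iterator[Document] = {
    val topicModel = model.value
    docs.map { doc => /* sample as before */ doc }
  }
}

// corpus = corpus.mapPartitions(GibbsStep(broadcastModel))
```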





[GitHub] spark pull request: [WIP][SPARK-3517]mapPartitions is not correct ...

2014-09-12 Thread rxin
Github user rxin commented on the pull request:

https://github.com/apache/spark/pull/2376#issuecomment-55482915
  
Can you add a unit test? You can just create an RDD and serialize it to test 
the size.
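
A minimal shape for such a test, sketched with plain JDK serialization to count bytes (Spark's own test utilities may differ, and the threshold below is arbitrary):

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream}

// Serialize an object with plain JDK serialization and return its byte count.
def serializedSize(obj: AnyRef): Int = {
  val buffer = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buffer)
  out.writeObject(obj)
  out.close()
  buffer.size()
}

// The serialized RDD should stay small even though a large object sits
// in the enclosing scope next to the broadcast handle.
val big = new Array[Double](1000 * 1000)
val bc = sc.broadcast(big)
val rdd = sc.parallelize(1 to 10).mapPartitions { iter =>
  val model = bc.value // touches only the handle
  iter
}
assert(serializedSize(rdd) < 10 * 1024, "closure captured a large object")
```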

