[ https://issues.apache.org/jira/browse/SPARK-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen updated SPARK-16428:
------------------------------
    Priority: Major  (was: Blocker)

> Spark file system watcher not working on Windows
> -------------------------------------------------
>
>                 Key: SPARK-16428
>                 URL: https://issues.apache.org/jira/browse/SPARK-16428
>             Project: Spark
>          Issue Type: Bug
>          Components: Examples, Input/Output, Spark Core, Windows
>    Affects Versions: 1.6.2
>         Environment: Ubuntu 15.10 64 bit, Windows 7 Enterprise 64 bit
>            Reporter: John-Michael Reed
>
> Two people tested Apache Spark on their computers...
>
> [Spark Download - http://i.stack.imgur.com/z1oqu.png]
>
> We downloaded the version of Spark prebuilt for Hadoop 2.6, went to the
> folder /spark-1.6.2-bin-hadoop2.6/, created a "tmp" directory, went to that
> directory, and ran:
>
> $ bin/run-example org.apache.spark.examples.streaming.HdfsWordCount tmp
>
> I added arbitrary files content1 and content2dssdgdg to that "tmp" directory.
>
> -------------------------------------------
> Time: 1467921704000 ms
> -------------------------------------------
> (content1,1)
> (content2dssdgdg,1)
>
> -------------------------------------------
> Time: 1467921706000 ms
>
> Spark detected those files and produced the terminal output above on my
> Ubuntu 15.10 laptop, but not on my colleague's Windows 7 Enterprise laptop.
>
> This is preventing us from getting work done with Spark.
>
> Link:
> http://stackoverflow.com/questions/38254405/spark-file-system-watcher-not-working-on-windows



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
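
[Editor's sketch] For readers unfamiliar with the example referenced in the report above, the following is a minimal sketch of the logic behind org.apache.spark.examples.streaming.HdfsWordCount, assuming the Spark 1.6 streaming API; the object name HdfsWordCountSketch is illustrative, not part of Spark. It watches the directory passed as the first argument and prints per-batch word counts for any newly created files, which is the behavior that works on Ubuntu but reportedly not on Windows 7.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Minimal sketch of the HdfsWordCount example's logic (Spark 1.6 API assumed).
    object HdfsWordCountSketch {
      def main(args: Array[String]): Unit = {
        val sparkConf = new SparkConf().setAppName("HdfsWordCountSketch")
        // Process the stream in 2-second batches, as the shipped example does.
        val ssc = new StreamingContext(sparkConf, Seconds(2))

        // textFileStream monitors the given directory and reads files that are
        // newly created in it; the report above suggests this detection does
        // not fire on Windows 7.
        val lines = ssc.textFileStream(args(0))
        val wordCounts = lines.flatMap(_.split(" "))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        wordCounts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }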