I'd like to preemptively post the current list of 35 Blockers for
release 1.4.0.
(There are 53 Criticals too, and a total of 273 JIRAs targeted for
1.4.0. Clearly most of that targeting isn't accurate, so it would be
good to un-target most of them.)

As a matter of process and hygiene, it would be best either to decide
these are no longer Blockers and reprioritize them, or to focus on
addressing them, since we're now in the run-up to the release. I
suggest that we shouldn't release with any Blockers outstanding, by
definition.


SPARK-7298
Web UI
Harmonize style of new UI visualizations
Patrick Wendell

SPARK-7297
Web UI
Make timeline more discoverable
Patrick Wendell

SPARK-7284
Documentation, Streaming
Update streaming documentation for Spark 1.4.0 release
Tathagata Das

SPARK-7228
SparkR
SparkR public API for 1.4 release
Shivaram Venkataraman

SPARK-7158
SQL
collect and take return different results

SPARK-7139
Streaming
Allow received block metadata to be saved to WAL and recovered on driver failure
Tathagata Das

SPARK-7111
Streaming
Exposing of input data rates of non-receiver streams like Kafka Direct stream
Saisai Shao

SPARK-6941
SQL
Provide a better error message to explain that tables created from
RDDs are immutable

SPARK-6923
SQL
Spark SQL CLI does not read Data Source schema correctly

SPARK-6906
SQL
Refactor Connection to Hive Metastore
Michael Armbrust

SPARK-6831
Documentation, PySpark, SparkR, SQL
Document how to use external data sources

SPARK-6824
SparkR
Fill the docs for DataFrame API in SparkR

SPARK-6812
SparkR
filter() on DataFrame does not work as expected

SPARK-6811
SparkR
Building binary R packages for SparkR

SPARK-6806
Documentation, SparkR
SparkR examples in programming guide
Davies Liu

SPARK-6784
SQL
Clean up all the inbound/outbound conversions for DateType
Yin Huai

SPARK-6702
Streaming, Web UI
Update the Streaming Tab in Spark UI to show more batch information
Tathagata Das

SPARK-6654
Streaming
Update Kinesis Streaming impls (both KCL-based and Direct) to use
latest aws-java-sdk and kinesis-client-library

SPARK-5960
Streaming
Allow AWS credentials to be passed to KinesisUtils.createStream()
Chris Fregly

SPARK-5948
SQL
Support writing to partitioned table for the Parquet data source

SPARK-5947
SQL
First class partitioning support in data sources API

SPARK-5920
Shuffle
Use a BufferedInputStream to read local shuffle data
Kay Ousterhout

SPARK-5707
SQL
Enabling spark.sql.codegen throws ClassNotFound exception

SPARK-5517
SQL
Add input types for Java UDFs

SPARK-5463
SQL
Fix Parquet filter push-down

SPARK-5456
SQL
Decimal Type comparison issue

SPARK-5182
SQL
Partitioning support for tables created by the data source API
Cheng Lian

SPARK-5180
SQL
Data source API improvement

SPARK-4867
SQL
UDF clean up

SPARK-2973
SQL
Use LocalRelation for all ExecutedCommands, avoid job for take/collect()
Cheng Lian

SPARK-2883
Input/Output, SQL
Spark Support for ORCFile format

SPARK-2873
SQL
Support disk spilling in Spark SQL aggregation
Yin Huai

SPARK-1517
Build, Project Infra
Publish nightly snapshots of documentation, maven artifacts, and binary builds
Nicholas Chammas

SPARK-1442
SQL
Add Window function support
