[
https://issues.apache.org/jira/browse/SPARK-26243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26243:
---
Issue Type: Sub-task (was: Improvement)
Parent: SPARK-26651
> Use java.time API for parsing timestamps and dates from JSON
[
https://issues.apache.org/jira/browse/SPARK-26374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26374:
---
Issue Type: Sub-task (was: Bug)
Parent: SPARK-26651
> Support new date/timestamp parser in HadoopFsRelationTest
[
https://issues.apache.org/jira/browse/SPARK-26384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26384:
---
Issue Type: Sub-task (was: Bug)
Parent: SPARK-26651
> CSV schema inferring does not respect spark.sql.legacy.timeParser.enabled
[
https://issues.apache.org/jira/browse/SPARK-26456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26456:
---
Issue Type: Sub-task (was: Improvement)
Parent: SPARK-26651
> Cast date/timestamp by Date/TimestampFormatter
[
https://issues.apache.org/jira/browse/SPARK-26503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26503:
---
Issue Type: Sub-task (was: Task)
Parent: SPARK-26651
> Get rid of spark.sql.legacy.timeParser.enabled
[
https://issues.apache.org/jira/browse/SPARK-26546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26546:
---
Issue Type: Sub-task (was: Improvement)
Parent: SPARK-26651
> Caching of DateTimeFormatter
[
https://issues.apache.org/jira/browse/SPARK-26593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26593:
---
Issue Type: Sub-task (was: Improvement)
Parent: SPARK-26651
> Use Proleptic Gregorian calendar in casting UTF8String to date/timestamp types
[
https://issues.apache.org/jira/browse/SPARK-26618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26618:
---
Issue Type: Sub-task (was: Improvement)
Parent: SPARK-26651
> Make typed Timestamp/Date literals consistent to casting
Maxim Gekk created SPARK-26651:
--
Summary: Use Proleptic Gregorian calendar
Key: SPARK-26651
URL: https://issues.apache.org/jira/browse/SPARK-26651
Project: Spark
Issue Type: Umbrella
C
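For context, this umbrella is about moving Spark's date/time handling from the hybrid Julian+Gregorian calendar of the legacy java.text/java.util classes to the proleptic Gregorian calendar of java.time. A minimal Scala sketch of the difference, using plain JDK classes rather than Spark internals:
{code:scala}
import java.text.SimpleDateFormat
import java.time.LocalDate

// java.time interprets every date in the proleptic Gregorian (ISO) calendar,
// while SimpleDateFormat/java.util.Date switch to the Julian calendar for
// dates before 1582-10-15, so the two APIs can disagree by several days on
// old dates.
val proleptic = LocalDate.parse("1000-01-01")                          // java.time
val hybrid    = new SimpleDateFormat("yyyy-MM-dd").parse("1000-01-01") // legacy stack

println(proleptic) // 1000-01-01 in the proleptic Gregorian calendar
println(hybrid)    // same literal, but interpreted in the hybrid calendar
{code}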
Maxim Gekk created SPARK-26618:
--
Summary: Make typed Timestamp/Date literals consistent to casting
Key: SPARK-26618
URL: https://issues.apache.org/jira/browse/SPARK-26618
Project: Spark
Issue Ty
Maxim Gekk created SPARK-26593:
--
Summary: Use Proleptic Gregorian calendar in casting UTF8String to
date/timestamp types
Key: SPARK-26593
URL: https://issues.apache.org/jira/browse/SPARK-26593
Project: Spark
Maxim Gekk created SPARK-26550:
--
Summary: New datasource for benchmarking
Key: SPARK-26550
URL: https://issues.apache.org/jira/browse/SPARK-26550
Project: Spark
Issue Type: New Feature
Maxim Gekk created SPARK-26547:
--
Summary: Remove duplicate toHiveString from HiveUtils
Key: SPARK-26547
URL: https://issues.apache.org/jira/browse/SPARK-26547
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-26546:
--
Summary: Caching of DateTimeFormatter
Key: SPARK-26546
URL: https://issues.apache.org/jira/browse/SPARK-26546
Project: Spark
Issue Type: Improvement
Co
Maxim Gekk created SPARK-26504:
--
Summary: Rope-wise dumping of Spark plans
Key: SPARK-26504
URL: https://issues.apache.org/jira/browse/SPARK-26504
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-26503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26503:
---
External issue ID: (was: SPARK-26374)
> Get rid of spark.sql.legacy.timeParser.enabled
> -
[
https://issues.apache.org/jira/browse/SPARK-26503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26503:
---
External issue ID: SPARK-26374
> Get rid of spark.sql.legacy.timeParser.enabled
> --
Maxim Gekk created SPARK-26503:
--
Summary: Get rid of spark.sql.legacy.timeParser.enabled
Key: SPARK-26503
URL: https://issues.apache.org/jira/browse/SPARK-26503
Project: Spark
Issue Type: Task
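For reference, the config named in the summary switches parsing back to the legacy SimpleDateFormat-based code path. A hedged sketch of toggling it, only meaningful in builds that still ship the flag this ticket proposes to remove:
{code:scala}
import org.apache.spark.sql.SparkSession

// Assumes a Spark build that still contains the legacy flag.
val spark = SparkSession.builder().master("local[*]").appName("legacy-flag").getOrCreate()
spark.conf.set("spark.sql.legacy.timeParser.enabled", "true")
{code}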
Maxim Gekk created SPARK-26502:
--
Summary: Get rid of hiveResultString() in QueryExecution
Key: SPARK-26502
URL: https://issues.apache.org/jira/browse/SPARK-26502
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-26456:
--
Summary: Cast date/timestamp by Date/TimestampFormatter
Key: SPARK-26456
URL: https://issues.apache.org/jira/browse/SPARK-26456
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk resolved SPARK-26248.
Resolution: Won't Fix
> Infer date type from CSV
>
>
> Ke
[
https://issues.apache.org/jira/browse/SPARK-26424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26424:
---
Summary: Use java.time API in timestamp/date functions (was: Use
java.time API in timestamp/date function)
Maxim Gekk created SPARK-26424:
--
Summary: Use java.time API in timestamp/date function
Key: SPARK-26424
URL: https://issues.apache.org/jira/browse/SPARK-26424
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-26374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725422#comment-16725422
]
Maxim Gekk commented on SPARK-26374:
Most likely this is a related ticket:
https://bu
[
https://issues.apache.org/jira/browse/SPARK-26246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26246:
---
Summary: Infer timestamp types from JSON (was: Infer date and timestamp
types from JSON)
> Infer timestamp types from JSON
[
https://issues.apache.org/jira/browse/SPARK-26246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26246:
---
Description: Currently, TimestampType cannot be inferred from JSON. To
parse JSON string, you have t
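As the truncated description notes, TimestampType is not inferred from JSON; a minimal Scala sketch of the current workaround, spelling the timestamp column out in an explicit schema (the file name and field names are illustrative):
{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StringType, StructField, StructType, TimestampType}

val spark = SparkSession.builder().master("local[*]").appName("json-ts").getOrCreate()

// Without an explicit schema the "time" field comes back as StringType.
val schema = StructType(Seq(
  StructField("id", StringType),
  StructField("time", TimestampType)))

spark.read.schema(schema).json("events.json").printSchema()
{code}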
Maxim Gekk created SPARK-26384:
--
Summary: CSV schema inferring does not respect
spark.sql.legacy.timeParser.enabled
Key: SPARK-26384
URL: https://issues.apache.org/jira/browse/SPARK-26384
Project: Spark
Maxim Gekk created SPARK-26376:
--
Summary: Skip inputs without tokens by JSON datasource
Key: SPARK-26376
URL: https://issues.apache.org/jira/browse/SPARK-26376
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-26374:
--
Summary: Support new date/timestamp parser in HadoopFsRelationTest
Key: SPARK-26374
URL: https://issues.apache.org/jira/browse/SPARK-26374
Project: Spark
Issue T
Maxim Gekk created SPARK-26310:
--
Summary: Verification of JSON options
Key: SPARK-26310
URL: https://issues.apache.org/jira/browse/SPARK-26310
Project: Spark
Issue Type: Sub-task
Compo
Maxim Gekk created SPARK-26309:
--
Summary: Verification of Data source options
Key: SPARK-26309
URL: https://issues.apache.org/jira/browse/SPARK-26309
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-26303:
--
Summary: Return partial results for bad JSON records
Key: SPARK-26303
URL: https://issues.apache.org/jira/browse/SPARK-26303
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-26248:
---
Summary: Infer date type from CSV (was: Infer date type from JSON)
> Infer date type from CSV
> ---
Maxim Gekk created SPARK-26248:
--
Summary: Infer date type from JSON
Key: SPARK-26248
URL: https://issues.apache.org/jira/browse/SPARK-26248
Project: Spark
Issue Type: Improvement
Compo
Maxim Gekk created SPARK-26246:
--
Summary: Infer date and timestamp types from JSON
Key: SPARK-26246
URL: https://issues.apache.org/jira/browse/SPARK-26246
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-26243:
--
Summary: Use java.time API for parsing timestamps and dates from
JSON
Key: SPARK-26243
URL: https://issues.apache.org/jira/browse/SPARK-26243
Project: Spark
Iss
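A rough Scala sketch of the kind of java.time parsing the ticket is about; the pattern and input value are illustrative, not the JSON datasource defaults:
{code:scala}
import java.time.format.DateTimeFormatter
import java.time.{LocalDateTime, ZoneOffset}

// Parse a timestamp string with java.time and convert it to the
// microseconds-since-epoch representation Spark SQL uses internally.
val formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss")
val micros = LocalDateTime
  .parse("2018-12-01T10:11:12", formatter)
  .toInstant(ZoneOffset.UTC)
  .toEpochMilli * 1000L
{code}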
[
https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16701640#comment-16701640
]
Maxim Gekk commented on SPARK-23410:
Yes, you can. You can find more info there
[ht
Maxim Gekk created SPARK-26191:
--
Summary: Control number of truncated fields
Key: SPARK-26191
URL: https://issues.apache.org/jira/browse/SPARK-26191
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16700799#comment-16700799
]
Maxim Gekk commented on SPARK-23410:
> Even if lineSeps is set, it is still necessar
[
https://issues.apache.org/jira/browse/SPARK-26178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16700147#comment-16700147
]
Maxim Gekk commented on SPARK-26178:
[~srowen] It is somehow related to the ticket b
Maxim Gekk created SPARK-26178:
--
Summary: Use java.time API for parsing timestamps and dates from
CSV
Key: SPARK-26178
URL: https://issues.apache.org/jira/browse/SPARK-26178
Project: Spark
Iss
[
https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16699333#comment-16699333
]
Maxim Gekk commented on SPARK-23410:
> Every line has the BOM?
BOM can be only at t
Maxim Gekk created SPARK-26163:
--
Summary: Parsing decimals from JSON using locale
Key: SPARK-26163
URL: https://issues.apache.org/jira/browse/SPARK-26163
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-26161:
--
Summary: Ignore empty files in load
Key: SPARK-26161
URL: https://issues.apache.org/jira/browse/SPARK-26161
Project: Spark
Issue Type: Improvement
Comp
Maxim Gekk created SPARK-26151:
--
Summary: Return partial results for bad CSV records
Key: SPARK-26151
URL: https://issues.apache.org/jira/browse/SPARK-26151
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-26075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695261#comment-16695261
]
Maxim Gekk commented on SPARK-26075:
The restriction of 8GB still exists
https://gi
Maxim Gekk created SPARK-26122:
--
Summary: Support encoding for multiLine in CSV datasource
Key: SPARK-26122
URL: https://issues.apache.org/jira/browse/SPARK-26122
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-26039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16690985#comment-16690985
]
Maxim Gekk commented on SPARK-26039:
This behaviour is not specific to ORC datasour
Maxim Gekk created SPARK-26108:
--
Summary: Support custom lineSep in CSV datasource
Key: SPARK-26108
URL: https://issues.apache.org/jira/browse/SPARK-26108
Project: Spark
Issue Type: New Feature
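A hedged sketch of the feature in the summary, assuming a lineSep read option like the one this ticket proposes (the separator and path are illustrative):
{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("line-sep").getOrCreate()

// Read CSV records delimited by a custom character instead of "\n".
val df = spark.read
  .option("lineSep", "\u0001")
  .option("header", "true")
  .csv("records.csv")
{code}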
Maxim Gekk created SPARK-26102:
--
Summary: Common CSV/JSON functions tests
Key: SPARK-26102
URL: https://issues.apache.org/jira/browse/SPARK-26102
Project: Spark
Issue Type: Test
Compon
[
https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16690616#comment-16690616
]
Maxim Gekk commented on SPARK-23410:
[~x1q1j1] Encoding different from UTF-8 (except
Maxim Gekk created SPARK-26099:
--
Summary: Verification of the corrupt column in from_csv/from_json
Key: SPARK-26099
URL: https://issues.apache.org/jira/browse/SPARK-26099
Project: Spark
Issue Ty
Maxim Gekk created SPARK-26081:
--
Summary: Do not write empty files by text datasources
Key: SPARK-26081
URL: https://issues.apache.org/jira/browse/SPARK-26081
Project: Spark
Issue Type: Improvem
Maxim Gekk created SPARK-26066:
--
Summary: Moving truncatedString to sql/catalyst
Key: SPARK-26066
URL: https://issues.apache.org/jira/browse/SPARK-26066
Project: Spark
Issue Type: Sub-task
Maxim Gekk created SPARK-26023:
--
Summary: Dumping truncated plans to a file
Key: SPARK-26023
URL: https://issues.apache.org/jira/browse/SPARK-26023
Project: Spark
Issue Type: Sub-task
Maxim Gekk created SPARK-26007:
--
Summary: DataFrameReader.csv() should respect to
spark.sql.columnNameOfCorruptRecord
Key: SPARK-26007
URL: https://issues.apache.org/jira/browse/SPARK-26007
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-24244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16681786#comment-16681786
]
Maxim Gekk commented on SPARK-24244:
> is this new option available in PySpark too?
[
https://issues.apache.org/jira/browse/SPARK-24540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679870#comment-16679870
]
Maxim Gekk commented on SPARK-24540:
The restriction has been fixed already at least
Maxim Gekk created SPARK-25972:
--
Summary: Missed JSON options in streaming.py
Key: SPARK-25972
URL: https://issues.apache.org/jira/browse/SPARK-25972
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-25977:
--
Summary: Parsing decimals from CSV using locale
Key: SPARK-25977
URL: https://issues.apache.org/jira/browse/SPARK-25977
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-25955:
--
Summary: Porting JSON test for CSV functions
Key: SPARK-25955
URL: https://issues.apache.org/jira/browse/SPARK-25955
Project: Spark
Issue Type: Test
Co
Maxim Gekk created SPARK-25952:
--
Summary: from_json returns wrong result if corrupt record column
is in the middle of schema
Key: SPARK-25952
URL: https://issues.apache.org/jira/browse/SPARK-25952
Project: Spark
Maxim Gekk created SPARK-25950:
--
Summary: from_csv should respect to
spark.sql.columnNameOfCorruptRecord
Key: SPARK-25950
URL: https://issues.apache.org/jira/browse/SPARK-25950
Project: Spark
I
Maxim Gekk created SPARK-25945:
--
Summary: Support locale while parsing date/timestamp from CSV/JSON
Key: SPARK-25945
URL: https://issues.apache.org/jira/browse/SPARK-25945
Project: Spark
Issue T
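A sketch of what the locale support in the summary could look like from the user side, assuming a locale read option; the pattern, locale, and path are illustrative:
{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("locale").getOrCreate()

// Month names such as "janvier" can only be parsed when the reader
// knows which locale the dates were written in.
val df = spark.read
  .option("header", "true")
  .option("locale", "fr-FR")
  .option("dateFormat", "d MMMM yyyy")
  .schema("id INT, day DATE")
  .csv("dates_fr.csv")
{code}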
[
https://issues.apache.org/jira/browse/SPARK-25890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675110#comment-16675110
]
Maxim Gekk commented on SPARK-25890:
I have double checked on branch-2.4. It doesn't
[
https://issues.apache.org/jira/browse/SPARK-25890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675088#comment-16675088
]
Maxim Gekk commented on SPARK-25890:
I got the following on the commit *4afb35*:
{co
[
https://issues.apache.org/jira/browse/SPARK-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16674468#comment-16674468
]
Maxim Gekk commented on SPARK-23194:
This changes
https://github.com/apache/spark/c
[
https://issues.apache.org/jira/browse/SPARK-25935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-25935:
---
Summary: Prevent null rows from JSON parser (was: Prevent nulls from JSON
parser)
> Prevent null rows from JSON parser
Maxim Gekk created SPARK-25935:
--
Summary: Prevent nulls from JSON parser
Key: SPARK-25935
URL: https://issues.apache.org/jira/browse/SPARK-25935
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-25890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16674112#comment-16674112
]
Maxim Gekk commented on SPARK-25890:
I haven't reproduced the issue on the master br
[
https://issues.apache.org/jira/browse/SPARK-17967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16674007#comment-16674007
]
Maxim Gekk commented on SPARK-17967:
What about preserving the existing API as is, and
Maxim Gekk created SPARK-25931:
--
Summary: Benchmarking creation of Jackson parser
Key: SPARK-25931
URL: https://issues.apache.org/jira/browse/SPARK-25931
Project: Spark
Issue Type: Test
Maxim Gekk created SPARK-25927:
--
Summary: Fix number of partitions returned by outputPartitioning
Key: SPARK-25927
URL: https://issues.apache.org/jira/browse/SPARK-25927
Project: Spark
Issue Typ
Maxim Gekk created SPARK-25913:
--
Summary: Unary SparkPlan nodes should extend UnaryExecNode
Key: SPARK-25913
URL: https://issues.apache.org/jira/browse/SPARK-25913
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-25672:
--
Summary: Inferring schema from CSV string literal
Key: SPARK-25672
URL: https://issues.apache.org/jira/browse/SPARK-25672
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-25466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-25466:
---
Summary: Documentation does not specify how to set Kafka consumer cache
capacity for SS (was: Docum
Maxim Gekk created SPARK-25670:
--
Summary: Speed up JsonExpressionsSuite
Key: SPARK-25670
URL: https://issues.apache.org/jira/browse/SPARK-25670
Project: Spark
Issue Type: Test
Componen
[
https://issues.apache.org/jira/browse/SPARK-25669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-25669:
---
Summary: Check CSV header only when it exists (was: Check header only when
it exists)
> Check CSV header only when it exists
Maxim Gekk created SPARK-25669:
--
Summary: Check header only when it exists
Key: SPARK-25669
URL: https://issues.apache.org/jira/browse/SPARK-25669
Project: Spark
Issue Type: Bug
Compon
[
https://issues.apache.org/jira/browse/SPARK-25660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-25660:
---
Summary: Impossible to use the backward slash as the CSV fields delimiter
(was: Impossible to use backward slash as the CSV fields delimiter)
Maxim Gekk created SPARK-25660:
--
Summary: Impossible to use backward slash as the CSV fields
delimiter
Key: SPARK-25660
URL: https://issues.apache.org/jira/browse/SPARK-25660
Project: Spark
Is
[
https://issues.apache.org/jira/browse/SPARK-25660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-25660:
---
Issue Type: Bug (was: Improvement)
> Impossible to use backward slash as the CSV fields delimiter
Maxim Gekk created SPARK-25638:
--
Summary: Convert structs to CSV strings
Key: SPARK-25638
URL: https://issues.apache.org/jira/browse/SPARK-25638
Project: Spark
Issue Type: Improvement
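A hedged sketch of the to_csv function proposed here, mirroring the existing to_json (the sample data is illustrative):
{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{struct, to_csv}

val spark = SparkSession.builder().master("local[*]").appName("to-csv").getOrCreate()
import spark.implicits._

// Serialize a struct column into a single CSV-formatted string column.
val df = Seq((1, "Maxim", 3.14)).toDF("id", "name", "score")
df.select(to_csv(struct($"id", $"name", $"score")).as("csv")).show(false)
// prints: 1,Maxim,3.14
{code}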
[
https://issues.apache.org/jira/browse/SPARK-25514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-25514:
---
Summary: Generating pretty JSON by to_json (was: Pretty JSON)
> Generating pretty JSON by to_json
>
Maxim Gekk created SPARK-25514:
--
Summary: Pretty JSON
Key: SPARK-25514
URL: https://issues.apache.org/jira/browse/SPARK-25514
Project: Spark
Issue Type: Improvement
Components: SQL
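A hedged sketch of the feature, assuming a "pretty" option is accepted by to_json as this ticket proposes:
{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{struct, to_json}

val spark = SparkSession.builder().master("local[*]").appName("pretty-json").getOrCreate()
import spark.implicits._

// Render a struct as indented, multi-line JSON instead of a one-liner.
val df = Seq((1, "Maxim")).toDF("id", "name")
df.select(to_json(struct($"id", $"name"), Map("pretty" -> "true")).as("json")).show(false)
{code}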
[
https://issues.apache.org/jira/browse/SPARK-25513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-25513:
---
Description:
Spark can read compressed files if there is a compression codec for them. By
default, H
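In other words, formats with Hadoop codecs (gzip, bzip2, etc.) already work transparently, while zip archives do not; a small sketch of the status quo (file names are illustrative):
{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("codecs").getOrCreate()

// Gzip input is decompressed transparently by the Hadoop codec.
val gz = spark.read.option("header", "true").csv("data.csv.gz")

// A .zip archive has no built-in codec, so its bytes are not decompressed,
// which is the gap this ticket is about.
// spark.read.csv("data.csv.zip")
{code}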
Maxim Gekk created SPARK-25513:
--
Summary: Read zipped CSV and JSON files
Key: SPARK-25513
URL: https://issues.apache.org/jira/browse/SPARK-25513
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-25447:
--
Summary: Support JSON options by schema_of_json
Key: SPARK-25447
URL: https://issues.apache.org/jira/browse/SPARK-25447
Project: Spark
Issue Type: Improvement
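A hedged sketch of the proposal, assuming schema_of_json gains an overload that accepts JSON reader options (the sample literal and option are illustrative):
{code:scala}
import scala.collection.JavaConverters._

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{lit, schema_of_json}

val spark = SparkSession.builder().master("local[*]").appName("schema-of-json").getOrCreate()

// Infer a schema from a sample JSON literal while forwarding reader options.
val sample = lit("""{"id": 007, "name": "Bond"}""")
val options = Map("allowNumericLeadingZeros" -> "true").asJava
spark.range(1).select(schema_of_json(sample, options)).show(false)
{code}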
Maxim Gekk created SPARK-25446:
--
Summary: Add schema_of_json() to R
Key: SPARK-25446
URL: https://issues.apache.org/jira/browse/SPARK-25446
Project: Spark
Issue Type: Improvement
Compo
Maxim Gekk created SPARK-25440:
--
Summary: Dump query execution info to a file
Key: SPARK-25440
URL: https://issues.apache.org/jira/browse/SPARK-25440
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-25425:
--
Summary: Extra options must overwrite sessions options
Key: SPARK-25425
URL: https://issues.apache.org/jira/browse/SPARK-25425
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-25396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609605#comment-16609605
]
Maxim Gekk commented on SPARK-25396:
I have a concern regarding when I should clo
[
https://issues.apache.org/jira/browse/SPARK-25396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk updated SPARK-25396:
---
Description:
If a JSON file has a structure like below:
{code}
[
{
"time":"2018-08-13
[
https://issues.apache.org/jira/browse/SPARK-25396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609469#comment-16609469
]
Maxim Gekk commented on SPARK-25396:
[~hyukjin.kwon] WDYT
> Read array of JSON objects via an Iterator
Maxim Gekk created SPARK-25396:
--
Summary: Read array of JSON objects via an Iterator
Key: SPARK-25396
URL: https://issues.apache.org/jira/browse/SPARK-25396
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-25393:
--
Summary: Parsing CSV strings in a column
Key: SPARK-25393
URL: https://issues.apache.org/jira/browse/SPARK-25393
Project: Spark
Issue Type: Improvement
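A hedged sketch of the feature, assuming a from_csv function symmetrical to from_json (sample data and schema are illustrative):
{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_csv
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

val spark = SparkSession.builder().master("local[*]").appName("from-csv").getOrCreate()
import spark.implicits._

// Parse CSV text held in a string column into a struct column.
val schema = StructType(Seq(StructField("id", IntegerType), StructField("name", StringType)))
val df = Seq("1,Maxim", "2,Gekk").toDF("raw")
df.select(from_csv($"raw", schema, Map.empty[String, String]).as("parsed")).show(false)
{code}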
Maxim Gekk created SPARK-25387:
--
Summary: Malformed CSV causes NPE
Key: SPARK-25387
URL: https://issues.apache.org/jira/browse/SPARK-25387
Project: Spark
Issue Type: Bug
Components: SQL
Maxim Gekk created SPARK-25384:
--
Summary: Removing spark.sql.fromJsonForceNullableSchema
Key: SPARK-25384
URL: https://issues.apache.org/jira/browse/SPARK-25384
Project: Spark
Issue Type: Improvement
Maxim Gekk created SPARK-25381:
--
Summary: Stratified sampling by Column argument
Key: SPARK-25381
URL: https://issues.apache.org/jira/browse/SPARK-25381
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-25283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Maxim Gekk resolved SPARK-25283.
Resolution: Fixed
Fix Version/s: 2.4.0
It is fixed by the PR: https://github.com/apache/spa