GitHub user robbyki commented on the issue:

    https://github.com/apache/spark/pull/16209
  
    Is there a recommended workaround to achieve exactly this in Spark 2.1? I'm
going through several resources trying to understand how to maintain a schema
created outside of Spark, then truncate my tables from Spark and write with
SaveMode.Overwrite. My problem is exactly this issue: my database (Netezza)
fails when it sees Spark trying to save a TEXT data type, so I have to specify
VARCHAR(n) in my custom JDBC dialect. That does work, but it replaces all of my
VARCHAR columns (which have different lengths for different columns) with
whatever single length I specified in the dialect, which is not what I want.
How can I have it save the TEXT as VARCHAR without specifying a length in the
custom dialect?
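
    For reference, a minimal sketch of the kind of custom dialect described
above, assuming a hypothetical dialect name, a "jdbc:netezza" URL prefix, and
an arbitrary fixed length of 255; it reproduces the limitation in question,
since every string column gets the same length:

        import java.sql.Types
        import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
        import org.apache.spark.sql.types.{DataType, StringType}

        // Hypothetical dialect: map Spark's StringType to VARCHAR(255)
        // instead of TEXT when Spark generates DDL. The fixed length is the
        // problem described above: it overrides the per-column lengths of the
        // schema that was created outside of Spark.
        object NetezzaVarcharDialect extends JdbcDialect {
          override def canHandle(url: String): Boolean =
            url.startsWith("jdbc:netezza")

          override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
            case StringType => Some(JdbcType("VARCHAR(255)", Types.VARCHAR))
            case _          => None // defer to Spark's default type mappings
          }
        }

        JdbcDialects.registerDialect(NetezzaVarcharDialect)

    One way to keep the per-column lengths is to avoid regenerating the table
at all: with the truncate option on a JDBC overwrite (available in Spark 2.1),
Spark truncates the existing table instead of dropping and recreating it, so
the externally defined VARCHAR lengths survive. A sketch, assuming df, url,
and connectionProps are already defined:

        df.write
          .mode("overwrite")
          .option("truncate", "true")
          .jdbc(url, "my_table", connectionProps)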


---
