GitHub user gvramana commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2220#discussion_r183798281
  
    --- Diff: docs/faq.md ---
    @@ -182,3 +183,15 @@ select cntry,sum(gdp) from gdp21,pop1 where cntry=ctry group by cntry;
     ## Why do all executors show success in the Spark UI even after the data load command failed at the driver side?
     Normally, a Spark executor marks a task as failed only after the maximum number of retry attempts. But when the data being loaded contains bad records and BAD_RECORDS_ACTION (carbon.bad.records.action) is set to “FAIL”, the task makes only one attempt and signals the failure to the driver instead of throwing an exception that would trigger a retry, since there is no point in retrying once a bad record is found and BAD_RECORDS_ACTION is set to fail. Hence the Spark UI displays this single attempt as successful even though the command actually failed to execute. The failure reason can be observed in the task attempt or executor logs.
     
    +## Why is the select query output in a different time zone when querying SDK writer output?
    +The SDK writer is an independent entity; hence it can generate carbondata files from a non-cluster machine that has a different time zone. But when those files are read on the cluster, the cluster's time zone is always applied. Hence, the values of timestamp and date datatype fields are not the original values.
    +If you do not want the values interpreted according to the cluster's time zone, set the cluster's time zone in the SDK writer by calling the API below.
    --- End diff ---
    
    If you want to control the time zone of the data while writing, set the cluster's time zone in the SDK writer by calling the API below.
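
    For illustration, a minimal sketch of what such a call could look like, assuming the SDK writer resolves timestamp and date values through the JVM default time zone (java.util.TimeZone); the exact API exposed by the SDK may differ:

    ```java
    import java.util.TimeZone;

    public class SdkTimeZoneExample {
        public static void main(String[] args) {
            // Hypothetical sketch: align the JVM default time zone with the
            // cluster's time zone (here assumed to be "Asia/Shanghai") before
            // writing, so that timestamp and date values are encoded the same
            // way the cluster will later interpret them.
            TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"));

            // ... build the SDK CarbonWriter and write rows as usual ...
        }
    }
    ```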

