What will the scenario be in the case of S3 and the local file system?
On Tue, Jun 21, 2016 at 4:36 PM, Jörn Franke wrote:
> It is based on the underlying Hadoop FileInputFormat, which splits input mostly
> based on block size. You can change this, though.
>
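Jörn's block-size point can be made concrete with a small arithmetic sketch (plain Scala, no Hadoop dependency; `numSplits` is a hypothetical helper for illustration, not actual Hadoop API): a FileInputFormat-style reader that splits on block boundaries produces roughly ceil(fileSize / blockSize) splits, and each split typically becomes one partition.

```scala
// Sketch: how a block-size-based input format derives its split count.
// numSplits is a hypothetical helper, not Spark/Hadoop API.
object SplitMath {
  def numSplits(fileSizeBytes: Long, blockSizeBytes: Long): Long =
    (fileSizeBytes + blockSizeBytes - 1) / blockSizeBytes  // ceiling division

  def main(args: Array[String]): Unit = {
    val blockSize = 128L * 1024 * 1024       // 128 MB, a common HDFS default
    val fileSize  = 1024L * 1024 * 1024      // a 1 GB file
    println(numSplits(fileSize, blockSize))  // 8 splits -> typically 8 partitions
  }
}
```

As for the S3 / local-file-system question above: there is no physical HDFS block there, but Hadoop's file-system abstraction still reports a configurable block size for those stores (for S3A it is the `fs.s3a.block.size` property), so the same arithmetic applies with that value.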
> On 21 Jun 2016, at 12:19, Sachin Aggarwal wrote:

val parquet =
sqlContext.readStream.format("parquet").parquet("/Users/sachin/testSpark/inputParquet")
--
Thanks & Regards
Sachin Aggarwal
7760502772
> the difference
> between the batchTime and SubmissionTime for that nth batch
>
> thanks
> Mario
>
> On Thu, Mar 10, 2016 at 10:29 AM, Sachin Aggarwal <
> *different.sac...@gmail.com* > wrote:
>
> Hi Cody,
>
> let me try on
king.
>
> On Wed, Mar 9, 2016 at 12:43 PM, Sachin Aggarwal
> wrote:
> > Where are we capturing this delay?
> > I am aware of the scheduling delay, which is defined as processing
> > time minus submission time, not the batch create time.
> >
> > On Wed, Mar 9, 2016 at 1
> batch is finished. So if your processing time is larger than
> your batch time, delays will build up.
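Cody's point can be sketched with plain arithmetic (illustrative numbers only, not Spark's scheduler code): because batches run one at a time, once the processing time exceeds the batch interval, each successive batch waits longer for its predecessor, and the scheduling delay of the nth batch grows linearly.

```scala
// Sketch: scheduling-delay buildup when processing time > batch interval.
// All values in seconds; purely illustrative, not Spark internals.
object DelayBuildup {
  // Scheduling delay of each batch: how long a batch waits after
  // submission before processing starts, given batches run one at a
  // time and a new batch is submitted every `batchInterval` seconds.
  def schedulingDelays(numBatches: Int, batchInterval: Double,
                       processingTime: Double): Seq[Double] = {
    var prevEnd = 0.0
    (0 until numBatches).map { n =>
      val submission = n * batchInterval             // batch n submitted here
      val start = math.max(submission, prevEnd)      // waits for previous batch
      prevEnd = start + processingTime
      start - submission                             // scheduling delay
    }
  }

  def main(args: Array[String]): Unit = {
    // 1 s batch interval but 1.5 s of work: delay grows 0.5 s per batch.
    println(schedulingDelays(4, 1.0, 1.5))  // Vector(0.0, 0.5, 1.0, 1.5)
  }
}
```

With processing time at or below the batch interval the delay stays at zero, which is why keeping processing time under the interval is the usual health criterion for a streaming job.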
>
> On Wed, Mar 9, 2016 at 11:09 AM, Sachin Aggarwal
> wrote:
> > Hi All,
> >
> > we have batchTime and submissionTime.
> >
> > @param batchTime Time of the batch
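For reference, the delays discussed in this thread are derived from the timestamps Spark Streaming tracks per batch. The sketch below mirrors, in simplified form, how `BatchInfo` in `org.apache.spark.streaming.scheduler` defines them; it is a standalone illustration, not the real class.

```scala
// Simplified sketch of Spark Streaming's per-batch delay definitions.
// All times are epoch milliseconds.
case class BatchTimes(batchTime: Long, submissionTime: Long,
                      processingStartTime: Long, processingEndTime: Long) {
  // Time the batch sat in the queue before processing started.
  def schedulingDelay: Long = processingStartTime - submissionTime
  // Time the actual processing took.
  def processingDelay: Long = processingEndTime - processingStartTime
  // End-to-end delay from submission to completion.
  def totalDelay: Long = schedulingDelay + processingDelay
}

object BatchTimesDemo {
  def main(args: Array[String]): Unit = {
    val b = BatchTimes(batchTime = 1000, submissionTime = 1000,
                       processingStartTime = 1200, processingEndTime = 1700)
    println(b.schedulingDelay)  // 200 ms waiting behind the previous batch
    println(b.processingDelay)  // 500 ms of actual work
    println(b.totalDelay)       // 700 ms end to end
  }
}
```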
will wait for the current batch to finish first?
I would be thankful if you could give me some pointers.
Thanks!
Hi,
I am moving this question from the user group to the dev group, as we need expert advice.
Please help us decide which version to choose as the standard for production.
http://apache-spark-user-list.1001560.n3.nabble.com/Status-of-2-11-support-tp25362.html
Thanks
--
Thanks & Regards
Sachin Aggarwal
7760502772