> From: Madabhattula Rajesh Kumar <mrajaf...@gmail.com>
> To: Richard Hillegas/San Francisco/IBM@IBMUS
> Cc: "u...@spark.incubator.apache.org"
> <u...@spark.incubator.apache.org>, "user@spark.apache.org"
> <user@spark.apache.org>
> Date: 11/0
Or you may be referring to
https://issues.apache.org/jira/browse/SPARK-10648. That issue has a couple
pull requests but I think that the limited bandwidth of the committers
still applies.
Thanks,
Rick
Richard Hillegas/San Francisco/IBM@IBMUS wrote on 11/05/2015 09:16:42 AM:
> From: Rich
Hi Rajesh,
I think that you may be referring to
https://issues.apache.org/jira/browse/SPARK-10909. A pull request on that
issue was submitted more than a month ago but it has not been committed. I
think that the committers are busy working on issues which were targeted
for 1.6 and I doubt that
Note that embedded Derby supports multiple, simultaneous connections, that
is, multiple simultaneous users. But a Derby database is owned by the
process which boots it. Only one process can boot a Derby database at a
given time. The creation of multiple SQL contexts must be spawning multiple
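The single-booter restriction can be sidestepped by running the Derby network server, so that one server process boots the database and many clients connect to it. The JDBC URLs differ as follows (a sketch; the database name is illustrative, and 1527 is Derby's default network port):

```
jdbc:derby:myDB                    embedded: the connecting JVM boots and owns the database
jdbc:derby://localhost:1527/myDB   client/server: the network server boots the database,
                                   so multiple processes can reach it concurrently
```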
Hi Jeff,
Hard to say what's going on. I have had problems subscribing to the Apache
lists in the past. My problems, which may be different from yours, were
caused by replying to the confirmation request from a different email
account than the account I was trying to subscribe from. It was easy
As an academic aside, note that all datatypes are nullable according to the
SQL Standard. NOT NULL is modelled in the Standard as a constraint on data
values, not as a parallel universe of special data types. However, very few
databases implement NOT NULL via integrity constraints. Instead,
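In Standard terms, a NOT NULL column is just shorthand for a CHECK constraint, not a distinct data type. For example, these two declarations are equivalent under the Standard (table and column names are illustrative):

```sql
CREATE TABLE t1 (c1 INT NOT NULL);
CREATE TABLE t2 (c2 INT CHECK (c2 IS NOT NULL));
```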
A crude workaround may be to run your spark shell with a sudo command.
Hope this helps,
Rick Hillegas
Sourav Mazumder wrote on 10/15/2015 09:59:02
AM:
> From: Sourav Mazumder
> To: user
> Date: 10/15/2015 09:59
Hi Ravi,
If you build Spark with Hive support, then your sqlContext variable will be
an instance of HiveContext and you will enjoy the full capabilities of the
Hive query language rather than the more limited capabilities of Spark SQL.
However, even Hive QL does not support the OFFSET clause, at
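A common workaround when OFFSET is unavailable is to paginate with the ROW_NUMBER window function, along these lines (a sketch; the table name, ordering column, and page bounds are hypothetical):

```sql
SELECT *
FROM (
  SELECT t.*, ROW_NUMBER() OVER (ORDER BY id) AS rn
  FROM my_table t
) numbered
WHERE rn > 20 AND rn <= 30;
```

Note that in Spark 1.x, window functions themselves require a HiveContext, which is another reason to build with Hive support.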
Hi Akhandeshi,
It may be that you are not seeing your own posts because you are sending
from a gmail account. See for instance
https://support.google.com/a/answer/1703601?hl=en
Hope this helps,
Rick Hillegas
STSM, IBM Analytics, Platform - IBM USA
akhandeshi wrote on
Hi Ruslan,
Here is some sample code which writes a DataFrame to a table in a Derby
database:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val binaryVal = Array[Byte] ( 1, 2, 3, 4 )
val timestampVal = java.sql.Timestamp.valueOf("1996-01-01 03:30:36")
val dateVal =
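The message is cut off at the last value above. A self-contained sketch of such JDBC-friendly literal values looks like this (the date value itself is an assumption, since the original is truncated):

```scala
import java.sql.{Date, Timestamp}

// sample values of JDBC-friendly types for a DataFrame row
val binaryVal = Array[Byte](1, 2, 3, 4)
val timestampVal = Timestamp.valueOf("1996-01-01 03:30:36")
val dateVal = Date.valueOf("1996-01-01") // hypothetical: the original message ends here
```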
Hi Sukesh,
To unsubscribe from the dev list, please send a message to
dev-unsubscr...@spark.apache.org. To unsubscribe from the user list, please
send a message to user-unsubscr...@spark.apache.org. Please see:
http://spark.apache.org/community.html#mailing-lists.
Thanks,
-Rick
sukesh kumar
Hi Ntale,
To unsubscribe from the user list, please send a message to
user-unsubscr...@spark.apache.org as described here:
http://spark.apache.org/community.html#mailing-lists.
Thanks,
-Rick
Ntale Lukama wrote on 09/23/2015 04:34:48 AM:
> From: Ntale Lukama
For what it's worth, I get the expected result that "filter" behaves like
"group by" when I run the same experiment against a DataFrame which was
loaded from a relational store:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val df = sqlContext.read.format("jdbc").options(
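The call above is cut off. In the Spark 1.x API it would continue with the JDBC connection options, roughly as follows (the url, table, and driver values are hypothetical, chosen here to match a Derby database):

```scala
// hypothetical completion of the truncated call above
val df = sqlContext.read.format("jdbc").options(Map(
  "url" -> "jdbc:derby:myDB",                        // assumed Derby database
  "dbtable" -> "MY_TABLE",                           // assumed table name
  "driver" -> "org.apache.derby.jdbc.EmbeddedDriver" // assumed embedded driver
)).load()
```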
To unsubscribe from the user list, please send a message to
user-unsubscr...@spark.apache.org as described here:
http://spark.apache.org/community.html#mailing-lists.
Thanks,
-Rick
The latest Derby SQL Reference manual (version 10.11) can be found here:
https://db.apache.org/derby/docs/10.11/ref/index.html. It is, indeed, very
useful to have a comprehensive reference guide. The Derby build scripts can
also produce a BNF description of the grammar--but that is not part of