Hello Shahab,
I think CassandraAwareHiveContext
https://github.com/tuplejump/calliope/blob/develop/sql/hive/src/main/scala/org/apache/spark/sql/hive/CassandraAwareHiveContext.scala
in Calliope is what you are looking for. Create a CAHC instance and you should
be able to run Hive functions against Cassandra.
On Mar 3, 2015 at 5:41 PM, Rohit Rai ro...@tuplejump.com wrote:
Found interface (Hadoop 2.x) org.apache.hadoop.mapreduce.TaskAttemptContext, but class (Hadoop 1.x) was expected
at com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:82)
Tian
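The error above is the classic Hadoop 1 vs Hadoop 2 binary incompatibility: org.apache.hadoop.mapreduce.TaskAttemptContext is a class in Hadoop 1.x but an interface in Hadoop 2.x, so bytecode compiled against one fails at runtime against the other with an IncompatibleClassChangeError. One common workaround (a sketch of the general technique, not necessarily what Calliope does) is to call the context's methods reflectively, so no direct binary reference to the type is compiled in:

```scala
import java.lang.reflect.Method

// Sketch: invoke TaskAttemptContext.getConfiguration reflectively so the
// same bytecode works whether the type is a class (Hadoop 1) or an
// interface (Hadoop 2). `context` can be any object exposing a public
// no-arg getConfiguration method.
object HadoopShim {
  def getConfiguration(context: AnyRef): AnyRef = {
    val m: Method = context.getClass.getMethod("getConfiguration")
    m.invoke(context)
  }
}
```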
On Friday, October 3, 2014 11:15 AM, Rohit Rai ro...@tuplejump.com
wrote:
Hi All,
A year ago we started this journey and laid the path for the Spark + Cassandra
stack. We established the groundwork and direction for Spark Cassandra
connectors, and we have been happy seeing the results.
With the Spark 1.1.0 and Spark SQL release, we think it's time to take Calliope
Alan/TD,
We are facing this problem in a project going to production.
Was there any progress on this? Are we able to confirm that this is a
bug/limitation in the current streaming code, or is there anything wrong in
user scope?
Regards,
Rohit
*Founder CEO, Tuplejump, Inc.*
and faster to use.
Will you be attending the Spark Summit? I'll be around.
We'll be in touch in any case :-)
-kr, Gerard.
On Thu, Jun 26, 2014 at 11:03 AM, Rohit Rai ro...@tuplejump.com wrote:
Hi Gerard,
What versions of Spark, Hadoop, Cassandra and Calliope are you using?
We never built Calliope for Hadoop 2, as we and our clients don't use Hadoop in
their deployments, or use it only as the infra component for Spark, in which
case H1/H2 doesn't make a difference for them.
I know
Hello Adrian,
Calliope relies on transformers to convert from a given type to ByteBuffer,
which is the format required by Cassandra.
RichByteBuffer's incompleteness is at fault here. We are working on
increasing the types we support out of the box, and will support all types
supported by Cassandra.
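To illustrate the transformer idea described above (the names here are hypothetical and do not reflect Calliope's actual RichByteBuffer API), a type-to-ByteBuffer transformer can be expressed as an implicit conversion:

```scala
import java.nio.ByteBuffer
import scala.language.implicitConversions

// Hypothetical sketch of type-to-ByteBuffer transformers; shows only the
// conversion pattern, not Calliope's real implementation.
object ByteBufferTransformers {
  implicit def intToByteBuffer(i: Int): ByteBuffer = {
    val b = ByteBuffer.allocate(4)
    b.putInt(i)
    b.rewind() // reset position so the value can be read back
    b
  }

  implicit def stringToByteBuffer(s: String): ByteBuffer =
    ByteBuffer.wrap(s.getBytes("UTF-8"))
}
```

Adding a type then means adding one more implicit in scope, which is why an incomplete set of transformers surfaces as a compile-time failure for the unsupported type.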
Hello Eric,
This happens when the data being fetched from Cassandra in a single split is
greater than the maximum frame size allowed in Thrift (yes, it still uses
Thrift underneath, until the next release, when we will start using native
CQL).
Generally, we do set the Cassandra frame size in
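For context on the frame size mentioned above: the server-side Thrift frame limit lives in cassandra.yaml. The setting name below is Cassandra's real option; the 32 MB value is only an illustrative increase, not a recommendation from this thread:

```yaml
# cassandra.yaml: raise the maximum Thrift frame size so that larger
# splits can be fetched in a single response (default is 15 MB).
thrift_framed_transport_size_in_mb: 32
```

The client-side transport must allow at least the same frame size, or it will still reject the larger responses.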
. And
it should let you write back to Cassandra by giving a mapping of fields to
the respective Cassandra columns. I think all of this would be fairly easy
to implement on SchemaRDD and will likely make it into Spark 1.1.
- Patrick
On Wed, Mar 26, 2014 at 10:59 PM, Rohit Rai ro...@tuplejump.com wrote:
We are happy that you found Calliope useful and glad we could help.
*Founder CEO, Tuplejump, Inc.*
www.tuplejump.com
*The Data Engineering Platform*
On Sat, Mar 8, 2014 at 2:18 AM, Brian O'Neill b...@alumni.brown.edu wrote:
FWIW - I posted some notes to help
Hello Andy,
This is a problem we have seen when using the CQL Java driver under heavy
read loads: it uses NIO and waits on many pending responses, which causes
too many open sockets and hence too many open files. Are you by
any chance using async queries?
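One general way to avoid the open-socket blowup described above is to bound the number of in-flight async queries. This is a sketch of that pattern only, not driver or Calliope API; `runQuery` stands in for a real executeAsync-style call:

```scala
import java.util.concurrent.{Executors, Semaphore, TimeUnit}

// Sketch: a Semaphore caps concurrent queries so pending responses
// (and hence open sockets/files) stay within OS limits.
object ThrottledSubmit {
  private val pool = Executors.newFixedThreadPool(8)
  private val inFlight = new Semaphore(128) // illustrative cap on concurrency

  def submit(runQuery: () => Unit): Unit = {
    inFlight.acquire() // blocks once 128 queries are already pending
    pool.submit(new Runnable {
      def run(): Unit = try runQuery() finally inFlight.release()
    })
  }

  def shutdown(): Unit = {
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
  }
}
```

The semaphore is released in a finally block so a failed query never leaks a permit.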
I am the maintainer of