Re: Datastax Java Driver Compatibility Matrix

2022-04-19 Thread Jai Bheemsen Rao Dhanwada
Thank you, this information is very helpful.

On Tue, Apr 19, 2022 at 11:53 AM C. Scott Andreas 
wrote:

> Hi Jai,
>
> Cassandra 4.0 supports CQLv3, CQLv4, and CQLv5. A driver connecting using
> any of these protocols will work. Cassandra 4.0 did not remove support for
> CQLv3 which makes adoption easier for a very large portion of the user
> community.
>
> I'd recommend not specifying the protocol version in your cluster builder,
> allowing the client and server to negotiate the newest matching protocol
> version instead.
>
> I wouldn't recommend attempting to force a 3.2 Java Driver to negotiate
> CQLv5, though, as its support is definitely incomplete. The 3.2 Java Driver
> is five years old and a very large number of bugs have been fixed since
> then. Newer 3.x Java Driver releases should be binary-compatible so you can
> likely just bump your dependency version and immediately pick up a large
> number of bugfixes.
>
> But yes, Java Driver 3.2 will work fine using CQLv4 or CQLv3 with
> Cassandra 4.0.
>
> – Scott
>
> On Apr 19, 2022, at 11:45 AM, Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
>
> Thank you Scott for the information.
>
> I am currently using version 3.2 of the DataStax Driver and the
> Cluster Builder with Protocol Version V3. Does this mean 3.2 with
> protocol version V3 can still work with a Cassandra 4.0 server?
>
> Also from the documentation
> <https://docs.datastax.com/en/drivers/java/3.2/com/datastax/driver/core/Cluster.Builder.html#withProtocolVersion-com.datastax.driver.core.ProtocolVersion->
> I see that 3.2 supports protocol versions up to V5.
>
> Does this mean a) the 3.2 driver with the V3 protocol works with Cassandra
> 4.0, or b) I have to change the protocol version to V4 or higher on 3.2 to be
> able to work with 4.0?
>
> On Tue, Apr 19, 2022 at 11:15 AM C. Scott Andreas 
> wrote:
>
>> The DataStax Java 3.x drivers work very well with Apache Cassandra 4.0.
>> I'd recommend one of the more recent releases in the series, though (e.g.,
>> 3.6.x+).
>>
>> I'm not the author of this documentation, but it may refer to the fact
>> that the 3.x Java Driver supports the CQL v4 wire protocol, but not the new
>> v5 wire protocol introduced in Cassandra 4.0. This means that all existing
>> features will continue to work fine; but a small number of new features in
>> 4.0 will require a new driver before they can be adopted.
>>
>> A couple examples of new features in the CQLv5 wire protocol are client
>> checksumming in the absence of TLS or a checksumming codec, better
>> read/write failure error messages, and native duration types.
>>
>> – Scott
>>
>> On Apr 19, 2022, at 10:08 AM, Jai Bheemsen Rao Dhanwada <
>> jaibheem...@gmail.com> wrote:
>>
>>
>> Hello Erick,
>>
>> It looks like the 3.0+ driver is not compatible with Cassandra 4.0, as
>> per: https://docs.datastax.com/en/driver-matrix/doc/java-drivers.html
>>
>> The documents say it's partially compatible; what does this mean? What
>> will be broken if I continue to use a 3.0+ driver with Cassandra 4.0? I did a
>> quick test of my application using the 3.2 driver with Cassandra 4.0.3 and it
>> works fine.
>>
>>
>> On Mon, Apr 19, 2021 at 7:14 PM Jai Bheemsen Rao Dhanwada <
>> jaibheem...@gmail.com> wrote:
>>
>>> Thank you
>>>
>>> On Monday, April 19, 2021, Erick Ramirez 
>>> wrote:
>>>
>>>> Is there a Datastax Java Driver
>>>>> <https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html>
>>>>> Compatibility matrix available for Cassandra 4.0?
>>>>>
>>>>
>>>> No, there isn't but the same driver versions apply to C* 4.0 under the
>>>> column 3.0+.
>>>>
>>>> Thanks for bringing this up as it has prompted me to consider its
>>>> inclusion in the official Apache Cassandra website and I've logged
>>>> CASSANDRA-16617 <https://issues.apache.org/jira/browse/CASSANDRA-16617>.
>>>> Cheers!
>>>>
>>>
>>
>>
>>
>>
>>
>
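The negotiation advice above can be sketched as follows for the 3.x driver. This is an editor-added illustrative sketch, not from the original thread; the class name and the contact point `127.0.0.1` are assumed placeholders.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;
import com.datastax.driver.core.Session;

public class NegotiatedConnection {
    public static void main(String[] args) {
        // Omitting withProtocolVersion(...) lets the client and server
        // negotiate the newest protocol version both sides support.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // assumed contact point
                .build();
             Session session = cluster.connect()) {
            ProtocolVersion v = cluster.getConfiguration()
                    .getProtocolOptions().getProtocolVersion();
            System.out.println("Negotiated protocol version: " + v);
        }
    }
}
```

Running this requires a reachable Cassandra node; it is shown only to illustrate the negotiation default.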



Re: Datastax Java Driver Compatibility Matrix

2022-04-19 Thread Jai Bheemsen Rao Dhanwada
Thank you Scott for the information.

I am currently using version 3.2 of the DataStax Driver and the
Cluster Builder with Protocol Version V3. Does this mean 3.2 with
protocol version V3 can still work with a Cassandra 4.0 server?

Also from the documentation
<https://docs.datastax.com/en/drivers/java/3.2/com/datastax/driver/core/Cluster.Builder.html#withProtocolVersion-com.datastax.driver.core.ProtocolVersion->
I see that 3.2 supports protocol versions up to V5.

Does this mean a) the 3.2 driver with the V3 protocol works with Cassandra
4.0, or b) I have to change the protocol version to V4 or higher on 3.2 to be
able to work with 4.0?

On Tue, Apr 19, 2022 at 11:15 AM C. Scott Andreas 
wrote:

> The DataStax Java 3.x drivers work very well with Apache Cassandra 4.0.
> I'd recommend one of the more recent releases in the series, though (e.g.,
> 3.6.x+).
>
> I'm not the author of this documentation, but it may refer to the fact
> that the 3.x Java Driver supports the CQL v4 wire protocol, but not the new
> v5 wire protocol introduced in Cassandra 4.0. This means that all existing
> features will continue to work fine; but a small number of new features in
> 4.0 will require a new driver before they can be adopted.
>
> A couple examples of new features in the CQLv5 wire protocol are client
> checksumming in the absence of TLS or a checksumming codec, better
> read/write failure error messages, and native duration types.
>
> – Scott
>
> On Apr 19, 2022, at 10:08 AM, Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
>
> Hello Erick,
>
> It looks like the 3.0+ driver is not compatible with Cassandra 4.0, as
> per: https://docs.datastax.com/en/driver-matrix/doc/java-drivers.html
>
> The documents say it's partially compatible; what does this mean? What
> will be broken if I continue to use a 3.0+ driver with Cassandra 4.0? I did a
> quick test of my application using the 3.2 driver with Cassandra 4.0.3 and it
> works fine.
>
>
> On Mon, Apr 19, 2021 at 7:14 PM Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
>> Thank you
>>
>> On Monday, April 19, 2021, Erick Ramirez 
>> wrote:
>>
>>> Is there a Datastax Java Driver
>>>> <https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html>
>>>> Compatibility matrix available for Cassandra 4.0?
>>>>
>>>
>>> No, there isn't but the same driver versions apply to C* 4.0 under the
>>> column 3.0+.
>>>
>>> Thanks for bringing this up as it has prompted me to consider its
>>> inclusion in the official Apache Cassandra website and I've logged
>>> CASSANDRA-16617 <https://issues.apache.org/jira/browse/CASSANDRA-16617>.
>>> Cheers!
>>>
>>
>
>
>
>
>
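For reference, explicitly pinning the protocol version, as the question above describes, looks roughly like this in the 3.x driver. This is an editor-added sketch with an assumed contact point; as noted in the replies, letting negotiation pick the version is generally preferable, and V4 is the newest version the 3.x series fully supports.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;

public class PinnedProtocol {
    public static void main(String[] args) {
        // Forces CQLv4 rather than negotiating; the connection fails if
        // the server does not support the pinned version.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // assumed contact point
                .withProtocolVersion(ProtocolVersion.V4)
                .build();
        try {
            cluster.connect();
        } finally {
            cluster.close();
        }
    }
}
```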



Re: Datastax Java Driver Compatibility Matrix

2022-04-19 Thread Jai Bheemsen Rao Dhanwada
Hello Erick,

It looks like the 3.0+ driver is not compatible with Cassandra 4.0, as
per: https://docs.datastax.com/en/driver-matrix/doc/java-drivers.html

The documents say it's partially compatible; what does this mean? What will
be broken if I continue to use a 3.0+ driver with Cassandra 4.0? I did a
quick test of my application using the 3.2 driver with Cassandra 4.0.3 and it
works fine.


On Mon, Apr 19, 2021 at 7:14 PM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> Thank you
>
> On Monday, April 19, 2021, Erick Ramirez 
> wrote:
>
>> Is there a Datastax Java Driver
>>> <https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html>
>>> Compatibility matrix available for Cassandra 4.0?
>>>
>>
>> No, there isn't but the same driver versions apply to C* 4.0 under the
>> column 3.0+.
>>
>> Thanks for bringing this up as it has prompted me to consider its
>> inclusion in the official Apache Cassandra website and I've logged
>> CASSANDRA-16617 <https://issues.apache.org/jira/browse/CASSANDRA-16617>.
>> Cheers!
>>
>


Re: Datastax Java Driver Compatibility Matrix

2021-04-19 Thread Jai Bheemsen Rao Dhanwada
Thank you

On Monday, April 19, 2021, Erick Ramirez  wrote:

> Is there a Datastax Java Driver
>> <https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html>
>> Compatibility matrix available for Cassandra 4.0?
>>
>
> No, there isn't but the same driver versions apply to C* 4.0 under the
> column 3.0+.
>
> Thanks for bringing this up as it has prompted me to consider its
> inclusion in the official Apache Cassandra website and I've logged
> CASSANDRA-16617 <https://issues.apache.org/jira/browse/CASSANDRA-16617>.
> Cheers!
>


Re: Datastax Java Driver Compatibility Matrix

2021-04-19 Thread Erick Ramirez
>
> Is there a Datastax Java Driver
> <https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html>
> Compatibility matrix available for Cassandra 4.0?
>

No, there isn't, but the same driver versions apply to C* 4.0 under the
3.0+ column.

Thanks for bringing this up as it has prompted me to consider its inclusion
in the official Apache Cassandra website and I've logged CASSANDRA-16617
<https://issues.apache.org/jira/browse/CASSANDRA-16617>. Cheers!


Datastax Java Driver Compatibility Matrix

2021-04-19 Thread Jai Bheemsen Rao Dhanwada
Hello,

Is there a Datastax Java Driver
<https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html>
Compatibility matrix available for Cassandra 4.0?


Re: Cassandra DataStax Java Driver in combination with Java EE / EJBs

2019-06-11 Thread Ralph Soika

Hi Stefan,
Hi John,

thanks for your answers, this helps me a lot.

@John: you are right, EJB does not bring any advantage in this case. I 
will change my classes to simple CDI.


I will write a short blog about this solution after I finished.

Best regards

Ralph

On 12.06.19 07:58, Stefan Miklosovic wrote:

Hi Ralph,

yes, this is completely fine, even advisable. You can further extend
this idea to have sessions per keyspace, for example, if you really
insist, and it could be injectable based on some qualifier ... that's
up to you.

On Wed, 12 Jun 2019 at 11:31, John Sanda  wrote:

Hi Ralph,

A session is intended to be a long-lived, i.e., application-scoped object. You 
only need one session per cluster. I think what you are doing with the 
@Singleton is fine. In my opinion though, EJB really does not offer much value 
when working with Cassandra. I would be inclined to just use CDI.

Cheers

John

On Tue, Jun 11, 2019 at 5:38 PM Ralph Soika  wrote:

Hi,

I have a question concerning the Cassandra DataStax Java Driver in combination 
with Java EE and EJBs.

I have implemented a Rest Service API based on Java EE8. In my application I 
have for example a jax-rs rest resource to write data into cassandra cluster. 
My first approach was to create in each method call

  a new Cassandra Cluster and Session object,
  write my data into Cassandra,
  and finally close the session and the cluster object.

This works, but it takes a lot of time (2-3 seconds) to open the cluster
object / session for each request.

  So my second approach is now a @Singleton EJB providing the session object 
for my jax-rs resources. My service implementation to hold the Session object 
looks something like this:


@Singleton
public class ClusterService {
 private Cluster cluster;
 private Session session;

 @PostConstruct
 private void init() throws ArchiveException {
 cluster=initCluster();
 session = initArchiveSession();
 }

 @PreDestroy
 private void tearDown() throws ArchiveException {
 // close session and cluster object
 if (session != null) {
 session.close();
 }
 if (cluster != null) {
 cluster.close();
 }
 }

 public Session getSession() {
 if (session==null) {
 try {
 init();
 } catch (ArchiveException e) {
 logger.warning("unable to get valid session: " + 
e.getMessage());
 e.printStackTrace();
 }
 }
 return session;
 }

.

}


And my rest service calls now look like this:


@Path("/archive")
@Stateless
public class ArchiveRestService {

 @EJB
 ClusterService clusterService;

 @POST
 @Consumes({ MediaType.APPLICATION_XML, MediaType.TEXT_XML })
 public Response postData(XMLDocument xmlDocument) {
 Session session = clusterService.getSession();
 session.execute();
 ...
 }
 ...
}


The result is now super-fast behavior! That seems clear, because my rest 
service no longer needs to open a new session for each request.

My question is: Is this approach with a @Singleton ClusterService EJB valid, or 
is there something I should avoid?
As far as I can see this works pretty well and is really fast. I am running the 
application on a WildFly 15 server, which is Java EE8.

Thanks for your comments

Ralph




--

Imixs Software Solutions GmbH
Web: www.imixs.com Phone: +49 (0)89-452136 16
Office: Agnes-Pockels-Bogen 1, 80992 München
Registergericht: Amtsgericht Muenchen, HRB 136045
Geschaeftsführer: Gaby Heinle u. Ralph Soika

Imixs is an open source company, read more: www.imixs.org



--

- John

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org


--

*Imixs Software Solutions GmbH*
*Web:* www.imixs.com <http://www.imixs.com> *Phone:* +49 (0)89-452136 16
*Office:* Agnes-Pockels-Bogen 1, 80992 München
Registergericht: Amtsgericht Muenchen, HRB 136045
Geschaeftsführer: Gaby Heinle u. Ralph Soika

*Imixs* is an open source company, read more: www.imixs.org 
<http://www.imixs.org>
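A minimal sketch of the plain-CDI alternative John suggests, replacing the @Singleton EJB with an application-scoped producer method. This is an editor-added illustration: the class and method names are invented, the contact point is a placeholder, and the real initCluster-style configuration from the thread is elided.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Disposes;
import javax.enterprise.inject.Produces;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

@ApplicationScoped
public class SessionProducer {

    // One long-lived Session for the whole application; consumers
    // simply declare "@Inject Session session".
    @Produces
    @ApplicationScoped
    public Session produceSession() {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // assumed contact point
                .build();
        return cluster.connect();
    }

    // Called by the container on shutdown; closes the session and the
    // cluster it belongs to.
    public void closeSession(@Disposes Session session) {
        Cluster cluster = session.getCluster();
        session.close();
        cluster.close();
    }
}
```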




Re: Cassandra DataStax Java Driver in combination with Java EE / EJBs

2019-06-11 Thread Stefan Miklosovic
Hi Ralph,

yes, this is completely fine, even advisable. You can further extend
this idea to have sessions per keyspace, for example, if you really
insist, and it could be injectable based on some qualifier ... that's
up to you.

On Wed, 12 Jun 2019 at 11:31, John Sanda  wrote:
>
> Hi Ralph,
>
> A session is intended to be a long-lived, i.e., application-scoped object. 
> You only need one session per cluster. I think what you are doing with the 
> @Singleton is fine. In my opinion though, EJB really does not offer much 
> value when working with Cassandra. I would be inclined to just use CDI.
>
> Cheers
>
> John
>
> On Tue, Jun 11, 2019 at 5:38 PM Ralph Soika  wrote:
>>
>> Hi,
>>
>> I have a question concerning the Cassandra DataStax Java Driver in 
>> combination with Java EE and EJBs.
>>
>> I have implemented a Rest Service API based on Java EE8. In my application I 
>> have for example a jax-rs rest resource to write data into cassandra 
>> cluster. My first approach was to create in each method call
>>
>>  a new Cassandra Cluster and Session object,
>>  write my data into cassandra
>>  and finally close the session and the cluster object.
>>
>> This works but it takes a lot of time (2-3 seconds) until the cluster object 
>> / session is opened for each request.
>>
>>  So my second approach is now a @Singleton EJB providing the session object 
>> for my jax-rs resources. My service implementation to hold the Session 
>> object looks something like this:
>>
>>
>> @Singleton
>> public class ClusterService {
>> private Cluster cluster;
>> private Session session;
>>
>> @PostConstruct
>> private void init() throws ArchiveException {
>> cluster=initCluster();
>> session = initArchiveSession();
>> }
>>
>> @PreDestroy
>> private void tearDown() throws ArchiveException {
>> // close session and cluster object
>> if (session != null) {
>> session.close();
>> }
>> if (cluster != null) {
>> cluster.close();
>> }
>> }
>>
>> public Session getSession() {
>> if (session==null) {
>> try {
>> init();
>> } catch (ArchiveException e) {
>> logger.warning("unable to get valid session: " + 
>> e.getMessage());
>> e.printStackTrace();
>> }
>> }
>> return session;
>> }
>>
>>.
>>
>> }
>>
>>
>> And my rest service calls now looking like this:
>>
>>
>> @Path("/archive")
>> @Stateless
>> public class ArchiveRestService {
>>
>> @EJB
>> ClusterService clusterService;
>>
>> @POST
>> @Consumes({ MediaType.APPLICATION_XML, MediaType.TEXT_XML })
>> public Response postData(XMLDocument xmlDocument) {
>> Session session = clusterService.getSession();
>> session.execute();
>> ...
>> }
>> ...
>> }
>>
>>
>> The result is now a super-fast behavior! Seems to be clear because my rest 
>> service no longer need to open a new session for each request.
>>
>> My question is: Is this approach with a @Singleton ClusterService EJB valid 
>> or is there something I should avoid?
>> As far as I can see this works pretty fine and is really fast. I am running 
>> the application on a Wildfly 15 server which is Java EE8.
>>
>> Thanks for your comments
>>
>> Ralph
>>
>>
>>
>>
>> --
>>
>> Imixs Software Solutions GmbH
>> Web: www.imixs.com Phone: +49 (0)89-452136 16
>> Office: Agnes-Pockels-Bogen 1, 80992 München
>> Registergericht: Amtsgericht Muenchen, HRB 136045
>> Geschaeftsführer: Gaby Heinle u. Ralph Soika
>>
>> Imixs is an open source company, read more: www.imixs.org
>
>
>
> --
>
> - John




Re: Cassandra DataStax Java Driver in combination with Java EE / EJBs

2019-06-11 Thread John Sanda
Hi Ralph,

A session is intended to be a long-lived, i.e., application-scoped object.
You only need one session per cluster. I think what you are doing with
the @Singleton is fine. In my opinion though, EJB really does not offer
much value when working with Cassandra. I would be inclined to just use CDI.

Cheers

John

On Tue, Jun 11, 2019 at 5:38 PM Ralph Soika  wrote:

> Hi,
>
> I have a question concerning the Cassandra DataStax Java Driver in
> combination with Java EE and EJBs.
>
> I have implemented a Rest Service API based on Java EE8. In my application
> I have for example a jax-rs rest resource to write data into cassandra
> cluster. My first approach was to create in each method call
>
>1.  a new Cassandra Cluster and Session object,
>2.  write my data into cassandra
>3.  and finally close the session and the cluster object.
>
> This works but it takes a lot of time (2-3 seconds) until the cluster
> object / session is opened for each request.
>
>  So my second approach is now a @Singleton EJB providing the session
> object for my jax-rs resources. My service implementation to hold the
> Session object looks something like this:
>
>
> @Singleton
> public class ClusterService {
>     private Cluster cluster;
>     private Session session;
>
>     @PostConstruct
>     private void init() throws ArchiveException {
>         cluster = initCluster();
>         session = initArchiveSession();
>     }
>
>     @PreDestroy
>     private void tearDown() throws ArchiveException {
>         // close session and cluster object
>         if (session != null) {
>             session.close();
>         }
>         if (cluster != null) {
>             cluster.close();
>         }
>     }
>
>     public Session getSession() {
>         if (session == null) {
>             try {
>                 init();
>             } catch (ArchiveException e) {
>                 logger.warning("unable to get valid session: " + e.getMessage());
>                 e.printStackTrace();
>             }
>         }
>         return session;
>     }
>
>     ...
> }
>
>
> And my rest service calls now look like this:
>
>
> @Path("/archive")
> @Stateless
> public class ArchiveRestService {
>
>     @EJB
>     ClusterService clusterService;
>
>     @POST
>     @Consumes({ MediaType.APPLICATION_XML, MediaType.TEXT_XML })
>     public Response postData(XMLDocument xmlDocument) {
>         Session session = clusterService.getSession();
>         session.execute();
>         ...
>     }
>     ...
> }
>
>
> The result is now super-fast behavior! That seems clear, because my
> rest service no longer needs to open a new session for each request.
>
> My question is: Is this approach with a @Singleton ClusterService EJB
> valid or is there something I should avoid?
> As far as I can see this works pretty fine and is really fast. I am
> running the application on a Wildfly 15 server which is Java EE8.
>
> Thanks for your comments
>
> Ralph
>
>
>
>
> --
>
> *Imixs Software Solutions GmbH*
> *Web:* www.imixs.com *Phone:* +49 (0)89-452136 16
> *Office:* Agnes-Pockels-Bogen 1, 80992 München
> Registergericht: Amtsgericht Muenchen, HRB 136045
> Geschaeftsführer: Gaby Heinle u. Ralph Soika
>
> *Imixs* is an open source company, read more: www.imixs.org
>


-- 

- John


Cassandra DataStax Java Driver in combination with Java EE / EJBs

2019-06-11 Thread Ralph Soika

Hi,

I have a question concerning the Cassandra DataStax Java Driver in 
combination with Java EE and EJBs.


I have implemented a Rest Service API based on Java EE8. In my 
application I have for example a jax-rs rest resource to write data into 
cassandra cluster. My first approach was to create in each method call


1.   a new Cassandra Cluster and Session object,
2.   write my data into Cassandra,
3.   and finally close the session and the cluster object.

This works, but it takes a lot of time (2-3 seconds) to open the cluster 
object / session for each request.


 So my second approach is now a @Singleton EJB providing the session 
object for my jax-rs resources. My service implementation to hold the 
Session object looks something like this:



@Singleton
public class ClusterService {
    private Cluster cluster;
    private Session session;

    @PostConstruct
    private void init() throws ArchiveException {
        cluster = initCluster();
        session = initArchiveSession();
    }

    @PreDestroy
    private void tearDown() throws ArchiveException {
        // close session and cluster object
        if (session != null) {
            session.close();
        }
        if (cluster != null) {
            cluster.close();
        }
    }

    public Session getSession() {
        if (session == null) {
            try {
                init();
            } catch (ArchiveException e) {
                logger.warning("unable to get valid session: " + e.getMessage());
                e.printStackTrace();
            }
        }
        return session;
    }

    ...

}


And my rest service calls now look like this:


@Path("/archive")
@Stateless
public class ArchiveRestService {

    @EJB
    ClusterService clusterService;

    @POST
    @Consumes({ MediaType.APPLICATION_XML, MediaType.TEXT_XML })
    public Response postData(XMLDocument xmlDocument) {
        Session session = clusterService.getSession();
        session.execute();
        ...
    }
    ...
}


The result is now super-fast behavior! That seems clear, because my 
rest service no longer needs to open a new session for each request.


My question is: Is this approach with a @Singleton ClusterService EJB 
valid or is there something I should avoid?
As far as I can see this works pretty fine and is really fast. I am 
running the application on a Wildfly 15 server which is Java EE8.


Thanks for your comments

Ralph




--

*Imixs Software Solutions GmbH*
*Web:* www.imixs.com <http://www.imixs.com> *Phone:* +49 (0)89-452136 16
*Office:* Agnes-Pockels-Bogen 1, 80992 München
Registergericht: Amtsgericht Muenchen, HRB 136045
Geschaeftsführer: Gaby Heinle u. Ralph Soika

*Imixs* is an open source company, read more: www.imixs.org 
<http://www.imixs.org>




Re: Datastax Java Driver compatibility

2019-01-22 Thread Jonathan Haddad
The drivers are not maintained by the Cassandra project, it's up to each
driver maintainer to list their compatibility.

On Tue, Jan 22, 2019 at 10:48 AM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> Thanks for the response Amanda,
>
> Yes we can go with the latest version but we are trying one change at a
> time, so want to make sure the version compatibility. b/w any plans to
> update the documentation for the latest versions for apache cassandra?
>
> On Tue, Jan 22, 2019 at 10:28 AM Amanda Moran 
> wrote:
>
>> Hi there-
>>
>> I checked with the team here (at DataStax) and this should work. Any
>> reason you need to stick with Java Driver 3.2, there is a 3.6 release.
>>
>> Thanks!
>>
>> Amanda
>>
>> On Tue, Jan 22, 2019 at 8:45 AM Jai Bheemsen Rao Dhanwada <
>> jaibheem...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I am looking for Datastax Driver compatibility vs apache cassandra
>>> 3.11.3 version.
>>> However the doc doesn't talk about the 3.11 version.
>>>
>>> https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html
>>>
>>> Can someone please confirm if the Datastax Java Driver 3.2.0 version
>>> work with 3.11.3 version of apache cassandra?
>>> Thanks
>>>
>>

-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: Datastax Java Driver compatibility

2019-01-22 Thread Jai Bheemsen Rao Dhanwada
Thanks for the response Amanda,

Yes, we can go with the latest version, but we are trying one change at a
time, so we want to make sure of the version compatibility. BTW, are there any
plans to update the documentation for the latest versions of Apache Cassandra?

On Tue, Jan 22, 2019 at 10:28 AM Amanda Moran 
wrote:

> Hi there-
>
> I checked with the team here (at DataStax) and this should work. Any
> reason you need to stick with Java Driver 3.2, there is a 3.6 release.
>
> Thanks!
>
> Amanda
>
> On Tue, Jan 22, 2019 at 8:45 AM Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
>> Hello,
>>
>> I am looking for Datastax Driver compatibility vs apache cassandra 3.11.3
>> version.
>> However the doc doesn't talk about the 3.11 version.
>>
>> https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html
>>
>> Can someone please confirm if the Datastax Java Driver 3.2.0 version work
>> with 3.11.3 version of apache cassandra?
>> Thanks
>>
>


Re: Datastax Java Driver compatibility

2019-01-22 Thread Amanda Moran
Hi there-

I checked with the team here (at DataStax) and this should work. Any reason
you need to stick with Java Driver 3.2? There is a 3.6 release.

Thanks!

Amanda

On Tue, Jan 22, 2019 at 8:45 AM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> Hello,
>
> I am looking for Datastax Driver compatibility vs apache cassandra 3.11.3
> version.
> However the doc doesn't talk about the 3.11 version.
>
> https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html
>
> Can someone please confirm if the Datastax Java Driver 3.2.0 version work
> with 3.11.3 version of apache cassandra?
> Thanks
>


Datastax Java Driver compatibility

2019-01-22 Thread Jai Bheemsen Rao Dhanwada
Hello,

I am looking for DataStax Driver compatibility with Apache Cassandra 3.11.3.
However, the doc doesn't talk about the 3.11 version.
https://docs.datastax.com/en/driver-matrix/doc/driver_matrix/javaDrivers.html

Can someone please confirm whether the DataStax Java Driver 3.2.0 version works
with Apache Cassandra 3.11.3?
Thanks


Re: DataStax Java driver QueryBuilder: CREATE table?

2017-12-14 Thread Andy Tolbert
Hi Oliver,

SchemaBuilder
<http://docs.datastax.com/en/latest-java-driver-api/com/datastax/driver/core/schemabuilder/SchemaBuilder.html>
enables building schema DDL statements like CREATE TABLE, KEYSPACE and so
on.  You can find some examples in the tests
<https://github.com/datastax/java-driver/blob/3.3.x/driver-core/src/test/java/com/datastax/driver/core/schemabuilder/CreateTest.java>
.

Thanks,
Andy

On Thu, Dec 14, 2017 at 5:16 PM Oliver Ruebenacker  wrote:

>
>  Hello,
>
>   I'm using the DataStax Java Driver, which has a QueryBuilder class to
> construct CQL statements. I can see how to build SELECT, INSERT, TRUNCATE
> etc statements, but I can't find how to build a CREATE statement. Am I
> missing something?
>
>   Thanks!
>
>  Best, Oliver
>
>
> --
> Oliver Ruebenacker
> Senior Software Engineer, Diabetes Portal
> <http://www.type2diabetesgenetics.org/>, Broad Institute
> <http://www.broadinstitute.org/>
>
>


DataStax Java driver QueryBuilder: CREATE table?

2017-12-14 Thread Oliver Ruebenacker
 Hello,

  I'm using the DataStax Java Driver, which has a QueryBuilder class to
construct CQL statements. I can see how to build SELECT, INSERT, TRUNCATE
etc statements, but I can't find how to build a CREATE statement. Am I
missing something?

  Thanks!

 Best, Oliver

-- 
Oliver Ruebenacker
Senior Software Engineer, Diabetes Portal
<http://www.type2diabetesgenetics.org/>, Broad Institute
<http://www.broadinstitute.org/>


Re: A question to 'paging' support in DataStax java driver

2016-05-10 Thread Sebastian Estevez
I didn't read the whole thread last time around; please disregard my
comment about the java driver JIRA.

One other thought (hopefully relevant this time). Once we have
https://issues.apache.org/jira/browse/CASSANDRA-10783, you could write a
(*start*, *rows*) style paging UDF which would allow you to read just page
4, for example. Granted, you will still have to *scan* the data from 0 to
*start* at the server and throw it away, but it might get you closer to
what you are looking for.




All the best,


Sebastián Estévez

Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com


DataStax is the fastest, most scalable distributed database technology,
delivering Apache Cassandra to the world's most innovative enterprises.
DataStax is built to be agile, always-on, and predictably scalable to any
size. With more than 500 customers in 45 countries, DataStax is the
database technology and transactional backbone of choice for the world's
most innovative companies such as Netflix, Adobe, Intuit, and eBay.

On Tue, May 10, 2016 at 9:23 AM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:

> I think this request belongs in the java driver jira not the Cassandra
> jira.
>
> https://datastax-oss.atlassian.net/projects/JAVA/
>
> all the best,
>
> Sebastián
> On May 10, 2016 1:09 AM, "Lu, Boying"  wrote:
>
>> I filed a JIRA https://issues.apache.org/jira/browse/CASSANDRA-11741 to
>> track this.
>>
>>
>>
>> *From:* DuyHai Doan [mailto:doanduy...@gmail.com]
>> *Sent:* 2016年5月10日 12:47
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: A question to 'paging' support in DataStax java driver
>>
>>
>>
>> I guess it's technically possible but then we'll need to update the
>> binary protocol. Just create a JIRA and ask for this feature
>>
>>
>>
>> On Tue, May 10, 2016 at 5:00 AM, Lu, Boying  wrote:
>>
>> Thanks very much.
>>
>>
>>
>> I understand that the data needs to be read from the DB to get the next
>> ‘PagingState’.
>>
>>
>>
>> But is it possible not to return those data to the client side, just
>> returning the ‘PagingState’?
>>
>> I.e. the data is read on the server side, but not return to client side,
>> this can save some bandwidth
>>
>> between client and server.
>>
>>
>>
>>
>>
>> *From:* DuyHai Doan [mailto:doanduy...@gmail.com]
>> *Sent:* 2016年5月9日 21:06
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: A question to 'paging' support in DataStax java driver
>>
>>
>>
>> In a truly consistent world (should I say "snapshot isolation" world
>> instead), re-reading the same page should yield the same results no matter
>> how many new inserts have occurred since the last page read.
>>
>>
>>
>> Caching previous page at app level can be a solution but not viable if
>> the amount of data is huge, also you'll need a cache layer and deal with
>> cache invalidation etc ...
>>
>>
>>
>> The point is, providing snapshot isolation in a distributed system is
>> hard without some sort of synchronous coordination e.g. global lock (read
>> http://www.bailis.org/papers/hat-vldb2014.pdf)
>>
>>
>>
>>
>>
>> On Mon, May 9, 2016 at 2:17 PM, Bhuvan Rawal  wrote:
>>
>> Hi Doan,
>>
>>
>>
>> What does it have to do being eventual consistency? Lets assume a
>> scenario with complete consistency and we are at page X, and at the same
>> time some inserts/updates happened at page X-2 and we jumped to that.
>>
>> User will see inconsistent page in that case as well, right? Also in such
>> cases how would you design a user facing application (Cache previous pages
>> at app level?)
>>
>>
>>
>> Regards,
>>
>> Bhuvan
>>
>>
>>
>> On Mon, May 9, 2016 at 4:18 PM, DuyHai Doan  wrote:
>>
>> "Is it possible to just return PagingState object without returning
>> data?" --> No
>>
>>
>>
>> Simply because before reading the actual data for each page of N rows,
>> you cannot know at which token value a page of data starts...

RE: A question to 'paging' support in DataStax java driver

2016-05-10 Thread Sebastian Estevez
I think this request belongs in the java driver JIRA, not the Cassandra JIRA.

https://datastax-oss.atlassian.net/projects/JAVA/

all the best,

Sebastián
On May 10, 2016 1:09 AM, "Lu, Boying"  wrote:

> I filed a JIRA https://issues.apache.org/jira/browse/CASSANDRA-11741 to
> track this.
>
>
>
> *From:* DuyHai Doan [mailto:doanduy...@gmail.com]
> *Sent:* 2016年5月10日 12:47
> *To:* user@cassandra.apache.org
> *Subject:* Re: A question to 'paging' support in DataStax java driver
>
>
>
> I guess it's technically possible but then we'll need to update the binary
> protocol. Just create a JIRA and ask for this feature
>
>
>
> On Tue, May 10, 2016 at 5:00 AM, Lu, Boying  wrote:
>
> Thanks very much.
>
>
>
> I understand that the data needs to be read from the DB to get the next
> ‘PagingState’.
>
>
>
> But is it possible not to return those data to the client side, just
> returning the ‘PagingState’?
>
> I.e. the data is read on the server side, but not return to client side,
> this can save some bandwidth
>
> between client and server.
>
>
>
>
>
> *From:* DuyHai Doan [mailto:doanduy...@gmail.com]
> *Sent:* 2016年5月9日 21:06
> *To:* user@cassandra.apache.org
> *Subject:* Re: A question to 'paging' support in DataStax java driver
>
>
>
> In a truly consistent world (should I say "snapshot isolation" world
> instead), re-reading the same page should yield the same results no matter
> how many new inserts have occurred since the last page read.
>
>
>
> Caching previous page at app level can be a solution but not viable if the
> amount of data is huge, also you'll need a cache layer and deal with cache
> invalidation etc ...
>
>
>
> The point is, providing snapshot isolation in a distributed system is hard
> without some sort of synchronous coordination e.g. global lock (read
> http://www.bailis.org/papers/hat-vldb2014.pdf)
>
>
>
>
>
> On Mon, May 9, 2016 at 2:17 PM, Bhuvan Rawal  wrote:
>
> Hi Doan,
>
>
>
> What does it have to do being eventual consistency? Lets assume a scenario
> with complete consistency and we are at page X, and at the same time some
> inserts/updates happened at page X-2 and we jumped to that.
>
> User will see inconsistent page in that case as well, right? Also in such
> cases how would you design a user facing application (Cache previous pages
> at app level?)
>
>
>
> Regards,
>
> Bhuvan
>
>
>
> On Mon, May 9, 2016 at 4:18 PM, DuyHai Doan  wrote:
>
> "Is it possible to just return PagingState object without returning
> data?" --> No
>
>
>
> Simply because before reading the actual data for each page of N rows, you
> cannot know at which token value a page of data starts...
>
>
>
> And it is worst than that, with paging you don't have any isolation. Let's
> suppose you keep in your application/web front-end the paging states for
> page 1, 2 and 3. Since there are concurrent inserts on the cluster at the
> same time, when you re-use the paging state 2 for example, you may not get
> the same results as the previous read.
>
>
>
> And it is inevitable in an eventual consistent distributed DB world
>
>
>
> On Mon, May 9, 2016 at 12:25 PM, Lu, Boying  wrote:
>
> dHi, All,
>
>
>
> We are considering to use DataStax java driver in our codes. One important
> feature provided by the driver we want to use is ‘paging’.
>
> But according to the
> https://datastax.github.io/java-driver/3.0.0/manual/paging/, it seems
> that we can’t jump between pages.
>
>
>
> Is it possible to just return PagingState object without returning data?
> e.g.  If I want to jump to the page 5 from the page 1,
>
> I need to go through each page from page 1 to page 5,  Is it possible to
> just return the PagingState object of page 1, 2, 3 and 4 without
>
> actual data of each page? This can save some bandwidth at least.
>
>
>
> Thanks in advance.
>
>
>
> Boying
>
>
>
>
>
>
>
>
>
>
>
>
>


RE: A question to 'paging' support in DataStax java driver

2016-05-09 Thread Lu, Boying
I filed a JIRA https://issues.apache.org/jira/browse/CASSANDRA-11741 to track 
this.

From: DuyHai Doan [mailto:doanduy...@gmail.com]
Sent: 2016年5月10日 12:47
To: user@cassandra.apache.org
Subject: Re: A question to 'paging' support in DataStax java driver

I guess it's technically possible but then we'll need to update the binary 
protocol. Just create a JIRA and ask for this feature

On Tue, May 10, 2016 at 5:00 AM, Lu, Boying 
mailto:boying...@emc.com>> wrote:
Thanks very much.

I understand that the data needs to be read from the DB to get the next 
‘PagingState’.

But is it possible not to return those data to the client side, just returning 
the ‘PagingState’?
I.e. the data is read on the server side, but not return to client side, this 
can save some bandwidth
between client and server.


From: DuyHai Doan [mailto:doanduy...@gmail.com<mailto:doanduy...@gmail.com>]
Sent: 2016年5月9日 21:06
To: user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: A question to 'paging' support in DataStax java driver

In a truly consistent world (should I say "snapshot isolation" world instead), 
re-reading the same page should yield the same results no matter how many new 
inserts have occurred since the last page read.

Caching previous page at app level can be a solution but not viable if the 
amount of data is huge, also you'll need a cache layer and deal with cache 
invalidation etc ...

The point is, providing snapshot isolation in a distributed system is hard 
without some sort of synchronous coordination e.g. global lock (read 
http://www.bailis.org/papers/hat-vldb2014.pdf)


On Mon, May 9, 2016 at 2:17 PM, Bhuvan Rawal 
mailto:bhu1ra...@gmail.com>> wrote:
Hi Doan,

What does it have to do being eventual consistency? Lets assume a scenario with 
complete consistency and we are at page X, and at the same time some 
inserts/updates happened at page X-2 and we jumped to that.
User will see inconsistent page in that case as well, right? Also in such cases 
how would you design a user facing application (Cache previous pages at app 
level?)

Regards,
Bhuvan

On Mon, May 9, 2016 at 4:18 PM, DuyHai Doan 
mailto:doanduy...@gmail.com>> wrote:
"Is it possible to just return PagingState object without returning data?" --> 
No

Simply because before reading the actual data for each page of N rows, you 
cannot know at which token value a page of data starts...

And it is worst than that, with paging you don't have any isolation. Let's 
suppose you keep in your application/web front-end the paging states for page 
1, 2 and 3. Since there are concurrent inserts on the cluster at the same time, 
when you re-use the paging state 2 for example, you may not get the same 
results as the previous read.

And it is inevitable in an eventual consistent distributed DB world

On Mon, May 9, 2016 at 12:25 PM, Lu, Boying 
mailto:boying...@emc.com>> wrote:
dHi, All,

We are considering to use DataStax java driver in our codes. One important 
feature provided by the driver we want to use is ‘paging’.
But according to the 
https://datastax.github.io/java-driver/3.0.0/manual/paging/, it seems that we 
can’t jump between pages.

Is it possible to just return PagingState object without returning data? e.g.  
If I want to jump to the page 5 from the page 1,
I need to go through each page from page 1 to page 5,  Is it possible to just 
return the PagingState object of page 1, 2, 3 and 4 without
actual data of each page? This can save some bandwidth at least.

Thanks in advance.

Boying








Re: A question to 'paging' support in DataStax java driver

2016-05-09 Thread DuyHai Doan
I guess it's technically possible but then we'll need to update the binary
protocol. Just create a JIRA and ask for this feature

On Tue, May 10, 2016 at 5:00 AM, Lu, Boying  wrote:

> Thanks very much.
>
>
>
> I understand that the data needs to be read from the DB to get the next
> ‘PagingState’.
>
>
>
> But is it possible not to return those data to the client side, just
> returning the ‘PagingState’?
>
> I.e. the data is read on the server side, but not return to client side,
> this can save some bandwidth
>
> between client and server.
>
>
>
>
>
> *From:* DuyHai Doan [mailto:doanduy...@gmail.com]
> *Sent:* 2016年5月9日 21:06
> *To:* user@cassandra.apache.org
> *Subject:* Re: A question to 'paging' support in DataStax java driver
>
>
>
> In a truly consistent world (should I say "snapshot isolation" world
> instead), re-reading the same page should yield the same results no matter
> how many new inserts have occurred since the last page read.
>
>
>
> Caching previous page at app level can be a solution but not viable if the
> amount of data is huge, also you'll need a cache layer and deal with cache
> invalidation etc ...
>
>
>
> The point is, providing snapshot isolation in a distributed system is hard
> without some sort of synchronous coordination e.g. global lock (read
> http://www.bailis.org/papers/hat-vldb2014.pdf)
>
>
>
>
>
> On Mon, May 9, 2016 at 2:17 PM, Bhuvan Rawal  wrote:
>
> Hi Doan,
>
>
>
> What does it have to do being eventual consistency? Lets assume a scenario
> with complete consistency and we are at page X, and at the same time some
> inserts/updates happened at page X-2 and we jumped to that.
>
> User will see inconsistent page in that case as well, right? Also in such
> cases how would you design a user facing application (Cache previous pages
> at app level?)
>
>
>
> Regards,
>
> Bhuvan
>
>
>
> On Mon, May 9, 2016 at 4:18 PM, DuyHai Doan  wrote:
>
> "Is it possible to just return PagingState object without returning
> data?" --> No
>
>
>
> Simply because before reading the actual data for each page of N rows, you
> cannot know at which token value a page of data starts...
>
>
>
> And it is worst than that, with paging you don't have any isolation. Let's
> suppose you keep in your application/web front-end the paging states for
> page 1, 2 and 3. Since there are concurrent inserts on the cluster at the
> same time, when you re-use the paging state 2 for example, you may not get
> the same results as the previous read.
>
>
>
> And it is inevitable in an eventual consistent distributed DB world
>
>
>
> On Mon, May 9, 2016 at 12:25 PM, Lu, Boying  wrote:
>
> dHi, All,
>
>
>
> We are considering to use DataStax java driver in our codes. One important
> feature provided by the driver we want to use is ‘paging’.
>
> But according to the
> https://datastax.github.io/java-driver/3.0.0/manual/paging/, it seems
> that we can’t jump between pages.
>
>
>
> Is it possible to just return PagingState object without returning data?
> e.g.  If I want to jump to the page 5 from the page 1,
>
> I need to go through each page from page 1 to page 5,  Is it possible to
> just return the PagingState object of page 1, 2, 3 and 4 without
>
> actual data of each page? This can save some bandwidth at least.
>
>
>
> Thanks in advance.
>
>
>
> Boying
>
>
>
>
>
>
>
>
>
>
>


RE: A question to 'paging' support in DataStax java driver

2016-05-09 Thread Lu, Boying
Thanks very much.

I understand that the data needs to be read from the DB to get the next 
‘PagingState’.

But is it possible not to return that data to the client side, just returning 
the ‘PagingState’?
I.e. the data is read on the server side but not returned to the client side; 
this can save some bandwidth
between the client and server.


From: DuyHai Doan [mailto:doanduy...@gmail.com]
Sent: 2016年5月9日 21:06
To: user@cassandra.apache.org
Subject: Re: A question to 'paging' support in DataStax java driver

In a truly consistent world (should I say "snapshot isolation" world instead), 
re-reading the same page should yield the same results no matter how many new 
inserts have occurred since the last page read.

Caching previous page at app level can be a solution but not viable if the 
amount of data is huge, also you'll need a cache layer and deal with cache 
invalidation etc ...

The point is, providing snapshot isolation in a distributed system is hard 
without some sort of synchronous coordination e.g. global lock (read 
http://www.bailis.org/papers/hat-vldb2014.pdf)


On Mon, May 9, 2016 at 2:17 PM, Bhuvan Rawal 
mailto:bhu1ra...@gmail.com>> wrote:
Hi Doan,

What does it have to do being eventual consistency? Lets assume a scenario with 
complete consistency and we are at page X, and at the same time some 
inserts/updates happened at page X-2 and we jumped to that.
User will see inconsistent page in that case as well, right? Also in such cases 
how would you design a user facing application (Cache previous pages at app 
level?)

Regards,
Bhuvan

On Mon, May 9, 2016 at 4:18 PM, DuyHai Doan 
mailto:doanduy...@gmail.com>> wrote:
"Is it possible to just return PagingState object without returning data?" --> 
No

Simply because before reading the actual data for each page of N rows, you 
cannot know at which token value a page of data starts...

And it is worst than that, with paging you don't have any isolation. Let's 
suppose you keep in your application/web front-end the paging states for page 
1, 2 and 3. Since there are concurrent inserts on the cluster at the same time, 
when you re-use the paging state 2 for example, you may not get the same 
results as the previous read.

And it is inevitable in an eventual consistent distributed DB world

On Mon, May 9, 2016 at 12:25 PM, Lu, Boying 
mailto:boying...@emc.com>> wrote:
dHi, All,

We are considering to use DataStax java driver in our codes. One important 
feature provided by the driver we want to use is ‘paging’.
But according to the 
https://datastax.github.io/java-driver/3.0.0/manual/paging/, it seems that we 
can’t jump between pages.

Is it possible to just return PagingState object without returning data? e.g.  
If I want to jump to the page 5 from the page 1,
I need to go through each page from page 1 to page 5,  Is it possible to just 
return the PagingState object of page 1, 2, 3 and 4 without
actual data of each page? This can save some bandwidth at least.

Thanks in advance.

Boying







Re: A question to 'paging' support in DataStax java driver

2016-05-09 Thread DuyHai Doan
In a truly consistent world (should I say "snapshot isolation" world
instead), re-reading the same page should yield the same results no matter
how many new inserts have occurred since the last page read.

Caching the previous page at the app level can be a solution, but it is not
viable if the amount of data is huge; you'll also need a cache layer and have
to deal with cache invalidation, etc.

The point is, providing snapshot isolation in a distributed system is hard
without some sort of synchronous coordination e.g. global lock (read
http://www.bailis.org/papers/hat-vldb2014.pdf)


On Mon, May 9, 2016 at 2:17 PM, Bhuvan Rawal  wrote:

> Hi Doan,
>
> What does it have to do being eventual consistency? Lets assume a scenario
> with complete consistency and we are at page X, and at the same time some
> inserts/updates happened at page X-2 and we jumped to that.
> User will see inconsistent page in that case as well, right? Also in such
> cases how would you design a user facing application (Cache previous pages
> at app level?)
>
> Regards,
> Bhuvan
>
> On Mon, May 9, 2016 at 4:18 PM, DuyHai Doan  wrote:
>
>> "Is it possible to just return PagingState object without returning
>> data?" --> No
>>
>> Simply because before reading the actual data for each page of N rows,
>> you cannot know at which token value a page of data starts...
>>
>> And it is worst than that, with paging you don't have any isolation.
>> Let's suppose you keep in your application/web front-end the paging states
>> for page 1, 2 and 3. Since there are concurrent inserts on the cluster at
>> the same time, when you re-use the paging state 2 for example, you may not
>> get the same results as the previous read.
>>
>> And it is inevitable in an eventual consistent distributed DB world
>>
>> On Mon, May 9, 2016 at 12:25 PM, Lu, Boying  wrote:
>>
>>> dHi, All,
>>>
>>>
>>>
>>> We are considering to use DataStax java driver in our codes. One
>>> important feature provided by the driver we want to use is ‘paging’.
>>>
>>> But according to the
>>> https://datastax.github.io/java-driver/3.0.0/manual/paging/, it seems
>>> that we can’t jump between pages.
>>>
>>>
>>>
>>> Is it possible to just return PagingState object without returning data?
>>> e.g.  If I want to jump to the page 5 from the page 1,
>>>
>>> I need to go through each page from page 1 to page 5,  Is it possible to
>>> just return the PagingState object of page 1, 2, 3 and 4 without
>>>
>>> actual data of each page? This can save some bandwidth at least.
>>>
>>>
>>>
>>> Thanks in advance.
>>>
>>>
>>>
>>> Boying
>>>
>>>
>>>
>>>
>>>
>>
>>
>


Re: A question to 'paging' support in DataStax java driver

2016-05-09 Thread Bhuvan Rawal
Hi Doan,

What does this have to do with eventual consistency? Let's assume a scenario
with complete consistency: we are at page X, and at the same time some
inserts/updates happened at page X-2 and we jumped to that.
The user will see an inconsistent page in that case as well, right? Also, in
such cases how would you design a user-facing application (cache previous
pages at the app level?)

Regards,
Bhuvan

On Mon, May 9, 2016 at 4:18 PM, DuyHai Doan  wrote:

> "Is it possible to just return PagingState object without returning
> data?" --> No
>
> Simply because before reading the actual data for each page of N rows, you
> cannot know at which token value a page of data starts...
>
> And it is worst than that, with paging you don't have any isolation. Let's
> suppose you keep in your application/web front-end the paging states for
> page 1, 2 and 3. Since there are concurrent inserts on the cluster at the
> same time, when you re-use the paging state 2 for example, you may not get
> the same results as the previous read.
>
> And it is inevitable in an eventual consistent distributed DB world
>
> On Mon, May 9, 2016 at 12:25 PM, Lu, Boying  wrote:
>
>> dHi, All,
>>
>>
>>
>> We are considering to use DataStax java driver in our codes. One
>> important feature provided by the driver we want to use is ‘paging’.
>>
>> But according to the
>> https://datastax.github.io/java-driver/3.0.0/manual/paging/, it seems
>> that we can’t jump between pages.
>>
>>
>>
>> Is it possible to just return PagingState object without returning data?
>> e.g.  If I want to jump to the page 5 from the page 1,
>>
>> I need to go through each page from page 1 to page 5,  Is it possible to
>> just return the PagingState object of page 1, 2, 3 and 4 without
>>
>> actual data of each page? This can save some bandwidth at least.
>>
>>
>>
>> Thanks in advance.
>>
>>
>>
>> Boying
>>
>>
>>
>>
>>
>
>


Re: A question to 'paging' support in DataStax java driver

2016-05-09 Thread DuyHai Doan
"Is it possible to just return PagingState object without returning data?"
--> No

Simply because before reading the actual data for each page of N rows, you
cannot know at which token value a page of data starts...

And it is worse than that: with paging you don't have any isolation. Let's
suppose you keep in your application/web front-end the paging states for
pages 1, 2 and 3. Since there are concurrent inserts on the cluster at the
same time, when you re-use the paging state of page 2, for example, you may
not get the same results as the previous read.

And it is inevitable in an eventually consistent distributed DB world
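
For context, forward-only paging with a PagingState looks roughly like this (a sketch assuming a reachable cluster and a keyspace `my_ks` with a `users` table; both names are made up):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PagingState;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagingExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_ks")) {

            Statement stmt = new SimpleStatement("SELECT * FROM users");
            stmt.setFetchSize(100); // page size: rows fetched per round trip

            ResultSet rs = session.execute(stmt);
            // The paging state marks where the *next* page starts.
            PagingState state = rs.getExecutionInfo().getPagingState();

            // It can be serialized (e.g. stored in a web session) and reused
            // later, but only to resume *forward* from this exact page
            // boundary, and only with the same statement and parameters.
            String saved = state.toString();

            Statement next = new SimpleStatement("SELECT * FROM users");
            next.setFetchSize(100);
            next.setPagingState(PagingState.fromString(saved));
            session.execute(next); // resumes at page 2; no way to jump to page 5
        }
    }
}
```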

On Mon, May 9, 2016 at 12:25 PM, Lu, Boying  wrote:

> dHi, All,
>
>
>
> We are considering to use DataStax java driver in our codes. One important
> feature provided by the driver we want to use is ‘paging’.
>
> But according to the
> https://datastax.github.io/java-driver/3.0.0/manual/paging/, it seems
> that we can’t jump between pages.
>
>
>
> Is it possible to just return PagingState object without returning data?
> e.g.  If I want to jump to the page 5 from the page 1,
>
> I need to go through each page from page 1 to page 5,  Is it possible to
> just return the PagingState object of page 1, 2, 3 and 4 without
>
> actual data of each page? This can save some bandwidth at least.
>
>
>
> Thanks in advance.
>
>
>
> Boying
>
>
>
>
>


A question to 'paging' support in DataStax java driver

2016-05-09 Thread Lu, Boying
Hi, All,

We are considering using the DataStax Java driver in our code. One important 
feature provided by the driver that we want to use is 'paging'.
But according to 
https://datastax.github.io/java-driver/3.0.0/manual/paging/, it seems that we 
can't jump between pages.

Is it possible to just return the PagingState object without returning data? 
E.g., if I want to jump to page 5 from page 1,
I need to go through each page from page 1 to page 5. Is it possible to just 
return the PagingState objects of pages 1, 2, 3 and 4 without the
actual data of each page? This can save some bandwidth at least.

Thanks in advance.

Boying




Re: Production Ready/Stable DataStax Java Driver

2016-05-08 Thread Alex Popescu
On Sun, May 8, 2016 at 10:00 AM, Anuj Wadehra 
wrote:

> As 3.x driver supports all 1.2+ Cassandra versions, I would also like to
> better understand the motivation of having 2.1 releases simultaneously with
> 3.x releases of Java driver.


Hi Anuj,

Both Apache Cassandra and the DataStax drivers are evolving fast with
significant improvements across the board. While we support and provide the
latest and greatest, we do also support the users that are already in
production and allow them enough time to upgrade. Major releases sometimes
introduce breaking changes. That's unfortunate, but it is sometimes the only
way we can push things forward.

I do agree with your assessment 1000% that if starting now, the best
version to go with is the latest on the 3.0 branch.


-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax



» DataStax Enterprise - the database for cloud applications. «


Re: Production Ready/Stable DataStax Java Driver

2016-05-08 Thread Anuj Wadehra
Thanks Alex !!
We are starting to use CQL for the first time (we have been using Thrift until 
now), so I think it makes sense to use Java driver 3.0.1 directly instead of 2.1.10.

As the 3.x driver supports all 1.2+ Cassandra versions, I would also like to 
better understand the motivation for having 2.1 releases simultaneously with 
3.x releases of the Java driver.
One obvious reason should be the "breaking changes" in 3.x. So, 2.1.x bug-fix 
releases give existing 2.1 users some breathing room to get ready to 
accommodate those breaking changes in their code, instead of forcing them to 
make those changes at short notice and upgrade to 3.x immediately. Is that 
understanding correct?



Thanks
Anuj

Sent from Yahoo Mail on Android

On Sun, 8 May, 2016 at 9:01 PM, Alex Popescu wrote:
Hi Anuj,

All released versions of the DataStax Java driver are production ready:
1. they all go through the complete QA cycle
2. we merge all bug fixes and improvements upstream.

Now, if you are asking which is currently the most deployed version, that's 2.1
(latest version 2.1.10.1 [1]).

If you want to be ready for future Cassandra upgrades and benefit from the
latest features of the Java driver, then that's the 3.0 branch (latest version
3.0.1 [2]).

Last but not least, when making the decision you should also consider that our
current focus and main development goes into the 3.x branch, and that 2.1 is in
maintenance mode (meaning that no new features will be added and it will only
see critical bug fixes).

Bottom line: if your application is not already developed against the 2.1
version of the Java driver, you should use the latest 3.0 release.

[1]:
https://groups.google.com/a/lists.datastax.com/d/msg/java-driver-user/bYQSUvKQm5k/JduPTt7cGAAJ
[2]:
https://groups.google.com/a/lists.datastax.com/d/msg/java-driver-user/tOWZm4RVbm4/5E_aDAc8IAAJ

On Sun, May 8, 2016 at 7:39 AM, Anuj Wadehra  wrote:

Hi,
Which DataStax Java Driver release is most stable (production ready) for 
Cassandra 2.1?
Thanks
Anuj






-- 
Bests,
Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax





  


Re: Production Ready/Stable DataStax Java Driver

2016-05-08 Thread Alex Popescu
Hi Anuj,

All released versions of the DataStax Java driver are production ready:

1. they all go through the complete QA cycle
2. we merge all bug fixes and improvements upstream.

Now, if you are asking which is currently the most deployed version, that's
2.1 (latest version 2.1.10.1 [1]).

If you want to be ready for future Cassandra upgrades and benefit from the
latest features of the Java driver, then
that's the 3.0 branch (latest version 3.0.1 [2]).

Last but not least, when making the decision you should also consider that
our current focus and main development
goes into the 3.x branch and that 2.1 is in maintenance mode (meaning
that no new features will be added and it
will only see critical bug fixes).

Bottom line, if your application is not already developed against the 2.1
version of the Java driver, you should use
the latest 3.0 release.


[1]:
https://groups.google.com/a/lists.datastax.com/d/msg/java-driver-user/bYQSUvKQm5k/JduPTt7cGAAJ

[2]:
https://groups.google.com/a/lists.datastax.com/d/msg/java-driver-user/tOWZm4RVbm4/5E_aDAc8IAAJ


On Sun, May 8, 2016 at 7:39 AM, Anuj Wadehra  wrote:

> Hi,
>
> Which DataStax Java Driver release is most stable (production ready) for
> Cassandra 2.1?
>
> Thanks
> Anuj
>
>
>


-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax




Production Ready/Stable DataStax Java Driver

2016-05-08 Thread Anuj Wadehra
Hi,
Which DataStax Java Driver release is most stable (production ready) for 
Cassandra 2.1?
Thanks
Anuj




Re: datastax java driver Batch vs BatchStatement

2016-03-25 Thread Alexandre Dutra
Hi,

Query builder's Batch simply sends a QUERY message through the wire, where
the query string is a CQL batch statement:
"BEGIN BATCH ... APPLY BATCH".

BatchStatement actually sends a BATCH message
through the wire, and indeed is only available from protocol v2 onwards.

Both are valid ways of executing a batch and are semantically equivalent;
one big advantage of BatchStatement vs Batch is that you can group prepared
statements together and execute them as a batch.

However, neither Batch nor BatchStatement will split big batches into
smaller ones, AFAIK.
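
To make the difference concrete, here is a sketch of both styles (placeholder keyspace/table names; a reachable cluster is assumed):

```java
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.querybuilder.Batch;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public class BatchExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_ks")) { // placeholder keyspace

            // 1) QueryBuilder Batch: serialized as a single CQL string,
            //    "BEGIN BATCH ... APPLY BATCH", and sent as a QUERY message.
            Batch batch = QueryBuilder.batch(
                    QueryBuilder.insertInto("users").value("id", 1).value("name", "a"),
                    QueryBuilder.insertInto("users").value("id", 2).value("name", "b"));
            session.execute(batch);

            // 2) BatchStatement: sent as a native BATCH message (protocol v2+);
            //    unlike Batch, it can group *prepared* statements.
            PreparedStatement ps =
                    session.prepare("INSERT INTO users (id, name) VALUES (?, ?)");
            BatchStatement bs = new BatchStatement();
            bs.add(ps.bind(1, "a"));
            bs.add(ps.bind(2, "b"));
            session.execute(bs);
        }
    }
}
```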

Thanks
Alexandre


On Fri, Mar 25, 2016 at 4:44 AM Jimmy Lin  wrote:

> Hi all,
> What is the difference between datastax driver Batch and BatchStatement?
>
> In particular, BatchStatment call out that it needs native protocol of
> version 2 or above.
> What is the advantage using native protocol 2.0  for batch execution?
>
> Will either of these two APIs be smart enough to split a big batch into
> multiple smaller ones?
> (to avoid batch_size_warn_threshold_in_kb or
> batch_size_fail_threshold_in_kb
> )
>
> Thanks
>
> Batch
>
> https://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/querybuilder/Batch.html
>
> BatchStatement
>
> https://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/BatchStatement.html
>
-- 
Alexandre Dutra
Driver & Tools Engineer @ DataStax


datastax java driver Batch vs BatchStatement

2016-03-24 Thread Jimmy Lin
Hi all,
What is the difference between datastax driver Batch and BatchStatement?

In particular, BatchStatement calls out that it needs native protocol
version 2 or above.
What is the advantage of using native protocol 2.0 for batch execution?

Will either of these two APIs be smart enough to split a big batch into
multiple smaller ones?
(to avoid batch_size_warn_threshold_in_kb or
batch_size_fail_threshold_in_kb)

Thanks

Batch
https://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/querybuilder/Batch.html

BatchStatement
https://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/BatchStatement.html


Re: Can we set TTL on individual fields (columns) using the Datastax java-driver

2016-02-08 Thread DuyHai Doan
I think you should direct your request to the java driver mailing list:
https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user

To answer your question, no, there is no @Ttl annotation on the
driver-mapping module, even in the latest release:
https://github.com/datastax/java-driver/tree/3.0/driver-mapping/src/main/java/com/datastax/driver/mapping/annotations

You'll need to handle the insertion with TTL yourself or look at other
object mappers.
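One way to "handle the insertion with TTL yourself" is CQL's USING TTL clause. A driver-free sketch that builds such a statement for later preparation (the helper is hypothetical; the table and column names are just the ones from the example below; the TTL applies to every non-key cell written by the insert):

```java
public class TtlInsert {
    // Build an INSERT with a per-statement TTL; values are left as bind
    // markers so the string can be prepared once and reused.
    public static String insertWithTtl(String table, int ttlSeconds, String... columns) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table).append(" (");
        sb.append(String.join(", ", columns));
        sb.append(") VALUES (");
        sb.append("?, ".repeat(columns.length - 1)).append("?");
        sb.append(") USING TTL ").append(ttlSeconds);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(insertWithTtl("ks.a", 10, "pojo_key", "pojo_temporary_guest"));
        // INSERT INTO ks.a (pojo_key, pojo_temporary_guest) VALUES (?, ?) USING TTL 10
    }
}
```

After 10 seconds the pojo_temporary_guest cell expires and reads return null for it; the row as a whole disappears only once all its non-key cells have expired.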


On Mon, Feb 8, 2016 at 8:27 PM, Ajay Garg  wrote:

> Something like ::
>
>
> ##
> class A {
>
>   @Id
>   @Column (name = "pojo_key")
>   int key;
>
>   @Ttl(10)
>   @Column (name = "pojo_temporary_guest")
>   String guest;
>
> }
> ##
>
>
> When I persist, let's say value "ajay" in guest-field
> (pojo_temporary_guest column), it stays forever, and does not become "null"
> after 10 seconds.
>
> Kindly point me what I am doing wrong.
> I will be grateful.
>
>
> Thanks and Regards,
> Ajay
>


Can we set TTL on individual fields (columns) using the Datastax java-driver

2016-02-08 Thread Ajay Garg
Something like ::


##
class A {

  @Id
  @Column (name = "pojo_key")
  int key;

  @Ttl(10)
  @Column (name = "pojo_temporary_guest")
  String guest;

}
##


When I persist, let's say value "ajay" in guest-field (pojo_temporary_guest
column), it stays forever, and does not become "null" after 10 seconds.

Kindly point me what I am doing wrong.
I will be grateful.


Thanks and Regards,
Ajay


cassandra 3.0 and datastax java driver 3.0.0 beta1: unresolved user type DoubleType

2015-11-10 Thread Vova Shelgunov
Hi All,

When I try to insert an object of the attached "table_1.png" class,
I get the error:

com.datastax.driver.core.exceptions.UnresolvedUserTypeException: Cannot
resolve user type keyspace1."org.apache.cassandra.db.marshal.DoubleType"

Could you please suggest the solution?
Thank you


Re: Do I have to use the cql in the datastax java driver?

2015-11-09 Thread Robert Coli
On Sun, Nov 8, 2015 at 6:57 AM, Jonathan Haddad  wrote:

> You shouldn't use thrift, it's effectively dead.
>


> On Fri, Nov 6, 2015 at 10:30 PM Dikang Gu  wrote:
>
>> Can I still use thrift interface to talk to cassandra? Any reason that we
>> should not use thrift anymore?
>>
>
I agree with Jonathan.

In my opinion, Thrift is highly likely to eventually be removed from
Cassandra. I recommend that operators of new projects not use it.

=Rob


Re: Do I have to use the cql in the datastax java driver?

2015-11-08 Thread Jonathan Haddad
You shouldn't use thrift, it's effectively dead.
On Fri, Nov 6, 2015 at 10:30 PM Dikang Gu  wrote:

> Hi there,
>
> In the datastax java driver, do I have to use the cql to talk to cassandra
> cluster?
>
> Can I still use thrift interface to talk to cassandra? Any reason that we
> should not use thrift anymore?
>
> Thanks.
> --
> Dikang
>
>


Do I have to use the cql in the datastax java driver?

2015-11-06 Thread Dikang Gu
Hi there,

In the datastax java driver, do I have to use the cql to talk to cassandra
cluster?

Can I still use thrift interface to talk to cassandra? Any reason that we
should not use thrift anymore?

Thanks.
-- 
Dikang


Re: Does datastax java driver works with ipv6 address?

2015-11-05 Thread Eric Stevens
The server is binding to the IPv4 "all addresses" reserved address
(0.0.0.0), but binding it as IPv4 over IPv6 (:::0.0.0.0), which does
not have the same meaning as the IPv6 all addresses reserved IP (being ::,
aka 0:0:0:0:0:0:0:0).

My guess is you have an IPv4 address of 0.0.0.0 in rpc_address, and the
server is binding as instructed.  Probably you just need to set rpc_address
to either :: or the node's actual IPv6 address.
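In cassandra.yaml terms, the fix looks like this (a sketch; the concrete address is the one from this thread, and note that when binding to a wildcard address Cassandra requires broadcast_rpc_address to be set as well):

```yaml
# Bind the native transport to all interfaces, IPv6 included:
rpc_address: "::"
broadcast_rpc_address: 2401:db00:11:60ed:face:0:31:0

# ...or simply bind to the node's actual IPv6 address instead:
# rpc_address: 2401:db00:11:60ed:face:0:31:0
```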

On Wed, Nov 4, 2015 at 10:36 PM Dikang Gu  wrote:

> Thanks Michael,
>
> Actually I find the problem is with the server setup. I put "rpc_address:
> 0.0.0.0" in the config, and I find the server binds to the address like this:
>
> tcp0  0 :::9160 :::*
>  LISTEN  2411582/java
> tcp0  0 :::0.0.0.0:9042 :::*
>LISTEN  2411582/java
>
> So using the server IP "2401:db00:11:60ed:face:0:31:0", I can connect to
> the thrift port 9160, but not the native port 9042. Do you know the reason
> for this?
>
> Thanks
> Dikang.
>
>
> On Wed, Nov 4, 2015 at 12:29 PM, Michael Shuler 
> wrote:
>
>> On 11/04/2015 11:17 AM, Dikang Gu wrote:
>>
>>> I have ipv6 only cassandra cluster, and I'm trying to connect to it
>>> using java driver, like:
>>>
>>> Inet6Address inet6 = (Inet6Address)
>>> InetAddress.getByName("2401:db00:0011:60ed:face::0031:");
>>> cluster = Cluster.builder().addContactPointsWithPorts(Arrays.asList(new
>>> InetSocketAddress(inet6,9042))).build();
>>> session =cluster.connect(CASSANDRA_KEYSPACE);
>>>
>>> But it failed to connect to the cassandra, looks like the java driver
>>> does not parse the ipv6 address correctly, exceptions are:
>>>
>>> 
>>
>> Open a JIRA bug report for the java driver at:
>>
>>   https://datastax-oss.atlassian.net/browse/JAVA
>>
>> As for IPv6 testing for Cassandra in general, it has been brought up, but
>> little testing is done at this time. If you have some contributions to be
>> made in this area, I'm sure they would be greatly appreciated. You are in a
>> relatively unique position with an IPv6-only cluster, so your input is
>> valuable.
>>
>>
>>
>> https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSANDRA%20AND%20text%20~%20ipv6%20AND%20status%20!%3D%20Resolved
>>
>> --
>> Kind regards,
>> Michael
>>
>>
>
>
> --
> Dikang
>
>


Re: Question for datastax java Driver

2015-11-05 Thread Eric Stevens
In short: Yes, but it's not a good idea.

To do it, you want to look into WhiteListPolicy for your load-balancing
policy; if your WhiteListPolicy contains only the same host(s) that you
added as contact points, then the client will only connect to those hosts.

However it's probably not a good idea for several reasons.

First, it's directly at odds with Cassandra's availability guarantees.  If
you connect only to one node, and that node goes down, your client has lost
the ability to communicate with the cluster *at all*.  Even though you
(presumably) have replication set up, and the cluster is fully capable of
answering questions and taking writes with that node offline.  If you
permit the default behavior, your client remains connected and functional
through node losses (one or more depending on your replication factor).

Second, this produces coordination overhead, which increases latency for
your requests as well as GC pressure in your cluster.  When you do an
operation on a host that does not own that data, that host will in turn
communicate with the host(s) that *do* own that data.  This is work that
doesn't have to happen, because the java driver can do that work itself,
and communicate directly with primary replicas.  This saves a network hop
(reducing latency) and saves GC pressure in the cluster (the hosts don't
have to coordinate operations, and the requests complete more quickly).

Aside from very narrow scenarios (perhaps diagnostic ones where you're
testing a specific host that you suspect to be misbehaving), I can't think
of a reason you'd want to do this.
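In driver terms this means building the Cluster with a WhiteListPolicy that wraps your normal child policy and contains the same addresses as your contact points. Its effect can be sketched driver-free as filtering the discovered hosts down to an allow-list (host strings here are illustrative):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class WhiteListSketch {
    // The essence of WhiteListPolicy: of all hosts the driver discovers in
    // the cluster, only those on the white list are eligible coordinators.
    public static List<String> eligibleHosts(List<String> discovered, Set<String> whiteList) {
        return discovered.stream()
                .filter(whiteList::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> discovered = List.of("10.0.0.1:9042", "10.0.0.2:9042", "10.0.0.3:9042");
        System.out.println(eligibleHosts(discovered, Set.of("10.0.0.1:9042")));
        // [10.0.0.1:9042]
    }
}
```

With the real driver this corresponds to `new WhiteListPolicy(childPolicy, whiteListAddresses)` passed to `Cluster.builder().withLoadBalancingPolicy(...)`.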

On Wed, Nov 4, 2015 at 10:32 PM Dikang Gu  wrote:

> Hi there,
>
> Right now, it seems if I add a contact point like this:
>
> cluster = Cluster.builder().addContactPoint().build();
>
> When client is connected to the cluster, client will fetch the addresses
> for all the nodes in the cluster, and try to connect to them.
>
> I'm wondering, can I disable this behavior? I mean I just want each client
> to connect to one or several contact points, not connect to all of the
> nodes. Am I able to do this?
>
> Thanks.
> --
> Dikang
>
>


Re: Does datastax java driver works with ipv6 address?

2015-11-04 Thread Dikang Gu
Thanks Michael,

Actually I find the problem is with the server setup. I put "rpc_address:
0.0.0.0" in the config, and I find the server binds to the address like this:

tcp0  0 :::9160 :::*
 LISTEN  2411582/java
tcp0  0 :::0.0.0.0:9042 :::*
 LISTEN  2411582/java

So using the server IP "2401:db00:11:60ed:face:0:31:0", I can connect to the
thrift port 9160, but not the native port 9042. Do you know the reason for
this?

Thanks
Dikang.


On Wed, Nov 4, 2015 at 12:29 PM, Michael Shuler 
wrote:

> On 11/04/2015 11:17 AM, Dikang Gu wrote:
>
>> I have ipv6 only cassandra cluster, and I'm trying to connect to it
>> using java driver, like:
>>
>> Inet6Address inet6 = (Inet6Address)
>> InetAddress.getByName("2401:db00:0011:60ed:face::0031:");
>> cluster = Cluster.builder().addContactPointsWithPorts(Arrays.asList(new
>> InetSocketAddress(inet6,9042))).build();
>> session =cluster.connect(CASSANDRA_KEYSPACE);
>>
>> But it failed to connect to the cassandra, looks like the java driver
>> does not parse the ipv6 address correctly, exceptions are:
>>
>> 
>
> Open a JIRA bug report for the java driver at:
>
>   https://datastax-oss.atlassian.net/browse/JAVA
>
> As for IPv6 testing for Cassandra in general, it has been brought up, but
> little testing is done at this time. If you have some contributions to be
> made in this area, I'm sure they would be greatly appreciated. You are in a
> relatively unique position with an IPv6-only cluster, so your input is
> valuable.
>
>
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSANDRA%20AND%20text%20~%20ipv6%20AND%20status%20!%3D%20Resolved
>
> --
> Kind regards,
> Michael
>
>


-- 
Dikang


Question for datastax java Driver

2015-11-04 Thread Dikang Gu
Hi there,

Right now, it seems if I add a contact point like this:

cluster = Cluster.builder().addContactPoint().build();

When client is connected to the cluster, client will fetch the addresses
for all the nodes in the cluster, and try to connect to them.

I'm wondering, can I disable this behavior? I mean I just want each client to
connect to one or several contact points, not connect to all of the nodes.
Am I able to do this?

Thanks.
-- 
Dikang


Re: Does datastax java driver works with ipv6 address?

2015-11-04 Thread Michael Shuler

On 11/04/2015 11:17 AM, Dikang Gu wrote:

I have ipv6 only cassandra cluster, and I'm trying to connect to it
using java driver, like:

Inet6Address inet6 = (Inet6Address) 
InetAddress.getByName("2401:db00:0011:60ed:face::0031:");
cluster = Cluster.builder().addContactPointsWithPorts(Arrays.asList(new 
InetSocketAddress(inet6,9042))).build();
session =cluster.connect(CASSANDRA_KEYSPACE);

But it failed to connect to the cassandra, looks like the java driver
does not parse the ipv6 address correctly, exceptions are:




Open a JIRA bug report for the java driver at:

  https://datastax-oss.atlassian.net/browse/JAVA

As for IPv6 testing for Cassandra in general, it has been brought up, 
but little testing is done at this time. If you have some contributions 
to be made in this area, I'm sure they would be greatly appreciated. You 
are in a relatively unique position with an IPv6-only cluster, so your 
input is valuable.



https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSANDRA%20AND%20text%20~%20ipv6%20AND%20status%20!%3D%20Resolved

--
Kind regards,
Michael



Does datastax java driver works with ipv6 address?

2015-11-04 Thread Dikang Gu
Hi there,

I have ipv6 only cassandra cluster, and I'm trying to connect to it using
java driver, like:

Inet6Address inet6 = (Inet6Address)
InetAddress.getByName("2401:db00:0011:60ed:face::0031:");
cluster = Cluster.builder().addContactPointsWithPorts(Arrays.asList(new
InetSocketAddress(inet6, 9042))).build();
session = cluster.connect(CASSANDRA_KEYSPACE);

But it failed to connect to the cassandra, looks like the java driver does
not parse the ipv6 address correctly, exceptions are:

337 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Connection  -
Connection[/2401:db00:11:60ed:face:0:31:0:9042-1, inFlight=0, closed=true]
closing connection
339 [main] DEBUG com.datastax.driver.core.ControlConnection  - [Control
connection] error on /2401:db00:11:60ed:face:0:31:0:9042 connection, no
more host to try
com.datastax.driver.core.TransportException:
[/2401:db00:11:60ed:face:0:31:0:9042] Cannot connect
at
com.datastax.driver.core.Connection$1.operationComplete(Connection.java:156)
at
com.datastax.driver.core.Connection$1.operationComplete(Connection.java:139)
at
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at
io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at
io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
at
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused:
/2401:db00:11:60ed:face:0:31:0:9042
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at
io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281)
... 6 more
342 [main] DEBUG com.datastax.driver.core.AbstractReconnectionHandler  -
First reconnection scheduled in 1000ms
342 [main] DEBUG com.datastax.driver.core.AbstractReconnectionHandler  -
Becoming the active handler
342 [main] DEBUG com.datastax.driver.core.Cluster  - Shutting down
Exception in thread "main"
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (tried: /2401:db00:11:60ed:face:0:31:0:9042
(com.datastax.driver.core.TransportException:
[/2401:db00:11:60ed:face:0:31:0:9042] Cannot connect))
at
com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at
com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
at com.datastax.driver.core.Cluster.connect(Cluster.java:281)

-- 
Dikang


Re: Can consistency-levels be different for "read" and "write" in Datastax Java-Driver?

2015-10-26 Thread daemeon reiydelle
If one rethinks "consistency" to mean "copies returned" (on reads) and
"copies written" (on writes), then one can have different values for the
former (set in the DataStax driver) and the latter (within Cassandra). The
latter changes eventual consistency (e.g. two copies must be written); the
former can speed up a result at the (slight) risk of stale data. I have no
experience with the former, I just recall it from somewhere in the
documentation: n-copy eventual consistency is fine for all of my work.
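The rule of thumb behind mixing "copies returned" and "copies written": with replication factor RF, a read is guaranteed to overlap the most recent acknowledged write whenever R + W > RF, which is why quorum reads combined with quorum writes give strong consistency. A tiny driver-free check of that arithmetic:

```java
public class ConsistencyCheck {
    // A quorum for a given replication factor is floor(rf / 2) + 1 replicas.
    public static int quorum(int rf) {
        return rf / 2 + 1;
    }

    // The read and write replica sets must intersect for reads to be
    // guaranteed to see the latest acknowledged write.
    public static boolean stronglyConsistent(int rf, int readCopies, int writeCopies) {
        return readCopies + writeCopies > rf;
    }

    public static void main(String[] args) {
        int rf = 3;
        System.out.println(stronglyConsistent(rf, quorum(rf), quorum(rf))); // true: 2 + 2 > 3
        System.out.println(stronglyConsistent(rf, 1, 1)); // false: stale reads possible
    }
}
```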



“Life should not be a journey to the grave with the intention of arriving
safely in a pretty and well preserved body, but rather to skid in broadside
in a cloud of smoke, thoroughly used up, totally worn out, and loudly
proclaiming ‘Wow! What a Ride!’” - Hunter Thompson

Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872

On Mon, Oct 26, 2015 at 11:52 AM, Jonathan Haddad  wrote:

> What's your query?  Do you have IF NOT EXISTS in there?
>
> On Mon, Oct 26, 2015 at 11:17 AM Ajay Garg  wrote:
>
>> Right now, I have set up "LOCAL_QUORUM" as the consistency level in the
>> driver, but it seems that "SERIAL" is being used during writes, and I
>> consistently get this error:
>>
>> *Cassandra timeout during write query at consistency SERIAL (3 replica
>> were required but only 0 acknowledged the write)*
>>
>>
>> Am I missing something?
>>
>>
>>
>> --
>> Regards,
>> Ajay
>>
>


Re: Can consistency-levels be different for "read" and "write" in Datastax Java-Driver?

2015-10-26 Thread Jonathan Haddad
What's your query?  Do you have IF NOT EXISTS in there?

On Mon, Oct 26, 2015 at 11:17 AM Ajay Garg  wrote:

> Right now, I have set up "LOCAL_QUORUM" as the consistency level in the
> driver, but it seems that "SERIAL" is being used during writes, and I
> consistently get this error:
>
> *Cassandra timeout during write query at consistency SERIAL (3 replica
> were required but only 0 acknowledged the write)*
>
>
> Am I missing something?
>
>
>
> --
> Regards,
> Ajay
>


Can consistency-levels be different for "read" and "write" in Datastax Java-Driver?

2015-10-26 Thread Ajay Garg
Right now, I have set up "LOCAL_QUORUM" as the consistency level in the
driver, but it seems that "SERIAL" is being used during writes, and I
consistently get this error:

*Cassandra timeout during write query at consistency SERIAL (3 replica were
required but only 0 acknowledged the write)*


Am I missing something?


-- 
Regards,
Ajay


Re: cassandra 3.0 rc1 and datastax java driver 3.0.0 alpha3

2015-10-12 Thread Alex Popescu
You'll have better chances to get an answer directly on the Java driver
mailing list:
https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user

thanks

On Sat, Oct 10, 2015 at 4:30 PM, Vova Shelgunov  wrote:

> Hi all,
>
> I've tried to connect to the cassandra 3.0 cluster, using datastax java
> driver, but I got the following exception when I tried to create a
> MappingManager:
>
> Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException:
> Codec not found for requested operation:
> [varchar <-> V]
> at
> com.datastax.driver.core.CodecRegistry.newException(CodecRegistry.java:647)
> at
> com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:499)
> at
> com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:481)
> at
> com.datastax.driver.core.CodecRegistry.maybeCreateCodec(CodecRegistry.java:554)
> at
> com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:497)
> at
> com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:481)
> at
> com.datastax.driver.core.CodecRegistry.access$400(CodecRegistry.java:143)
> at
> com.datastax.driver.core.CodecRegistry$1.load(CodecRegistry.java:295)
> at
> com.datastax.driver.core.CodecRegistry$1.load(CodecRegistry.java:293)
> at
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
> at
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
> at
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
> at
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
> at
> com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
> at
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
> at
> com.datastax.driver.core.CodecRegistry.lookupCodec(CodecRegistry.java:457)
> at
> com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:426)
> at
> com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:89)
> at
> com.datastax.driver.core.AbstractGettableByIndexData.getMap(AbstractGettableByIndexData.java:390)
> at
> com.datastax.driver.core.AbstractGettableData.getMap(AbstractGettableData.java:26)
> at
> com.datastax.driver.core.AbstractGettableByIndexData.getMap(AbstractGettableByIndexData.java:378)
> at
> com.datastax.driver.core.AbstractGettableData.getMap(AbstractGettableData.java:26)
> at
> com.datastax.driver.core.AbstractGettableData.getMap(AbstractGettableData.java:233)
> at
> com.datastax.driver.core.KeyspaceMetadata.build(KeyspaceMetadata.java:70)
> at
> com.datastax.driver.core.SchemaParser.buildKeyspaces(SchemaParser.java:116)
> at
> com.datastax.driver.core.SchemaParser.refresh(SchemaParser.java:61)
> at
> com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:328)
> at
> com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:258)
> at
> com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:185)
> at
> com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
> at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1372)
> at com.datastax.driver.core.Cluster.init(Cluster.java:160)
> at
> com.datastax.driver.core.SessionManager.initAsync(SessionManager.java:75)
> at
> com.datastax.driver.core.SessionManager.init(SessionManager.java:67)
> at
> com.datastax.driver.mapping.MappingManager.getProtocolVersion(MappingManager.java:65)
> at
> com.datastax.driver.mapping.MappingManager.(MappingManager.java:56)
>
> Could you please say what it means?
>



-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax


cassandra 3.0 rc1 and datastax java driver 3.0.0 alpha3

2015-10-10 Thread Vova Shelgunov
Hi all,

I've tried to connect to the cassandra 3.0 cluster, using datastax java
driver, but I got the following exception when I tried to create a
MappingManager:

Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException:
Codec not found for requested operation:
[varchar <-> V]
at
com.datastax.driver.core.CodecRegistry.newException(CodecRegistry.java:647)
at
com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:499)
at
com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:481)
at
com.datastax.driver.core.CodecRegistry.maybeCreateCodec(CodecRegistry.java:554)
at
com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:497)
at
com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:481)
at
com.datastax.driver.core.CodecRegistry.access$400(CodecRegistry.java:143)
at
com.datastax.driver.core.CodecRegistry$1.load(CodecRegistry.java:295)
at
com.datastax.driver.core.CodecRegistry$1.load(CodecRegistry.java:293)
at
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
at
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
at
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
at
com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
at
com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
at
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
at
com.datastax.driver.core.CodecRegistry.lookupCodec(CodecRegistry.java:457)
at
com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:426)
at
com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:89)
at
com.datastax.driver.core.AbstractGettableByIndexData.getMap(AbstractGettableByIndexData.java:390)
at
com.datastax.driver.core.AbstractGettableData.getMap(AbstractGettableData.java:26)
at
com.datastax.driver.core.AbstractGettableByIndexData.getMap(AbstractGettableByIndexData.java:378)
at
com.datastax.driver.core.AbstractGettableData.getMap(AbstractGettableData.java:26)
at
com.datastax.driver.core.AbstractGettableData.getMap(AbstractGettableData.java:233)
at
com.datastax.driver.core.KeyspaceMetadata.build(KeyspaceMetadata.java:70)
at
com.datastax.driver.core.SchemaParser.buildKeyspaces(SchemaParser.java:116)
at
com.datastax.driver.core.SchemaParser.refresh(SchemaParser.java:61)
at
com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:328)
at
com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:258)
at
com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:185)
at
com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1372)
at com.datastax.driver.core.Cluster.init(Cluster.java:160)
at
com.datastax.driver.core.SessionManager.initAsync(SessionManager.java:75)
at
com.datastax.driver.core.SessionManager.init(SessionManager.java:67)
at
com.datastax.driver.mapping.MappingManager.getProtocolVersion(MappingManager.java:65)
at
com.datastax.driver.mapping.MappingManager.(MappingManager.java:56)

Could you please say what it means?


Re: Datastax Java Driver vs Cassandra 2.1.7

2015-06-23 Thread Jean Tremblay
I agree. Thanks a lot.
On 23 Jun 2015, at 15:31 , Sam Tunnicliffe 
mailto:s...@beobal.com>> wrote:

Although amending the query is a workaround for this (and duplicating the 
columns in the selection is not something I imagine one would deliberately do), 
this is still an ugly regression, so I've opened 
https://issues.apache.org/jira/browse/CASSANDRA-9636 to fix it.

Thanks,
Sam

On Tue, Jun 23, 2015 at 1:52 PM, Jean Tremblay 
mailto:jean.tremb...@zen-innovations.com>> 
wrote:
Hi Sam,

You have a real good gut feeling.
I went back to the query that I have used for many months… which was working… but
obviously there was something wrong with it.
The problem with it was simply that I placed the same field twice in the
select. I corrected my code and now I don’t have the error with 2.1.7.

This provoked the error on the nodes:
ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 - 
Unexpected exception during request; channel = [id: 0x5e809aa1, 
/192.168.2.8:49581<http://192.168.2.8:49581/> => 
/192.168.2.201:9042<http://192.168.2.201:9042/>]
java.lang.AssertionError: null
at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
 ~[apache-cassandra-2.1.7.jar:2.1.7]

I can also reproduce the error on cqlsh:

cqlsh> select c1, p1, mm, c2, iq, iq from ds.t1 where type='D' and c1=1 and 
mm>=201401 and mm<=201402 and p1='01';
ServerError: 
cqlsh> select c1, p1, mm, c2, iq  from ds.t1 where type='D' and c1=1 and 
mm>=201401 and mm<=201402 and p1='01';

 c1 | p1 | mm     | c2 | iq
----+----+--------+----+----------------
  1 | 01 | 201401 |  1 | {'XX': 97160}
…

Conclusion… my mistake. Sorry.


On 23 Jun 2015, at 13:06 , Sam Tunnicliffe 
mailto:s...@beobal.com>> wrote:

Can you share the query that you're executing when you see the error and the 
schema of the target table? It could be something related to CASSANDRA-9532.

On Tue, Jun 23, 2015 at 10:05 AM, Jean Tremblay 
mailto:jean.tremb...@zen-innovations.com>> 
wrote:
Hi,

I’m using Datastax Java Driver V 2.1.6
I migrated my cluster to Cassandra V2.1.7
And now I have an error on my client that goes like:

2015-06-23 10:49:11.914  WARN 20955 --- [ I/O worker #14] 
com.datastax.driver.core.RequestHandler  : 
/192.168.2.201:9042<http://192.168.2.201:9042/> replied with server error 
(java.lang.AssertionError), trying next host.

And on the node I have an AssertionError:

ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 - 
Unexpected exception during request; channel = [id: 0x5e809aa1, 
/192.168.2.8:49581<http://192.168.2.8:49581/> => 
/192.168.2.201:9042<http://192.168.2.201:9042/>]
java.lang.AssertionError: null
at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1289)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1223)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.7.jar:2.1.7]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext

Re: Datastax Java Driver vs Cassandra 2.1.7

2015-06-23 Thread Sam Tunnicliffe
Although amending the query is a workaround for this (and duplicating the
columns in the selection is not something I imagine one would deliberately
do), this is still an ugly regression, so I've opened
https://issues.apache.org/jira/browse/CASSANDRA-9636 to fix it.

Thanks,
Sam

On Tue, Jun 23, 2015 at 1:52 PM, Jean Tremblay <
jean.tremb...@zen-innovations.com> wrote:

>  Hi Sam,
>
>  You have a real good gut feeling.
> I went back to the query that I have used for many months… which was
> working… but obviously there was something wrong with it.
> The problem with it was simply that I placed the same field twice in the
> select. I corrected my code and now I don’t have the error with 2.1.7.
>
>  This provoked the error on the nodes:
>
>ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 -
>> Unexpected exception during request; channel = [id: 0x5e809aa1, /
>> 192.168.2.8:49581 => /192.168.2.201:9042]
>> java.lang.AssertionError: null
>> at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>>
>
>> I can also reproduce the error on cqlsh:
>
>  cqlsh> select c1, p1, mm, c2, iq, iq from ds.t1 where type='D' and
> c1=1 and mm>=201401 and mm<=201402 and p1='01';
>  ServerError: <… message="java.lang.AssertionError">
> cqlsh> select c1, p1, mm, c2, iq  from ds.t1 where type='D' and c1=1
> and mm>=201401 and mm<=201402 and p1='01';
>
>   c1 | p1 | mm     | c2 | iq
> ----+----+--------+----+----------------
>    1 | 01 | 201401 |  1 | {'XX': 97160}
>  …
>
>  Conclusion… my mistake. Sorry.
>
>
>   On 23 Jun 2015, at 13:06, Sam Tunnicliffe wrote:
>
>  Can you share the query that you're executing when you see the error and
> the schema of the target table? It could be something related to
> CASSANDRA-9532.
>
> On Tue, Jun 23, 2015 at 10:05 AM, Jean Tremblay <
> jean.tremb...@zen-innovations.com> wrote:
>
>> Hi,
>>
>>  I’m using Datastax Java Driver V 2.1.6
>> I migrated my cluster to Cassandra V2.1.7
>> And now I have an error on my client that goes like:
>>
>>  2015-06-23 10:49:11.914  WARN 20955 --- [ I/O worker #14]
>> com.datastax.driver.core.RequestHandler  : /192.168.2.201:9042 replied
>> with server error (java.lang.AssertionError), trying next host.
>>
>>  And on the node I have an NPE
>>
>>  ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 -
>> Unexpected exception during request; channel = [id: 0x5e809aa1, /
>> 192.168.2.8:49581 => /192.168.2.201:9042]
>> java.lang.AssertionError: null
>> at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1289)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1223)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:238)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
>> ~[apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
>> [apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
>> [apache-cassandra-2.1.7.jar:2.1.7]
>> at
>> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)

Re: Datastax Java Driver vs Cassandra 2.1.7

2015-06-23 Thread Jean Tremblay
Hi Sam,

You have a really good gut feeling.
I went back to the query that I have been using for many months… which was
working… but obviously there is something wrong with it.
The problem was simply that I placed the same field twice in the
select. I corrected my code and now I no longer get the error with 2.1.7.

This provoked the error on the nodes:
ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 - 
Unexpected exception during request; channel = [id: 0x5e809aa1, 
/192.168.2.8:49581 => /192.168.2.201:9042]
java.lang.AssertionError: null
at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
 ~[apache-cassandra-2.1.7.jar:2.1.7]

I can also reproduce the error on cqlsh:

cqlsh> select c1, p1, mm, c2, iq, iq from ds.t1 where type='D' and c1=1 and 
mm>=201401 and mm<=201402 and p1='01';
ServerError: 
cqlsh> select c1, p1, mm, c2, iq  from ds.t1 where type='D' and c1=1 and 
mm>=201401 and mm<=201402 and p1='01';

 c1 | p1 | mm     | c2 | iq
----+----+--------+----+----------------
  1 | 01 | 201401 |  1 | {'XX': 97160}
 …

Conclusion… my mistake. Sorry.
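[Editor's note] Until the CASSANDRA-9636 fix is deployed, a defensive client-side workaround is to deduplicate the selection list before building the statement. A minimal sketch in plain Java (the helper names are illustrative, not driver API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class SelectionDedupe {
    // Remove repeated column names while preserving order, so
    // "SELECT c1, iq, iq FROM ..." becomes "SELECT c1, iq FROM ...".
    static List<String> dedupe(List<String> columns) {
        return new ArrayList<>(new LinkedHashSet<>(columns));
    }

    static String buildSelect(String table, List<String> columns) {
        return "SELECT " + String.join(", ", dedupe(columns)) + " FROM " + table;
    }

    public static void main(String[] args) {
        String cql = buildSelect("ds.t1",
                Arrays.asList("c1", "p1", "mm", "c2", "iq", "iq"));
        System.out.println(cql); // SELECT c1, p1, mm, c2, iq FROM ds.t1
    }
}
```

With the duplicate removed, the query above returns normally instead of tripping the server-side assertion.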


On 23 Jun 2015, at 13:06, Sam Tunnicliffe <s...@beobal.com> wrote:

Can you share the query that you're executing when you see the error and the 
schema of the target table? It could be something related to CASSANDRA-9532.

On Tue, Jun 23, 2015 at 10:05 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,

I’m using Datastax Java Driver V 2.1.6
I migrated my cluster to Cassandra V2.1.7
And now I have an error on my client that goes like:

2015-06-23 10:49:11.914  WARN 20955 --- [ I/O worker #14] 
com.datastax.driver.core.RequestHandler  : /192.168.2.201:9042 replied with 
server error (java.lang.AssertionError), trying next host.

And on the node I get an AssertionError:

ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 - 
Unexpected exception during request; channel = [id: 0x5e809aa1, 
/192.168.2.8:49581 => /192.168.2.201:9042]
java.lang.AssertionError: null
at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1289)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1223)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.7.jar:2.1.7]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_45]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.7.jar:2.1.7]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.7.jar:2.1.7]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]

Is there a known problem on Cassandra 2.1.7?

Thanks for your comments.

Jean




Re: Datastax Java Driver vs Cassandra 2.1.7

2015-06-23 Thread Sam Tunnicliffe
Can you share the query that you're executing when you see the error and
the schema of the target table? It could be something related to
CASSANDRA-9532.

On Tue, Jun 23, 2015 at 10:05 AM, Jean Tremblay <
jean.tremb...@zen-innovations.com> wrote:

>  Hi,
>
>  I’m using Datastax Java Driver V 2.1.6
> I migrated my cluster to Cassandra V2.1.7
> And now I have an error on my client that goes like:
>
>  2015-06-23 10:49:11.914  WARN 20955 --- [ I/O worker #14]
> com.datastax.driver.core.RequestHandler  : /192.168.2.201:9042 replied
> with server error (java.lang.AssertionError), trying next host.
>
>  And on the node I have an NPE
>
>  ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 -
> Unexpected exception during request; channel = [id: 0x5e809aa1, /
> 192.168.2.8:49581 => /192.168.2.201:9042]
> java.lang.AssertionError: null
> at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1289)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1223)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:238)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
> [apache-cassandra-2.1.7.jar:2.1.7]
> at
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
> [apache-cassandra-2.1.7.jar:2.1.7]
> at
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [na:1.8.0_45]
> at
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
> [apache-cassandra-2.1.7.jar:2.1.7]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
> [apache-cassandra-2.1.7.jar:2.1.7]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
>
>  Is there a known problem on Cassandra 2.1.7?
>
>  Thanks for your comments.
>
>  Jean
>


Datastax Java Driver vs Cassandra 2.1.7

2015-06-23 Thread Jean Tremblay
Hi,

I’m using Datastax Java Driver V 2.1.6
I migrated my cluster to Cassandra V2.1.7
And now I have an error on my client that goes like:

2015-06-23 10:49:11.914  WARN 20955 --- [ I/O worker #14] 
com.datastax.driver.core.RequestHandler  : /192.168.2.201:9042 replied with 
server error (java.lang.AssertionError), trying next host.

And on the node I get an AssertionError:

ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 - 
Unexpected exception during request; channel = [id: 0x5e809aa1, 
/192.168.2.8:49581 => /192.168.2.201:9042]
java.lang.AssertionError: null
at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1289)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1223)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.7.jar:2.1.7]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.7.jar:2.1.7]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_45]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.7.jar:2.1.7]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.7.jar:2.1.7]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]

Is there a known problem on Cassandra 2.1.7?

Thanks for your comments.

Jean


RE: Problems with user defined types (cql) and Datastax Java Driver

2015-02-05 Thread Andreas Finke
Hi Alex,

I did so. Thanks for that hint.

Andi

From: Alex Popescu [al...@datastax.com]
Sent: 05 February 2015 18:14
To: user
Subject: Re: Problems with user defined types (cql) and Datastax Java Driver

Andreas,

Can you please post your question to the Java driver ml 
https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user 
as you'll have better chances to get an answer there.

thanks

On Thu, Feb 5, 2015 at 9:10 AM, Andreas Finke <andreas.fi...@solvians.com> wrote:
Hi,

I have encountered a problem where, in Java, the Session does not return a valid 
UserType for my corresponding CQL user-defined type.

CQL_SCHEMA:

create keyspace if not exists quotes
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

CREATE TYPE IF NOT EXISTS quotes.price (
value double,
size bigint,
timestamp bigint,
delay int
);

JAVA

UserType priceType = 
session.getCluster().getMetadata().getKeyspace("quotes").getUserType("price");
Assert.assertNotNull(priceType); // true
Assert.assertEquals("price", priceType.getTypeName()); // true
Assert.assertEquals(4, priceType.getFieldNames().size()); // 
AssertionFailedError: expected:<4> but was:<0>

I am testing with Cassandra v.2.1.2 on Windows using Datastax Java Driver 2.1.2.

I am thankful for any suggestions.

Regards
Andi



--

[:>-a)

Alex Popescu
Sen. Product Manager @ DataStax
@al3xandru


Re: Problems with user defined types (cql) and Datastax Java Driver

2015-02-05 Thread Alex Popescu
Andreas,

Can you please post your question to the Java driver ml
https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user
as you'll have better chances to get an answer there.

thanks

On Thu, Feb 5, 2015 at 9:10 AM, Andreas Finke 
wrote:

>  Hi,
>
>
>
> I have encountered a problem where, in Java, the Session does not return a
> valid UserType for my corresponding CQL user-defined type.
>
>
>
> CQL_SCHEMA:
>
>
>
> create keyspace if not exists quotes
>
> WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1
> };
>
>
>
> CREATE TYPE IF NOT EXISTS quotes.price (
>
> value double,
>
> size bigint,
>
> timestamp bigint,
>
> delay int
>
> );
>
>
>
> JAVA
>
>
>
> UserType priceType =
> session.getCluster().getMetadata().getKeyspace("quotes").getUserType("price");
>
> Assert.assertNotNull(priceType); // true
>
> Assert.assertEquals("price", priceType.getTypeName()); // true
>
> Assert.assertEquals(4, priceType.getFieldNames().size()); //
> AssertionFailedError: expected:<4> but was:<0>
>
>
>
> I am testing with Cassandra v.2.1.2 on Windows using Datastax Java Driver
> 2.1.2.
>
>
>
> I am thankful for any suggestions.
>
>
>
> Regards
>
> Andi
>



-- 

[:>-a)

Alex Popescu
Sen. Product Manager @ DataStax
@al3xandru


Problems with user defined types (cql) and Datastax Java Driver

2015-02-05 Thread Andreas Finke
Hi,

I have encountered a problem where, in Java, the Session does not return a valid 
UserType for my corresponding CQL user-defined type.

CQL_SCHEMA:

create keyspace if not exists quotes
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

CREATE TYPE IF NOT EXISTS quotes.price (
value double,
size bigint,
timestamp bigint,
delay int
);

JAVA

UserType priceType = 
session.getCluster().getMetadata().getKeyspace("quotes").getUserType("price");
Assert.assertNotNull(priceType); // true
Assert.assertEquals("price", priceType.getTypeName()); // true
Assert.assertEquals(4, priceType.getFieldNames().size()); // 
AssertionFailedError: expected:<4> but was:<0>

I am testing with Cassandra v.2.1.2 on Windows using Datastax Java Driver 2.1.2.

I am thankful for any suggestions.

Regards
Andi


Re: Random NoHostAvailableException using DataStax Java driver

2014-11-04 Thread Olivier Michallat
Hi,

Let's move the discussion to the Java driver mailing list:
https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user

I'm posting a reply to this message there.

--

Olivier Michallat

Driver & tools engineer, DataStax
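[Editor's note] The summary message above hides the per-host causes that getErrors() exposes. A hedged sketch against the 2.x driver API (the key type of the returned map changed across driver versions, so the host is handled generically; requires the DataStax driver on the classpath):

```java
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class ErrorInspection {
    // Log the per-host failure cause instead of only the summary message,
    // then rethrow so the caller still sees the failure.
    static void executeLogged(Session session, String cql) {
        try {
            session.execute(cql);
        } catch (NoHostAvailableException e) {
            e.getErrors().forEach((host, cause) ->
                    System.err.println(host + " failed: " + cause));
            throw e;
        }
    }
}
```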

On Tue, Nov 4, 2014 at 12:45 PM, Ola Nowak  wrote:

> Hi All :)
> I have an application which uses the DataStax Java driver v2.0.2 to connect
> to a Cassandra cluster (6 nodes, v2.0.11).
> The application is deployed in three copies on three different servers. From
> time to time, on a random application server, I get this exception:
>
> 2014-11-04 10:37:15,301 - ERROR: Servlet.service() for servlet [Unique
> Identifier Service] in context with path [/uis] threw exception
> [com.datastax.driver.core.exceptions.NoHostAvailableException: All
> host(s) tried for query failed (tried: [cassandra-11/10.0.0.11:9042,
> /10.0.0.10:9042, /10.0.0.12:9042, /10.0.0.7:9042, /10.0.0.9:9042,
> /10.0.0.8:9042] - use getErrors() for details)] with root cause
> com.datastax.driver.core.exceptions.NoHostAvailableException: All
> host(s) tried for query failed (tried: [cassandra-11/10.0.0.11:9042,
> /10.0.0.10:9042, /10.0.0.12:9042, /10.0.0.7:9042, /10.0.0.9:9042,
> /10.0.0.8:9042] - use getErrors() for details)
> at
> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
> at
> com.datastax.driver.core.SessionManager.execute(SessionManager.java:418)
> at
> com.datastax.driver.core.SessionManager.executeQuery(SessionManager.java:454)
> at
> com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:104)
> at
> com.datastax.driver.core.SessionManager.execute(SessionManager.java:92)
> at
> eu.europeana.cloud.service.uis.database.dao.CassandraDataProviderDAO.getProviders(CassandraDataProviderDAO.java:92)
> at
> eu.europeana.cloud.service.uis.CassandraDataProviderService.getProviders(CassandraDataProviderService.java:34)
> at
> eu.europeana.cloud.service.uis.rest.DataProvidersResource.getProviders(DataProvidersResource.java:54)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
> at
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151)
> at
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:171)
> at
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:195)
> at
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104)
> at
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:353)
> at
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:343)
> at
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
> at
> org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
> at
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:318)
> at
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:235)
> at
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:983)
> at
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:359)
> at
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:372)
> at
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:335)
> at
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:218)
> at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)

Random NoHostAvailableException using DataStax Java driver

2014-11-04 Thread Ola Nowak
Hi All :)
I have an application which uses the DataStax Java driver v2.0.2 to connect
to a Cassandra cluster (6 nodes, v2.0.11).
The application is deployed in three copies on three different servers. From
time to time, on a random application server, I get this exception:

2014-11-04 10:37:15,301 - ERROR: Servlet.service() for servlet [Unique
Identifier Service] in context with path [/uis] threw exception
[com.datastax.driver.core.exceptions.NoHostAvailableException: All
host(s) tried for query failed (tried: [cassandra-11/10.0.0.11:9042,
/10.0.0.10:9042, /10.0.0.12:9042, /10.0.0.7:9042, /10.0.0.9:9042,
/10.0.0.8:9042] - use getErrors() for details)] with root cause
com.datastax.driver.core.exceptions.NoHostAvailableException: All
host(s) tried for query failed (tried: [cassandra-11/10.0.0.11:9042,
/10.0.0.10:9042, /10.0.0.12:9042, /10.0.0.7:9042, /10.0.0.9:9042,
/10.0.0.8:9042] - use getErrors() for details)
at 
com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
at 
com.datastax.driver.core.SessionManager.execute(SessionManager.java:418)
at 
com.datastax.driver.core.SessionManager.executeQuery(SessionManager.java:454)
at 
com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:104)
at 
com.datastax.driver.core.SessionManager.execute(SessionManager.java:92)
at 
eu.europeana.cloud.service.uis.database.dao.CassandraDataProviderDAO.getProviders(CassandraDataProviderDAO.java:92)
at 
eu.europeana.cloud.service.uis.CassandraDataProviderService.getProviders(CassandraDataProviderService.java:34)
at 
eu.europeana.cloud.service.uis.rest.DataProvidersResource.getProviders(DataProvidersResource.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:171)
at 
org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:195)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104)
at 
org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:353)
at 
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:343)
at 
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
at 
org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at 
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:318)
at 
org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:235)
at 
org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:983)
at 
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:359)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:372)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:335)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:218)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at 
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)

Setting the read/write consistency globaly in the CQL3 datastax java driver

2014-05-15 Thread Sebastian Schmidt
Hi,

I'm using the CQL3 Datastax Cassandra Java client. I want to use a
global read and write consistency for my queries. I know that I can set
the consistencyLevel for every single prepared statement. But I want to
do that just once per cluster or once per session. Is that possible?

Kind Regards,
Sebastian
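[Editor's note] For what it's worth, the 2.x Java driver does expose a cluster-wide default: QueryOptions.setConsistencyLevel applies to every statement that does not set its own level. A configuration sketch (contact point and level are placeholders, untested here):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.QueryOptions;

public class ClusterDefaults {
    public static void main(String[] args) {
        // Statements that do not call setConsistencyLevel(...) themselves
        // inherit this cluster-wide default.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .withQueryOptions(new QueryOptions()
                        .setConsistencyLevel(ConsistencyLevel.QUORUM))
                .build();
        // ... create sessions and run queries as usual ...
        cluster.close();
    }
}
```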



signature.asc
Description: OpenPGP digital signature


Hadoop, CqlInputFormat, datastax java driver and uppercase in Keyspace names

2014-04-25 Thread Maxime Nay
Hi,

We have a keyspace starting with an upper-case character: Visitors.
We are trying to run a map-reduce job on one of the column families of this
keyspace.

To specify the keyspace it seems we have to use:
org.apache.cassandra.hadoop.ConfigHelper.setInputColumnFamily(conf, keyspace, columnFamily);


If we do:
ConfigHelper.setInputColumnFamily(conf, "Visitors", columnFamily); we get:

com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace
'visitors' does not exist
at
com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at
com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at
com.datastax.driver.core.SessionManager.setKeyspace(SessionManager.java:335)

...

And if we do:
ConfigHelper.setInputColumnFamily(conf, "\"Visitors\"", columnFamily); we
get:
Exception in thread "main" java.lang.RuntimeException:
InvalidRequestException(why:No such keyspace: "Visitors")
at
org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getRangeMap(AbstractColumnFamilyInputFormat.java:339)
at
org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:125)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:962)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:979)
...

This works just fine if the keyspace is lower case,
and it worked just fine with Cassandra 2.0.6. But with Cassandra
2.0.7, and the addition of the DataStax Java driver to the dependencies, I am
getting this error.

Any idea how I could fix this?

Thanks!
Maxime
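[Editor's note] For background, unquoted CQL identifiers are folded to lower case, which is why `Visitors` becomes `visitors`; only double-quoted identifiers keep their case. A plain-Java helper illustrating the rule (hypothetical, not part of ConfigHelper; as the thread shows, the Hadoop layer may still reject the quoted form):

```java
public class CqlIdentifiers {
    // Unquoted CQL identifiers are case-insensitive and folded to lower case,
    // so any identifier containing upper-case characters must be double-quoted
    // to round-trip exactly.
    static String maybeQuote(String identifier) {
        boolean needsQuoting = !identifier.equals(identifier.toLowerCase());
        return needsQuoting ? "\"" + identifier + "\"" : identifier;
    }

    public static void main(String[] args) {
        System.out.println(maybeQuote("visitors")); // visitors
        System.out.println(maybeQuote("Visitors")); // "Visitors"
    }
}
```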


RE: Inserts with a dynamic datamodel using Datastax java driver

2014-04-02 Thread Raveendran, Varsha IN BLR STS
Hi,

Thanks for replying.

I didn't quite get what you meant by "use clustering columns in CQL3 with 
blob/text type".

I have elaborated my problem statement below.
Assume the schema of the keyspace into which random records need to be inserted 
is given in the following format:
KeySpace Name :   KS_1
ColumnFamilyName : CF_1
Columns: [Column1 : uuid , Column2: varint, Column3: timestamp,  ... 
ColumnN:text]


So I parse this file to get the schema. The data/value for each column 
should be generated randomly depending on the datatype of the column.
My question is: how do I insert the records?


1.  I created a prepared statement depending on the number of columns 
(using a for loop). Then for each record I called methods like setDate() or 
setVarint() to bind the values.

But this was taking too much time, because for each record the data had to be 
generated for every column, set in the prepared statement, and then inserted. 
And the number of records = 1 billion!!



2.  The executeAsync() function seemed likely to be faster. But the 
problem is that the bind() function takes a sequence of values. Since the 
number of columns is variable, I am not able to make this code generic (i.e. to 
cater to any schema given by the user).



I am not sure if there is another way to approach this problem.
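[Editor's note] One detail that may unblock option 2: in Java, bind(Object... values) is a varargs method, so a dynamically sized Object[] can be passed directly; only the statement string and the value array need to be generated from the parsed schema. A plain-Java sketch of the statement side (class and method names are illustrative, not driver API):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class DynamicInsert {
    // Build "INSERT INTO ks.cf (c1, ..., cN) VALUES (?, ..., ?)"
    // for an arbitrary column list parsed from the schema file.
    static String buildInsert(String keyspace, String table, List<String> columns) {
        String cols = String.join(", ", columns);
        String marks = String.join(", ", Collections.nCopies(columns.size(), "?"));
        return "INSERT INTO " + keyspace + "." + table
                + " (" + cols + ") VALUES (" + marks + ")";
    }

    public static void main(String[] args) {
        String cql = buildInsert("KS_1", "CF_1",
                Arrays.asList("Column1", "Column2", "Column3"));
        System.out.println(cql);
        // With the DataStax driver, a dynamically sized array can then be
        // passed straight to the varargs bind method:
        //   Object[] values = generateRandomRow(columnTypes); // hypothetical
        //   session.executeAsync(prepared.bind(values));
    }
}
```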


Thanks & Regards,
Varsha


From: DuyHai Doan [mailto:doanduy...@gmail.com]
Sent: Wednesday, April 02, 2014 4:05 PM
To: user@cassandra.apache.org
Subject: Re: Inserts with a dynamic datamodel using Datastax java driver

Hello Varsha

 Your best bet is to go with blob type by serializing all data into bytes. 
Another alternative is to use text and serialize to JSON.

 For the dynamic columns, use clustering columns in CQL3 with blob/text type

 Regards

 Duy Hai DOAN

On Wed, Apr 2, 2014 at 11:21 AM, Raveendran, Varsha IN BLR STS <varsha.raveend...@siemens.com> wrote:
Hello,

I am building a write client in java to insert records into  Cassandra 2.0.5.  
I am using the Datastax java driver.

Problem : The datamodel is dynamic. By dynamic, I mean that the number of 
columns and the datatype of columns will be given as an input by the user.  It 
has only 1 keyspace and 1 column family.

For inserting records bound statements seems the way to go.  But the bind() 
function accepts only a sequence of Objects  ( column values) .
How do I bind the values when the number and datatype of columns is given as 
input? Any suggestions?

Thanks & Regards,
Varsha





Re: Inserts with a dynamic datamodel using Datastax java driver

2014-04-02 Thread DuyHai Doan
Hello Varsha

 Your best bet is to go with blob type by serializing all data into bytes.
Another alternative is to use text and serialize to JSON.

 For the dynamic columns, use clustering columns in CQL3 with blob/text type

 Regards

 Duy Hai DOAN


On Wed, Apr 2, 2014 at 11:21 AM, Raveendran, Varsha IN BLR STS <
varsha.raveend...@siemens.com> wrote:

>  Hello,
>
> I am building a write client in java to insert records into  Cassandra
> 2.0.5.  I am using the Datastax java driver.
>
> *Problem** : * The datamodel is dynamic. By dynamic, I mean that the
> number of columns and the datatype of columns will be given as an input by
> the user.  It has only 1 keyspace and 1 column family.
>
> For inserting records bound statements seems the way to go.  But the
> bind() function accepts only a sequence of Objects  ( column values) .
> How do I bind the values when the number and datatype of columns is given
> as input? Any suggestions?
>
>  Thanks & Regards,
> Varsha
>
>
>


Inserts with a dynamic datamodel using Datastax java driver

2014-04-02 Thread Raveendran, Varsha IN BLR STS
Hello,

I am building a write client in java to insert records into  Cassandra 2.0.5.  
I am using the Datastax java driver.

Problem :  The datamodel is dynamic. By dynamic, I mean that the number of 
columns and the datatype of columns will be given as an input by the user.  It 
has only 1 keyspace and 1 column family.

For inserting records, bound statements seem the way to go. But the bind()
function accepts only a sequence of Objects (the column values).
How do I bind the values when the number and datatypes of the columns are given
as input? Any suggestions?

 Thanks & Regards,
Varsha




Re: Any way to get a list of per-node token ranges using the DataStax Java driver?

2014-02-28 Thread Tupshin Harper
For the first question, try "select * from system.peers"

http://www.datastax.com/documentation/cql/cql_using/use_query_system_c.html?pagename=docs&version=1.2&file=cql_cli/using/query_system_tables

For the second, there is a JMX and nodetool command, but I'm not aware of
any way to get it directly through CQL.

http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsGetEndPoints.html

-Tupshin
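For reference, the raw token data those system tables expose looks like this (exact columns may vary slightly by Cassandra version):

```sql
-- Every other node in the cluster, with the tokens it owns:
SELECT peer, data_center, rack, tokens FROM system.peers;

-- The node you are connected to:
SELECT tokens FROM system.local;
```

Distinct ranges then have to be derived client-side by sorting the union of all tokens; if memory serves, later Java driver releases added Metadata.getTokenRanges() and Metadata.getReplicas(...) to do this computation for you.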


On Fri, Feb 28, 2014 at 1:27 PM, Clint Kelly  wrote:

> Hi everyone,
>
> I've been working on a rewrite of the Cassandra InputFormat for Hadoop 2
> using the DataStax Java driver instead of the Thrift API.
>
> I have a prototype working now, but there is one bit of code that I have
> not been able to replace with code for the Java driver.  In the
> InputFormat#getSplits method, the old code has a call like the following:
>
>   map = client.describe_ring(ConfigHelper.getInputKeyspace(conf));
>
> This gets a list of the distinct token ranges for the Cassandra cluster.
>
> The rest of "getSplits" then takes these key ranges, breaks them up into
> subranges (to match the user-specified input split size), and then gets the
> replica nodes for the various token ranges (as the locations for the
> splits).
>
> Does anyone know how I can do the following with the native protocol?
>
>- Get the distinct token ranges for the C* cluster
>- Get the set of replica nodes for a given range of tokens?
>
> I tried looking around in Cluster and Metadata, among other places, in the
> API docs, but I didn't see anything that looked like it would do what I
> want.
>
> Thanks!
>
> Best regards,
> Clint
>


Any way to get a list of per-node token ranges using the DataStax Java driver?

2014-02-28 Thread Clint Kelly
Hi everyone,

I've been working on a rewrite of the Cassandra InputFormat for Hadoop 2
using the DataStax Java driver instead of the Thrift API.

I have a prototype working now, but there is one bit of code that I have
not been able to replace with code for the Java driver.  In the
InputFormat#getSplits method, the old code has a call like the following:

  map = client.describe_ring(ConfigHelper.getInputKeyspace(conf));

This gets a list of the distinct token ranges for the Cassandra cluster.

The rest of "getSplits" then takes these key ranges, breaks them up into
subranges (to match the user-specified input split size), and then gets the
replica nodes for the various token ranges (as the locations for the
splits).

Does anyone know how I can do the following with the native protocol?

   - Get the distinct token ranges for the C* cluster
   - Get the set of replica nodes for a given range of tokens?

I tried looking around in Cluster and Metadata, among other places, in the
API docs, but I didn't see anything that looked like it would do what I
want.

Thanks!

Best regards,
Clint


Re: Naming variables in a prepared statement in the DataStax Java driver

2014-02-27 Thread Clint Kelly
Ah, never mind. I see that currently you can refer to the ?s by name, using
the name of the column to which the ? refers. This works as long as each
column is present only once in the statement.

Sorry for the extra list traffic!
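For completeness: if I recall correctly, Cassandra 2.0's CQL also added named bind markers, which is very close to the made-up syntax in the original question:

```sql
-- ':bar' is a named bind marker (Cassandra 2.0+, from memory); the 2.x Java
-- driver can then bind it by name, e.g. boundStatement.setString("bar", value).
SELECT * FROM foo WHERE bar = :bar;
```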


On Thu, Feb 27, 2014 at 7:33 PM, Clint Kelly  wrote:

> Folks,
>
> Is there a way to name the variables in a prepared statement when using
> the DataStax Java driver?
>
> For example, instead of doing:
>
> ByteBuffer byteBuffer = ... // some application logic
> String query = "SELECT * FROM foo WHERE bar = ?";
> PreparedStatement preparedStatement = session.prepare(query);
> BoundStatement boundStatement = preparedStatement.bind(byteBuffer);
>
> I'd like to be able to be able to name the fields indicated by the ?s,
> e.g.,:
>
> ByteBuffer byteBuffer = ... // some application logic
> String query = "SELECT * FROM foo WHERE bar = ?"; // I just made up
> this syntax
> PreparedStatement preparedStatement = session.prepare(query);
> BoundStatement boundStatement = preparedStatement.bind("bar", byteBuffer);
>
> Looking at the DataStax API docs, it seems like there should be a way to
> be able to do this, but I can't tell for sure.
>
> This would be particularly useful when I have some application logic in
> which I have very long queries with lots of bound variables and then
> sometimes extend them with different clauses.  Right now this code gets
> very verbose, because I cannot figure out how to break up my "bind"
> statements to bind different values to a bound statement in separate
> statements.  In other words, I'd like to be able to do something like:
>
> BoundStatement boundStatement = // Create from a prepared statement
> boundStatement = boundStatement.bind( ... ); // Bind all of the values
> that I use in every flavor of this query
> if ( ... )  {
>  boundStatement = boundStatement.bind("some field", someVal);
> else {
>  boundStatement = boundStatement.bind("other field", otherVal);
>
> Thanks!
>
> Best regards,
> Clint
>


Naming variables in a prepared statement in the DataStax Java driver

2014-02-27 Thread Clint Kelly
Folks,

Is there a way to name the variables in a prepared statement when using the
DataStax Java driver?

For example, instead of doing:

ByteBuffer byteBuffer = ... // some application logic
String query = "SELECT * FROM foo WHERE bar = ?";
PreparedStatement preparedStatement = session.prepare(query);
BoundStatement boundStatement = preparedStatement.bind(byteBuffer);

I'd like to be able to name the fields indicated by the ?s,
e.g.:

ByteBuffer byteBuffer = ... // some application logic
String query = "SELECT * FROM foo WHERE bar = ?"; // I just made up
this syntax
PreparedStatement preparedStatement = session.prepare(query);
BoundStatement boundStatement = preparedStatement.bind("bar", byteBuffer);

Looking at the DataStax API docs, it seems like there should be a way to be
able to do this, but I can't tell for sure.

This would be particularly useful when I have some application logic in
which I have very long queries with lots of bound variables and then
sometimes extend them with different clauses.  Right now this code gets
very verbose, because I cannot figure out how to break up my "bind"
statements to bind different values to a bound statement in separate
statements.  In other words, I'd like to be able to do something like:

BoundStatement boundStatement = // Create from a prepared statement
boundStatement = boundStatement.bind( ... ); // Bind all of the values
that I use in every flavor of this query
if ( ... ) {
  boundStatement = boundStatement.bind("some field", someVal);
} else {
  boundStatement = boundStatement.bind("other field", otherVal);
}

Thanks!

Best regards,
Clint


Re: Buffering for lots of INSERT or UPDATE calls with DataStax Java driver?

2014-02-08 Thread Clint Kelly
Ah yes, thanks!  I had forgotten about the use of batches for this purpose.

Appreciate the help, cheers!

Best regards,
Clint

On Sat, Feb 8, 2014 at 1:24 PM, DuyHai Doan  wrote:
> "Is there a recommended way to perform lots of INSERT operations in a row
> when using the DataStax Java driver?"  --> Yes, use UNLOGGED batches. More
> info here:
> http://www.datastax.com/documentation/cql/3.0/webhelp/index.html#cql/cql_reference/batch_r.html
>
>
> On Sat, Feb 8, 2014 at 10:19 PM, Clint Kelly  wrote:
>>
>> Folks,
>>
>> Is there a recommended way to perform lots of INSERT operations in a
>> row when using the DataStax Java driver?
>>
>> I notice that the RecordWriter for the CQL3 Hadoop implementation in
>> Cassandra does some per-data-node buffering of CQL3 queries.  The
>> DataStax Java driver, on the other hand, supports asynchronous query
>> execution.
>>
>> In a situation in which we are doing lots of and lots of writes (like
>> a RecordWriter for Hadoop), do we need to do any buffering or can we
>> just fire away session.executeAsync(...) calls as soon as they are
>> ready?
>>
>> Thanks!
>>
>> Best regards,
>> Clint
>
>


Re: Buffering for lots of INSERT or UPDATE calls with DataStax Java driver?

2014-02-08 Thread DuyHai Doan
"Is there a recommended way to perform lots of INSERT operations in a row
when using the DataStax Java driver?"  --> Yes, use UNLOGGED batches. More
info here:
http://www.datastax.com/documentation/cql/3.0/webhelp/index.html#cql/cql_reference/batch_r.html
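For concreteness, an UNLOGGED batch looks like this (ks.cf and the columns are placeholders):

```sql
-- UNLOGGED skips the batch log: no atomicity guarantee across partitions,
-- but far less overhead. Keep batches small and, ideally, grouped by
-- partition key.
BEGIN UNLOGGED BATCH
    INSERT INTO ks.cf (id, val) VALUES (1, 'a');
    INSERT INTO ks.cf (id, val) VALUES (2, 'b');
APPLY BATCH;
```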


On Sat, Feb 8, 2014 at 10:19 PM, Clint Kelly  wrote:

> Folks,
>
> Is there a recommended way to perform lots of INSERT operations in a
> row when using the DataStax Java driver?
>
> I notice that the RecordWriter for the CQL3 Hadoop implementation in
> Cassandra does some per-data-node buffering of CQL3 queries.  The
> DataStax Java driver, on the other hand, supports asynchronous query
> execution.
>
> In a situation in which we are doing lots of and lots of writes (like
> a RecordWriter for Hadoop), do we need to do any buffering or can we
> just fire away session.executeAsync(...) calls as soon as they are
> ready?
>
> Thanks!
>
> Best regards,
> Clint
>


Buffering for lots of INSERT or UPDATE calls with DataStax Java driver?

2014-02-08 Thread Clint Kelly
Folks,

Is there a recommended way to perform lots of INSERT operations in a
row when using the DataStax Java driver?

I notice that the RecordWriter for the CQL3 Hadoop implementation in
Cassandra does some per-data-node buffering of CQL3 queries.  The
DataStax Java driver, on the other hand, supports asynchronous query
execution.

In a situation in which we are doing lots and lots of writes (like
a RecordWriter for Hadoop), do we need to do any buffering or can we
just fire away session.executeAsync(...) calls as soon as they are
ready?

Thanks!

Best regards,
Clint


Re: Occasional NPE using DataStax Java driver

2013-12-19 Thread David Tinker
Done. https://datastax-oss.atlassian.net/browse/JAVA-231

On Thu, Dec 19, 2013 at 10:42 AM, Sylvain Lebresne  wrote:
> Mind opening a ticket on https://datastax-oss.atlassian.net/browse/JAVA?
> It's almost surely a bug.
>
> --
> Sylvain
>
>
> On Thu, Dec 19, 2013 at 8:21 AM, David Tinker 
> wrote:
>>
>> We are using Cassandra 2.0.3-1 installed on Ubuntu 12.04 from the
>> DataStax repo with the DataStax Java driver version 2.0.0-rc1. Every
>> now and then we get the following exception:
>>
>> 2013-12-19 06:56:34,619 [sql-2-t15] ERROR core.RequestHandler  -
>> Unexpected error while querying /x.x.x.x
>> java.lang.NullPointerException
>> at
>> com.datastax.driver.core.HostConnectionPool.waitForConnection(HostConnectionPool.java:203)
>> at
>> com.datastax.driver.core.HostConnectionPool.borrowConnection(HostConnectionPool.java:107)
>> at com.datastax.driver.core.RequestHandler.query(RequestHandler.java:112)
>> at
>> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:93)
>> at com.datastax.driver.core.Session$Manager.execute(Session.java:513)
>> at com.datastax.driver.core.Session$Manager.executeQuery(Session.java:549)
>> at com.datastax.driver.core.Session.executeAsync(Session.java:172)
>>
>> This happens during a big data load process which will do up to 256
>> executeAsync's in parallel.
>>
>> Any ideas? Its not causing huge problems because the operation is just
>> retried by our code but it would be nice to eliminate it.
>
>



-- 
http://qdb.io/ Persistent Message Queues With Replay and #RabbitMQ Integration


Re: Occasional NPE using DataStax Java driver

2013-12-19 Thread Mikhail Stepura
I would suggest filing an issue at
https://datastax-oss.atlassian.net/browse/JAVA


-Mikhail

On 12/18/13, 23:21, David Tinker wrote:

We are using Cassandra 2.0.3-1 installed on Ubuntu 12.04 from the
DataStax repo with the DataStax Java driver version 2.0.0-rc1. Every
now and then we get the following exception:

2013-12-19 06:56:34,619 [sql-2-t15] ERROR core.RequestHandler  -
Unexpected error while querying /x.x.x.x
java.lang.NullPointerException
at 
com.datastax.driver.core.HostConnectionPool.waitForConnection(HostConnectionPool.java:203)
at 
com.datastax.driver.core.HostConnectionPool.borrowConnection(HostConnectionPool.java:107)
at com.datastax.driver.core.RequestHandler.query(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:93)
at com.datastax.driver.core.Session$Manager.execute(Session.java:513)
at com.datastax.driver.core.Session$Manager.executeQuery(Session.java:549)
at com.datastax.driver.core.Session.executeAsync(Session.java:172)

This happens during a big data load process which will do up to 256
executeAsync's in parallel.

Any ideas? Its not causing huge problems because the operation is just
retried by our code but it would be nice to eliminate it.







Re: Occasional NPE using DataStax Java driver

2013-12-19 Thread Sylvain Lebresne
Mind opening a ticket on https://datastax-oss.atlassian.net/browse/JAVA?
It's almost surely a bug.

--
Sylvain


On Thu, Dec 19, 2013 at 8:21 AM, David Tinker wrote:

> We are using Cassandra 2.0.3-1 installed on Ubuntu 12.04 from the
> DataStax repo with the DataStax Java driver version 2.0.0-rc1. Every
> now and then we get the following exception:
>
> 2013-12-19 06:56:34,619 [sql-2-t15] ERROR core.RequestHandler  -
> Unexpected error while querying /x.x.x.x
> java.lang.NullPointerException
> at
> com.datastax.driver.core.HostConnectionPool.waitForConnection(HostConnectionPool.java:203)
> at
> com.datastax.driver.core.HostConnectionPool.borrowConnection(HostConnectionPool.java:107)
> at com.datastax.driver.core.RequestHandler.query(RequestHandler.java:112)
> at
> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:93)
> at com.datastax.driver.core.Session$Manager.execute(Session.java:513)
> at com.datastax.driver.core.Session$Manager.executeQuery(Session.java:549)
> at com.datastax.driver.core.Session.executeAsync(Session.java:172)
>
> This happens during a big data load process which will do up to 256
> executeAsync's in parallel.
>
> Any ideas? Its not causing huge problems because the operation is just
> retried by our code but it would be nice to eliminate it.
>


Occasional NPE using DataStax Java driver

2013-12-18 Thread David Tinker
We are using Cassandra 2.0.3-1 installed on Ubuntu 12.04 from the
DataStax repo with the DataStax Java driver version 2.0.0-rc1. Every
now and then we get the following exception:

2013-12-19 06:56:34,619 [sql-2-t15] ERROR core.RequestHandler  -
Unexpected error while querying /x.x.x.x
java.lang.NullPointerException
at 
com.datastax.driver.core.HostConnectionPool.waitForConnection(HostConnectionPool.java:203)
at 
com.datastax.driver.core.HostConnectionPool.borrowConnection(HostConnectionPool.java:107)
at com.datastax.driver.core.RequestHandler.query(RequestHandler.java:112)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:93)
at com.datastax.driver.core.Session$Manager.execute(Session.java:513)
at com.datastax.driver.core.Session$Manager.executeQuery(Session.java:549)
at com.datastax.driver.core.Session.executeAsync(Session.java:172)

This happens during a big data load process which will do up to 256
executeAsync's in parallel.

Any ideas? It's not causing huge problems, because the operation is just
retried by our code, but it would be nice to eliminate it.
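For anyone hitting this under heavy parallel executeAsync load: a common mitigation is to cap the number of in-flight requests with a Semaphore. The sketch below uses a plain ExecutorService as a stand-in for session.executeAsync so it runs without the driver; with the real driver you would acquire a permit before executeAsync and release it in the ResultSetFuture's completion callback:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Semaphore;

class ThrottledLoader {
    // Caps the number of outstanding writes at maxInFlight. The
    // ExecutorService stands in for asynchronous driver execution.
    static void load(ExecutorService writer, Runnable write, int records, int maxInFlight)
            throws InterruptedException {
        Semaphore permits = new Semaphore(maxInFlight);
        for (int i = 0; i < records; i++) {
            permits.acquire();            // blocks once maxInFlight writes are outstanding
            writer.submit(() -> {
                try {
                    write.run();          // the actual insert
                } finally {
                    permits.release();    // frees a slot when the write completes
                }
            });
        }
        permits.acquire(maxInFlight);     // drain: wait for all outstanding writes
    }
}
```

Whether this avoids the NullPointerException above is a separate question (it does look like a driver bug), but throttling keeps the connection pools from being overwhelmed in the first place.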


Re: Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Joe Greenawalt
Hey, this is good info. It seems like I have the same capabilities; I just
need to twist my brain a bit to see it better. Thanks for all the feedback,
much appreciated.

Joe


On Thu, Jun 6, 2013 at 11:27 AM, Alain RODRIGUEZ  wrote:

> Not sure if you remember this Jonathan, but Sylvain already wrote a very
> clear documentation about it :
> http://www.datastax.com/dev/blog/thrift-to-cql3 (OCTOBER 26, 2012)
>
> Yet a second page will give to this important topic a greater visibility.
>
>
> 2013/6/6 Jonathan Ellis 
>
>> This is becoming something of a FAQ, so I wrote an more in-depth
>> answer:
>> http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows
>>
>> On Thu, Jun 6, 2013 at 8:02 AM, Joe Greenawalt 
>> wrote:
>> > Hi,
>> > I'm having some problems figuring out how to append a dynamic column on
>> a
>> > column family using the datastax java driver 1.0 and CQL3 on Cassandra
>> > 1.2.5.  Below is what i'm trying:
>> >
>> > cqlsh:simplex> create table user (firstname text primary key, lastname
>> > text);
>> > cqlsh:simplex> insert into user (firstname, lastname) values
>> > ('joe','shmoe');
>> > cqlsh:simplex> select * from user;
>> >
>> >  firstname | lastname
>> > ---+--
>> >joe |shmoe
>> >
>> > cqlsh:simplex> insert into user (firstname, lastname, middlename) values
>> > ('joe','shmoe','lester');
>> > Bad Request: Unknown identifier middlename
>> > cqlsh:simplex> insert into user (firstname, lastname, middlename) values
>> > ('john','shmoe','lester');
>> > Bad Request: Unknown identifier middlename
>> >
>> > I'm assuming you can do this based on previous based thrift based
>> clients
>> > like pycassa, and also by reading this:
>> >
>> > The Cassandra data model is a dynamic schema, column-oriented data
>> model.
>> > This means that, unlike a relational database, you do not need to model
>> all
>> > of the columns required by your application up front, as each row is not
>> > required to have the same set of columns. Columns and their metadata
>> can be
>> > added by your application as they are needed without incurring downtime
>> to
>> > your application.
>> >
>> > here: http://www.datastax.com/docs/1.2/ddl/index
>> >
>> > Is it a limitation of CQL3 and its connection vs. thrift?
>> > Or more likely i'm just doing something wrong?
>> >
>> > Thanks,
>> > Joe
>>
>>
>>
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder, http://www.datastax.com
>> @spyced
>>
>
>


Re: Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Alain RODRIGUEZ
Not sure if you remember this, Jonathan, but Sylvain already wrote very clear
documentation about it:
http://www.datastax.com/dev/blog/thrift-to-cql3 (October 26, 2012)

Still, a second page will give this important topic greater visibility.


2013/6/6 Jonathan Ellis 

> This is becoming something of a FAQ, so I wrote an more in-depth
> answer:
> http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows
>
> On Thu, Jun 6, 2013 at 8:02 AM, Joe Greenawalt 
> wrote:
> > Hi,
> > I'm having some problems figuring out how to append a dynamic column on a
> > column family using the datastax java driver 1.0 and CQL3 on Cassandra
> > 1.2.5.  Below is what i'm trying:
> >
> > cqlsh:simplex> create table user (firstname text primary key, lastname
> > text);
> > cqlsh:simplex> insert into user (firstname, lastname) values
> > ('joe','shmoe');
> > cqlsh:simplex> select * from user;
> >
> >  firstname | lastname
> > ---+--
> >joe |shmoe
> >
> > cqlsh:simplex> insert into user (firstname, lastname, middlename) values
> > ('joe','shmoe','lester');
> > Bad Request: Unknown identifier middlename
> > cqlsh:simplex> insert into user (firstname, lastname, middlename) values
> > ('john','shmoe','lester');
> > Bad Request: Unknown identifier middlename
> >
> > I'm assuming you can do this based on previous based thrift based clients
> > like pycassa, and also by reading this:
> >
> > The Cassandra data model is a dynamic schema, column-oriented data model.
> > This means that, unlike a relational database, you do not need to model
> all
> > of the columns required by your application up front, as each row is not
> > required to have the same set of columns. Columns and their metadata can
> be
> > added by your application as they are needed without incurring downtime
> to
> > your application.
> >
> > here: http://www.datastax.com/docs/1.2/ddl/index
> >
> > Is it a limitation of CQL3 and its connection vs. thrift?
> > Or more likely i'm just doing something wrong?
> >
> > Thanks,
> > Joe
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder, http://www.datastax.com
> @spyced
>


Re: Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Eric Stevens
Your data model should take into consideration the number of items you're
storing in a collection.  If you expect it will grow over time with no
small upper bound, don't use a collection.  You don't need to read before
write to answer this question, it's a decision made at modeling time
(before you ever write your very first record).

If the possible values are finite and small, use a collection.  Otherwise
normalize.

Over time if you find your collections are getting large, then either an
assumption changed or you modeled poorly.  Either way it's time to refactor.

DON'T STORE MORE THEN 100 THINGS IN A COLLECTION
>

Actually that's probably a bit too hard-edged.  You could easily have a
Set whose typical size is 1000.  If the data doesn't change often, and
you always need to know all those values at the same time as each other,
there's actually no problem with this.  Constantly mutating values are a
problem as the collection gets large, as are cases where you need to know only
a subset of the collection at a time.

-Eric Stevens
ProtectWise, Inc.
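Summarizing the thread's options in CQL (the table and column names are illustrative):

```sql
-- Option 1: a static schema change, which fixes the original
-- "Unknown identifier middlename" error directly:
ALTER TABLE user ADD middlename text;

-- Option 2: small, bounded dynamic data as a collection:
ALTER TABLE user ADD tags map<text, text>;
UPDATE user SET tags['nickname'] = 'lester' WHERE firstname = 'joe';

-- Option 3: open-ended dynamic data normalized into its own table,
-- using a clustering column as the "dynamic column" name:
CREATE TABLE user_attributes (
    firstname  text,
    attr_name  text,
    attr_value text,
    PRIMARY KEY (firstname, attr_name)
);
```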


On Thu, Jun 6, 2013 at 10:59 AM, Edward Capriolo wrote:

> The problem about "being careful about how much you store in a collection"
> is that Cassandra is a blind-write system. Knowing how much data is
> currently in the collection before you write is an anti-pattern, read
> before write.
>
> Cassandra Rule 1: DON'T READ BEFORE WRITE
> Cassandra Rule 2: ROWS CAN HAVE 2 BILLION COLUMNS
> Collection Rule 1: DON'T STORE MORE THEN 100 THINGS IN A COLLECTION
>
> Why does are user confused? Its simple.
>
> On Thu, Jun 6, 2013 at 10:51 AM, Eric Stevens  wrote:
>
>>  CQL3 does now support dynamic columns. For tags or metadata values you
>>> could use a Collection:
>>>
>>
>> This should probably be clarified.  A collection is a super useful tool,
>> but it is *not* the same thing as a dynamic column.  It has many
>> advantages, but there is one huge disadvantage in that you have to be
>> careful how much data you store in a collection. When you read a single
>> value out of a collection, the *entire* collection is always read, which
>> of course is true for appending data to the collection as well.
>>
>> With a traditional dynamic column, you could have added things like event
>> logs to a record in the form of keys named "event:someEvent:TS" (or
>> juxtapose the order as your needs dictate).  You could basically do this
>> practically indefinitely with little degradation in performance.  This was
>> also a common way of representing cross-family relationships (one-to-many
>> style).
>>
>> If you try to do the same thing with a collection, performance will
>> degrade as your data grows.  For small or relatively static data sets (eg
>> tags) that's fine.  For open-ended data sets (logs, events, one-to-many
>> relationships that grow regularly), you should instead normalize such data
>> into a separate column family.
>>
>> -Eric Stevens
>> ProtectWise, Inc.
>>
>>
>> On Thu, Jun 6, 2013 at 9:49 AM, Francisco Andrades Grassi <
>> bigjoc...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> CQL3 does now support dynamic columns. For tags or metadata values you
>>> could use a Collection:
>>>
>>> http://www.datastax.com/dev/blog/cql3_collections
>>>
>>> For wide rows there's the enhanced primary keys, which I personally
>>> prefer over the composite columns of yore:
>>>
>>> http://www.datastax.com/dev/blog/cql3-for-cassandra-experts
>>> http://thelastpickle.com/2013/01/11/primary-keys-in-cql/
>>>
>>> --
>>> Francisco Andrades Grassi
>>> www.bigjocker.com
>>> @bigjocker
>>>
>>> On Jun 6, 2013, at 8:32 AM, Joe Greenawalt 
>>> wrote:
>>>
>>> Hi,
>>> I'm having some problems figuring out how to append a dynamic column on
>>> a column family using the datastax java driver 1.0 and CQL3 on Cassandra
>>> 1.2.5.  Below is what i'm trying:
>>>
>>> *cqlsh:simplex> create table user (firstname text primary key, lastname
>>> text);
>>> cqlsh:simplex> insert into user (firstname, lastname) values
>>> ('joe','shmoe');
>>> cqlsh:simplex> select * from user;
>>>
>>>  firstname | lastname
>>> ---+--
>>>joe |shmoe
>>>
>>> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
>>> ('joe','shmoe','

Re: Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Jonathan Ellis
This is becoming something of a FAQ, so I wrote a more in-depth
answer:
http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows

On Thu, Jun 6, 2013 at 8:02 AM, Joe Greenawalt  wrote:
> Hi,
> I'm having some problems figuring out how to append a dynamic column on a
> column family using the datastax java driver 1.0 and CQL3 on Cassandra
> 1.2.5.  Below is what i'm trying:
>
> cqlsh:simplex> create table user (firstname text primary key, lastname
> text);
> cqlsh:simplex> insert into user (firstname, lastname) values
> ('joe','shmoe');
> cqlsh:simplex> select * from user;
>
>  firstname | lastname
> ---+--
>joe |shmoe
>
> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
> ('joe','shmoe','lester');
> Bad Request: Unknown identifier middlename
> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
> ('john','shmoe','lester');
> Bad Request: Unknown identifier middlename
>
> I'm assuming you can do this based on previous based thrift based clients
> like pycassa, and also by reading this:
>
> The Cassandra data model is a dynamic schema, column-oriented data model.
> This means that, unlike a relational database, you do not need to model all
> of the columns required by your application up front, as each row is not
> required to have the same set of columns. Columns and their metadata can be
> added by your application as they are needed without incurring downtime to
> your application.
>
> here: http://www.datastax.com/docs/1.2/ddl/index
>
> Is it a limitation of CQL3 and its connection vs. thrift?
> Or more likely i'm just doing something wrong?
>
> Thanks,
> Joe



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder, http://www.datastax.com
@spyced


Re: Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Edward Capriolo
The problem with "being careful about how much you store in a collection"
is that Cassandra is a blind-write system. Knowing how much data is
currently in the collection before you write is an anti-pattern: read
before write.

Cassandra Rule 1: DON'T READ BEFORE WRITE
Cassandra Rule 2: ROWS CAN HAVE 2 BILLION COLUMNS
Collection Rule 1: DON'T STORE MORE THAN 100 THINGS IN A COLLECTION

Why are users confused? It's simple.

On Thu, Jun 6, 2013 at 10:51 AM, Eric Stevens  wrote:

> CQL3 does now support dynamic columns. For tags or metadata values you
>> could use a Collection:
>>
>
> This should probably be clarified.  A collection is a super useful tool,
> but it is *not* the same thing as a dynamic column.  It has many
> advantages, but there is one huge disadvantage in that you have to be
> careful how much data you store in a collection. When you read a single
> value out of a collection, the *entire* collection is always read, which
> of course is true for appending data to the collection as well.
>
> With a traditional dynamic column, you could have added things like event
> logs to a record in the form of keys named "event:someEvent:TS" (or
> juxtapose the order as your needs dictate).  You could basically do this
> practically indefinitely with little degradation in performance.  This was
> also a common way of representing cross-family relationships (one-to-many
> style).
>
> If you try to do the same thing with a collection, performance will
> degrade as your data grows.  For small or relatively static data sets (eg
> tags) that's fine.  For open-ended data sets (logs, events, one-to-many
> relationships that grow regularly), you should instead normalize such data
> into a separate column family.
>
> -Eric Stevens
> ProtectWise, Inc.
>
>
> On Thu, Jun 6, 2013 at 9:49 AM, Francisco Andrades Grassi <
> bigjoc...@gmail.com> wrote:
>
>> Hi,
>>
>> CQL3 does now support dynamic columns. For tags or metadata values you
>> could use a Collection:
>>
>> http://www.datastax.com/dev/blog/cql3_collections
>>
>> For wide rows there's the enhanced primary keys, which I personally
>> prefer over the composite columns of yore:
>>
>> http://www.datastax.com/dev/blog/cql3-for-cassandra-experts
>> http://thelastpickle.com/2013/01/11/primary-keys-in-cql/
>>
>> --
>> Francisco Andrades Grassi
>> www.bigjocker.com
>> @bigjocker
>>
>> On Jun 6, 2013, at 8:32 AM, Joe Greenawalt 
>> wrote:
>>
>> Hi,
>> I'm having some problems figuring out how to append a dynamic column on a
>> column family using the datastax java driver 1.0 and CQL3 on Cassandra
>> 1.2.5.  Below is what i'm trying:
>>
>> *cqlsh:simplex> create table user (firstname text primary key, lastname
>> text);
>> cqlsh:simplex> insert into user (firstname, lastname) values
>> ('joe','shmoe');
>> cqlsh:simplex> select * from user;
>>
>>  firstname | lastname
>> ---+--
>>joe |shmoe
>>
>> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
>> ('joe','shmoe','lester');
>> Bad Request: Unknown identifier middlename
>> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
>> ('john','shmoe','lester');
>> Bad Request: Unknown identifier middlename*
>>
>> I'm assuming you can do this based on previous thrift-based clients
>> like pycassa, and also by reading this:
>>
>> The Cassandra data model is a dynamic schema, column-oriented data model.
>> This means that, unlike a relational database, you do not need to model all
>> of the columns required by your application up front, as each row is not
>> required to have the same set of columns. Columns and their metadata can be
>> added by your application as they are needed without incurring downtime to
>> your application.
>> here: http://www.datastax.com/docs/1.2/ddl/index
>>
>> Is it a limitation of CQL3 and its connection vs. thrift?
>> Or more likely i'm just doing something wrong?
>>
>> Thanks,
>> Joe
>>
>>
>>
>


Re: Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Eric Stevens
>
> CQL3 does now support dynamic columns. For tags or metadata values you
> could use a Collection:
>

This should probably be clarified.  A collection is a super useful tool,
but it is *not* the same thing as a dynamic column.  It has many
advantages, but there is one huge disadvantage in that you have to be
careful how much data you store in a collection. When you read a single
value out of a collection, the *entire* collection is always read, which of
course is true for appending data to the collection as well.

With a traditional dynamic column, you could have added things like event
logs to a record in the form of keys named "event:someEvent:TS" (or
juxtapose the order as your needs dictate).  You could basically do this
practically indefinitely with little degradation in performance.  This was
also a common way of representing cross-family relationships (one-to-many
style).

If you try to do the same thing with a collection, performance will degrade
as your data grows.  For small or relatively static data sets (eg tags)
that's fine.  For open-ended data sets (logs, events, one-to-many
relationships that grow regularly), you should instead normalize such data
into a separate column family.
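To make the distinction concrete, here is a hedged CQL 3 sketch (table and column names are illustrative, not from this thread): a collection suits a small, bounded set such as tags, while an open-ended event log belongs in its own table, where the clustering column gives the same wide-row layout as the old "event:someEvent:TS" keys.

```sql
-- Bounded data: a set collection is fine for a handful of tags.
CREATE TABLE user_profile (
    id   text PRIMARY KEY,
    tags set<text>
);

-- Open-ended data: normalize events into a separate table. Rows for one
-- user_id are stored contiguously and sorted by event_time (a wide row).
CREATE TABLE user_events (
    user_id    text,
    event_time timestamp,
    event_type text,
    payload    text,
    PRIMARY KEY (user_id, event_time)
);
```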

-Eric Stevens
ProtectWise, Inc.


On Thu, Jun 6, 2013 at 9:49 AM, Francisco Andrades Grassi <
bigjoc...@gmail.com> wrote:

> Hi,
>
> CQL3 does now support dynamic columns. For tags or metadata values you
> could use a Collection:
>
> http://www.datastax.com/dev/blog/cql3_collections
>
> For wide rows there's the enhanced primary keys, which I personally prefer
> over the composite columns of yore:
>
> http://www.datastax.com/dev/blog/cql3-for-cassandra-experts
> http://thelastpickle.com/2013/01/11/primary-keys-in-cql/
>
> --
> Francisco Andrades Grassi
> www.bigjocker.com
> @bigjocker
>
> On Jun 6, 2013, at 8:32 AM, Joe Greenawalt 
> wrote:
>
> Hi,
> I'm having some problems figuring out how to append a dynamic column on a
> column family using the datastax java driver 1.0 and CQL3 on Cassandra
> 1.2.5.  Below is what i'm trying:
>
> *cqlsh:simplex> create table user (firstname text primary key, lastname
> text);
> cqlsh:simplex> insert into user (firstname, lastname) values
> ('joe','shmoe');
> cqlsh:simplex> select * from user;
>
>  firstname | lastname
> ---+--
>joe |shmoe
>
> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
> ('joe','shmoe','lester');
> Bad Request: Unknown identifier middlename
> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
> ('john','shmoe','lester');
> Bad Request: Unknown identifier middlename*
>
> I'm assuming you can do this based on previous thrift-based clients
> like pycassa, and also by reading this:
>
> The Cassandra data model is a dynamic schema, column-oriented data model.
> This means that, unlike a relational database, you do not need to model all
> of the columns required by your application up front, as each row is not
> required to have the same set of columns. Columns and their metadata can be
> added by your application as they are needed without incurring downtime to
> your application.
> here: http://www.datastax.com/docs/1.2/ddl/index
>
> Is it a limitation of CQL3 and its connection vs. thrift?
> Or more likely i'm just doing something wrong?
>
> Thanks,
> Joe
>
>
>


Re: Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Francisco Andrades Grassi
Hi,

CQL3 does now support dynamic columns. For tags or metadata values you could 
use a Collection:

http://www.datastax.com/dev/blog/cql3_collections

For wide rows there's the enhanced primary keys, which I personally prefer over 
the composite columns of yore:

http://www.datastax.com/dev/blog/cql3-for-cassandra-experts
http://thelastpickle.com/2013/01/11/primary-keys-in-cql/

--
Francisco Andrades Grassi
www.bigjocker.com
@bigjocker

On Jun 6, 2013, at 8:32 AM, Joe Greenawalt  wrote:

> Hi, 
> I'm having some problems figuring out how to append a dynamic column on a 
> column family using the datastax java driver 1.0 and CQL3 on Cassandra 1.2.5. 
>  Below is what i'm trying:
> 
> cqlsh:simplex> create table user (firstname text primary key, lastname text);
> cqlsh:simplex> insert into user (firstname, lastname) values ('joe','shmoe');
> cqlsh:simplex> select * from user;
> 
>  firstname | lastname
> ---+--
>joe |shmoe
> 
> cqlsh:simplex> insert into user (firstname, lastname, middlename) values 
> ('joe','shmoe','lester');
> Bad Request: Unknown identifier middlename
> cqlsh:simplex> insert into user (firstname, lastname, middlename) values 
> ('john','shmoe','lester');
> Bad Request: Unknown identifier middlename
> 
> I'm assuming you can do this based on previous thrift-based clients 
> like pycassa, and also by reading this:
> The Cassandra data model is a dynamic schema, column-oriented data model. 
> This means that, unlike a relational database, you do not need to model all 
> of the columns required by your application up front, as each row is not 
> required to have the same set of columns. Columns and their metadata can be 
> added by your application as they are needed without  incurring downtime to 
> your application.
> 
> here: http://www.datastax.com/docs/1.2/ddl/index
> 
> Is it a limitation of CQL3 and its connection vs. thrift? 
> Or more likely i'm just doing something wrong?
> 
> Thanks,
> Joe



Re: Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Shahab Yunus
Dynamic columns are not supported in CQL3. We just had a discussion a day
or two ago about this where Eric Stevens explained it. Please see this:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/CQL-3-returning-duplicate-keys-td7588181.html
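For the "Unknown identifier middlename" error in the quoted question below, the usual CQL 3 routes are either a one-time schema change or a collection; a hedged sketch (names taken from the question):

```sql
-- Option 1: declare the column once; inserts then work for every row
-- (rows that never set it simply store nothing for it).
ALTER TABLE user ADD middlename text;

-- Option 2 (illustrative): if the extra columns are truly ad hoc,
-- keep them in a map collection instead of changing the schema:
-- CREATE TABLE user (firstname text PRIMARY KEY, lastname text,
--                    extras map<text, text>);
```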

Regards,
Shahab


On Thu, Jun 6, 2013 at 9:02 AM, Joe Greenawalt wrote:

> Hi,
> I'm having some problems figuring out how to append a dynamic column on a
> column family using the datastax java driver 1.0 and CQL3 on Cassandra
> 1.2.5.  Below is what i'm trying:
>
> *cqlsh:simplex> create table user (firstname text primary key, lastname
> text);
> cqlsh:simplex> insert into user (firstname, lastname) values
> ('joe','shmoe');
> cqlsh:simplex> select * from user;
>
>  firstname | lastname
> ---+--
>joe |shmoe
>
> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
> ('joe','shmoe','lester');
> Bad Request: Unknown identifier middlename
> cqlsh:simplex> insert into user (firstname, lastname, middlename) values
> ('john','shmoe','lester');
> Bad Request: Unknown identifier middlename*
>
> I'm assuming you can do this based on previous thrift-based clients
> like pycassa, and also by reading this:
>
> The Cassandra data model is a dynamic schema, column-oriented data model.
> This means that, unlike a relational database, you do not need to model all
> of the columns required by your application up front, as each row is not
> required to have the same set of columns. Columns and their metadata can be
> added by your application as they are needed without incurring downtime to
> your application.
> here: http://www.datastax.com/docs/1.2/ddl/index
>
> Is it a limitation of CQL3 and its connection vs. thrift?
> Or more likely i'm just doing something wrong?
>
> Thanks,
> Joe
>


Dynamic Columns Question Cassandra 1.2.5, Datastax Java Driver 1.0

2013-06-06 Thread Joe Greenawalt
Hi,
I'm having some problems figuring out how to append a dynamic column on a
column family using the datastax java driver 1.0 and CQL3 on Cassandra
1.2.5.  Below is what i'm trying:

*cqlsh:simplex> create table user (firstname text primary key, lastname
text);
cqlsh:simplex> insert into user (firstname, lastname) values
('joe','shmoe');
cqlsh:simplex> select * from user;

 firstname | lastname
---+--
   joe |shmoe

cqlsh:simplex> insert into user (firstname, lastname, middlename) values
('joe','shmoe','lester');
Bad Request: Unknown identifier middlename
cqlsh:simplex> insert into user (firstname, lastname, middlename) values
('john','shmoe','lester');
Bad Request: Unknown identifier middlename*

I'm assuming you can do this based on previous thrift-based clients
like pycassa, and also by reading this:

The Cassandra data model is a dynamic schema, column-oriented data model.
This means that, unlike a relational database, you do not need to model all
of the columns required by your application up front, as each row is not
required to have the same set of columns. Columns and their metadata can be
added by your application as they are needed without incurring downtime to
your application.
here: http://www.datastax.com/docs/1.2/ddl/index

Is it a limitation of CQL3 and its connection vs. thrift?
Or more likely i'm just doing something wrong?

Thanks,
Joe


Re: Datastax Java Driver connection issue

2013-04-23 Thread aaron morton
> Just for clarification, why it is necessary to set the server rpc address to 
> 127.0.0.1?
It's not necessary for it to be 127.0.0.1. But it is necessary for the server 
to be listening for client connections (the rpc_address) on the same interface 
/ IP you are trying to connect to. 

In your case the error message said you could not find the server at 127.0.0.1, 
so the simple thing to do is make sure the server is listening there. 

You can set rpc_address to whatever you like (see the yaml) just make sure it's 
the same address you are connecting to. 
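For instance (a cassandra.yaml sketch; the address is illustrative, and option names should be checked against the yaml shipped with your version):

```yaml
# cassandra.yaml (excerpt) - illustrative values
rpc_address: 192.168.1.10     # clients must connect to this same address
start_native_transport: true  # required for the DataStax Java driver
native_transport_port: 9042   # the driver's port; 9160 is the Thrift port
```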

Cheers
 
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 23/04/2013, at 7:06 AM, Abhijit Chanda  wrote:

> Aaron,
> 
> Just for clarification, why it is necessary to set the server rpc address to 
> 127.0.0.1?
> 
> 
> On Mon, Apr 22, 2013 at 2:22 AM, aaron morton  wrote:
> Make sure that the server rpc_address is set to 127.0.0.1
> 
> Cheers
> 
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
> 
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 20/04/2013, at 1:47 PM, Techy Teck  wrote:
> 
>> I am also running into this problem. I have already enabled 
>> start_native_transport: true
>> 
>> And by this, I am trying to make a connection-
>> 
>> private CassandraDatastaxConnection() {
>> 
>> try{
>> cluster = Cluster.builder().addContactPoint("localhost").build();
>> session = cluster.connect("my_keyspace");
>> } catch (NoHostAvailableException e) {
>> throw new RuntimeException(e);
>> }
>> }
>> 
>> And everytime it gives me the same exception-
>> 
>> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
>> tried for query failed (tried: [localhost/127.0.0.1])
>> 
>> Any idea how to fix this problem?
>> 
>> Thanks for the help.
>> 
>> 
>> 
>> 
>> 
>> 
>> On Fri, Apr 19, 2013 at 6:41 AM, Abhijit Chanda  
>> wrote:
>> @Gabriel, @Wright: thanks, such a silly of me. 
>> 
>> 
>> On Fri, Apr 19, 2013 at 6:48 PM, Keith Wright  wrote:
>> Did you enable the binary protocol in Cassandra.yaml?
>> 
>> Abhijit Chanda  wrote:
>> 
>> Hi,
>> 
>> I have downloaded the CQL driver provided by Datastax using 
>>
>> com.datastax.cassandra
>> cassandra-driver-core
>> 1.0.0-beta2
>> 
>> 
>> Then tried a sample program to connect to the cluster
>> Cluster cluster = Cluster.builder()
>> .addContactPoints(db1)
>> .withPort(9160)
>> .build();
>> 
>> But sadly its returning 
>> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
>> tried for query failed   
>> 
>> I am using cassandra 1.2.2
>> 
>> Can any one suggest me whats wrong with that. 
>> 
>> And i am really sorry for posting  datastax java driver related question in 
>> this forum, can't find a better place for the instant reaction 
>> 
>> 
>> -Abhijit
>> 
>> 
>> 
>> -- 
>> -Abhijit
>> 
> 
> 
> 
> 
> -- 
> -Abhijit



Re: com.datastax.driver.core.exceptions.InvalidQueryException using Datastax Java driver

2013-04-23 Thread aaron morton
> Can I insert into Column Family (that I created from CLI mode) using Datastax 
> Java driver or not with Cassandra 1.2.3?
No. 
Create your table using CQL 3 via cqlsh. 
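A hedged sketch of that CQL 3 definition, mirroring the columns of the CLI-created 'profile' column family quoted below (all validators were UTF8Type, so text is assumed throughout):

```sql
-- Sketch only: a CQL 3 equivalent of the Thrift 'profile' column family.
CREATE TABLE profile (
    id          text PRIMARY KEY,
    account     text,
    advertising text,
    behavior    text,
    info        text
);
```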

Cheers

-
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 23/04/2013, at 6:29 AM, Techy Teck  wrote:

> I am using correct keyspace name for that column family. I have verified that 
> as well.
> 
> Can I insert into Column Family (that I created from CLI mode) using Datastax 
> Java driver or not with Cassandra 1.2.3?
> 
> 
> On Mon, Apr 22, 2013 at 5:05 AM, Internet Group  wrote:
> It seems to me that you are not saying the keyspace of your column family 
> 'profile'.
> 
> Regards,
> Francisco.
> 
> 
> On Apr 20, 2013, at 9:56 PM, Techy Teck  wrote:
> 
>> I created my column family like this from the CLI-
>> 
>> 
>> create column family profile
>> with key_validation_class = 'UTF8Type'
>> and comparator = 'UTF8Type'
>> and default_validation_class = 'UTF8Type'
>> and column_metadata = [
>>   {column_name : account, validation_class : 'UTF8Type'}
>>   {column_name : advertising, validation_class : 'UTF8Type'}
>>   {column_name : behavior, validation_class : 'UTF8Type'}
>>   {column_name : info, validation_class : 'UTF8Type'}
>>   ];
>> 
>> 
>> 
>> Now I was trying to insert into this column family using the Datastax Java 
>> driver-
>> 
>> 
>> public void upsertAttributes(final String userId, final Map 
>> attributes) {
>> 
>>  
>> String batchInsert = "INSERT INTO PROFILE(id, account, advertising, 
>> behavior, info) VALUES ( '12345', 'hello11', 'bye2234', 'bye1', 'bye2') "; 
>> 
>> 
>> 
>> 
>>  
>> CassandraDatastaxConnection.getInstance().getSession().execute(batchInsert);
>> 
>> 
>>  }
>> 
>> I always get this exception-
>> 
>> 
>> 
>> com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured 
>> columnfamily profile
>> 
>> 
>> And by this way, I am trying to create connection/session initialization to 
>> Cassandra-
>> 
>> 
>> private CassandraDatastaxConnection() {
>> 
>> 
>>  try{
>>  cluster = Cluster.builder().addContactPoint("localhost").build();
>>  session = cluster.connect("my_keyspace");   
>>  } catch (NoHostAvailableException e) {
>> 
>> 
>> 
>> 
>>  throw new RuntimeException(e);
>> 
>>  }
>> }
>> 
>> I am running Cassandra 1.2.3. And I am able to connect to Cassandra using 
>> the above code. The only problem I am facing is while inserting.
>> 
>> Any idea why it is happening?
>> 
> 
> 



Re: Datastax Java Driver connection issue

2013-04-22 Thread Abhijit Chanda
Aaron,

Just for clarification, why it is necessary to set the server rpc address
to 127.0.0.1?


On Mon, Apr 22, 2013 at 2:22 AM, aaron morton wrote:

> Make sure that the server rpc_address is set to 127.0.0.1
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 20/04/2013, at 1:47 PM, Techy Teck  wrote:
>
> I am also running into this problem. I have already enabled 
> *start_native_transport:
> true*
>
> And by this, I am trying to make a connection-
>
> private CassandraDatastaxConnection() {
>
> try{
> cluster =
> Cluster.builder().addContactPoint("localhost").build();
> session = cluster.connect("my_keyspace");
> } catch (NoHostAvailableException e) {
> throw new RuntimeException(e);
> }
> }
>
> And everytime it gives me the same exception-
>
> *com.datastax.driver.core.exceptions.NoHostAvailableException: All
> host(s) tried for query failed (tried: [localhost/127.0.0.1])*
>
> Any idea how to fix this problem?
>
> Thanks for the help.
> *
> *
>
>
>
>
> On Fri, Apr 19, 2013 at 6:41 AM, Abhijit Chanda  > wrote:
>
>> @Gabriel, @Wright: thanks, such a silly of me.
>>
>>
>> On Fri, Apr 19, 2013 at 6:48 PM, Keith Wright wrote:
>>
>>>  Did you enable the binary protocol in Cassandra.yaml?
>>>
>>> Abhijit Chanda  wrote:
>>>
>>>
>>>  Hi,
>>>
>>>  I have downloaded the CQL driver provided by Datastax using
>>> 
>>> com.datastax.cassandra
>>> cassandra-driver-core
>>> 1.0.0-beta2
>>> 
>>>
>>>  Then tried a sample program to connect to the cluster
>>>  Cluster cluster = Cluster.builder()
>>> .addContactPoints(db1)
>>>     .withPort(9160)
>>> .build();
>>>
>>>  But sadly its returning 
>>> c*om.datastax.driver.core.exceptions.NoHostAvailableException:
>>> All host(s) tried for query failed   *
>>> *
>>> *
>>> I am using cassandra 1.2.2
>>>
>>>  Can any one suggest me whats wrong with that.
>>>
>>>  And i am really sorry for posting  datastax java driver related
>>> question in this forum, can't find a better place for the instant reaction
>>>
>>>
>>>  -Abhijit
>>>
>>
>>
>>
>> --
>> -Abhijit
>>
>
>
>


-- 
-Abhijit


Re: com.datastax.driver.core.exceptions.InvalidQueryException using Datastax Java driver

2013-04-22 Thread Techy Teck
I am using correct keyspace name for that column family. I have verified
that as well.

Can I insert into Column Family (that I created from CLI mode) using
Datastax Java driver or not with Cassandra 1.2.3?


On Mon, Apr 22, 2013 at 5:05 AM, Internet Group wrote:

> It seems to me that you are not saying the keyspace of your column family
> 'profile'.
>
> Regards,
> Francisco.
>
>
> On Apr 20, 2013, at 9:56 PM, Techy Teck  wrote:
>
> I created my column family like this from the CLI-
>
>
> create column family profile
> with key_validation_class = 'UTF8Type'
> and comparator = 'UTF8Type'
> and default_validation_class = 'UTF8Type'
> and column_metadata = [
>   {column_name : account, validation_class : 'UTF8Type'}
>   {column_name : advertising, validation_class : 'UTF8Type'}
>   {column_name : behavior, validation_class : 'UTF8Type'}
>   {column_name : info, validation_class : 'UTF8Type'}
>   ];
>
>
> Now I was trying to insert into this column family using the Datastax Java
> driver-
>
>
> public void upsertAttributes(final String userId, final Map 
> attributes) {
>   
> String batchInsert = "INSERT INTO PROFILE(id, account, advertising, behavior, 
> info) VALUES ( '12345', 'hello11', 'bye2234', 'bye1', 'bye2') ";
>
>
>
>   
> CassandraDatastaxConnection.getInstance().getSession().execute(batchInsert);
>
>   }
>
> *
> *
> *I always get this exception-*
>
>
> com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured 
> columnfamily profile
>
>
> And by this way, I am trying to create connection/session initialization
> to Cassandra-
>
>
> private CassandraDatastaxConnection() {
>
>   try{
>   cluster = Cluster.builder().addContactPoint("localhost").build();
>   session = cluster.connect("my_keyspace");   
>   } catch (NoHostAvailableException e) {
>
>
>
>   throw new RuntimeException(e);
>   }
> }
>
>
> I am running Cassandra 1.2.3. And I am able to connect to Cassandra using
> the above code. The only problem I am facing is while inserting.
>
> Any idea why it is happening?
>
>
>


Re: com.datastax.driver.core.exceptions.InvalidQueryException using Datastax Java driver

2013-04-22 Thread Internet Group
It seems to me that you are not specifying the keyspace of your column family 
'profile'.

Regards,
Francisco.

On Apr 20, 2013, at 9:56 PM, Techy Teck  wrote:

> I created my column family like this from the CLI-
> 
> 
> create column family profile
> with key_validation_class = 'UTF8Type'
> and comparator = 'UTF8Type'
> and default_validation_class = 'UTF8Type'
> and column_metadata = [
>   {column_name : account, validation_class : 'UTF8Type'}
>   {column_name : advertising, validation_class : 'UTF8Type'}
>   {column_name : behavior, validation_class : 'UTF8Type'}
>   {column_name : info, validation_class : 'UTF8Type'}
>   ];
> 
> 
> Now I was trying to insert into this column family using the Datastax Java 
> driver-
> 
> 
> public void upsertAttributes(final String userId, final Map 
> attributes) {
>   
> String batchInsert = "INSERT INTO PROFILE(id, account, advertising, behavior, 
> info) VALUES ( '12345', 'hello11', 'bye2234', 'bye1', 'bye2') "; 
> 
> 
>   
> CassandraDatastaxConnection.getInstance().getSession().execute(batchInsert);
> 
>   }
> 
> I always get this exception-
> 
> 
> 
> com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured 
> columnfamily profile
> 
> 
> And by this way, I am trying to create connection/session initialization to 
> Cassandra-
> 
> 
> private CassandraDatastaxConnection() {
> 
>   try{
>   cluster = Cluster.builder().addContactPoint("localhost").build();
>   session = cluster.connect("my_keyspace");   
>   } catch (NoHostAvailableException e) {
> 
> 
>   throw new RuntimeException(e);
>   }
> }
> 
> I am running Cassandra 1.2.3. And I am able to connect to Cassandra using the 
> above code. The only problem I am facing is while inserting.
> 
> Any idea why it is happening?
> 



Re: com.datastax.driver.core.exceptions.InvalidQueryException using Datastax Java driver

2013-04-21 Thread Techy Teck
Can anyone help me out here?
What I want to know is whether I can insert into a column family (created
from the CLI) using the Datastax Java driver or not. Whenever I try to
insert into a column family created in CLI mode, I get the above exception.

But if I try to insert into a table created in cqlsh, the insert succeeds.

Any help will be appreciated.

I am running Cassandra 1.2.3



On Sat, Apr 20, 2013 at 5:56 PM, Techy Teck  wrote:

> I created my column family like this from the CLI-
>
>
> create column family profile
> with key_validation_class = 'UTF8Type'
> and comparator = 'UTF8Type'
> and default_validation_class = 'UTF8Type'
> and column_metadata = [
>   {column_name : account, validation_class : 'UTF8Type'}
>   {column_name : advertising, validation_class : 'UTF8Type'}
>   {column_name : behavior, validation_class : 'UTF8Type'}
>   {column_name : info, validation_class : 'UTF8Type'}
>   ];
>
>
> Now I was trying to insert into this column family using the Datastax Java
> driver-
>
>
> public void upsertAttributes(final String userId, final Map 
> attributes) {
>   
> String batchInsert = "INSERT INTO PROFILE(id, account, advertising, behavior, 
> info) VALUES ( '12345', 'hello11', 'bye2234', 'bye1', 'bye2') ";
>
>
>   
> CassandraDatastaxConnection.getInstance().getSession().execute(batchInsert);
>
>   }
>
> *
> *
> *I always get this exception-*
>
>
> com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured 
> columnfamily profile
>
>
> And by this way, I am trying to create connection/session initialization
> to Cassandra-
>
>
> private CassandraDatastaxConnection() {
>
>   try{
>   cluster = Cluster.builder().addContactPoint("localhost").build();
>   session = cluster.connect("my_keyspace");   
>   } catch (NoHostAvailableException e) {
>
>
>   throw new RuntimeException(e);
>   }
> }
>
>
> I am running Cassandra 1.2.3. And I am able to connect to Cassandra using
> the above code. The only problem I am facing is while inserting.
>
> Any idea why it is happening?
>
>


Re: Datastax Java Driver connection issue

2013-04-21 Thread aaron morton
Make sure that the server rpc_address is set to 127.0.0.1

Cheers

-
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 20/04/2013, at 1:47 PM, Techy Teck  wrote:

> I am also running into this problem. I have already enabled 
> start_native_transport: true
> 
> And by this, I am trying to make a connection-
> 
> private CassandraDatastaxConnection() {
> 
> try{
> cluster = Cluster.builder().addContactPoint("localhost").build();
> session = cluster.connect("my_keyspace");
> } catch (NoHostAvailableException e) {
> throw new RuntimeException(e);
> }
> }
> 
> And everytime it gives me the same exception-
> 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: [localhost/127.0.0.1])
> 
> Any idea how to fix this problem?
> 
> Thanks for the help.
> 
> 
> 
> 
> 
> 
> On Fri, Apr 19, 2013 at 6:41 AM, Abhijit Chanda  
> wrote:
> @Gabriel, @Wright: thanks, such a silly of me. 
> 
> 
> On Fri, Apr 19, 2013 at 6:48 PM, Keith Wright  wrote:
> Did you enable the binary protocol in Cassandra.yaml?
> 
> Abhijit Chanda  wrote:
> 
> Hi,
> 
> I have downloaded the CQL driver provided by Datastax using 
>
> com.datastax.cassandra
> cassandra-driver-core
> 1.0.0-beta2
> 
> 
> Then tried a sample program to connect to the cluster
> Cluster cluster = Cluster.builder()
> .addContactPoints(db1)
> .withPort(9160)
> .build();
> 
> But sadly its returning 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed   
> 
> I am using cassandra 1.2.2
> 
> Can any one suggest me whats wrong with that. 
> 
> And i am really sorry for posting  datastax java driver related question in 
> this forum, can't find a better place for the instant reaction 
> 
> 
> -Abhijit
> 
> 
> 
> -- 
> -Abhijit
> 



Re: Retrieve data from Cassandra database using Datastax java driver

2013-04-20 Thread Abhijit Chanda
You have the collection attributeNames, just iterate it:

    Iterator<Row> it = result.iterator();
    while (it.hasNext()) {
        Row row = it.next();
        for (String column : attributeNames) {
            attributes.put(column, row.getString(column));
        }
    }


Cheers


On Sun, Apr 21, 2013 at 10:24 AM, Techy Teck wrote:

> Thanks Dave for the suggestion. I have all my columns name in this
> collection-
>
>  *final Collection attributeNames*
> *
> *
> And all my results back in this resultset-
>
> *ResultSet result =
> CassandraDatastaxConnection.getSession().execute(query);*
> *
> *
> Now I need to store the column name and its corresponding value in the
> Below Map-
>
>   *Map attributes = new
> ConcurrentHashMap();*
> *
> *
> What's the best way to do that in this case?
>
> Thanks for the help.
>
>
>
>
>
>
> On Sat, Apr 20, 2013 at 9:36 PM, Dave Brosius wrote:
>
>>  getColumnDefinitions only returns meta data, to get the data, use the
>> iterator to navigate the rows
>>
>>
>> Iterator<Row> it = result.iterator();
>>
>> while (it.hasNext()) {
>>     Row r = it.next();
>> //do stuff with row
>>
>> }
>>
>> On 04/21/2013 12:02 AM, Techy Teck wrote:
>>
>>  I am working with Datastax java-driver. And I am trying to retrieve few
>> columns from the database basis on the input that is being passed to the
>> below method-
>>
>>
>>  public Map getAttributes(final String userId, final
>> Collection attributeNames) {
>>
>>  String query="SELECT " +attributeNames.toString().substring(1,
>> attributeNames.toString().length()-1)+ " from profile where id = '"+userId+
>> "';";
>>   CassandraDatastaxConnection.getInstance();
>>
>>  ResultSet result =
>> CassandraDatastaxConnection.getSession().execute(query);
>>
>> Map<String, String> attributes = new ConcurrentHashMap<String, String>();
>>   for(Definition def : result.getColumnDefinitions()) {
>>  //not sure how to put the columnName and columnValue that came back from
>> the database
>>  attributes.put(column name, column value);
>>  }
>>   return attributes;
>>  }
>>
>>  Now I got the result back from the database in *result*
>> *
>> *
>> Now how to put the colum name and column value that came back from the
>> database in a map?
>>
>>  I am not able to understand how to retrieve colum value for a
>> particular column in datastax java driver?
>>
>>  Any thoughts will be of great help.
>>
>>
>>
>


-- 
-Abhijit


Re: Retrieve data from Cassandra database using Datastax java driver

2013-04-20 Thread Techy Teck
Thanks Dave for the suggestion. I have all my columns name in this
collection-

 *final Collection attributeNames*
*
*
And all my results back in this resultset-

*ResultSet result =
CassandraDatastaxConnection.getSession().execute(query);*
*
*
Now I need to store the column name and its corresponding value in the
Below Map-

  *Map attributes = new
ConcurrentHashMap();*
*
*
What's the best way to do that in this case?

Thanks for the help.






On Sat, Apr 20, 2013 at 9:36 PM, Dave Brosius wrote:

>  getColumnDefinitions only returns meta data, to get the data, use the
> iterator to navigate the rows
>
>
> Iterator<Row> it = result.iterator();
>
> while (it.hasNext()) {
> Row r = it.next();
> //do stuff with row
>
> }
>
> On 04/21/2013 12:02 AM, Techy Teck wrote:
>
>  I am working with Datastax java-driver. And I am trying to retrieve few
> columns from the database basis on the input that is being passed to the
> below method-
>
>
>  public Map getAttributes(final String userId, final
> Collection attributeNames) {
>
>  String query="SELECT " +attributeNames.toString().substring(1,
> attributeNames.toString().length()-1)+ " from profile where id = '"+userId+
> "';";
>   CassandraDatastaxConnection.getInstance();
>
>  ResultSet result =
> CassandraDatastaxConnection.getSession().execute(query);
>
>  Map attributes = new ConcurrentHashMap();
>   for(Definition def : result.getColumnDefinitions()) {
>  //not sure how to put the columnName and columnValue that came back from
> the database
>  attributes.put(column name, column value);
>  }
>   return attributes;
>  }
>
>  Now I got the result back from the database in *result*
> *
> *
> Now how to put the colum name and column value that came back from the
> database in a map?
>
>  I am not able to understand how to retrieve colum value for a particular
> column in datastax java driver?
>
>  Any thoughts will be of great help.
>
>
>


Re: Retrieve data from Cassandra database using Datastax java driver

2013-04-20 Thread Dave Brosius
getColumnDefinitions only returns meta data, to get the data, use the 
iterator to navigate the rows



Iterator<Row> it = result.iterator();

while (it.hasNext()) {
Row r = it.next();
//do stuff with row
}
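Since the asker's end goal is a Map of column name to value, the iteration above can be combined with a per-column lookup. A driver-free sketch of that pattern (the driver's Row is stubbed as a plain Map here so the code runs standalone; all names are illustrative):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RowToMapSketch {
    // Stand-in for the driver's Row type: a row is modeled as a
    // column-name -> value map, so no Cassandra cluster is needed.
    static Map<String, String> toAttributes(List<Map<String, String>> rows,
                                            Collection<String> attributeNames) {
        Map<String, String> attributes = new HashMap<>();
        for (Map<String, String> row : rows) {       // mirrors it.hasNext()/it.next()
            for (String column : attributeNames) {   // mirrors row.getString(column)
                attributes.put(column, row.get(column));
            }
        }
        return attributes;
    }

    public static void main(String[] args) {
        List<Map<String, String>> rows = Arrays.asList(
            Map.of("account", "hello11", "info", "bye2"));
        System.out.println(toAttributes(rows, Arrays.asList("account", "info")));
    }
}
```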

On 04/21/2013 12:02 AM, Techy Teck wrote:
I am working with Datastax java-driver. And I am trying to retrieve 
few columns from the database basis on the input that is being passed 
to the below method-



public Map<String, String> getAttributes(final String userId, final
Collection<String> attributeNames) {

String query = "SELECT " + attributeNames.toString().substring(1,
attributeNames.toString().length() - 1) + " from profile where id = '"
+ userId + "';";

CassandraDatastaxConnection.getInstance();

ResultSet result =
CassandraDatastaxConnection.getSession().execute(query);

Map<String, String> attributes = new ConcurrentHashMap<String, String>();
for (Definition def : result.getColumnDefinitions()) {
// not sure how to put the columnName and columnValue that came back
// from the database
attributes.put(column name, column value);
}
return attributes;
}

Now I got the result back from the database in *result*.

Now how do I put the column name and column value that came back from the
database into a map?


I am not able to understand how to retrieve the column value for a
particular column with the Datastax java driver.


Any thoughts will be of great help.




Retrieve data from Cassandra database using Datastax java driver

2013-04-20 Thread Techy Teck
I am working with the Datastax java-driver, and I am trying to retrieve a few
columns from the database based on the input that is passed to the
method below:


public Map<String, String> getAttributes(final String userId, final
Collection<String> attributeNames) {

String query = "SELECT " + attributeNames.toString().substring(1,
attributeNames.toString().length() - 1) + " from profile where id = '" + userId +
"';";
 CassandraDatastaxConnection.getInstance();

ResultSet result = CassandraDatastaxConnection.getSession().execute(query);

Map<String, String> attributes = new ConcurrentHashMap<String, String>();
 for (Definition def : result.getColumnDefinitions()) {
 // not sure how to put the columnName and columnValue that came back from
 // the database
 attributes.put(column name, column value);
 }
 return attributes;
}

Now I got the result back from the database in *result*.

Now how do I put the column name and column value that came back from the
database into a map?

I am not able to understand how to retrieve the column value for a particular
column with the Datastax java driver.

Any thoughts will be of great help.
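As an aside, the SELECT string above leans on Collection.toString() plus substring() to strip the brackets, which is fragile. A small helper (a sketch; the `profile`/`id` layout is taken from the question) joins the names explicitly instead. Note that concatenating `userId` into the string invites CQL injection; a prepared statement with a bind variable is the safer route.

```java
import java.util.Arrays;
import java.util.Collection;

public class SelectBuilder {
    // Build "SELECT a, b from profile where id = '...';" by joining the
    // requested column names explicitly instead of munging toString().
    public static String buildSelect(Collection<String> columnNames, String userId) {
        StringBuilder sb = new StringBuilder("SELECT ");
        boolean first = true;
        for (String name : columnNames) {
            if (!first) {
                sb.append(", ");
            }
            sb.append(name);
            first = false;
        }
        sb.append(" from profile where id = '").append(userId).append("';");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildSelect(Arrays.asList("account", "info"), "12345"));
        // SELECT account, info from profile where id = '12345';
    }
}
```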


com.datastax.driver.core.exceptions.InvalidQueryException using Datastax Java driver

2013-04-20 Thread Techy Teck
I created my column family like this from the CLI:


create column family profile
with key_validation_class = 'UTF8Type'
and comparator = 'UTF8Type'
and default_validation_class = 'UTF8Type'
and column_metadata = [
  {column_name : account, validation_class : 'UTF8Type'},
  {column_name : advertising, validation_class : 'UTF8Type'},
  {column_name : behavior, validation_class : 'UTF8Type'},
  {column_name : info, validation_class : 'UTF8Type'}
  ];


Now I was trying to insert into this column family using the Datastax Java
driver:


public void upsertAttributes(final String userId, final Map<String, String> attributes) {

String batchInsert = "INSERT INTO PROFILE(id, account, advertising,
behavior, info) VALUES ( '12345', 'hello11', 'bye2234', 'bye1',
'bye2') ";

CassandraDatastaxConnection.getInstance().getSession().execute(batchInsert);

}

I always get this exception:


com.datastax.driver.core.exceptions.InvalidQueryException:
unconfigured columnfamily profile


And this is how I initialize the connection/session to Cassandra:


private CassandraDatastaxConnection() {

try{
cluster = Cluster.builder().addContactPoint("localhost").build();
session = cluster.connect("my_keyspace");   
} catch (NoHostAvailableException e) {

throw new RuntimeException(e);
}
}


I am running Cassandra 1.2.3, and I am able to connect to Cassandra using
the above code. The only problem I am facing is with inserting.

Any idea why it is happening?
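For reference, the CQL3 equivalent of the CLI definition above would look like the following (a sketch: the CLI block never names the row key column, so `id` is assumed from the INSERT statement):

```sql
CREATE TABLE profile (
    id text PRIMARY KEY,
    account text,
    advertising text,
    behavior text,
    info text
);
```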


Re: Datastax Java Driver connection issue

2013-04-19 Thread Techy Teck
I am also running into this problem. I have already enabled
*start_native_transport: true* in cassandra.yaml.

And this is how I am trying to make a connection:

private CassandraDatastaxConnection() {

try{
cluster =
Cluster.builder().addContactPoint("localhost").build();
session = cluster.connect("my_keyspace");
} catch (NoHostAvailableException e) {
throw new RuntimeException(e);
}
}

And every time it gives me the same exception:

*com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (tried: [localhost/127.0.0.1])*

Any idea how to fix this problem?

Thanks for the help.




On Fri, Apr 19, 2013 at 6:41 AM, Abhijit Chanda
wrote:

> @Gabriel, @Wright: thanks, such a silly of me.
>
>
> On Fri, Apr 19, 2013 at 6:48 PM, Keith Wright wrote:
>
>>  Did you enable the binary protocol in Cassandra.yaml?
>>
>> Abhijit Chanda  wrote:
>>
>>
>>  Hi,
>>
>>  I have downloaded the CQL driver provided by Datastax using
>> <dependency>
>>   <groupId>com.datastax.cassandra</groupId>
>>   <artifactId>cassandra-driver-core</artifactId>
>>   <version>1.0.0-beta2</version>
>> </dependency>
>>
>>  Then tried a sample program to connect to the cluster
>>  Cluster cluster = Cluster.builder()
>> .addContactPoints(db1)
>> .withPort(9160)
>> .build();
>>
>>  But sadly it is returning
>> *com.datastax.driver.core.exceptions.NoHostAvailableException:
>> All host(s) tried for query failed*
>> I am using cassandra 1.2.2
>>
>>  Can anyone suggest what is wrong with it?
>>
>>  And I am really sorry for posting a Datastax java driver related
>> question in this forum; I can't find a better place for an instant reaction.
>>
>>
>>  -Abhijit
>>
>
>
>
> --
> -Abhijit
>


Re: Datastax Java Driver connection issue

2013-04-19 Thread Abhijit Chanda
@Gabriel, @Wright: thanks, such a silly of me.


On Fri, Apr 19, 2013 at 6:48 PM, Keith Wright  wrote:

>  Did you enable the binary protocol in Cassandra.yaml?
>
> Abhijit Chanda  wrote:
>
>
>  Hi,
>
>  I have downloaded the CQL driver provided by Datastax using
> <dependency>
>   <groupId>com.datastax.cassandra</groupId>
>   <artifactId>cassandra-driver-core</artifactId>
>   <version>1.0.0-beta2</version>
> </dependency>
>
>  Then tried a sample program to connect to the cluster
>  Cluster cluster = Cluster.builder()
> .addContactPoints(db1)
> .withPort(9160)
> .build();
>
>  But sadly it is returning
> *com.datastax.driver.core.exceptions.NoHostAvailableException:
> All host(s) tried for query failed*
> I am using cassandra 1.2.2
>
>  Can anyone suggest what is wrong with it?
>
>  And I am really sorry for posting a Datastax java driver related question
> in this forum; I can't find a better place for an instant reaction.
>
>
>  -Abhijit
>



-- 
-Abhijit
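For the record, the fix hinted at above: the Java driver speaks the native (binary) protocol, which listens on `native_transport_port` (9042 by default) and requires `start_native_transport: true` in cassandra.yaml; port 9160 is the legacy Thrift port, which the driver cannot connect to. A sketch of the corrected builder (the `db1` contact point and keyspace are taken from the thread; this fragment needs a driver jar and a running cluster):

```java
Cluster cluster = Cluster.builder()
        .addContactPoint("db1")
        .withPort(9042)   // native protocol port; 9160 is Thrift
        .build();
Session session = cluster.connect("my_keyspace");
```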

