Jeff and Jon, thanks for your outstanding work; looking forward to the two updates.
^^
zha...@easemob.com
From: Jeff Jirsa
Date: 2016-09-28 13:01
To: user@cassandra.apache.org
Subject: Re: ask for help about exmples of Data Types the document shows
Jon Haddad opened a ticket (presumably based off this email) -
https://issues.apache.org/jira/browse/CASSANDRA-12718
Will get it corrected shortly.
From: "zha...@easemob.com"
Reply-To: "user@cassandra.apache.org"
Date: Tuesday,
Hi Ben, you are right; I copied the statement from the document without checking it
carefully. That's very helpful, thanks a lot!
By the way, is there any way to report this to the document author and get the
document updated? I think it may be confusing for a newcomer like me.
zha...@easemob.com
My best guess is that you need to remove the quotes from around the zip
values (i.e. change it to zip: 20500 rather than zip: ‘20500’), as zip is
defined as an int.
Cheers
Ben
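To make Ben's point concrete, here is a minimal sketch of the kind of statement the documentation example uses. The type, table, and column names here are assumptions for illustration, not the exact docs schema:

```sql
-- Hypothetical schema: zip is declared as an int inside the UDT.
CREATE TYPE IF NOT EXISTS address (street text, city text, zip int);
CREATE TABLE IF NOT EXISTS users (
    login text PRIMARY KEY,
    addresses map<text, frozen<address>>
);

-- zip: '20500' (quoted) would be a text literal and fail against an int
-- field; the bare number is what the int type expects:
INSERT INTO users (login, addresses) VALUES ('jsmith', {
    'home': { street: '1600 Pennsylvania Ave NW', city: 'Washington', zip: 20500 }
});
```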
On Wed, 28 Sep 2016 at 14:38 zha...@easemob.com wrote:
Hi, Ben Slater, thank you very much for your reply!
My Cassandra version is 3.7, so I think there must be something I
misunderstand about the frozen type. I added a comma between } and ‘work’; the
result is as below. Is there some special form for the frozen type?
cqlsh:counterks> INSERT
Hi,
I think you are right about the typo in (1). For (2), I think you’re
missing a comma between } and ‘work’ so the JSON is invalid.
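The missing comma Ben describes sits between the two map entries. A sketch of the corrected literal, using the same hypothetical address/users names as above (the actual docs schema may differ):

```sql
-- Map entries must be separated by commas; the docs snippet was missing
-- the one after the 'home' entry:
INSERT INTO users (login, addresses) VALUES ('jsmith', {
    'home': { street: '1600 Pennsylvania Ave NW', city: 'Washington', zip: 20500 },
    'work': { street: '4075 Wilson Blvd', city: 'Arlington', zip: 22203 }
});
```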
I think reading this JIRA
https://issues.apache.org/jira/browse/CASSANDRA-7423 that the change
requiring a UDT as part of a collection to be explicitly marked as
Hi everyone, I'm learning Cassandra now and have some problems with the
"Data Types" documentation. I don't know where to report this or ask for help, so
I'm very sorry if this mail bothers you.
In the chapter The Cassandra Query Language (CQL)/Data Types
Ok... Thanks for the reply...
I'm going to retry nodetool rebuild with following changes as you said
net.ipv4.tcp_keepalive_time=60 net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_intvl=10
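One common way to make these keepalive values survive a reboot on Linux is a sysctl drop-in file; the path below is an assumption, and `sysctl -w` for each key works for a one-off change:

```
# /etc/sysctl.d/99-cassandra-keepalive.conf (hypothetical path)
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 10
```

Load it with `sudo sysctl -p /etc/sysctl.d/99-cassandra-keepalive.conf`.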
Hope these changes will be enough on the new node where I'm running
'nodetool rebuild' and hope NOT
Yeah this is likely to be caused by idle connections being shut down, so
you may need to update your tcp_keepalive* and/or network/firewall settings.
2016-09-27 15:29 GMT-03:00 laxmikanth sadula :
Hi paul,
Thanks for the reply...
I'm getting following streaming exceptions during nodetool rebuild in
c*-2.0.17
04:24:49,759 StreamSession.java (line 461) [Stream
#5e1b7f40-8496-11e6-8847-1b88665e430d] Streaming error occurred
java.io.IOException: Connection timed out
at
What type of streaming timeout are you getting? Do you have a stack trace?
What version are you on?
See more information about tuning tcp_keepalive* here:
https://docs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html
2016-09-27 14:07 GMT-03:00 laxmikanth sadula
@Paulo Motta
Even we are facing Streaming timeout exceptions during 'nodetool rebuild' ,
I set streaming_socket_timeout_in_ms to 86400000 (24 hours) as suggested in
datastax blog - https://support.datastax.com/hc/en-us/articles/206502913-
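Since the setting name ends in `_in_ms`, the value is in milliseconds, so a 24-hour timeout works out as follows:

```python
# streaming_socket_timeout_in_ms takes milliseconds, so 24 hours is:
timeout_ms = 24 * 60 * 60 * 1000
print(timeout_ms)  # 86400000
```
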
Hi,
I'm trying to add new data center - DC3 to existing c*-2.0.17 cluster with
2 data centers DC1, DC2 with replication DC1:3 , DC2:3 , DC3:3.
I'm getting following exception repeatedly on new nodes after I run
'nodetool rebuild'.
*DEBUG
Didn't know about (2), and I actually have a time drift between the nodes.
Thanks a lot Paulo!
Regards,
Stefano
On Thu, Sep 22, 2016 at 6:36 PM, Paulo Motta
wrote:
> There are a couple of things that could be happening here:
> - There will be time differences between
That is a very large heap size for C* - most installations I’ve seen are
running in the 8-12GB heap range. Apparently G1GC is better for larger
heaps, so that may help. However, you are probably better off digging a bit
deeper into what is using all that heap. Massive IN clause lists? Massive
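For reference, switching a large-heap node to G1GC is usually done in cassandra-env.sh. This is a sketch only; the exact flags and their placement vary by Cassandra version, and the values below are assumptions to adapt, not recommendations:

```
# cassandra-env.sh (sketch; adjust sizes and flags to your version)
MAX_HEAP_SIZE="16G"
HEAP_NEWSIZE=""                                  # not used with G1
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500"    # hypothetical pause target
```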
Hi, all
I have a C* cluster with 12 nodes. My Cassandra version is 2.1.14. Just now
two nodes crashed, and the client fails to export data with read consistency QUORUM.
The following are logs of failed nodes:
ERROR [SharedPool-Worker-159] 2016-09-26 20:51:14,124 Message.java:538 -
Unexpected