ssd2/data/KeyspaceMetadata/x-x/lb-26203-big-Data.db >
>> /dev/null"
>> If you get an error message, it's probably a hardware issue.
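[The truncated command above looks like a plain read test. A fuller sketch of the same idea — the data directory path is an assumption, adjust to your layout:]

```shell
# Stream every SSTable data file to /dev/null; a failing disk usually
# surfaces as an I/O error here (and in dmesg). Path is illustrative.
DATA_DIR="${DATA_DIR:-/var/lib/cassandra/data}"
find "$DATA_DIR" -name '*-Data.db' -print0 2>/dev/null |
while IFS= read -r -d '' f; do
  cat "$f" > /dev/null || echo "READ ERROR: $f"
done
```

[A clean pass prints nothing; any "READ ERROR" line is a file worth checking against dmesg.]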
>>
>> - Erik -
>>
>> --
>> *From:* Philip Ó Condúin
>> *Sent:* Thursday, August 8, 2019 09:58
>> *To:* user@cassa
5a-11e9-8920-9f72868b8375] Sending
completed merkle tree to /x.x.x.x for KeyspaceMetadata.CF_CcIndex
ERROR 21:30:35 Stopping RPC server
INFO 21:30:35 Stop listening to thrift clients
ERROR 21:30:35 Stopping native transport
INFO 21:30:35 Stop listening for CQL clients
On Thu, 8 Aug 2019 a
older CentOS versions. You'll want to
> be using a version > 4.15.
>
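[A quick way to check where a node stands relative to that cutoff — the 4.15 threshold is taken from the advice above, and `sort -V` does the version-aware comparison:]

```shell
# Returns success if $2 is strictly newer than $1 (version-aware compare).
newer_than() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]
}

# Strip the distro suffix (e.g. "-1062.el7.x86_64") before comparing.
cur=$(uname -r | cut -d- -f1)
if newer_than 4.15 "$cur"; then
  echo "kernel $cur is newer than 4.15"
else
  echo "kernel $cur is 4.15 or older - consider upgrading"
fi
```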
> On Thu, Aug 8, 2019 at 9:31 AM Philip Ó Condúin
> wrote:
>
>> *@Jeff *- If it was hardware, that would explain it all, but do you think
>> it's possible to have every server in the cluster
> scrub tool) so as to limit the spread of corruption – right?
>
> Just curious to know – you’re not using the lz4/default compressor for all
> tables; there must be some reason for that.
>
> *From:* Philip Ó Condúin [mailto:philipocond...@gmail.com]
> *Sent:*
not
sure where to go with this.
Any help would be very much appreciated before I lose the last bit of hair
I have on my head.
Kind Regards,
Phil
On Wed, 7 Aug 2019 at 20:51, Nitan Kainth wrote:
> Repair during an upgrade has caused corruption too.
>
> Also, dropping and adding columns
Hi All,
I am currently experiencing multiple datafile corruptions across most nodes
in my cluster; there seems to be no pattern to the corruption. I'm
starting to think it might be a bug. We're using Cassandra 2.2.13.
Without going into detail about the issue I just want to confirm something.
C
Hi All,
I currently have one node in a DC. I am trying to add a second node into
the cluster which is in a different DC.
Obviously, Cassandra on node 1 will need to be able to talk to node 2 over
port 7000 and therefore the firewall rules will need to be correct.
I have been told by the team re
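[One way to confirm the firewall rules from node 1 before starting Cassandra — the addresses below are placeholders, and bash's /dev/tcp redirection avoids needing nc or telnet installed:]

```shell
# Probe a TCP port using bash's /dev/tcp special file; "reachable" means
# the firewall passed the connection and something accepted it.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} blocked or closed"
  fi
}

# check_port 10.0.0.2 7000   # inter-node (storage) traffic - placeholder IP
# check_port 10.0.0.2 9042   # CQL clients, if needed across DCs
```

[Note that "blocked or closed" cannot distinguish a firewall drop from nothing listening, so test while node 2's Cassandra is up.]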
> I'm not too sure what suits you the best.
>
> C*heers,
> ---
> Alain Rodriguez - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> Le mer. 24 oct. 2018 à 12:46, Phi
Hi All,
I have a problem that I'm trying to work out and can't find anything online
that may help me.
I have been asked to delete 4K records from a Column Family that has a
total of 1.8 million rows. I have been given an Excel spreadsheet with a
list of the 4K PRIMARY KEY numbers to be deleted.
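[One common approach is to export the spreadsheet to CSV and generate the DELETE statements from it. A sketch — keyspace "my_ks", table "my_cf" and key column "id" are placeholders for the real names:]

```shell
# Sample one-column key file; in practice this is the CSV export of the
# spreadsheet's PRIMARY KEY column.
printf '101\n102\n103\n' > keys.csv

# Turn each non-empty line into a DELETE statement.
awk -F',' 'NF { printf "DELETE FROM my_ks.my_cf WHERE id = %s;\n", $1 }' keys.csv > deletes.cql

head -n 1 deletes.cql   # -> DELETE FROM my_ks.my_cf WHERE id = 101;

# Review deletes.cql, then apply it in one batch:
# cqlsh -f deletes.cql
```

[Remember each DELETE writes a tombstone, so 4K deletes in one go is fine, but the rows only disappear from disk after compaction past gc_grace_seconds.]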
s to list jvm processes.
> See https://issues.apache.org/jira/browse/CASSANDRA-9242 for detail.
>
> You can work around by doing what Riccardo said.
> On Tue, Sep 18, 2018 at 9:41 PM Philip Ó Condúin
> wrote:
> >
> > Hi Riccardo,
> >
> > Yes that works for me:
a-env.sh
> you can connect using jmxterm by issuing 'open 127.0.0.1:7199'. Would
> that work for you?
>
> HTH,
>
>
>
> On Tue, Sep 18, 2018 at 2:00 PM, Philip Ó Condúin <
> philipocond...@gmail.com> wrote:
>
>> Further info:
>>
>> I w
ing up the JVM for Cassandra for some reason.
Can someone point me in the right direction? Are there settings in the
cassandra-env.sh file I need to amend so jmxterm can find the Cassandra JVM?
I'm not finding much about it on Google.
Thanks,
Phil
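[If the cause is the CASSANDRA-9242 change linked later in the thread (adding -XX:+PerfDisableSharedMem, which stops jps-style process discovery), no cassandra-env.sh change is needed — you can connect by explicit address instead. Paraphrased from a 3.11 install; check your own copy:]

```shell
# Inside jmxterm, skip process discovery and connect by address
# (7199 is Cassandra's default JMX port, bound to localhost by default):
#   $ java -jar jmxterm-1.0-alpha-4-uber.jar
#   $> open 127.0.0.1:7199

# The relevant stock cassandra-env.sh section looks roughly like this:
LOCAL_JMX=yes                       # "yes" = JMX reachable from localhost only
JMX_PORT="7199"
if [ "$LOCAL_JMX" = "yes" ]; then
  JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.local.port=$JMX_PORT"
else
  JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
fi
```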
On Tue, 18 Sep 2018 at 12:09, Philip Ó Condúin
wrote:
>
Hi All,
I need a little advice. I'm trying to access the JMX terminal using
*jmxterm-1.0-alpha-4-uber.jar* with a very simple default install of C*
3.11.3
I keep getting the following:
[cassandra@reaper-1 conf]$ java -jar jmxterm-1.0-alpha-4-uber.jar
Welcome to JMX terminal. Type "help" for ava