Hi Paolo,
a) was there any large insertion done?
b) are there a lot of files in the saved_caches directory?
c) would you consider increasing HEAP_NEWSIZE to, say, 1200M?
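For reference, in Cassandra 2.x the heap sizes are set in conf/cassandra-env.sh. A minimal sketch of option (c); the MAX_HEAP_SIZE value here is only a hypothetical placeholder, and only the HEAP_NEWSIZE figure comes from the suggestion above:

```shell
# conf/cassandra-env.sh -- sketch; MAX_HEAP_SIZE is a hypothetical placeholder,
# only HEAP_NEWSIZE=1200M comes from the suggestion above
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="1200M"
```

(If you set one of these explicitly, cassandra-env.sh expects you to set both.)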
Regards,
Mike Yeap
On Fri, May 27, 2016 at 12:39 AM, Paolo Crosato <
paolo.cros...@targaubiest.com> wrote:
> Hi,
>
> we
Thanks Alex. I was able to verify the issue by adding the rt file from the jre
and it did not throw exceptions.
-Tony
On Thursday, May 26, 2016 2:25 PM, Alex Popescu wrote:
Tony,
This thread will have much better chances to get answers and feedback if
posted on the Java driver mailing list:
https://groups.google.com/a/lists.datastax.com/forum/m/#!forum/java-driver-user
On Thursday, May 26, 2016, Tony Anecito wrote:
> Okay, think I understand
Okay, I think I understand the issue. It seems the netty/Cassandra driver expects
javax.security.cert.X509Certificate to be readily available, which it is for
normal standalone Java clients, but it is not for the server side. That class is
in the JRE's rt.jar. But since the server side does not
The time until the first streaming failure occurs varies from a few hours to
more than a day.
We also experience slowness problems with the destination node on Amazon.
Rebuild is slow. That may also contribute to the problem.
Unfortunately we only kept the logs of the source node and there is no
other error
If I were you, I'd do both. If you're trying to build a multi-tenanted
system, it's probably a better idea to include tenant ID as the partition
key of every cross-tenant table. You can easily run Cassandra with a 4 gig
heap, but I'd never plan on doing so for a production use except for very
Unfortunately, read immediately after write is another antipattern in
*all* eventually
consistent databases (though reading and writing both at quorum should
effectively produce immediate consistency).
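The quorum remark above can be checked with simple arithmetic: with N replicas, any read of R replicas and any write of W replicas must intersect in at least one replica exactly when R + W > N. A minimal sketch of that condition (class and method names are mine, not from any driver API):

```java
// Sketch: with N replicas, a read of R and a write of W are guaranteed to
// overlap in at least one replica iff R + W > N. That overlap is why reading
// and writing both at QUORUM behaves like immediate consistency.
public class QuorumOverlap {
    static boolean overlapGuaranteed(int n, int r, int w) {
        return r + w > n;
    }

    public static void main(String[] args) {
        // RF=3: QUORUM is 2, so QUORUM reads after QUORUM writes overlap
        System.out.println(overlapGuaranteed(3, 2, 2)); // true
        // RF=3: ONE reads after ONE writes carry no such guarantee
        System.out.println(overlapGuaranteed(3, 1, 1)); // false
    }
}
```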
The upcoming Change Data Capture feature that Michael Laing linked might be
a useful feature to
Hi,
we are running a cluster of 4 nodes, each with the same sizing: 2
cores, 16G of RAM and 1TB of disk space.
On every node we are running Cassandra 2.0.17, Oracle Java version
"1.7.0_45", and CentOS 6 with kernel version 2.6.32-431.17.1.el6.x86_64.
Two nodes are running just fine, the
If it's a single node cluster, then it's not consistency level related as
all consistencies are essentially the same. This looks instead like a
usage pattern that's entirely driven by Titan's read pattern which appears
to be lots of tiny reads (probably not too surprising for a graph
database).
Try:
unfilteredRowIterator.next().clustering().toString(update.metadata())
To get the raw values, you can use:
unfilteredRowIterator.next().clustering().getRawValues()
On Thu, May 26, 2016 at 7:25 AM, Siddharth Verma <
verma.siddha...@snapdeal.com> wrote:
> Hi Sam,
> Sorry, I couldn't
On Thu, May 26, 2016 at 4:36 AM, Adarsh Kumar wrote:
>
> 1). Is there any other way to configure the number of buckets along with
> bloom_filter_fp_chance, to avoid this exception?
>
No, it's hard coded, although we could theoretically hard code it to
support a higher number of
How long does it take after you trigger the rebuild process before it fails?
Was there any error before [STREAM-IN-/192.168.1.141] on the destination
node or [STREAM-OUT-/172.31.22.104] on the source node? Those are showing
consequences of the root error. In particular what were the last messages
Hi Sam,
Sorry, I couldn't understand.
I am already using
StringBuilder next = new StringBuilder();
UnfilteredRowIterator unfilteredRowIterator = partition.unfilteredIterator();
while (unfilteredRowIterator.hasNext()) {
    next.append(unfilteredRowIterator.next().toString()).append("\001");
}
Is there another way to access it?
If you just want the string representations you can just use
Unfiltered::clustering to get the Clustering instance for each Unfiltered,
then call its toString(CFMetadata), passing update.metadata().
On Thu, May 26, 2016 at 12:01 PM, Siddharth Verma <
verma.siddha...@snapdeal.com> wrote:
>
I tried again with setting streaming_socket_timeout_in_ms to 1 day on all
nodes and after having upgraded to 2.1.14.
My tcp_keepalive_time is set to 2 hours and tcp_keepalive_probes to 9.
That should be OK, I believe.
I get streaming error again, shortly after starting the rebuild process.
Tried the following as well. Still no result.
update.metadata().clusteringColumns().toString() -> gets the clustering column
names
update.columns().toString() -> gets the non-primary-key
columns
update.partitionKey().toString() -> gets the token range
Any help would be
Hi,
I did an analysis of some bloom_filter_fp_chance values to compare the
size of the bloom filter created with different settings. For a 1 million
row dataset I got the following data:
*1).* For bloom_filter_fp_chance = .01 (default): *1.20 MB (1,259,720
bytes)*
*2).* For bloom_filter_fp_chance =
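The ~1.2 MB figure above lines up with the standard Bloom filter sizing formula, m = -n * ln(p) / (ln 2)^2 bits for n items at false-positive rate p. This is only an approximation of what Cassandra allocates, not its exact implementation; the class and method names below are mine:

```java
// Standard Bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits for n items
// at false-positive rate p. This approximates Cassandra's allocation; it is
// not the exact implementation.
public class BloomSize {
    static long bits(long n, double p) {
        return Math.round(-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    public static void main(String[] args) {
        // n = 1,000,000 items at p = 0.01 gives roughly 9.6 million bits,
        // i.e. about 1.2 MB, consistent with the measurement quoted above
        long m = bits(1_000_000, 0.01);
        System.out.println(m + " bits = " + (m / 8) + " bytes");
    }
}
```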
Eric,
thanks for the hint. Titan 0.5.4 uses ONE, not LOCAL_ONE. I can try and patch
the version. Given that it is a single node cluster for the time being, would
your remarks apply to that particular setup?
Thanks again!
Ralf
> On 24.05.2016, at 19:18, Eric Stevens
Hi Alain,
Thanks for your response :)
> A replication factor of 3 for a 3 node cluster does not balance the load:
> since you ask for 3 copies of the data (rf=3) on 3 nodes cluster,
> each node will have a copy of the data and you are overloading all nodes.
> Maybe you should try with an rf =
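Lowering the replication factor is done per keyspace; a sketch, where 'my_keyspace' is only a placeholder name and SimpleStrategy is assumed for a single-datacenter cluster:

```sql
-- Sketch: 'my_keyspace' is a placeholder keyspace name. With rf = 2 on a
-- 3-node cluster, each node stores roughly 2/3 of the data instead of all of it.
ALTER KEYSPACE my_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};
```

After lowering the rf, running `nodetool cleanup` on each node drops the replicas it no longer owns.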
Hi,
I am creating a trigger in Cassandra:
---
public class GenericAuditTrigger implements ITrigger
{
private static SimpleDateFormat dateFormatter = new SimpleDateFormat