I thought I'd follow up on this again, a long time later.
We gave up on the Dockerised version of Riak.
But I notice we're now getting an awful lot of these incomplete_hint errors on
the regular, non-Docker cluster.
We had a sudden power failure in that server room recently, so there would
have b
Joe,
TS records in a given table all have the same structure and are stored
and retrieved as single objects (in the Riak KV sense); the backend cannot
introspect them to extract only certain fields.
Full records are read from the backend and delivered, in chunks,
to the coordinator node (the node
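A toy model of the flow described above: the backend yields whole records, and only the coordinator projects out the requested fields. The record shape, function names, and chunk size here are invented for illustration; this is not Riak TS code.

```python
from itertools import islice

# Toy model: the backend returns FULL records; field selection
# happens only at the coordinator. All names here are illustrative.

RECORDS = [
    {"id": "a", "eventtime": i, "field2": i * 2, "data": b"\x00" * 8}
    for i in range(5)
]

def backend_scan():
    # The backend cannot introspect records: it yields them whole.
    yield from RECORDS

def chunks(iterable, size):
    # Deliver records in fixed-size chunks, as to a coordinator.
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def coordinator_select(fields, chunk_size=2):
    # Full records arrive in chunks; projection happens only here.
    out = []
    for chunk in chunks(backend_scan(), chunk_size):
        out.extend({f: r[f] for f in chunk_rec.keys() & set(fields)}
                   if False else {f: r[f] for f in fields}
                   for r in chunk for chunk_rec in [r])
    return out

print(coordinator_select(["id", "field2"]))
```

Under this model, every byte of `data` crosses the backend boundary even when the query never asks for it, which is the cost the reply below disputes.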
I'm not 100% certain, but I do not believe that is the case. Part of the reason
for structured data is efficient retrieval. I believe the data is read, but only
the selected data leaves the LevelDB backend; unselected data never leaves
LevelDB, so there's no overhead when passing data from level to
Suppose I have the following table in RiakTS:
CREATE TABLE T1 (
  id VARCHAR NOT NULL,
  eventtime TIMESTAMP NOT NULL,
  field2 SINT64,
  data BLOB NOT NULL,
  PRIMARY KEY ((id, QUANTUM(eventtime, 365, 'd')), id)
)
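For context on the key above: QUANTUM(eventtime, 365, 'd') partitions rows into 365-day spans by timestamp, so rows for the same id within one span are co-located. A toy sketch of that bucketing arithmetic, assuming epoch-millisecond timestamps and simple floor division (the actual Riak TS implementation may differ):

```python
# Toy sketch of quantum bucketing -- NOT Riak TS source code.
# Assumes epoch-millisecond timestamps and floor division into
# fixed 365-day spans, mirroring QUANTUM(eventtime, 365, 'd').

MS_PER_DAY = 24 * 60 * 60 * 1000

def quantum_bucket(ts_ms: int, days: int = 365) -> int:
    """Return the partition index for a timestamp (hypothetical model)."""
    return ts_ms // (days * MS_PER_DAY)

# Two timestamps a few months apart fall in the same 365-day span,
# so rows keyed (id, bucket) land on the same partition.
a = quantum_bucket(1_500_000_000_000)  # mid-2017
b = quantum_bucket(1_510_000_000_000)  # ~4 months later
print(a == b)  # → True
```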
Assume the BLOB field is close to the max size for a
Is there a Kafka Connect (https://www.confluent.io/product/connectors/)
connector for RiakTS?
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com