Today I configured incremental backups on a test node which already has some
data on it, and I found that backups are not created for SSTables created by
a compaction:
mddione@life:~/src/works/orange/Cassandra$ sudo find /var/lib/cassandra/data/one_cf
/var/lib/cassandra/data/one_cf
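For what it's worth, this matches how incremental backups work as I understand it (my reading, not stated in the thread): on flush, each new SSTable is hard-linked into a backups/ directory under the keyspace data dir, while files written by compaction get no such link. A toy filesystem sketch of that mechanism, using a temp dir and made-up file names:

```shell
# Simulated keyspace data dir; on a real node this would be
# /var/lib/cassandra/data/<keyspace>. File names are made up.
DATA=$(mktemp -d)
mkdir -p "$DATA/backups"

# What a memtable flush does: write the SSTable, then hard-link it
# into backups/ (when incremental_backups is enabled).
touch "$DATA/one_cf-hd-1-Data.db"
ln "$DATA/one_cf-hd-1-Data.db" "$DATA/backups/"

# What a compaction does: write a new SSTable, with no backup link.
touch "$DATA/one_cf-hd-2-Data.db"

ls "$DATA/backups"    # only the flushed SSTable appears
```

Which would explain why the compacted SSTables never show up under backups/.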
We used to have a nice test cluster with 2 nodes and everything was peachy.
At some point we (re)added a third node, which seems to work all right. But
then we tried to delete one CF and requery it, and we got this:
root@pnscassandra03:~# cqlsh -3
[cqlsh 2.2.0 | Cassandra 1.1.2hebex1 | CQL spec
From: mdione@orange.com [mailto:mdione@orange.com]
Seems we've got
From: mdione@orange.com [mailto:mdione@orange.com]
In particular, I'm thinking of a restore like this:
* the app does something stupid.
* (if possible) I stop writes to the KS or CF.
In fact, given that I'm about to restore the KS/CF to an old state, I can
safely do this:
*
From: Sylvain Lebresne [mailto:sylv...@datastax.com]
2) copy the snapshot sstable in the right place and call the JMX method
loadNewSSTables() (in the column family MBean, which means you need to
do that per-CF).
How does this affect the contents of the CommitLogs? I mean, I imagine
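Sylvain's step 2 maps onto nodetool refresh in recent 1.x versions, which triggers loadNewSSTables() without touching JMX by hand. A sketch of the flow, with the copy simulated in a temp dir so it's safe to dry-run; keyspace, CF, and snapshot names are made up:

```shell
# Stand-in for /var/lib/cassandra/data/<keyspace>; names are made up.
DATA=$(mktemp -d)
mkdir -p "$DATA/snapshots/before_incident"
touch "$DATA/snapshots/before_incident/one_cf-hd-1-Data.db"  # fake SSTable

# Step: copy the snapshot SSTables back into the live data directory.
cp "$DATA"/snapshots/before_incident/* "$DATA"/

ls "$DATA"/one_cf-hd-1-Data.db
# On the real node, then: nodetool refresh <keyspace> <cf>
# (which calls loadNewSSTables() on that CF's MBean, once per CF).
```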
According to this post[1], one's supposed to start C* with
-Dcassandra.renew_counter_id=true as one of the steps of restoring a
counter column family. I have two questions related to this:
a) how does that setting affect C* in a non-restoring start?
b) if it's bad (for some value of that), should I stop C* + remove the
setting + start C* after
One of the scenarios I have to take into account for a small Cassandra
cluster (N=4) is restoring the data back in time. I will have full backups
for 15 days, and it's possible that I will need to restore, say, the data
from 10 days ago (don't ask, I'm not going into the details of why).
From: Pierre-Yves Ritschard [mailto:p...@spootnik.org]
Snapshots and restores are great for point-in-time recovery. There's no
particular side-effect if you're willing to accept the downtime.
Are you sure? Doesn't the system KS keep any book-keeping about the
KSs/CFs? For instance, schema changes, etc.?
From: aaron morton [mailto:aa...@thelastpickle.com]
The secondary index will be built using the compaction features;
you can check the progress with nodetool compactionstats.
When they are built, the output from describe will list the built indexes.
Built indexes: []
From: mdione@orange.com [mailto:mdione@orange.com]
Should I understand that when the indexes are finished being built a) the
«Built indexes» list should be empty and b) there should be no pending
compactions? Because that's exactly what I have now but I still can't use
the column
From: mdione@orange.com [mailto:mdione@orange.com]
[default@avatars] describe HBX_FILE;
ColumnFamily: HBX_FILE
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted
I understand the error message, but I don't understand why I get it.
Here's the CF:
cqlsh:avatars> describe columnfamily HBX_FILE;
CREATE COLUMNFAMILY HBX_FILE (
  KEY blob PRIMARY KEY,
  HBX_FIL_DATE text,
  HBX_FIL_LARGE ascii,
  HBX_FIL_MEDIUM ascii,
  HBX_FIL_SMALL ascii,
  HBX_FIL_STATUS
From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Works for me on trunk... what version are you using?
Beh, I forgot that detail: 1.0.9.
--
Marcos Dione
SysAdmin
Astek Sud-Est
for FT/TGPF/OPF/PORTAIL/DOP/HEBEX @ Marco Polo
04 97 12 62 45 - mdione@orange.com
From: mdione@orange.com [mailto:mdione@orange.com]
I also forgot to mention: the index was recently created, after the
database was
From: phuduc nguyen [mailto:duc.ngu...@pearson.com]
How are you passing a blob or binary stream to the CLI? It sounds like
you're passing in a representation of a binary stream as ascii/UTF8
which will create the problems you describe.
So this is only a limitation of Cassandra-cli?
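If the root cause is what Duc describes, the CLI-side workaround is to feed BytesType values as hex strings rather than raw bytes. A small sketch of the encoding a client would do before pasting the literal into cassandra-cli (the helper name is mine, not from the thread):

```python
# Hedged sketch: cassandra-cli parses BytesType values as hex, so raw
# image bytes must be hex-encoded first before being used in the CLI.
import binascii

def to_cli_hex(raw: bytes) -> str:
    """Encode raw bytes as the hex literal cassandra-cli can parse."""
    return binascii.hexlify(raw).decode("ascii")

png_header = b"\x89PNG\r\n\x1a\n"    # first bytes of a PNG file
print(to_cli_hex(png_header))        # -> 89504e470d0a1a0a
```

Client libraries such as Hector or phpcassa send byte[] directly (as Erik notes below), so no such encoding is needed there.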
We're building a database to store the avatars for our users in three
sizes. Thing is, we planned to use the blob field with a BytesType
validator, but if we try to inject the binary data as read from the image
file, we get a "cannot parse as hex bytes" error. The same happens if we
convert
From: Erik Forkalsud [mailto:eforkals...@cj.com]
Which client are you using? With Hector or straight thrift, you should
be able to store byte[] directly.
So far, cassandra-cli only, but we're also testing phpcassa with CQL
support[1].
[1] https://github.com/thobbs/phpcassa