Hi,
I have to store some files (Images, documents, etc.) for my users in a
webapp. I use Cassandra for all of my data and I would like to know if it
is a good idea to store these files as blobs in a Cassandra CF.
Are there any contraindications, or special things to know, to achieve this?
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Storing-photos-images-docs-etc-td6078278.html
Of significance from that link (which was great until Feeling Lucky
was removed...), found via a Google search for the terms
"cassandra large files" + I'm Feeling Lucky:
First, thanks everyone for the input. Appreciate it. The number
crunching would already have been completed, and all statistics per
game defined, and inserted into the appropriate CF/row/cols ...
So, that being said, Solandra appears to be the right way to go ...
except, this would require that
Let's be more precise in saying that this all depends on the
expected size of the documents. If you know that the documents
will be in the few-hundred-kilobyte range on average and
no more than a few megabytes (say 5MB, even though there is
no magic number), then storing them as blobs will work.
store your images / documents / etc. somewhere and reference them
in Cassandra. That's the consensus that's been bandied about on this
list quite frequently
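The consensus above can be sketched in plain Python. Everything here (the 5MB cutoff, the `make_file_columns` helper, the dict standing in for an external store) is illustrative, not a real Cassandra or pycassa API:

```python
import hashlib

BLOB_LIMIT = 5 * 1024 * 1024   # the ~5MB soft ceiling mentioned above

def make_file_columns(data, external_store):
    """Return the columns to write for one file: the raw bytes inline if
    the file is small enough, otherwise a pointer into an external store."""
    if len(data) <= BLOB_LIMIT:
        return {"content": data, "storage": "inline"}
    key = hashlib.sha1(data).hexdigest()   # content-addressed external key
    external_store[key] = data             # stand-in for S3 / a filesystem / etc.
    return {"content_ref": key, "storage": "external"}

ext = {}
assert make_file_columns(b"tiny document", ext)["storage"] == "inline"
big = make_file_columns(b"x" * (BLOB_LIMIT + 1), ext)
assert big["storage"] == "external" and big["content_ref"] in ext
```

Small files go straight into the column; anything larger is stored elsewhere and only the reference lands in Cassandra.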
Thank you for your answers.
I think I have to detail my configuration. On every server of my cluster, I
deploy:
- a Cassandra node
- a
BytesType vs. UTF8Type: which is better in performance?
I assume Bytes would be faster to compare... but how much faster is it?
For a very large data set, will there be a significant difference?
I'd love to use UTF8 and be able to read values from the CLI. :-)
*IF* it doesn't degrade performance too much.
On Wed, Jun 22, 2011 at 11:19 AM, Jeesoo Shin bsh...@gmail.com wrote:
BytesType vs. UTF8Type: which is better in performance?
I assume Bytes would be faster to compare... but how much faster is it?
They don't differ at all as far as comparison is concerned. They actually use the
exact same function to
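One way to see why a single compare function can serve both types: for valid UTF-8, comparing the raw bytes yields the same order as comparing Unicode code points, so a byte-wise compare is already correct for UTF8 values. A small illustration (plain Python, nothing Cassandra-specific):

```python
# For valid UTF-8, lexicographic byte order equals code-point order,
# so one byte-wise compare function works for both BytesType and UTF8Type.
names = ["Zebra", "apple", "zoo", "ähnlich"]

by_codepoint = sorted(names)                               # Unicode code-point order
by_bytes = sorted(names, key=lambda s: s.encode("utf-8"))  # raw byte order

assert by_codepoint == by_bytes == ["Zebra", "apple", "zoo", "ähnlich"]
```

The practical difference is validation, not comparison: UTF8Type rejects byte strings that don't decode as UTF-8.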
Well solandra is running Cassandra so you can use Cassandra as you do today,
but index some of the data in solr.
On Jun 22, 2011, at 3:41 AM, Sasha Dolgy sdo...@gmail.com wrote:
First, thanks everyone for the input. Appreciate it. The number
crunching would already have been completed, and
2011/6/22 aaron morton aa...@thelastpickle.com
I think I have to detail my configuration. On every server of my cluster, I
deploy:
- a Cassandra node
- a Tomcat instance
- the webapp, deployed on Tomcat
- Apache httpd, in front of Tomcat with mod_jakarta
You will have a bunch of
Is there a limitation on the data type of a column value (not column
name) in cassandra?
I'm saving data using a pycassa client, for a UTF8 column family, and
I get an error when I try saving integer data values.
Only when I convert the values to strings can I save the data.
Looking at the pycassa
There is a comparator type (for the name) and a validation type (for the value).
If you have set the validation to be UTF8 you can only store data that is valid
UTF8 there.
The default validation is BytesType so it should accept everything unless
otherwise specified.
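Aaron's point can be mimicked in a few lines of plain Python; the `validate_utf8` helper below is hypothetical, just to show the behavior pycassa is surfacing:

```python
import struct

def validate_utf8(value):
    """Mimic what a UTF8Type validator does: reject byte strings that are
    not valid UTF-8. Hypothetical helper, not pycassa/Cassandra code."""
    value.decode("utf-8")          # raises UnicodeDecodeError on invalid bytes
    return value

# An integer has to be serialized to text before a UTF8 validator accepts it:
assert validate_utf8(str(42).encode("utf-8")) == b"42"

# Raw packed-integer bytes can fail validation outright:
try:
    validate_utf8(struct.pack(">q", -1))   # eight 0xff bytes: invalid UTF-8
    raised = False
except UnicodeDecodeError:
    raised = True
assert raised
```

With a BytesType validator, both forms would be accepted; the UTF8 validator is what forces the string conversion.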
I cannot tell anything
OK, got some results (below).
2 nodes, one on localhost, the second on the LAN, reading with
ConsistencyLevel.ONE, buffer_size=512 rows (that's how many rows
pycassa will get on one connection; then it will use the last row_id as
the start row for the next query).
Query types:
1) get_range - just added limit of
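The paging behavior described above (fetch a page of rows, then reuse the last key as the start of the next range query) can be sketched without a cluster. `get_range` here is a pure-Python stand-in over key-sorted rows, not the real pycassa call:

```python
def get_range(rows, start_key, limit):
    """Pure-Python stand-in for a key-ordered get_range call."""
    keys = sorted(k for k in rows if k >= start_key)
    return keys[:limit]

def iterate_all(rows, buffer_size=512):
    """Fetch buffer_size rows, then reuse the last key of each page as the
    (inclusive) start of the next query, skipping the duplicated row."""
    page = get_range(rows, "", buffer_size)
    while page:
        yield from page
        page = get_range(rows, page[-1], buffer_size + 1)[1:]

rows = {"row%04d" % i: None for i in range(1300)}
assert list(iterate_all(rows)) == sorted(rows)   # pages of 512, 512, 276 rows
```

The `+ 1` / `[1:]` pair handles the last key being returned twice, once as the tail of one page and once as the head of the next.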
If the Cassandra JVM is down, Tomcat and Httpd will continue to handle
requests, and Pelops will redirect these requests to another Cassandra node
on another server (maybe I am wrong about this assertion).
I was thinking of the server being turned off / broken / rebooting /
disconnected from
I woke up this morning to all 4 of my cassandra instances reporting
they were down in my cluster. I quickly started them all, and everything
seems fine. I'm doing a postmortem now, but it appears they all OOM'd at
roughly the same time, which was not reported in any cassandra log, but I
We had a similar problem last month and found that the OS eventually
killed the Cassandra process on each of our nodes ... I've
upgraded to 0.8.0 from 0.7.6-2 and have not had the problem since, but
I do see consumption levels rising consistently from one day to the
next on each node.
In this case, the load balancer has to detect (or is configured) that the
server is down and does not route request to this one anymore.
2011/6/22 aaron morton aa...@thelastpickle.com
If the Cassandra JVM is down, Tomcat and Httpd will continue to handle
requests. And Pelops will redirect
Well, I managed to run 50 days before an OOM, so any changes I make will
take a while to test ;-) I've seen the GCInspector log lines appear
periodically in my logs, but I didn't see a correlation with the crash.
I'll read the instructions on how to properly do a rolling upgrade today,
practice
Yes ... this is because it was the OS that killed the process, and it
wasn't related to Cassandra crashing. Reviewing our monitoring, we
saw that memory utilization was pegged at 100% for days and days
before the process was finally killed because 'apt' was fighting for resources.
At least, that's as far as I
Thank you Vivek. I'll start playing with the clients today. Thank you very
much!
Best,
Daniel
On Tue, Jun 21, 2011 at 9:33 AM, Vivek Mishra vivek.mis...@impetus.co.in wrote:
Hi Daniel,
Just saw your email regarding kundera download.
Kundera snapshot jar is available at:
I was wondering about that; I figured that /var/log/kern indicated the OS was
killing java (versus an internal OOM).
The nodetool repair is interesting. My application never deletes, so I
didn't bother running it. But, if that helps prevent OOMs as well, I'll add
it to the crontab
(plan A is still
Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
to run compact, but it's not doing anything. There are over 69 sstables
now, read performance is horrible, and it's taking an insane amount of
space. Maybe I don't quite get how the new per bucket stuff works, but I
think this
On Tue, Jun 21, 2011 at 10:58 PM, AJ a...@dude.podzone.net wrote:
On 6/21/2011 3:36 PM, Stephen Connolly wrote:
writes are not atomic.
the first side can succeed at quorum, and the second side can fail
completely... you'll know it failed, but now what... you retry, still
failed... erh
The CLI is posted, I assume that's the defaults (I didn't touch anything).
The machines basically just run cassandra (and standard Centos5 background
stuff).
will
On Wed, Jun 22, 2011 at 9:49 AM, Jake Luciani jak...@gmail.com wrote:
Are you running with the default heap settings? what else is
I'm running 0.7.4 from rpm (riptano). If I do a yum upgrade, it tries to
do 0.7.6. To get 0.8.x I have to install apache-cassandra08. But that
is going to install two copies.
Is there a semi-official way of properly upgrading to 0.8 via rpm?
--
Will Oberman
Civic Science, Inc.
3030
Hello,
I was wondering if anyone had architecture thoughts of creating a simple
bank account program that does not use transactions. I think creating an
example project like this would be a good thing to have for a lot of the
discussions that pop up about transactions and Cassandra (and
Is C* suitable for storing customer account (financial) data, as well as
billing, payroll, etc? This is a new company so migration is not an
issue... starting from scratch.
If you need only store them - then yes, but if you require transactions spanning
multiple rows or column families,
I'd implement the concept of a bank account using counters in a
counter column family. one row per account ... each column for
transaction data and one column for the actual balance.
just so long as you use whole numbers ... no one needs pennies anymore.
-sd
On Wed, Jun 22, 2011 at 4:18 PM,
but you can store the -details- of a transaction as json data and do
some sanity checks to validate that the data you currently have stored
aligns with the recorded transactions. maybe a batch job run every 24
hours ...
On Wed, Jun 22, 2011 at 4:19 PM, Oleg Anastastasyev olega...@gmail.com
On Wed, Jun 22, 2011 at 10:19 AM, Oleg Anastastasyev olega...@gmail.com wrote:
Is C* suitable for storing customer account (financial) data, as well as
billing, payroll, etc? This is a new company so migration is not an
issue... starting from scratch.
If you need only store them - then
Sasha,
How would you deal with a transfer between accounts in which only one half
of the operation was successfully completed?
Thank you.
Trevor
On Wed, Jun 22, 2011 at 10:23 AM, Sasha Dolgy sdo...@gmail.com wrote:
I'd implement the concept of a bank account using counters in a
counter
Good day everyone,
a few days ago it was suggested here that I use the Hector Object Mapper, but it
seems that the code hasn't been upgraded to support Cassandra 0.8 yet. There is a
reference for it here: https://github.com/rantav/hector/wiki/Versioning. The
project's
I would still maintain a record of the transaction ... so that I can
do analysis afterwards to determine if/when problems occurred ...
On Wed, Jun 22, 2011 at 4:31 PM, Trevor Smith tre...@knewton.com wrote:
Sasha,
How would you deal with a transfer between accounts in which only one half
of the
Right -- that's the part that I am more interested in fleshing out in this
post.
Must one have background jobs checking the integrity of all transactions at
some time interval? This gets hairy pretty quickly with bank transactions (one
unrolled transaction could cause many others to become
I just did a remove then install, and it seems to work.
For those of you out there with JMX issues, the default port moved from 8080
to 7199 (including the internal default for nodetool). I was confused about
why nodetool ring would fail on some boxes and not others. I had to add -p
depending on
I have a question about auto_bootstrap. When I originally brought up the
cluster, I did:
-seed with auto_boot = false
-1,2,3 with auto_boot = true
Now that I'm doing a rolling upgrade, do I set them all to auto_boot =
true? Or does the seed stay false? Or should I mark them all false? I
have
Doesn't matter. auto_bootstrap only applies to first start ever.
On Wed, Jun 22, 2011 at 10:48 AM, William Oberman
ober...@civicscience.com wrote:
I have a question about auto_bootstrap. When I originally brought up the
cluster, I did:
-seed with auto_boot = false
-1,2,3 with auto_boot =
The way compaction works, x same-sized files are merged into a new SSTable.
This repeats itself and the SSTables get bigger and bigger.
So what is the upper limit? If you are not deleting stuff fast enough,
wouldn't the SSTable sizes grow indefinitely?
I ask because we have some rather
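A toy model makes the growth pattern concrete: size-tiered compaction merges same-sized tables into one bigger table, which then sits in a higher size bucket waiting for peers. This is an illustrative simulation (power-of-two buckets, merging 4 at a time), not Cassandra's actual bucketing logic:

```python
from math import log2

def compact(sstables, threshold=4):
    """Toy size-tiered compaction: whenever `threshold` tables land in the
    same power-of-two size bucket, merge them into one table of their
    combined size. Repeat until no bucket is full."""
    changed = True
    while changed:
        changed = False
        buckets = {}
        for size in sstables:
            buckets.setdefault(int(log2(size)), []).append(size)
        for bucket in buckets.values():
            if len(bucket) >= threshold:
                for size in bucket[:threshold]:
                    sstables.remove(size)
                sstables.append(sum(bucket[:threshold]))
                changed = True
    return sorted(sstables)

# 16 equal flushes cascade all the way up into a single table 16x the size;
# without deletes shrinking the merged output, sizes only ever grow.
assert compact([1] * 16) == [16]
```

Fewer than `threshold` tables in a bucket just sit there, which is why a handful of very large tables can linger untouched.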
This is almost certainly caused by the weird connection process JMX
uses. JMX actually uses a two-connection process; the second connection
is determined by the 'JVM_OPTS=$JVM_OPTS
-Djava.rmi.server.hostname=public name' setting in your
cassandra-env.sh configuration file.
By default that setting
On Wed, Jun 22, 2011 at 12:35 PM, Jonathan Colby
jonathan.co...@gmail.com wrote:
The way compaction works, x same-sized files are merged into a new
SSTable. This repeats itself and the SSTables get bigger and bigger.
So what is the upper limit? If you are not deleting stuff fast
Yes, if you are not deleting fast enough they will grow. This is not
specifically a cassandra problem; /var/log/messages has the same issue.
There is a JIRA ticket about having a maximum size for SSTables, so they
always stay manageable
You fall into a small trap when you force major compaction
Thanks for the explanation. I'm still a bit skeptical.
So if you really needed to control the maximum size of compacted SSTables, you
need to delete data at such a rate that the new files created by compaction are
less than or equal to the sum of the segments being merged.
Is anyone else
So the take-away is try to avoid major compactions at all costs! Thanks Ed
and Eric.
On Jun 22, 2011, at 7:00 PM, Edward Capriolo wrote:
Yes, if you are not deleting fast enough they will grow. This is not
specifically a cassandra problem; /var/log/messages has the same issue.
There is
There are a couple of pre-canned scripts for local multi-node clusters,
particularly:
https://github.com/pcmanus/ccm
On Tue, Jun 21, 2011 at 6:35 AM, Sasha Dolgy sdo...@gmail.com wrote:
Personally speaking, I do not run JMX on 8080, and never have. The
tools, like cassandra-cli and nodetool
Hi Trevor,
I hope to post on my practical experiences in this area soon - we rely
heavily on complex serialized operations in FightMyMonster.com. Probably the
most simple serialized operation we do is updating nugget balances when, for
example, there has been a trade of monsters.
Currently we
On Wed, Jun 22, 2011 at 10:00 AM, Jonathan Colby
jonathan.co...@gmail.com wrote:
Thanks for the explanation. I'm still a bit skeptical.
So if you really needed to control the maximum size of compacted SSTables,
you need to delete data at such a rate that the new files created by
Second, compacting such large files is an IO killer. What can be tuned
other than compaction_threshold to help optimize this and prevent the files
from getting too big?
Thanks!
Just a personal implementation note - I make heavy use of column TTL,
so I have very specifically tuned
Speaking purely from my personal experience, I haven't found cassandra
optimal for storing big fat rows. Even if it is only 100s of KB I didn't
find cassandra suitable for it. In my case I am looking at 400 writes + 400
reads per sec, growing 20%-30% every year, with file sizes from 70k-300k. What
I
I would not say avoid major compactions at all cost.
In the old days 0.6.5 IIRC the only way to clear tombstones was a major
compaction. The nice thing about major compaction is if you have a situation
with 4 SSTables at 2GB each (that is total 8GB). Under normal write
conditions it could be
The current release of Hector Object Mapper works fine with the most
recent Hector and Apache Cassandra 0.8.0 releases:
http://repo2.maven.org/maven2/me/prettyprint/hector-object-mapper/1.1-01/
On Wed, Jun 22, 2011 at 10:10 AM, Daniel Colchete d...@cloud3.tc wrote:
Good day everyone,
a few
Thanks Jonathan. I'm sure it's been true for everyone else as well, but the
rolling upgrade seems to have worked like a charm for me (other than the JMX
port # changing initial confusion).
One minor thing that is probably particular to my case: when I removed the old
package, it unlinked my symlink
Thanks Ryan. Done that :) 1 TB is the striped size. We might look into
bigger disks for our blades.
On Jun 22, 2011, at 7:09 PM, Ryan King wrote:
On Wed, Jun 22, 2011 at 10:00 AM, Jonathan Colby
jonathan.co...@gmail.com wrote:
Thanks for the explanation. I'm still a bit skeptical.
unsubscribe
From: William Oberman [mailto:ober...@civicscience.com]
Sent: Wednesday, June 22, 2011 1:46 PM
To: user@cassandra.apache.org
Subject: Re: rpm from 0.7.x - 0.8?
Thanks Jonathan. I'm sure it's been true for everyone else as well, but the
rolling upgrade seems to have worked
Awesome tip on TTL. We can really use this as a catch-all to make sure all
columns are purged based on time. Fits our use-case well. I forgot this
feature existed.
On Jun 22, 2011, at 7:11 PM, Eric tamme wrote:
Second, compacting such large files is an IO killer. What can be tuned
According to the README.txt in examples/bmt BinaryMemtable is being deprecated.
What's the recommended way to do bulk loading?
Cheers,
Steve
Awesome, thanks!
-Original Message-
From: Jeremy Hanna [mailto:jeremy.hanna1...@gmail.com]
Sent: Wednesday, June 22, 2011 3:08 PM
To: user@cassandra.apache.org
Subject: Re: bulk load
This ticket's outcome replaces what BMT was supposed to do:
Heya!
I know I should probably be able to figure this out on my own, but...
The Cassandra Munin plugins (all of them) define in their
storageproxy_latency.conf the following (this is from a 0.6 config):
read_latency.jmxObjectName org.apache.cassandra.db:type=StorageProxy
I'm planning on using Cassandra as a product's core data store, and it is
imperative that it never goes down or loses data, even in the event of a
data center failure. This uptime requirement (five nines: 99.999% uptime)
w/ WAN capabilities is largely what led me to choose Cassandra over other
On Wed, Jun 22, 2011 at 2:24 PM, Les Hazlewood l...@katasoft.com wrote:
I'm planning on using Cassandra as a product's core data store, and it is
imperative that it never goes down or loses data, even in the event of a
data center failure. This uptime requirement (five nines: 99.999% uptime)
Just to be clear:
I understand that resources like [1] and [2] exist, and I've read them. I'm
just wondering if there are any 'gotchas' that might be missing from that
documentation that should be considered and if there are any recommendations
in addition to these documents.
Thanks,
Les
[1]
On 4/9/2011 7:52 PM, aaron morton wrote:
My understanding of what they did with locking (based on the examples)
was to achieve a level of transaction isolation
http://en.wikipedia.org/wiki/Isolation_(database_systems)
I think the
I understand that every environment is different and it always 'depends' :)
But recommending settings and techniques based on an existing real
production environment (like the user's suggestion to run nodetool repair as
a regular cron job) is always a better starting point for a new Cassandra
To be precise, you made n requests for non-existent keys, got n negative
responses, and BloomFilterFalsePositives also went up by n?
On 06/21/2011 11:06 PM, Preston Chang wrote:
Hi,all:
I have a problem with the bloom filter. When I ran a test which tried to get
some nonexistent keys, it
On 06/22/2011 08:53 AM, Sasha Dolgy wrote:
Yes ... this is because it was the OS that killed the process, and
wasn't related to Cassandra crashing. Reviewing our monitoring, we
saw that memory utilization was pegged at 100% for days and days
before it was finally killed because 'apt' was
Implement monitoring and be proactive... that will stop you waking up to a
big surprise. I'm sure there were symptoms leading up to all 4 nodes going
down. Willing to wager that each node went down at different times and not
all went down at once...
On Jun 22, 2011 11:50 PM, Les Hazlewood
Sadly, they all went down within minutes of each other.
Sent from my iPhone
On Jun 22, 2011, at 6:16 PM, Sasha Dolgy sdo...@gmail.com wrote:
Implement monitoring and be proactive... that will stop you waking up
to a big surprise. I'm sure there were symptoms leading up to all 4
nodes going
I think Sasha's idea is worth studying more. Here is a supporting read
referenced in the O'Reilly Cassandra book that talks about alternatives
to 2-phase commit and synchronous transactions:
http://www.eaipatterns.com/ramblings/18_starbucks.html
If it can be done without locks and the
http://www.twitpic.com/5fdabn
http://www.twitpic.com/5fdbdg
i do love a good graph. two of the weekly memory utilization graphs
for 2 of the 4 servers from this ring... week 21 was a nice week ...
the week before 0.8.0 went out proper. since then, bumped up to 0.8
and have seen a steady
On 06/22/2011 05:33 PM, Les Hazlewood wrote:
Just to be clear:
I understand that resources like [1] and [2] exist, and I've read them. I'm
just wondering if there are any 'gotchas' that might be missing from that
documentation that should be considered and if there are any recommendations
it will probably be better to denormalize and store
some precomputed data
Yes, if you know there are queries you need to serve it is better to support
those directly in the data model.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
Do all of the reductions in Used on that graph correspond to node restarts?
My Zabbix for reference: http://img194.imageshack.us/img194/383/2weekmem.png
On 06/22/2011 06:35 PM, Sasha Dolgy wrote:
http://www.twitpic.com/5fdabn
http://www.twitpic.com/5fdbdg
i do love a good graph. two of
Yes, each one corresponds with taking a node down for various
reasons. I think more people should show their graphs; it's great.
Hoping Oberman has some, so we can see what his look like...
On Thu, Jun 23, 2011 at 12:40 AM, Chris Burroughs
chris.burrou...@gmail.com wrote:
Do all of the
I have a couple of questions regarding the coordination of Cassandra nodetool
snapshots with Amazon EBS snapshots as part of a Cassandra backup/restore
strategy.
Background: I have a cluster running in EC2. Its nodes are configured like so:
* Instance type: m1.xlarge
* Cassandra commit log
Setting them to 2 and 2 means compaction can only ever compact 2 files at a
time, so it will be worse off.
Let's try the following:
- restore the compaction settings to the default 4 and 32
- run `ls -lah` in the data dir and grab the output
- run `nodetool flush`; this will trigger a minor
I'm just double-checking, but when using NTS, is it required to specify
ALL the data centers in the strategy_options attribute?
IOW, I do NOT want replication to ALL data centers; only two of the
three. So, if my property file snitch describes all of the existing
data centers and nodes as:
Committing to that many 9s is going to be impossible since, as far as I
know, no internet service provider will SLA you more than 2 9s. You cannot
have more uptime than your ISP.
On Wednesday, June 22, 2011, Chris Burroughs chris.burrou...@gmail.com wrote:
On 06/22/2011 05:33 PM, Les Hazlewood
you have to use multiple data centers to really deliver 4 or 5 9's of service
On Wed, Jun 22, 2011 at 7:09 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
Committing to that many 9s is going to be impossible since, as far as I
know, no internet service provider will SLA you more than 2 9s. You
[1] http://www.datastax.com/docs/0.8/operations/index
[2] http://wiki.apache.org/cassandra/Operations
Well, if they knew some secret gotcha, the dutiful cassandra operators of
the world would update the wiki.
As I am new to the Cassandra community, I don't know how 'dutifully' this is
Check the list here
http://wiki.apache.org/cassandra/JmxGotchas
I *think* the jmx server tells the client to connect back on another host/port.
Hope that helps.
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 22 Jun 2011, at 21:02,
On Wed, Jun 22, 2011 at 4:11 PM, Peter Lin wool...@gmail.com wrote:
you have to use multiple data centers to really deliver 4 or 5 9's of
service
We do, hence my question, as well as my choice of Cassandra :)
Best,
Les
In my opinion 5 9s don't matter. It's the number of impacted customers. You
might be down during peak for 5 minutes causing 1000s of customer turn-aways,
while you might be down during the night causing only a few customer turn-aways.
There is no magic bullet. It's all about learning and improving. You will
http://wiki.apache.org/cassandra/FAQ#unsubscribe
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 23 Jun 2011, at 06:02, Carey Hollenbeck wrote:
unsubscribe
From: William Oberman [mailto:ober...@civicscience.com]
Sent: Wednesday,
so having multiple data centers is step 1 of 4/5 9's.
I've worked on some services that had 3-4 9's SLAs. Getting there is
really tough, as others have stated. You have to have auditing built into
your service, capacity metrics, capacity planning, some kind of
real-time monitoring, staff to respond to
Forget the 5 9's - I apologize for even writing that. It was my shorthand
way of saying 'this can never go down'. I'm not asking for philosophical
advice - I've been doing large scale enterprise deployments for over 10
years. I 'get' the 'it depends' and 'do your homework' philosophy.
All I'm
Start with reading the comments in cassandra.yaml and
http://wiki.apache.org/cassandra/Operations
As far as I know there is no comprehensive list for performance tuning, more
specifically common settings applicable to everyone. For the most part issues
revolve
Atomic on a single machine yes.
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 23 Jun 2011, at 09:42, AJ wrote:
On 4/9/2011 7:52 PM, aaron morton wrote:
My understanding of what they did with locking (based on the examples) was
to
I have architected, built and been responsible for systems that support 4-5
9s for years. This discussion is not about how to do that generally. It
was intended to be about concrete techniques that have been found valuable
when deploying Cassandra in HA environments beyond what is documented in
Yep, that was [2] on my existing list. Thanks very much for actually
addressing my question - it is greatly appreciated!
If anyone else has examples they'd like to share (like their own cron
techniques, or JVM settings and why, etc), I'd love to hear them!
Best regards,
Les
On Wed, Jun 22,
1. Is it feasible to run directly against a Cassandra data directory restored
from an EBS snapshot? (as opposed to nodetool snapshots restored from an EBS
snapshot).
I don't have experience with EBS snapshots, but I've never been a fan of OS
level snapshots that are not coordinated with
Les Hazlewood wrote:
I have architected, built and been responsible for systems that support
4-5
9s for years.
So have most of us. But probably by now it should be clear that no
technology can provide concrete recommendations. They can only provide what
might be helpful which varies from
That looks like a packaging bug. The package manually creates the
commitlog/data/saved_caches directories under the /var/lib/cassandra/
directory. It really doesn't need to though, since as long as it sets
the permissions correctly on /var/lib/cassandra/ those directories
will get created
Thanks Aaron!
On 6/22/2011 5:25 PM, aaron morton wrote:
Atomic on a single machine yes.
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 23 Jun 2011, at 09:42, AJ wrote:
On 4/9/2011 7:52 PM, aaron morton wrote:
My understanding of
Quorum reads/writes guarantee consistency. But, when a keyspace spans
multiple data centers, do local quorum reads/writes also guarantee
consistency? I'm thinking maybe not, if two data centers get partitioned.
Thanks!
On Wed, Jun 22, 2011 at 4:35 PM, mcasandra mohitanch...@gmail.com wrote:
might be helpful which varies from env to env. That's why I suggest look at
the comments in cassandra.yaml and see which are applicable in your
scenario. I learn something new everytime I read it.
Yep, and this was
LOCAL_QUORUM guarantees consistency in the local data center only. Other
replica nodes in the same DC and other DCs not part of the QUORUM will be
eventually consistent. If you want to ensure consistency across DCs you can
use EACH_QUORUM, but keep in mind the latency involved, assuming the DCs are not
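This reduces to the standard overlap rule: a read is guaranteed to see the latest write only when write replicas plus read replicas exceed total replicas (W + R > N). A small sketch (plain arithmetic, no Cassandra API):

```python
def quorum(n):
    """Replicas required for a quorum over n replicas."""
    return n // 2 + 1

def overlaps(w, r, n):
    """A read is guaranteed to intersect the latest write iff W + R > N."""
    return w + r > n

rf_per_dc, dcs = 3, 2
n = rf_per_dc * dcs              # 6 replicas cluster-wide

# Global QUORUM: 4 writers + 4 readers out of 6 must share a replica.
assert overlaps(quorum(n), quorum(n), n)

# LOCAL_QUORUM write in DC1, LOCAL_QUORUM read in DC2: 2 + 2 <= 6, so the
# two sets can be disjoint and the read may be stale (e.g. under a partition).
assert not overlaps(quorum(rf_per_dc), quorum(rf_per_dc), n)
```

EACH_QUORUM restores the overlap in every data center at the cost of cross-DC latency on each operation.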
Hi Les,
I wanted to offer a couple thoughts on where to start and strategies for
approaching development and deployment with reliability in mind.
One way that we've found to more productively think about the reliability of
our data tier is to focus our thoughts away from a concept of uptime or
I think that Les's question was reasonable. Why *not* ask the community for the
'gotchas'?
Whether the info is already documented or not, it could be an opportunity to
improve the documentation based on users' perception.
The "you just have to learn" responses are fair also, but that reminds me
Hi Scott,
First, let me say that this email was amazing - I'm always appreciative of
the time that anyone puts into mailing list replies, especially ones as
thorough, well-thought and articulated as this one. I'm a firm believer
that these types of replies reflect a strong and durable
Hi Thoku,
You were able to more concisely represent my intentions (and their
reasoning) in this thread than I was able to do so myself. Thanks!
On Wed, Jun 22, 2011 at 5:14 PM, Thoku Hansen tho...@gmail.com wrote:
I think that Les's question was reasonable. Why *not* ask the community for
On 6/22/2011 5:56 PM, mcasandra wrote:
LOCAL_QUORUM guarantees consistency in the local data center only. Other
replica nodes in the same DC and other DCs not part of the QUORUM will be
eventually consistent. If you want to ensure consistency across DCs you can
use EACH_QUORUM but keep in mind
On 6/22/2011 6:50 PM, AJ wrote:
On 6/22/2011 5:56 PM, mcasandra wrote:
LOCAL_QUORUM guarantees consistency in the local data center only. Other
replica nodes in the same DC and other DCs not part of the QUORUM will be
eventually consistent. If you want to ensure consistency across DCs
you can