bigger data density with Cassandra 4.0?

2018-08-25 Thread onmstester onmstester
I've noticed this new feature of 4.0: Streaming optimizations 
(https://cassandra.apache.org/blog/2018/08/07/faster_streaming_in_cassandra.html)
 Does this mean that we could have much higher data density with Cassandra 4.0 
(fewer problems than 3.x)? I mean > 10 TB of data on each node without worrying 
about node join/remove? This is something needed for write-heavy applications 
that do not read a lot. When you have something like 2 TB of data per day and need 
to keep it for 6 months, it would be a waste of money to purchase 180 servers 
(even commodity or cloud). IMHO, even if 4.0 fixes the problem with streaming/joining 
a new node, compaction is still another evil for a big node, but we could 
tolerate that somehow.

Sent using Zoho Mail
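
For a rough sense of the numbers above, a back-of-the-envelope sketch (assumptions:
~2 TB/day ingest, ~180 days of retention, replication factor and compression ignored
since they are not stated in the message):

// Hypothetical capacity arithmetic for the workload described above.
// Assumptions: ~2 TB/day ingest, ~6 months (~180 days) retention,
// replication and compression ignored for simplicity.
public class NodeCountSketch {
    public static void main(String[] args) {
        double tbPerDay = 2.0;
        int retentionDays = 180;
        double totalTb = tbPerDay * retentionDays;   // ~360 TB of raw data
        double[] perNodeDensities = {2.0, 10.0};     // TB per node
        for (double tbPerNode : perNodeDensities) {
            long nodes = (long) Math.ceil(totalTb / tbPerNode);
            System.out.printf("%.0f TB total at %.0f TB/node -> ~%d nodes%n",
                    totalTb, tbPerNode, nodes);
        }
    }
}

At ~2 TB/node that is ~180 nodes (the figure above); at ~10 TB/node it drops to ~36.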

Re: benefits of HBase over Cassandra

2018-08-25 Thread daemeon reiydelle
Messenger can allow for some losses in degenerate infra cases, for a
given infra footprint. There is also some ability to scale up faster as
demand increases, handle peak loads, etc. It therefore becomes a use-case-specific
optimization. Also, HBase can run in Hadoop more easily, leveraging blobs
(HDFS), etc. So, it depends on your use case.

<==>
Be the reason someone smiles today.
Or the reason they need a drink.
Whichever works.

*Daemeon C.M. Reiydelle*

*email: daeme...@gmail.com *
*San Francisco 1.415.501.0198/London 44 020 8144 9872/Skype
daemeon.c.m.reiydelle*



On Fri, Aug 24, 2018 at 10:40 PM Vitaliy Semochkin 
wrote:

> Thank you very much for the fast reply, Dinesh!
> I was under the impression that with tunable consistency Cassandra can
> act as CP (in case it is needed), e.g. by setting ALL on both reads
> and writes.
> Do you agree with this statement?
>
> PS Are there any other benefits of HBase you have found? I'd be glad
> to hear a list of use cases.
>
>
>
> On Sat, Aug 25, 2018 at 12:44 AM dinesh.jo...@yahoo.com.INVALID
>  wrote:
> >
> > I've worked with both databases. They're suitable for different
> > use-cases. If you look at the CAP theorem, HBase is CP while Cassandra is
> > AP. If we talk about a specific use case, it'll be easier to discuss.
> >
> > Dinesh
> >
> >
> > On Friday, August 24, 2018, 1:56:31 PM PDT, Vitaliy Semochkin <
> vitaliy...@gmail.com> wrote:
> >
> >
> > Hi,
> >
> > I read that Facebook once chose HBase over Cassandra for its messenger,
> > but I never found what the benefits of HBase over Cassandra are.
> > Can someone list them, if there are any?
> >
> > Regards,
> > Vitaliy
> >
>
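
Regarding the quoted question about acting as CP via consistency levels, a minimal
sketch of setting the level per statement with the DataStax Java driver (3.x API
assumed; contact point, keyspace, and table are hypothetical). Whether this amounts
to CP is a separate debate, but with both reads and writes at ALL (or any R + W > RF
combination, e.g. QUORUM/QUORUM at RF=3), a successful read sees the latest
successful write:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class TunableConsistencySketch {
    public static void main(String[] args) {
        // Hypothetical cluster and keyspace; any driver 3.x setup works the same way.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("demo_ks")) {

            // Write at ALL: every replica must acknowledge the write.
            session.execute(new SimpleStatement(
                    "INSERT INTO users (id, name) VALUES (1, 'alice')")
                    .setConsistencyLevel(ConsistencyLevel.ALL));

            // Read at ALL: every replica must answer; combined with the write
            // above, the read reflects the latest successful write.
            session.execute(new SimpleStatement(
                    "SELECT name FROM users WHERE id = 1")
                    .setConsistencyLevel(ConsistencyLevel.ALL));
        }
    }
}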


Re: why returned achievedConsistencyLevel is null

2018-08-25 Thread Andy Tolbert
Hi Vitaliy,

That method 
(https://docs.datastax.com/en/latest-java-driver-api/com/datastax/driver/core/ExecutionInfo.html#getAchievedConsistencyLevel--)
is a bit confusing as it will return null when your desired
consistency level is achieved:

> If the query returned without achieving the requested consistency level due 
> to the RetryPolicy, this returns the biggest consistency level that has been 
> actually achieved by the query.
>
> Note that the default RetryPolicy (DefaultRetryPolicy) will never allow a 
> query to be successful without achieving the initially requested consistency 
> level and hence with that default policy, this method will always return 
> null. However, it might occasionally return a non-null with say, 
> DowngradingConsistencyRetryPolicy.

As long as you are using a RetryPolicy that doesn't downgrade the
consistency level on retry, you can expect this method to always
return null. I strongly discourage downgrading consistency levels on
retry; you can read the driver team's rationale about it here
(https://docs.datastax.com/en/developer/java-driver/3.5/upgrade_guide/#3-5-0).
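
For reference, a minimal sketch (assuming the driver 3.x API; contact point and
keyspace are hypothetical) of pinning the non-downgrading default policy explicitly,
so that a successful execute() implies the requested level was met:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.QueryOptions;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DefaultRetryPolicy;

public class NonDowngradingSetup {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")                 // hypothetical node
                .withRetryPolicy(DefaultRetryPolicy.INSTANCE) // never downgrades the CL
                .withQueryOptions(new QueryOptions()
                        .setConsistencyLevel(ConsistencyLevel.QUORUM)) // default CL
                .build();
             Session session = cluster.connect("demo_ks")) {  // hypothetical keyspace
            // With DefaultRetryPolicy, a successful execute() means QUORUM was met,
            // and getAchievedConsistencyLevel() on the ExecutionInfo will be null.
            System.out.println(session
                    .execute("SELECT release_version FROM system.local")
                    .getExecutionInfo()
                    .getAchievedConsistencyLevel());          // prints null
        }
    }
}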

> Is it possible to make the DataStax driver throw an exception in case
> the desired consistency level was not achieved during the insert?

This is actually the default behavior.  If the consistency level cannot
be met within Cassandra's configured timeouts, C* will raise a
ReadTimeout or WriteTimeout exception; if not enough replicas are
available to serve the consistency level in the first place, it will
raise an Unavailable exception.  The driver can be configured to retry
on those errors per its RetryPolicy, although there is some nuance
around it not retrying statements that are non-idempotent
(https://docs.datastax.com/en/developer/java-driver/3.5/manual/retries/#retries-and-idempotence).
If the driver is not configured to retry, it will raise the exception
to the user.
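
A sketch of that failure path (again assuming the 3.x API; the table is hypothetical
and 'session' would come from Cluster.connect() as usual): when the level cannot be
met, the exception surfaces to the caller unless the RetryPolicy handles it:

import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.exceptions.UnavailableException;
import com.datastax.driver.core.exceptions.WriteTimeoutException;

public class WriteWithStrictConsistency {
    static void insertUser(Session session, int id, String name) {
        SimpleStatement insert = new SimpleStatement(
                "INSERT INTO users (id, name) VALUES (?, ?)", id, name);
        insert.setConsistencyLevel(ConsistencyLevel.QUORUM);
        insert.setIdempotent(true); // safe to retry; relevant to the retry nuance above
        try {
            session.execute(insert);
            // Reaching here means QUORUM acknowledged the write.
        } catch (UnavailableException e) {
            // Not enough replicas were alive to even attempt QUORUM.
            System.err.println("Unavailable: " + e.getMessage());
        } catch (WriteTimeoutException e) {
            // Replicas did not acknowledge within the write timeout.
            System.err.println("Write timeout: " + e.getMessage());
        }
    }
}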

In summary, as long as you aren't using some form of downgrading
consistency retry policy, if you get a successfully completed request,
you can assume the consistency level you have configured was met for
your operations.

Thanks,
Andy



On Fri, Aug 24, 2018 at 4:14 PM Vitaliy Semochkin  wrote:
>
> Hi,
>
> While using the DataStax driver,
> session.execute("some insert query").getExecutionInfo().getAchievedConsistencyLevel()
> returns null, even though the data is stored. Why could that be?
>
> Is it possible to make the DataStax driver throw an exception in case
> the desired consistency level was not achieved during the insert?
>
> Regards,
> Vitaliy
>
