[ 
https://issues.apache.org/jira/browse/BEAM-9008?focusedWorklogId=376934&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-376934
 ]

ASF GitHub Bot logged work on BEAM-9008:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 24/Jan/20 17:35
            Start Date: 24/Jan/20 17:35
    Worklog Time Spent: 10m 
      Work Description: vmarquez commented on issue #10546: [BEAM-9008] Add 
CassandraIO readAll method
URL: https://github.com/apache/beam/pull/10546#issuecomment-577996918
 
 
   @iemejia I think that could work; thanks for your patience as I try to 
understand what you're thinking.  Some questions:
   
   1.  If our `ReadFn<A> extends DoFn<Read<A>, A>` and the only connection 
information we have comes from the `Read<A>` passed into `processElement`, 
does that mean we need to re-establish a DB connection for each batch of 
queries we run?  As in, the connection would have to be established in the 
`processElement` method and could not be opened in the `@Setup` method? 
(See the rough sketch after these questions.)
   
   2. How would that work for the end user of a 
`PTransform<PCollection<Read<A>>, PCollection<A>>`?  Here is what I did in the 
test, and I would want to document how end users could generate 'queries':
   
https://github.com/apache/beam/pull/10546/files#diff-8ba4ea3b09d563a67e29ff8584269d35R499
   Would we instead want to return a `PCollection<Read<A>>` by using something 
like `return CassandraIO.read().withRingRange(new RingRange(start, finish))`?  
If we do that, however, we'd need to supply `withHosts` and all the other 
connection information too, no?  The other option is establishing one `ReadAll` 
PTransform that maps over the `Read<A>` input and enriches it with the DB 
connection information? 
   
   3.  Originally I had wanted the `ReadFn` to operate on a *collection* of 
'query' objects so we have a way to enforce linearizability for our queries 
(mainly so we don't oversaturate a single node/shard).  Currently the groupBy 
function a user passes in operates on the `RingRange` object; would we keep it 
that way and just, under the hood, allow a single `Read<A>` to hold a 
collection of `RingRange`s? 
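   
   To make (1) and (3) a bit more concrete, here is a rough sketch of the 
shape I have in mind. All the accessors on `Read`, plus `buildQuery` and 
`entityMapper`, are illustrative names for this sketch, not the existing 
CassandraIO API:
   
   ```java
   import com.datastax.driver.core.Cluster;
   import com.datastax.driver.core.Row;
   import com.datastax.driver.core.Session;
   import org.apache.beam.sdk.transforms.DoFn;

   // Sketch only: assumes a serializable Read<A> spec that carries connection
   // info plus a collection of RingRanges (the accessors and buildQuery()
   // below are hypothetical, not the current CassandraIO API).
   class ReadFn<A> extends DoFn<Read<A>, A> {
     @ProcessElement
     public void processElement(@Element Read<A> read, OutputReceiver<A> out) {
       // (1) The connection can only be opened here, because the spec arrives
       // as the element itself; there is nothing to connect to in @Setup.
       try (Cluster cluster =
               Cluster.builder()
                   .addContactPoints(read.hosts().toArray(new String[0]))
                   .withPort(read.port())
                   .build();
           Session session = cluster.connect(read.keyspace())) {
         // (3) A single Read<A> holds a *collection* of RingRanges, so one
         // element still maps to a batch of queries executed sequentially.
         for (RingRange range : read.ringRanges()) {
           for (Row row : session.execute(buildQuery(read, range))) {
             out.output(read.entityMapper().map(row));
           }
         }
       }
     }
   }
   ```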
   
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 376934)
    Time Spent: 3h 50m  (was: 3h 40m)

> Add readAll() method to CassandraIO
> -----------------------------------
>
>                 Key: BEAM-9008
>                 URL: https://issues.apache.org/jira/browse/BEAM-9008
>             Project: Beam
>          Issue Type: New Feature
>          Components: io-java-cassandra
>    Affects Versions: 2.16.0
>            Reporter: vincent marquez
>            Assignee: vincent marquez
>            Priority: Minor
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When querying a large Cassandra database, it's often *much* more useful to 
> programmatically generate the queries that need to be run rather than reading 
> all partitions and attempting some filtering.  
> As an example:
> {code:java}
> public class Event { 
>    @PartitionKey(0) public UUID accountId;
>    @PartitionKey(1) public String yearMonthDay; 
>    @ClusteringColumn public UUID eventId;  
>    //other data...
> }{code}
> If there are ten years' worth of data, you may want to query only one year's 
> worth.  Here each token range would represent a single 'token' but all events 
> for the day. 
> {code:java}
> Set<UUID> accounts = getRelevantAccounts();
> Set<String> dateRange = generateDateRange("2018-01-01", "2019-01-01");
> PCollection<TokenRange> tokens = generateTokens(accounts, dateRange); 
> {code}
>  
>  I propose an additional _readAll()_ PTransform that takes a PCollection 
> of token ranges and returns a PCollection<T> of what those queries would 
> return. 
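> A rough sketch of what the user-facing side could look like, continuing the 
> snippet above (readAll() and its builder methods here are hypothetical names 
> used only to illustrate the shape, not an existing API):
> {code:java}
> // Hypothetical usage: feed the generated token ranges into readAll() and get
> // back the mapped entities.
> PCollection<Event> events =
>     tokens.apply(
>         CassandraIO.<Event>readAll()
>             .withHosts(Arrays.asList("host-1", "host-2"))
>             .withKeyspace("my_keyspace")
>             .withTable("events")
>             .withEntity(Event.class));
> {code}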
> *Question: How much code should be in common between both methods?* 
> Currently the read connector already groups all partitions into a List of 
> Token Ranges, so it would be simple to refactor the current read()-based 
> method to a 'ParDo'-based one and have them both share the same function 
> (see the sketch below).  
> Reasons against sharing code between read and readAll:
>  * Not having the read-based method return a BoundedSource connector would 
> mean losing the ability to know the size of the data returned.
>  * Currently the CassandraReader executes all the grouped TokenRange queries 
> *asynchronously*, which is (maybe?) fine when all that's happening is 
> splitting up all the partition ranges, but terrible for executing potentially 
> millions of queries. 
>  Reasons _for_ sharing code would be a simplified code base and that both of 
> the above issues would most likely have a negligible performance impact. 
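> If the code were shared, the read() expansion could roughly become the 
> following, where the last ParDo is the same query-executing function that 
> readAll() would use (the DoFn names here are illustrative only):
> {code:java}
> // Sketch: read() creates a single Read spec, a splitter DoFn expands it into
> // per-token-range specs, and a shared query-executing ParDo does the reads.
> @Override
> public PCollection<T> expand(PBegin input) {
>   return input
>       .apply(Create.of(this))
>       .apply(ParDo.of(new SplitIntoTokenRangesFn<T>()))
>       .apply(ParDo.of(new QueryFn<T>()));
> }
> {code}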
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
