[ 
https://issues.apache.org/jira/browse/BEAM-9008?focusedWorklogId=376582&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-376582
 ]

ASF GitHub Bot logged work on BEAM-9008:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Jan/20 21:26
            Start Date: 23/Jan/20 21:26
    Worklog Time Spent: 10m 
      Work Description: vmarquez commented on issue #10546: [BEAM-9008] Add 
CassandraIO readAll method
URL: https://github.com/apache/beam/pull/10546#issuecomment-576998681
 
 
   Thanks for taking a look; I've pushed a new commit (I won't be squashing 
anymore until we finalize everything) that hopefully addresses all the minor 
style issues.  LMK if I missed anything. 
   
   I do want to make sure we keep CassandraIO 'idiomatic' relative to the rest 
of the IO connectors, but I don't think modeling this after the Solr one will 
work.  For one thing, if we want to share the `ReadFn` class between both Read 
and ReadAll, both need some way to use it and pass in 'connection' 
information, which we can't do if the signature of ReadFn is 
`ReadFn extends DoFn<ReadAll<A>, A>` (unless we copy everything over into a 
new ReadAll<A> whenever we create a Read<A>, which seems a bit clumsy).  I 
think another class worth looking at, with both Read and ReadAll PTransforms, 
is SpannerIO, which is modeled similarly to how I did it here (though not 
exactly). 
   
   
https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/SpannerIO.java#L315
   
   They have a configuration class there that is public (which we can't do, 
since we want to keep backwards compatibility with the current way `Read` 
works); it has two different PTransforms, `Read` uses `ReadAll` internally, 
etc.  
   
   I do think that, instead of taking a collection of RingRanges, taking some 
sort of 'Query' object makes sense, and since it doesn't have to be tied to 
the actual connection, we can split up the CassandraConfig class.  Thoughts on 
that? 
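   To make the signature point concrete, here is a minimal, self-contained 
sketch (stand-in types only, not the real Beam API) of why a 
`DoFn<Read<A>, A>`-style ReadFn, where the element itself carries connection 
information, can be shared by both Read and ReadAll without copying a 
Read<A> into a ReadAll<A>:

```java
// Minimal, self-contained sketch -- NOT the real Beam API.  Stand-in types
// show the idea: if the element type itself (Read<A>) carries connection
// information, one ReadFn can serve both the Read and ReadAll paths.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class SharedReadFnSketch {

  // Stand-in for a Read<A> spec: connection info travels with the element.
  static class Read<A> {
    final String hosts;
    final String query;
    Read(String hosts, String query) {
      this.hosts = hosts;
      this.query = query;
    }
  }

  // Stand-in for a DoFn<Read<A>, A>: the one ReadFn shared by both paths.
  static class ReadFn<A> implements Function<Read<A>, List<A>> {
    final Function<Read<A>, List<A>> session; // pretend Cassandra session
    ReadFn(Function<Read<A>, List<A>> session) {
      this.session = session;
    }
    @Override
    public List<A> apply(Read<A> spec) {
      return session.apply(spec);
    }
  }

  public static void main(String[] args) {
    // Fake "cluster" that echoes each query back as a single row.
    ReadFn<String> fn = new ReadFn<>(spec -> List.of(spec.query + "@" + spec.hosts));

    // ReadAll path: many Read<A> specs flow through the same ReadFn.
    List<String> out = new ArrayList<>();
    for (Read<String> spec : List.of(
        new Read<String>("host1", "SELECT * FROM a"),
        new Read<String>("host2", "SELECT * FROM b"))) {
      out.addAll(fn.apply(spec));
    }

    // Read path: a single spec reuses the identical ReadFn -- no copying
    // into a separate ReadAll<A> object is needed.
    out.addAll(fn.apply(new Read<String>("host3", "SELECT * FROM c")));

    // prints [SELECT * FROM a@host1, SELECT * FROM b@host2, SELECT * FROM c@host3]
    System.out.println(out);
  }
}
```

   With this shape, ReadAll just feeds many Read<A> specs through the same 
function and Read feeds exactly one, which is the sharing I'm after.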
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 376582)
    Time Spent: 2h 50m  (was: 2h 40m)

> Add readAll() method to CassandraIO
> -----------------------------------
>
>                 Key: BEAM-9008
>                 URL: https://issues.apache.org/jira/browse/BEAM-9008
>             Project: Beam
>          Issue Type: New Feature
>          Components: io-java-cassandra
>    Affects Versions: 2.16.0
>            Reporter: vincent marquez
>            Assignee: vincent marquez
>            Priority: Minor
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> When querying a large Cassandra database, it's often *much* more useful to 
> programmatically generate the queries that need to be run rather than 
> reading all partitions and attempting some filtering.  
> As an example:
> {code:java}
> public class Event {
>    @PartitionKey(0) public UUID accountId;
>    @PartitionKey(1) public String yearMonthDay;
>    @ClusteringKey public UUID eventId;
>    // other data...
> }{code}
> If there are ten years' worth of data, you may want to query only one 
> year's worth.  Here each token range would represent a single 'token' but 
> cover all events for that day. 
> {code:java}
> Set<UUID> accounts = getRelevantAccounts();
> Set<String> dateRange = generateDateRange("2018-01-01", "2019-01-01");
> PCollection<TokenRange> tokens = generateTokens(accounts, dateRange); 
> {code}
>  
>  I propose an additional _readAll()_ PTransform that can take a PCollection 
> of token ranges and return a PCollection<T> of what the query would 
> return. 
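>  A rough sketch of how generateTokens might enumerate those per-partition 
> queries (all names here are illustrative, not an existing CassandraIO API):
> {code:java}
> // Illustrative only: one query descriptor per (accountId, yearMonthDay)
> // partition, instead of scanning and filtering the whole table.
> Set<KV<UUID, String>> partitions = new HashSet<>();
> for (UUID account : accounts) {
>   for (String day : dateRange) {
>     partitions.add(KV.of(account, day)); // one Cassandra partition each
>   }
> }
> {code}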
> *Question: How much code should be in common between both methods?* 
> Currently the read connector already groups all partitions into a List of 
> token ranges, so it would be simple to refactor the current read()-based 
> method to a 'ParDo'-based one and have them both share the same function.  
> Reasons against sharing code between read and readAll:
>  * Not having the read-based method return a BoundedSource connector would 
> mean losing the ability to know the size of the data returned.
>  * Currently the CassandraReader executes all the grouped TokenRange queries 
> *asynchronously*, which is (maybe?) fine when all that's happening is 
> splitting up the partition ranges, but terrible for executing potentially 
> millions of queries. 
>  Reasons _for_ sharing code would be a simplified code base and that both 
> of the above issues would most likely have a negligible performance impact. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
