[ https://issues.apache.org/jira/browse/SQOOP-793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13536512#comment-13536512 ]

Guido Serra aka Zeph edited comment on SQOOP-793 at 12/19/12 10:51 PM:
-----------------------------------------------------------------------

uhmmm, ok... then if I pass the INSERTs to the rest of the logic (reading the 
file from hdfs) and create something that can parse the CREATEs... done, no? :)
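The parsing idea above could be sketched roughly like this (a minimal Python sketch; the regex-based handling is my own assumption and only covers simple single-row statements, nothing close to a full mysqldump grammar):

```python
import re

def parse_insert(statement):
    """Extract table name and values from a simple single-row
    mysqldump-style INSERT. Naive: assumes no embedded commas."""
    m = re.match(r"INSERT INTO `?(\w+)`? VALUES \((.*)\);?\s*$", statement)
    if m is None:
        return None
    table, raw = m.group(1), m.group(2)
    values = [v.strip().strip("'") for v in raw.split(",")]
    return table, values

def parse_create_columns(statement):
    """Pull column names out of a CREATE TABLE statement -- enough
    for building a mapping table, not a complete SQL parser."""
    body = statement[statement.index("(") + 1 : statement.rindex(")")]
    cols = []
    for part in body.split(","):
        part = part.strip()
        # skip key/constraint definitions, keep only column lines
        if part.upper().startswith(("PRIMARY", "KEY", "UNIQUE", "CONSTRAINT", "INDEX")):
            continue
        cols.append(part.split()[0].strip("`"))
    return cols

table, values = parse_insert("INSERT INTO `users` VALUES (1,'guido','zeph');")
cols = parse_create_columns(
    "CREATE TABLE `users` (`id` int NOT NULL, `name` varchar(32), PRIMARY KEY (`id`))"
)
print(table, dict(zip(cols[:1], values[:1])))
```

In a real implementation the dump file would be streamed from hdfs line by line rather than handled statement-at-a-time like this.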

...also, Pentaho needs this information, recorded in a "pentaho_mapping" 
table; otherwise it cannot determine how to use the data that sqoop ingested 
(I have only tested the sqoop ingestion via the command line, not via 
Pentaho's UI; I'll probably try that tomorrow)

in any case, I want to minimize the impact on the production SQL nodes... 
we already have the dumps on the filesystem, so why not use them?

...also, Percona seems to be able to create partial/incremental dumps... I'd 
like to leverage that instead of having to execute SQL statements with filters 
on a date field (which some tables might not even have, or which the 
application's business logic might not respect)
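For contrast, the date-filter style of incremental pull being avoided looks roughly like this (a sketch against sqlite; the `orders` table and `updated_at` column are hypothetical). It only works when the table actually has, and the application reliably maintains, such a timestamp column:

```python
import sqlite3

# toy table standing in for a production MySQL table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "2012-12-01"), (2, "2012-12-15"), (3, "2012-12-19")],
)

# watermark remembered from the previous import run
last_value = "2012-12-10"

# the filtered query an incremental import would run against production
new_rows = conn.execute(
    "SELECT id FROM orders WHERE updated_at > ?", (last_value,)
).fetchall()
new_ids = [r[0] for r in new_rows]
print(new_ids)  # only the rows newer than the watermark
```

If `updated_at` is missing or not kept accurate by the application, this query silently returns the wrong row set, which is exactly why a dump-based approach is attractive here.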
                
> mysqldump > file > hdfs > sqoop
> -------------------------------
>
>                 Key: SQOOP-793
>                 URL: https://issues.apache.org/jira/browse/SQOOP-793
>             Project: Sqoop
>          Issue Type: New Feature
>          Components: connectors/mysql
>            Reporter: Guido Serra aka Zeph
>            Assignee: Guido Serra aka Zeph
>            Priority: Minor
>
> extend the MySQLDump module to be able to read from a mysqldump generated 
> file,
> saved on hdfs, instead of triggering the "--direct" option or connect via jdbc

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
