I think the simplest approach would be to find some key (an incremental
primary key, a datetime column, etc.) to partition your data.
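A minimal sketch of that idea, using sqlite3 in place of MySQL (the table
and column names here are made up for illustration): snapshot a watermark on
the key, export everything at or below it, then delete exactly that range,
so rows inserted concurrently (which get higher ids) are never touched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [("a",), ("b",), ("c",)])

# 1. Snapshot the watermark: only rows with id <= watermark belong to
#    this batch.
watermark = conn.execute("SELECT MAX(id) FROM records").fetchone()[0]

# 2. Export the batch (in the real system this would be written to a file
#    and pushed into HDFS).
batch = conn.execute(
    "SELECT id, payload FROM records WHERE id <= ?", (watermark,)).fetchall()

# Concurrent writers may keep inserting; their rows get ids > watermark.
conn.execute("INSERT INTO records (payload) VALUES ('d')")

# 3. Delete exactly the exported range; the newer row survives.
conn.execute("DELETE FROM records WHERE id <= ?", (watermark,))
```

The point of the watermark is that export and delete agree on the same key
range even though the table keeps changing underneath them.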


On 2009-05-20, at 11:48 PM, "dealmaker" <vin...@gmail.com> wrote:

> 
> Other parts of the non-Hadoop system will continue to add records to the
> MySQL db while I move those records to Hadoop for processing (removing
> the very same records from the MySQL db at the same time).  That's why I
> am running those MySQL commands.
> 
> What are you suggesting?  If I do it the way you suggest, dumping all
> records from the MySQL db to a file in HDFS, how do I remove those very
> same records from the MySQL db at the same time?  Should I just rename
> the table first, then dump the records, and then read them from the HDFS
> file?
> 
> Or should I do it my way?  Which way is faster?
> Thanks.
> 
> 
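The rename-first variant asked about above can be sketched like this, with
sqlite3 standing in for MySQL and a hypothetical `records` table: move the
live table aside, immediately recreate an empty one so writers can
continue, then dump and drop the frozen snapshot at leisure. (In MySQL the
rename step would be `RENAME TABLE`, which can even swap two tables in one
atomic statement.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [("a",), ("b",)])

# 1. Move the live table aside.  In MySQL this would be:
#    RENAME TABLE records TO records_snap;
conn.execute("ALTER TABLE records RENAME TO records_snap")

# 2. Recreate an empty live table so new writes are not blocked.
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO records (payload) VALUES ('c')")  # a new write

# 3. Dump the frozen snapshot (this is what would go to HDFS), then drop it.
snapshot = conn.execute("SELECT id, payload FROM records_snap").fetchall()
conn.execute("DROP TABLE records_snap")
```

Because the snapshot table is no longer being written to, the dump needs no
coordination with ongoing inserts at all.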
> Edward J. Yoon-2 wrote:
>> 
>> Hadoop provides a distributed filesystem (HDFS). If you want to back up
>> your table data to HDFS, you can use SELECT * INTO OUTFILE 'file_name'
>> FROM tbl_name; then put the resulting file into HDFS.
>> 
>> Edward
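As a rough illustration of that dump-then-put step, again with sqlite3
standing in for MySQL's `SELECT * INTO OUTFILE` (the table name and paths
are made up): write the table out as a local tab-delimited file, which
would then be copied into HDFS with the filesystem shell.

```python
import csv
import sqlite3
import tempfile

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_name (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO tbl_name (payload) VALUES (?)",
                 [("a",), ("b",)])

# Dump the table to a local tab-delimited file, roughly what
# SELECT * INTO OUTFILE produces on the MySQL server host.
out = tempfile.NamedTemporaryFile(mode="w", newline="",
                                  suffix=".tsv", delete=False)
writer = csv.writer(out, delimiter="\t")
writer.writerows(conn.execute("SELECT id, payload FROM tbl_name"))
out.close()

# The file would then be copied into HDFS, e.g.:
#   hadoop fs -put /path/to/dump.tsv /backups/tbl_name.tsv
```

Note that `INTO OUTFILE` writes on the database server's own filesystem, so
in practice the file has to be reachable from wherever `hadoop fs -put` is
run.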
>> 
>> On Thu, May 21, 2009 at 12:08 AM, dealmaker <vin...@gmail.com> wrote:
>>> 
>>> No, actually I am using MySQL, so I don't think this belongs to Hive.
>>> 
>>> 
>>> owen.omalley wrote:
>>>> 
>>>> 
>>>> On May 19, 2009, at 11:48 PM, dealmaker wrote:
>>>> 
>>>>> 
>>>>> Hi,
>>>>>  I want to back up a table and then create a new empty one with the
>>>>> following commands in Hadoop.  How do I do it in Java?  Thanks.
>>>> 
>>>> Since this is a question about Hive, you should be asking on
>>>> hive-u...@hadoop.apache.org.
>>>> 
>>>> -- Owen
>>>> 
>>>> 
>>> 
>>> --
>>> View this message in context:
>>> http://www.nabble.com/How-to-Rename---Create-DB-Table-in-Hadoop--tp23629956p23637131.html
>>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>> 
>>> 
>> 
>> 
>> 
>> -- 
>> Best Regards, Edward J. Yoon @ NHN, corp.
>> edwardy...@apache.org
>> http://blog.udanax.org
>> 
>> 

