I do a lot of processing on large amounts of data.
The common pattern we follow is:
1. Iterate through a large data set.
2. Do some sort of processing (e.g. NLP work such as tokenization,
capitalization, regex parsing, ...).
3. Insert the new results into another table.
Right now we are
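For context, the three-step pattern above can be sketched roughly like this. This is a stdlib-only sketch using sqlite3 with a toy tokenizer; the `documents`/`tokens` table names and the chunk size are made up, and the real pipeline presumably goes through SQLAlchemy rather than raw SQL:

```python
import sqlite3

def tokenize(text):
    # Toy stand-in for the real NLP step (split + lowercase).
    return [w.lower() for w in text.split()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("CREATE TABLE tokens (doc_id INTEGER, token TEXT)")
conn.executemany("INSERT INTO documents (body) VALUES (?)",
                 [("Hello World",), ("Foo Bar Baz",)])

# 1. Iterate through the data set in manageable chunks.
read_cur = conn.execute("SELECT id, body FROM documents ORDER BY id")
while True:
    rows = read_cur.fetchmany(1000)   # chunked fetch, in the spirit of yield_per
    if not rows:
        break
    # 2. Do some sort of processing on each row.
    results = [(doc_id, tok)
               for doc_id, body in rows
               for tok in tokenize(body)]
    # 3. Insert the results into another table.
    conn.executemany("INSERT INTO tokens (doc_id, token) VALUES (?, ?)", results)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM tokens").fetchone()[0])  # prints 5
```

The chunked `fetchmany` keeps memory bounded the same way yield_per does: only one batch of source rows is resident at a time.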
Um, sure.
That still doesn't answer my question.
I want to persist changes to my db as I iterate with yield_per.
On Thursday, February 21, 2013 1:03:49 PM UTC-8, A.M. wrote:
On Thu, 21 Feb 2013 12:52:42 -0800 (PST), Victor Ng vicn...@gmail.com wrote: