Keith Medcalf expressed precisely:
Mostly, the problems people experience come from writing some home-brew
CSV importer that does not know how to correctly read the output of a
standards-based exporter such as Excel, and then trying to change things
like separators or quoting methods to "fix"…
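A minimal sketch of the point being made here, using Python's csv module
(the thread itself is VB6-centric, so the language and the file name
"export.csv" are illustrative assumptions): a standards-aware reader must
handle quoted fields, embedded separators, and doubled quotes, rather
than blindly splitting on the delimiter.

import csv

# csv.reader implements the quoting rules Excel-style exporters follow:
# a quoted field may contain the separator, newlines, and doubled quotes.
with open("export.csv", newline="") as f:
    for row in csv.reader(f):
        print(row)  # each row arrives as a clean list of field values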
Alok Singh wrote on 1/7/2011:
> Hi Guys,
>
> I am stuck with this again; kindly give your suggestions.
> It is taking around 20 minutes to insert 65k docs.
> Kindly have a look and help me with my basic code; I hope this will help
> you all understand my problem.
> I think something is going to have huge me…
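The usual cause of 20-minute inserts is one commit per row. A hedged
Python sketch of the standard fix, one transaction around the whole
batch with one reused INSERT statement; the database, table, and
stand-in data are invented for illustration:

import sqlite3

con = sqlite3.connect("docs.db")
con.execute("CREATE TABLE IF NOT EXISTS docs (id INTEGER, body TEXT)")

rows = [(i, "document %d" % i) for i in range(65000)]  # stand-in data

# Without an explicit transaction, SQLite commits (and syncs to disk)
# after every single row, which is where the 20 minutes go.
with con:  # commits once on success, rolls back on error
    con.executemany("INSERT INTO docs (id, body) VALUES (?, ?)", rows)
con.close()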
Simon Slavin explained on 1/6/2011:
> On 7 Jan 2011, at 3:55am, GS wrote:
>
>> The irony is that my data files don't use commas or tabs as delimiters
>> (for good reasons) and so I can't use SQLite's command-line import
>> feature.
>
> You ca…
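For the record, the sqlite3 shell does have a .separator dot-command
that .import honours, so non-comma delimiters are usable there too; and
a home-grown import with a custom separator is only a few lines. A
sketch, assuming a pipe-delimited file "data.psv" and a three-column
table (both names invented):

import csv, sqlite3

con = sqlite3.connect("import.db")
con.execute("CREATE TABLE IF NOT EXISTS t (a TEXT, b TEXT, c TEXT)")

with open("data.psv", newline="") as f, con:
    reader = csv.reader(f, delimiter="|")  # any single-char separator
    # assumes every line has exactly three fields
    con.executemany("INSERT INTO t VALUES (?, ?, ?)", reader)
con.close()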
Simon Slavin explained on 1/6/2011:
> http://www.tech-archive.net/Archive/Data/microsoft.public.data.ado/2006-11/msg00033.html
Thanks again. I appreciate you taking the time to find this code.
This offering from RB writes one record at a time, which is what I'd do
currently, either from an array…
Thanks, Simon, for the enlightenment! Sorry for assuming you were
familiar with Micro$oft db terminology. Olaf has tuned me up on the
fact that SQLite is used with many other languages (not necessarily
Micro$oft languages). Olaf was also able to steer me toward a solution
via his SQLite wrapper
Thanks, Olaf! That's very helpful. I will certainly look at your demo.
As for the enlightening experience of browsing this forum, I connected
to it back when I first downloaded your RichClient package. While I
haven't read them all, to date I have over 62K posts to use for
learning. That said, I…
Simon Slavin wrote:
> Ah, okay. First, I don't understand what you mean by 'recordset'.
Hi Simon,
Thanks for the info. I understand about the command-line tool and its
CSV/TSV file import feature.
Recordsets are common db programming objects, so I'm a bit awestruck
that you wouldn't know wha…
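For anyone else outside the ADO world: a Recordset is essentially a
query result you can walk row by row. The nearest analogue in, say,
Python's sqlite3 module is iterating a cursor (a hedged sketch; the
table and data are made up):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, age INTEGER)")
con.execute("INSERT INTO people VALUES ('Ada', 36)")

# The SELECT plays the role of opening a Recordset; stepping the
# cursor is the equivalent of calling MoveNext until EOF.
for name, age in con.execute("SELECT name, age FROM people"):
    print(name, age)
con.close()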
Simon Slavin formulated the question:
> On 6 Jan 2011, at 8:18pm, GS wrote:
>
>> I just checked a 21,000 line x 30 column delimited file and it is
>> 817KB. I draw the line (for performance and/or convenience working with
>> the data) at about 50K lines before I'…
Olaf Schmidt pretended:
> "Alok Singh" wrote
> in the news posting
> news:aanlktikhcyfsuybpjtv=+cd4asrddt-9+f7qx_qpq...@mail.gmail.com...
>
>> yeah, that's correct Simon; it takes 0.6 sec to insert
>> 10.5K rows with 20 columns (2 files, both
>> having 10.5k rows)
>
> That's the timing I would expec…
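That sub-second figure is reproducible once the batch shares a single
transaction. A small timing sketch (schema and file names invented)
contrasting per-row autocommit with one transaction; expect the per-row
variant to be drastically slower on a real disk:

import os, sqlite3, time

rows = [(i, "x" * 20) for i in range(10500)]  # roughly the thread's size

def bench(path, one_transaction):
    if os.path.exists(path):
        os.remove(path)
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    con.commit()
    t0 = time.perf_counter()
    if one_transaction:
        with con:  # a single commit for the whole batch
            con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    else:
        con.isolation_level = None  # autocommit: one disk sync per row
        for r in rows:
            con.execute("INSERT INTO t VALUES (?, ?)", r)
    elapsed = time.perf_counter() - t0
    con.close()
    return elapsed

print("per-row commits: %.2fs" % bench("slow.db", False))
print("one transaction: %.2fs" % bench("fast.db", True))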
Simon Slavin was thinking very hard:
> On 3 Jan 2011, at 9:08pm, GS wrote:
>
>> Looping through the array gives me access to each record if I start the
>> loop at vaDataArray(1), thus the loop parameters of '1 To
>> UBound(vaDataArray)'.
>
> I have twice…
It happens that Alok Singh formulated:
> Hi Garry,
>
> Can you show me in code how you are inserting while keeping the 1st row
> as the header of the table, and then inserting the rest into the db the
> fastest way... Please show me your code for that; I am a little bit
> confused about the inserting process…
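A sketch of one way to do exactly that in Python (the file name is
invented, and real code should whitelist the header tokens, since they
are spliced into the CREATE TABLE statement):

import csv, sqlite3

con = sqlite3.connect("import.db")

with open("data.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)  # the 1st row supplies the column names

    # Quote each name; embedded double-quotes are doubled for SQLite.
    cols = ", ".join('"%s"' % c.replace('"', '""') for c in header)
    con.execute("CREATE TABLE IF NOT EXISTS data (%s)" % cols)

    marks = ", ".join("?" * len(header))
    with con:  # one transaction covers every remaining row
        con.executemany("INSERT INTO data VALUES (%s)" % marks, reader)
con.close()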
Simon Slavin wrote:
> Also, you are still using a 2D array to store your values. This is slow
> and requires a lot of memory. Instead, use one array to read in the first
> row (with the names of the columns) and then use a second 1D array for
> each row of values: read one line in then write t…
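Simon's advice, restated as a sketch: never materialise the file as a
list of lists; read a line, write a row, keep nothing (the column count
and file name are assumptions):

import csv, sqlite3

con = sqlite3.connect("import.db")
con.execute("CREATE TABLE IF NOT EXISTS vals (a TEXT, b TEXT, c TEXT)")

with open("data.csv", newline="") as f, con:
    reader = csv.reader(f)
    next(reader)        # the first row holds column names, not values
    for row in reader:  # row is the per-line 1D array Simon describes
        con.execute("INSERT INTO vals VALUES (?, ?, ?)", row)
        # nothing is retained after the insert; memory stays flat
con.close()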