What I was going to say:

I was always told to steer clear of VARCHAR for performance reasons, both on
the MySQL training course and by the devs.

So I'd take it that it's relevant.
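
For the FTPFILE table in the original post, switching its VARCHAR columns to fixed-width CHAR (as Nick suggests below) might look something like this; an untested sketch, and note it will use more disk space per row:

```sql
-- With no VARCHAR/TEXT columns left, MyISAM can use the fixed
-- row format, which is faster to scan and insert into.
-- Trade-off: every row takes the full declared width on disk.
ALTER TABLE FTPFILE
  MODIFY FILENAME CHAR(254) NOT NULL DEFAULT 'Not defined',
  MODIFY FILEPATH CHAR(254) NOT NULL DEFAULT 'Not defined',
  MODIFY FILEEXTN CHAR(3)   NOT NULL DEFAULT '---';
```

All of the variable-length columns have to be converted; if even one VARCHAR remains, the table stays in dynamic row format.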

Jerry

----- Original Message ----- 
From: "Nick Arnett" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, May 31, 2003 3:49 PM
Subject: RE: Insert query


> > -----Original Message-----
> > From: Ulterior [mailto:[EMAIL PROTECTED]
> > Sent: Saturday, May 31, 2003 7:35 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: Insert query
> >
> >
> > So what would you suggest, Jerry?
> >
> > ( I need a very FAST search on this table's filename field)
> >
> > Ulterior
> >
> > > Don't use varchar unless you absolutely have to, that should help.
> > >
> > > Jerry
> > >
> > > ----- Original Message -----
> > > From: "Ulterior" <[EMAIL PROTECTED]>
> > > To: <[EMAIL PROTECTED]>
> > > Sent: Saturday, May 31, 2003 3:13 PM
> > > Subject: Insert query
> > >
> > >
> > > > Hi,
> > > >
> > > > I have a database table:
> > > >
> > > > CREATE TABLE FTPFILE (
> > > >   ID int(11) NOT NULL auto_increment,
> > > >   FTPSITEID int(11) DEFAULT '0' NOT NULL,
> > > >
> > > >   FILENAME varchar(254) DEFAULT 'Not defined' NOT NULL,
> > > >   FILEPATH varchar(254) DEFAULT 'Not defined' NOT NULL,
> > > >   FILEEXTN varchar(3) DEFAULT '---' NOT NULL,
> > > >   FILESIZE int(11) NOT NULL,
> > > >   FILEDATE datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,
> > > >
> > > >   PRIMARY KEY (ID)
> > > > );
> > > >
> > > > when this table reaches 10000 records, insert queries become very slow,
> > > > approximately 2 records a second.
> > > > And worst of all, mysqld-nt.exe reaches 99% of my CPU time.
> > > >
> > > > I am using native mysql C API mysql_query() func for inserting data.
> > > >
> > > > Is there any way to speed up insert queries to this table? (I am
> > > > planning to have at least 2-5 million entries in this table.)
>
> Your VARCHAR columns can become CHAR columns, which should help a lot. But
> CHAR columns can't be longer than 255, so you're about at the limit. Are
> you locking the table before inserting, then using the multiple-insert
> syntax? Those should help if you're not doing them, but I don't know what
> the equivalent is in the C API. How many indexes do you have? If you reduce
> the number of indexes, insertions will go faster, but that might slow down
> some of your queries, of course.
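
A rough sketch of the lock-and-batch pattern described above, using the FTPFILE table from the original post (the row values are made up for illustration):

```sql
-- Take a write lock so the table and its indexes are updated
-- without contention from readers, then insert several rows in
-- a single statement (MySQL's multiple-insert syntax).
LOCK TABLES FTPFILE WRITE;

INSERT INTO FTPFILE (FTPSITEID, FILENAME, FILEPATH, FILEEXTN, FILESIZE, FILEDATE)
VALUES
  (1, 'readme.txt',  '/pub',      'txt', 1024, '2003-05-31 15:00:00'),
  (1, 'index.htm',   '/pub/www',  'htm', 2048, '2003-05-31 15:01:00'),
  (2, 'archive.zip', '/incoming', 'zip', 9999, '2003-05-31 15:02:00');

UNLOCK TABLES;
```

From the C API, this should just be one mysql_query() call per batch, with the whole multi-row INSERT built into a single statement string.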
>
> If you're inserting quite a few at a time, you might disable indexing, do
> the inserts, then allow the indexing to happen all at once.  For a large
> number of inserts, you might also try writing them to a file, then loading
> it.  That's much faster for really large numbers of inserts, and you can
> also suspend indexing as I mentioned.  I use the latter for building
> externally generated text indexes (frequency tables and such), generating
a
> couple of million records at a shot.  It made an enormous difference.  But
> I'm working in Python and can't help you with the C API, as I said.
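
A sketch of the suspend-indexing-and-bulk-load approach described above, assuming a MyISAM table and a tab-separated data file (the file name is made up):

```sql
-- Stop maintaining non-unique indexes while bulk loading.
ALTER TABLE FTPFILE DISABLE KEYS;

-- Load rows from a file written out beforehand; for millions of
-- rows this is much faster than individual INSERT statements.
LOAD DATA LOCAL INFILE 'ftpfiles.txt'
INTO TABLE FTPFILE
  (FTPSITEID, FILENAME, FILEPATH, FILEEXTN, FILESIZE, FILEDATE);

-- Rebuild the indexes in one pass at the end.
ALTER TABLE FTPFILE ENABLE KEYS;
```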
>
> Nick
>
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
