Hi,
I am looking at the following situation:
I am reading files that arrive every minute, parsing them, and
creating a set of files ready to be inserted into tables
on the fly. While I am waiting for the next burst of files, I want to
insert these in
Quoting Gerald Clark
<[EMAIL PROTECTED]>:
> chown -R mysql.mysql /usr/local/mysql
Thanks, but apparently it still cannot find it:
linux:/usr/local/mysql # chown -R
mysql.mysql /usr/local/mysql
linux:/usr/local/mysql # scripts/mysql_install_db
Installing all prepared tables
020919 22:44:19
Trying to install 4.0.3 from the tarball...
Any idea what I am doing wrong?
Thanks, S.Alexiou
Here is what I get
...linux:/usr/local/mysql # scripts/mysql_install_db
Installing all prepared tables
020918 21:55:07 ./bin/mysqld: Shutdown Complete
To start mysqld at boot time you have to copy su
Looks like there is a problem with 4.0.3 or I am doing
something wrong:
First I tried to upgrade from rpm,
linux:/home/me/TMPOUT #
rpm -e MySQL-4.0.3-0.i386.rpm
error: package MySQL-4.0.3-0.i386.rpm is not installed
linux:/home/me/TMPOUT # rpm -i MySQL-4.0.3-0.i386.rpm
package MySQL-4.0
Thanks
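(A side note on the rpm errors above: `rpm -e` expects the name of an installed package, not the `.rpm` file name, which would explain the "is not installed" message. A sketch, with an example package name:

```shell
# list what is actually installed
rpm -qa | grep -i mysql
# erase by installed package NAME (example name), not by file name
rpm -e MySQL-shared-4.0.3-0
# -i / -U do take the .rpm file name
rpm -U MySQL-4.0.3-0.i386.rpm
```

Here `-U` upgrades in place, which avoids the separate erase-then-install step.)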
It's 2.4.18
So, I will try to uninstall and install from the
tarball, yes?
>
>
> Spiros,
>
> > linux:/home/db/TMPOUT # rpm -q mysql
> > package mysql is not installed
> >
> > linux:/home/db/TMPOUT # rpm -qa | grep MySQL
> > MySQL-shared-4.0.3-0
> > MySQL-client-4.0.3-0
> > MySQL-devel-4.
I have a lot of tables, and not all of them are filled
equally.
Inserts to tables that have a lot of entries (see the
count below) take a long time (about 0.06 secs on
average in mysql, over 0.09-0.1 via DBI), for example
mysql> INSERT INTO T1 VALUES
('3CCF571C1A88118801040302','072','
I am doing some automated mysql -u user -ppasswd
database < insertfile.sql
in a loop and have some skip locking presumably because
there are a number of jobs running to do this
(with different files, which however use the same
tables). Should I rather do line by line inserts for
speed?
Th
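(On the speed question in this message: piping many single-row INSERT statements through the client is usually the slow path; batching rows cuts per-statement and index-update overhead. A sketch, with table name and values taken as placeholders from the message:

```sql
-- One multi-row INSERT instead of many single-row statements:
INSERT INTO T1 VALUES
  ('3CCF571C1A88118801040302','072','...'),
  ('3CCF571C1A88118801040303','073','...');

-- For a whole file of rows, LOAD DATA is typically fastest of all:
LOAD DATA INFILE '/tmp/t1_rows.txt' INTO TABLE T1
  FIELDS TERMINATED BY ',';
```

The file path and field terminator are assumptions; they must match how the import files are actually written.)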
Well, it will NOT let me add more ibdata files:
Any ideas??
Thanks, Spiros
Here is hostname.err
(On a machine with 2GB RAM and two 1000MHz CPUs)
InnoDB: Database was not shut down normally.
InnoDB: Starting recovery from log files...
InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 3 7
I am running Mysql 4.0 with InnoDB on a linux 2.4.0
machine
I am doing a "mass import" of a file with some 40
inserts
and I get a strange "unknown error 1114"
Interestingly enough, this is not exactly
reproducible, i.e.
the error occurs at slightly different import
positions.
I have b
Hi,
I am using 4.0 and switching from MyISAM to InnoDB,
so quite newbie on this:
I have a couple of questions:
1) First, I read in the docs that the minimal thing to
do is to add to /etc/my.cnf
innodb_data_file_path=ibdata/ibdata1:2000M
(although the ibdata file is some 67M in /var/lib/mysql)
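(On this configuration question: in the InnoDB versions of that era, every file listed in innodb_data_file_path had to be declared at the size it already has on disk; extra space is gained by appending a further file, not by enlarging the first entry. A hypothetical /etc/my.cnf fragment, assuming the existing ibdata file really is 67M:

```
# /etc/my.cnf sketch -- sizes are examples; the first entry must match
# the size of the existing file or InnoDB will refuse to start
[mysqld]
innodb_data_home_dir = /var/lib/mysql
innodb_data_file_path = ibdata1:67M;ibdata2:2000M:autoextend
```

This size-mismatch rule is also the usual reason InnoDB "will NOT let me add more ibdata files", as reported elsewhere in these messages.)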
I am sending this mail in order to get things straight
about table corruption which I am experiencing with
4.0(as well as previous versions).
1)System specs: PIII x1000MHz, 1GB RAM, HD 37GB SCSI,
AHA29160N SCSI controller
2) Database MyISAM tables, BUT 21000 tables in the
database(this is
Hi,
I am trying version 4 with MyISAM tables. (I am almost
ready to move to some
other hopefully safer type ). The problem is I get
table
corruption.
I have a large database (21000 tables, but
the whole /var/lib/mysql/mydatabase directory is less
than 800MB). I know this is stra
I am posting this in the hope it will be useful; I seem
to have solved the problem.
The problem was on 3.23.38 and Linux 2.2.18 or 2.4.0.
This is a huge system, all SCSI, 2GB of RAM, two PIII
1000MHz processors, and 37GB in the volume group
corresponding to /var/lib/mysql
There is an AHA2916
Can anyone give an example of how to use the
--where option of mysqldump to get a dump of only those
records whose value of field fieldx is larger than
something?
For example
"for all database tables of db mydatabase
which possess a field named fieldx store in
the dump only those records
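(mysqldump does accept a --where option (alias -w); it applies the same condition to every table it dumps, so the tables named must all have the column. A sketch with placeholder database, table, and column names:

```shell
# dump only rows where fieldx exceeds 1000, from tables that have it
mysqldump -u user -p mydatabase table1 table2 \
    --where="fieldx > 1000" > partial_dump.sql
```

To cover all tables of the database that possess fieldx, one would loop over the table names rather than dump the whole database at once.)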
These are some remarks on mysql (3.23.32) with regard to
my experience
with data crash and recovery. It is not meant to be
negative in any sense
and I am actually very thankful there is a database
like Mysql around.
On the other hand, if these experiences are not due
to some mistake on m
Hi,
two questions:
1) On the mysql manual, section 4.12.16 MySQL-Win32
compared to Unix MySQL, it says that:
"Mysql uses a blocking read for each connection. This
means that:
A connection will not be disconnected automatically
after 8 hours, as happens with the Unix Version of
Mysql".
This is of
17 matches