Two solutions to your problem.
Table-full errors: check the 'AVG_ROW_LENGTH' and 'MAX_ROWS' options
for CREATE TABLE and ALTER TABLE. You can change these on the fly using
'alter table', but it will take quite a while for a table your size (a
few hours to a day, depending on the machine).
The exact values for
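A minimal sketch of such an ALTER TABLE; the table name and both values
are illustrative assumptions, not from the original post:

  -- MyISAM sizes its internal row pointers from MAX_ROWS * AVG_ROW_LENGTH,
  -- so raising them lifts the 'table is full' limit.
  ALTER TABLE big_table MAX_ROWS = 200000000 AVG_ROW_LENGTH = 250;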
On MySQL 4.0.10-gamma (RPM install, RedHat Advanced Server), I
am running into 'too many open files' issues (error 24).
I am using a rather large merge table (30 distinct tables merged),
which is likely the culprit. The error shows up once I have about a
dozen connections.
I did
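Each connection that touches the merge table opens all 30 underlying
tables, so a dozen connections already consume several hundred file
descriptors. A hedged sketch of one fix, assuming the 4.0-era
'set-variable' syntax and a value picked for illustration:

  # /etc/my.cnf
  [mysqld]
  set-variable = open_files_limit=4096

The operating-system limit ('ulimit -n' for the mysqld user) has to be
at least as high.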
I am having problems setting up replication between two 4.0.10
servers.
What I did so far (a sketch of the usual slave-side commands follows
below):
- generated a dump of the current state of the server using
'mysqldump' (it's a mix of mostly InnoDB tables and some MyISAM
tables)
- dropped all databases from the slave
- imported the dump into
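For context, a hedged sketch of the slave-side setup this describes;
host, user, password, log file, and position are placeholder
assumptions:

  CHANGE MASTER TO
      MASTER_HOST = 'master.example.com',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'secret',
      MASTER_LOG_FILE = 'master-bin.001',
      MASTER_LOG_POS = 4;
  START SLAVE;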
Check that the user 'repl' has the REPLICATION SLAVE privilege.
Ah, that fixed it. Actually, the real reason was that I had not
yet updated the mysql tables, and the new privileges did not take
effect as a result.
Running mysql_fix_privilege_tables, followed by the 'GRANT' command
and 'FLUSH PRIVILEGES', fixed it.
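A sketch of that sequence; the user, host pattern, and password are
illustrative assumptions:

  $ mysql_fix_privilege_tables
  mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';
  mysql> FLUSH PRIVILEGES;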
I just had to alter a large (25 GB, 100 million rows) table to
increase the max_rows parameter. The 'alter table' query has now been
running for 60+ hours, and it has spent the last 30+ hours in 'repair
with keycache' mode. Is there any way to speed up this operation?
I realize it is probably too late now. But next
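'Repair with keycache' usually means the index rebuild fell back from
the much faster 'repair by sorting' path because the sort buffer or
sort-file limit was too small for the table. A hedged my.cnf sketch,
assuming the 4.0-era 'set-variable' syntax; both values are assumptions
to be sized to the machine:

  [mysqld]
  set-variable = myisam_sort_buffer_size=256M
  set-variable = myisam_max_sort_file_size=2000M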
The easiest way to do this is to use MySQL's own PASSWORD() function.
To add a new user (using 'users' as a placeholder table name):
INSERT INTO users (username, passwd) VALUES ('jianping', PASSWORD('jian1830'));
To validate the password:
SELECT COUNT(*) FROM users WHERE username='jianping' AND
passwd=PASSWORD('whatwasentered');
or
Does anyone have any hints on how, or where to look to find out how,
to import a number of tab-delimited text files with some header info
that reside on an FTP server into a MySQL database using PHP?
The key command is 'load data infile'. It is very flexible in
handling various delimited
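A hedged sketch of such a load; the file path, table name, and the
number of header lines to skip are assumptions:

  LOAD DATA LOCAL INFILE '/tmp/report.txt'
      INTO TABLE report
      FIELDS TERMINATED BY '\t'
      LINES TERMINATED BY '\n'
      IGNORE 2 LINES;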
I'm wondering how well MySQL compresses data. I'm about to design a
database which will hold mainly _a lot_ of FLOAT values, and since I do
not really know how well MySQL compresses data, or how I could calculate
this, I'd really appreciate a little guidance.
see chapter 6.2.6 of the mysql manual.
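For a rough back-of-the-envelope estimate (the column and row counts
here are assumptions for illustration): MySQL does not compress FLOAT
columns on its own; a FLOAT takes 4 bytes and a DOUBLE 8. A MyISAM
table with 10 FLOAT columns and 1,000,000 rows therefore needs about
1,000,000 x (10 x 4 + ~1 byte of row overhead) = roughly 41 MB of data
file, plus indexes. Read-only tables can be shrunk afterwards with
myisampack.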
On Thu, 9 Jan 2003 22:56:04 +0200
Gelu Gogancea [EMAIL PROTECTED] wrote:
All this is very interesting, BUT I have two binary builds (4.0.7 and
4.0.8),
Ha, ha... you are gentle.
(load avg 10-60 queries/sec), and 4.0.8 crashes (on some
hardware/software) after 2 seconds of work :(
Did you see
Then the master handles writes, updates, deletes, etc.,
and the slaves answer SELECTs.
If this is impossible, what is the use of having slaves???
It is possible. Follow the manual for a good start. A couple of
caveats:
- your application has to be able to send the SELECTs to the different
(read-only) slaves
I am having 'issues' with MySQL running on RedHat Advanced Server
on an 8 GB machine (dual P4 Xeon).
After large imports ('load data infile', file size about 1 GB) into
a large table (20-30 GB, 100 million rows), the database crashes.
I did try several key_buffer_size settings. The
MySQL 3.23.51
Linux kernel 2.4.16
We do have 1 GB of RAM.
The main problem seems to be a table with about 8,597,146 records.
Similar situation here (100 million rows).
Things I found that help:
- be selective about what columns to index; try to limit yourself to
one.
- increase the key buffer (a sketch follows below).
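A minimal my.cnf sketch of that key-buffer change, assuming the
3.23-era 'set-variable' syntax; the value is an assumption and should
be sized against available RAM:

  [mysqld]
  set-variable = key_buffer=384M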
Yesterday, I increased the memory in my MySQL server from
2 GB to 4 GB.
Here is the log file as it died:
Number of processes running now: 1
mysqld process hanging, pid 1015 - killed
020808 09:40:12 mysqld restarted
020808 9:40:12 Can't start server: Bind on TCP/IP port: Address
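The truncated last line is the classic 'Address already in use'
symptom: the old mysqld had not released the port when the restarted
one tried to bind it. One way to check, assuming the default port 3306:

  $ netstat -lnp | grep 3306   # which process still holds the port?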
What the hell is error 28? Where can I find a description?
mysqlcheck --all-databases # says everything is ok
MySQL does come with a little utility, perror, which can be
used to translate error numbers into readable messages:
$ perror 28
Error code 28: No space left on device
Did you check
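Since error 28 means a filesystem is full, the next things to look at
are free space and inodes on the partitions holding the data directory
and tmpdir (standard Linux df flags):

  $ df -h    # free space per filesystem
  $ df -i    # free inodes; these can run out too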