Re: deleting big tables

2015-05-17 Thread Pau Marc Muñoz Torres
this is my process list


+----+------+-----------+------+---------+--------+---------------------------------+----------------------------------+
| Id | User | Host      | db   | Command | Time   | State                           | Info                             |
+----+------+-----------+------+---------+--------+---------------------------------+----------------------------------+
| 37 | pau  | localhost | UTR  | Killed  | 260012 | query end                       | delete from ensemblmotive        |
| 58 | pau  | localhost | UTR  | Query   |  81396 | Waiting for table metadata lock | drop index iutr on ensemblmotive |
| 59 | pau  | localhost | UTR  | Query   |  45331 | Waiting for table metadata lock | drop table ensemblmotive         |
| 66 | pau  | localhost | UTR  | Query   |      0 | NULL                            | show processlist                 |
+----+------+-----------+------+---------+--------+---------------------------------+----------------------------------+

The process with id 37 has been there for a long time. I tried to kill it and
then drop the table. What can I do?


Pau Marc Muñoz Torres
skype: pau_marc
http://www.linkedin.com/in/paumarc
http://www.researchgate.net/profile/Pau_Marc_Torres3/info/


2015-05-17 7:23 GMT+02:00 Adarsh Sharma eddy.ada...@gmail.com:

 Hi Pau,

 Ideally a drop table should not take that much time. You have to check whether
 your command is actually executing or still waiting; maybe you are not
 able to get a lock on that table.
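
For reference, a minimal sketch of how to check that from another session,
using only standard statements (the State column and the TRANSACTIONS section
are where a wait shows up):

SHOW FULL PROCESSLIST;        -- State shows e.g. "Waiting for table metadata lock"
SHOW ENGINE INNODB STATUS\G   -- TRANSACTIONS section shows what each transaction is doing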

 Cheers,
 Adarsh Sharma


 On Sat, 16 May 2015 at 23:34 Pau Marc Muñoz Torres paum...@gmail.com
 wrote:

 Hello every body

  I have a big table in my SQL server and I want to delete it; it also has
 some indexes. I tried the drop table and delete commands but I
 eventually get a timeout. What can I do? Is there any method
 to delete tables quickly?

 I know that drop and delete are not equivalent, but I want to get rid of
 all the information inside.

 thanks

 Pau Marc Muñoz Torres
 skype: pau_marc
 http://www.linkedin.com/in/paumarc
 http://www.researchgate.net/profile/Pau_Marc_Torres3/info/




Re: deleting big tables

2015-05-17 Thread Pau Marc Muñoz Torres
I solved the problem by rebooting my computer; after that the table dropped
in seconds.

thanks

Pau Marc Muñoz Torres
skype: pau_marc
http://www.linkedin.com/in/paumarc
http://www.researchgate.net/profile/Pau_Marc_Torres3/info/


2015-05-17 12:00 GMT+02:00 Pau Marc Muñoz Torres paum...@gmail.com:

 this is the innodb output,

  I tried to kill the process using kill, kill query and kill connection, but
 it didn't work. What can I do?

 thanks

 150517 11:50:46 INNODB MONITOR OUTPUT
 =
 Per second averages calculated from the last 3 seconds
 -
 BACKGROUND THREAD
 -
 srv_master_thread loops: 140779 1_second, 121940 sleeps, 13482 10_second,
 11383 background, 7600 flush
 srv_master_thread log flush and writes: 154479
 --
 SEMAPHORES
 --
 OS WAIT ARRAY INFO: reservation count 2091707, signal count 9920537
 Mutex spin waits 26944439, rounds 111751039, OS waits 966302
 RW-shared spins 5087632, rounds 68696066, OS waits 929958
 RW-excl spins 2980761, rounds 27893952, OS waits 158867
 Spin rounds per wait: 4.15 mutex, 13.50 RW-shared, 9.36 RW-excl
 
 TRANSACTIONS
 
 Trx id counter 154B1E14
 Purge done for trx's n:o  154B1E0A undo n:o  0
 History list length 1136
 LIST OF TRANSACTIONS FOR EACH SESSION:
 ---TRANSACTION 0, not started
 MySQL thread id 67, OS thread handle 0x7f11bc426700, query id 244
 localhost pau
 SHOW ENGINE INNODB STATUS
 ---TRANSACTION 154B1E00, ACTIVE 265942 sec rollback
 mysql tables in use 1, locked 1
 ROLLING BACK 297751 lock struct(s), heap size 35387832, 74438247 row
 lock(s), undo log entries 66688203
 MySQL thread id 37, OS thread handle 0x7f11bc4b9700, query id 110
 localhost pau query end
 delete from ensemblmotive
 
 FILE I/O
 
 I/O thread 0 state: waiting for completed aio requests (insert buffer
 thread)
 I/O thread 1 state: waiting for completed aio requests (log thread)
 I/O thread 2 state: waiting for completed aio requests (read thread)
 I/O thread 3 state: waiting for completed aio requests (read thread)
 I/O thread 4 state: waiting for completed aio requests (read thread)
 I/O thread 5 state: waiting for completed aio requests (read thread)
 I/O thread 6 state: waiting for completed aio requests (write thread)
 I/O thread 7 state: waiting for completed aio requests (write thread)
 I/O thread 8 state: waiting for completed aio requests (write thread)
 I/O thread 9 state: waiting for completed aio requests (write thread)
 Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
  ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
 Pending flushes (fsync) log: 1; buffer pool: 1
 13648332 OS file reads, 34442363 OS file writes, 1064506 OS fsyncs
 84.73 reads/s, 16384 avg bytes/read, 49.74 writes/s, 1.75 fsyncs/s
 -
 INSERT BUFFER AND ADAPTIVE HASH INDEX
 -
 Ibuf: size 1, free list len 6150, seg size 6152, 5407097 merges
 merged operations:
  insert 0, delete mark 206521397, delete 0
 discarded operations:
  insert 0, delete mark 0, delete 0
 Hash table size 276671, node heap has 101 buffer(s)
 370.88 hash searches/s, 150.28 non-hash searches/s
 ---
 LOG
 ---
 Log sequence number 117269257408
 Log flushed up to   117269225038
 Last checkpoint at  117268694768
 1 pending log writes, 0 pending chkp writes
 319455 log i/o's done, 0.75 log i/o's/second
 --
 BUFFER POOL AND MEMORY
 --
 Total memory allocated 137363456; in additional pool allocated 0
 Dictionary memory allocated 130206
 Buffer pool size   8191
 Free buffers   0
 Database pages 5931
 Old database pages 2170
 Modified db pages  4679
 Pending reads 0
 Pending writes: LRU 120, flush list 0, single page 0
 Pages made young 22819462, not young 0
 89.73 youngs/s, 0.00 non-youngs/s
 Pages read 13648346, created 340720, written 33498386
 84.73 reads/s, 0.00 creates/s, 47.49 writes/s
 Buffer pool hit rate 929 / 1000, young-making rate 76 / 1000 not 0 / 1000
 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
 0.00/s
 LRU len: 5931, unzip_LRU len: 0
 I/O sum[8119]:cur[172], unzip sum[0]:cur[0]
 --
 ROW OPERATIONS
 --
 0 queries inside InnoDB, 0 queries in queue
 1 read views open inside InnoDB
 Main thread process no. 1173, id 139714143528704, state: flushing log
 Number of rows inserted 0, updated 0, deleted 74140498, read 74808849
 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
 
 END OF INNODB MONITOR OUTPUT
 


 Pau Marc Muñoz Torres
 skype: pau_marc
 http://www.linkedin.com/in/paumarc
 http://www.researchgate.net/profile/Pau_Marc_Torres3/info/


 2015-05-17 10:31 GMT+02:00 Pothanaboyina Trimurthy 
 skd.trimur...@gmail.com:

 Hi Pou,
 Before killing those connections first check for the undo log entries
 from the engine innodb status. If there are too many undo log

Re: deleting big tables

2015-05-17 Thread Pau Marc Muñoz Torres
this is the innodb output,

 I tried to kill the process using kill, kill query and kill connection, but
it didn't work. What can I do?

thanks

150517 11:50:46 INNODB MONITOR OUTPUT
=
Per second averages calculated from the last 3 seconds
-
BACKGROUND THREAD
-
srv_master_thread loops: 140779 1_second, 121940 sleeps, 13482 10_second,
11383 background, 7600 flush
srv_master_thread log flush and writes: 154479
--
SEMAPHORES
--
OS WAIT ARRAY INFO: reservation count 2091707, signal count 9920537
Mutex spin waits 26944439, rounds 111751039, OS waits 966302
RW-shared spins 5087632, rounds 68696066, OS waits 929958
RW-excl spins 2980761, rounds 27893952, OS waits 158867
Spin rounds per wait: 4.15 mutex, 13.50 RW-shared, 9.36 RW-excl

TRANSACTIONS

Trx id counter 154B1E14
Purge done for trx's n:o  154B1E0A undo n:o  0
History list length 1136
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 67, OS thread handle 0x7f11bc426700, query id 244 localhost
pau
SHOW ENGINE INNODB STATUS
---TRANSACTION 154B1E00, ACTIVE 265942 sec rollback
mysql tables in use 1, locked 1
ROLLING BACK 297751 lock struct(s), heap size 35387832, 74438247 row
lock(s), undo log entries 66688203
MySQL thread id 37, OS thread handle 0x7f11bc4b9700, query id 110 localhost
pau query end
delete from ensemblmotive

FILE I/O

I/O thread 0 state: waiting for completed aio requests (insert buffer
thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 1
13648332 OS file reads, 34442363 OS file writes, 1064506 OS fsyncs
84.73 reads/s, 16384 avg bytes/read, 49.74 writes/s, 1.75 fsyncs/s
-
INSERT BUFFER AND ADAPTIVE HASH INDEX
-
Ibuf: size 1, free list len 6150, seg size 6152, 5407097 merges
merged operations:
 insert 0, delete mark 206521397, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 276671, node heap has 101 buffer(s)
370.88 hash searches/s, 150.28 non-hash searches/s
---
LOG
---
Log sequence number 117269257408
Log flushed up to   117269225038
Last checkpoint at  117268694768
1 pending log writes, 0 pending chkp writes
319455 log i/o's done, 0.75 log i/o's/second
--
BUFFER POOL AND MEMORY
--
Total memory allocated 137363456; in additional pool allocated 0
Dictionary memory allocated 130206
Buffer pool size   8191
Free buffers   0
Database pages 5931
Old database pages 2170
Modified db pages  4679
Pending reads 0
Pending writes: LRU 120, flush list 0, single page 0
Pages made young 22819462, not young 0
89.73 youngs/s, 0.00 non-youngs/s
Pages read 13648346, created 340720, written 33498386
84.73 reads/s, 0.00 creates/s, 47.49 writes/s
Buffer pool hit rate 929 / 1000, young-making rate 76 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 5931, unzip_LRU len: 0
I/O sum[8119]:cur[172], unzip sum[0]:cur[0]
--
ROW OPERATIONS
--
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread process no. 1173, id 139714143528704, state: flushing log
Number of rows inserted 0, updated 0, deleted 74140498, read 74808849
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s

END OF INNODB MONITOR OUTPUT



Pau Marc Muñoz Torres
skype: pau_marc
http://www.linkedin.com/in/paumarc
http://www.researchgate.net/profile/Pau_Marc_Torres3/info/


2015-05-17 10:31 GMT+02:00 Pothanaboyina Trimurthy skd.trimur...@gmail.com
:

 Hi Pou,
 Before killing those connections, first check the undo log entries from
 the engine innodb status. If there are too many undo log entries it will
 take some time to clean them up. If you forcefully kill those
 connections there is a good chance of crashing the DB instance.
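
As a sketch of where those undo log entries are visible (the figures in the
comment are the ones from the status output quoted earlier in this thread):

SHOW ENGINE INNODB STATUS\G
-- in the TRANSACTIONS section, a line such as
--   ROLLING BACK 297751 lock struct(s), ... undo log entries 66688203
-- shows the rollback still in progress; while that count keeps falling,
-- the server is working through it.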
 On 17 May 2015 1:54 pm, Adarsh Sharma eddy.ada...@gmail.com wrote:

 Hi Pou,

 This is the reason why your drop commands are taking so much time: they
 are in a waiting state. It is also quite surprising to me what the purpose of
 the delete command is. I would say

deleting big tables

2015-05-16 Thread Pau Marc Muñoz Torres
Hello every body

 I have a big table in my SQL server and I want to delete it; it also has
some indexes. I tried the drop table and delete commands but I
eventually get a timeout. What can I do? Is there any method
to delete tables quickly?

I know that drop and delete are not equivalent, but I want to get rid of all
the information inside.
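
If the goal is only to throw away every row, a sketch of the usual shortcuts;
both avoid the row-by-row delete and the huge rollback a killed DELETE leaves
behind:

TRUNCATE TABLE ensemblmotive;   -- drops and recreates the table internally
-- or, if the table itself is no longer needed:
DROP TABLE ensemblmotive;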

thanks

Pau Marc Muñoz Torres
skype: pau_marc
http://www.linkedin.com/in/paumarc
http://www.researchgate.net/profile/Pau_Marc_Torres3/info/


Re: from excel to the mySQL

2010-08-03 Thread Marc Guay
 I need a way to upload data from Excel to the MySQL database. Dear all, I 
 need help on how to take data from Excel columns and load it into a MySQL 
 database using PHP.

http://www.sqldbu.com/eng/sections/tips/mysqlimport.html
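
The page above covers mysqlimport; as a sketch of the same idea with LOAD
DATA, assuming the sheet has first been saved from Excel as a CSV file (the
file name, table name and line endings are assumptions):

LOAD DATA LOCAL INFILE '/tmp/sheet.csv' INTO TABLE mytable
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\r\n'
  IGNORE 1 LINES;   -- skip the header row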

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Table which can reference a number of other tables

2010-07-26 Thread Marc Guay
I keep running into problems like this and have another example of it
that might be clearer.

I have 4 tables, Newsletters, Contacts, Industries, and Contact Groups.

We send Newsletters to Contacts, either grouped by Industry or Contact Group.

Contact Groups must be associated with an Industry.  Contacts must be
associated with an Industry, but not necessarily a Contact Group.

For example, sometimes we would like to send a Newsletter to all
Contacts who are in the real estate Industry, and sometimes, we only
want to send newsletters to Contacts who are members of the Planet
Earth Real Estate Board.  So far, I have the following:

Newsletters
--
id
content
contact_group_id  (optional)
industry_id  (optional)

Contacts
--
id
name
email
industry_id (FK)
contact_group_id (FK) (optional)

Industries
--
id
name

Contact Groups
--
id
name
industry_id (FK)


Are suggested solutions for this problem any different from those I've
received for the previous example?  The real problem is when a table
(Newsletters) can be associated with 2 or more other tables (Contact
Group or Industry).
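
One possible shape, as a sketch only (lower-case table names are assumptions):
move the optional links out of Newsletters into a separate targeting table, so
the newsletter row itself no longer carries two optional foreign keys:

CREATE TABLE newsletter_targets (
  newsletter_id    INT NOT NULL,
  industry_id      INT NULL,
  contact_group_id INT NULL,
  FOREIGN KEY (newsletter_id)    REFERENCES newsletters (id),
  FOREIGN KEY (industry_id)      REFERENCES industries (id),
  FOREIGN KEY (contact_group_id) REFERENCES contact_groups (id)
) ENGINE=InnoDB;

Each row then points at either an Industry or a Contact Group; enforcing
"exactly one of the two" still has to happen in application code, since MySQL
of this era parses but does not enforce CHECK constraints.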

Thanks for any guidance.
Marc

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Table which can reference a number of other tables

2010-07-21 Thread Marc Guay
Hi everyone,

I have a question regarding database design, I hope that this is
appropriate for the list.  Let's say that I have the following tables:

clients (id,name)
contacts (id, name, phone, client_id (FK))

companies (id, name)
employees (id, name, phone, company_id (FK))

logins (id, username, password)


What's the best way to connect contacts and employees to the logins
table?  I've thought of duplicating the username and password fields
into both the contacts and employees tables, adding both contact_id
and employee_id foreign keys to the logins table, and adding login_id
foreign keys to the contacts and employees tables, but none of these
solutions seems very smart.
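
A sketch of the last option mentioned (a nullable login_id on contacts and
employees); column sizes are assumptions, and the foreign keys need InnoDB:

CREATE TABLE logins (
  id       INT AUTO_INCREMENT PRIMARY KEY,
  username VARCHAR(64),
  password VARCHAR(64)
) ENGINE=InnoDB;

ALTER TABLE contacts
  ADD COLUMN login_id INT NULL,
  ADD FOREIGN KEY (login_id) REFERENCES logins (id);
ALTER TABLE employees
  ADD COLUMN login_id INT NULL,
  ADD FOREIGN KEY (login_id) REFERENCES logins (id);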

Thanks,
Marc

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Select form a list

2008-02-26 Thread Pau Marc Munoz Torres
Hi

 I have a list on which I would like to make a select; the list looks like this:

id   Properties   Others
*************************
 1        1        sss
 2        2        sss
 3        2        a etc...
 4        2
 5        2
 6        1
 7        2
 8        2
 9        1
..
Imagine that from this list I only want to select the registers whose
Properties value is 1, but not all of them, only 2 at a time. For
example, starting from id 1 I would like to select only ids 1 and 6,
and starting from id 6 I should be able to select 6 and 9 and no more.

How can I do that?
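
Assuming the list lives in a table (called mylist here, an assumption), one
way to do it is to filter on Properties, start from the given id and cap the
result at two rows, as a sketch:

SELECT id
FROM mylist
WHERE Properties = 1
  AND id >= 6        -- the starting id, e.g. 1 or 6
ORDER BY id
LIMIT 2;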

Thanks

Pau

-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


indexing tables using my owns functions

2007-11-13 Thread Pau Marc Munoz Torres
Hi

 I've created a function that returns a float value; the code for it is:

create function IDR(pin1 varchar(20),pin4 varchar(20),pin6 varchar(20),pin7
varchar(20),pin9 varchar(20),MOL varchar(20)) returns float
DETERMINISTIC
begin


declare output float;
declare P1 float;
declare P4 float;
declare P6 float;
declare P7 float;
declare P9 float;


select VALUE into P1 from PSSMS where AA=pin1 and POS='1'
and MOLEC=MOL;
select VALUE into P4 from PSSMS where AA=pin4 and POS='4'
and MOLEC=MOL;
select VALUE into P6 from PSSMS where AA=pin6 and POS='6'
and MOLEC=MOL;
select VALUE into P7 from PSSMS where AA=pin7 and POS='7'
and MOLEC=MOL;
select VALUE into P9 from PSSMS where AA=pin9 and POS='9'
and MOLEC=MOL;

select P1+P4+P6+P7+P9 into output;

return output;
end
//


And it works. Now I would like to index a table using this function.
The table description is:
mysql> describe precalc;
+-------+---------+------+-----+---------+----------------+
| Field | Type    | Null | Key | Default | Extra          |
+-------+---------+------+-----+---------+----------------+
| id    | int(6)  | NO   | PRI | NULL    | auto_increment |
| P1    | char(1) | YES  |     | NULL    |                |
| P4    | char(1) | YES  |     | NULL    |                |
| P6    | char(1) | YES  |     | NULL    |                |
| P7    | char(1) | YES  |     | NULL    |                |
| P9    | char(1) | YES  |     | NULL    |                |
+-------+---------+------+-----+---------+----------------+
6 rows in set (0.01 sec)

and I try to build the index with the following command:

mysql> create index AA on precalc (IDR(P1,P4,P6,P7,P9,'HLA-DRB13'));

But I get the following error:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use
near ''P1','P4','P6','P7','P9','HLA-DRB13'))' at line 1

Does someone know where the error is?

Thanks

Pau

-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]




Fwd: loading scripts to mysql

2007-11-13 Thread Pau Marc Munoz Torres
hi,

 Thanks for your help; I finally found the source command. It works like
this:

mysql> source /Lhome/geruppa/mhc/Pack_Ref_web/prova.sql


2007/11/9, Michael Gargiullo  [EMAIL PROTECTED]:

 On Fri, 2007-11-09 at 13:22 +0100, Pau Marc Munoz Torres wrote:

  Hi everybody
 
  I'm writing a function script in a flat file using vim; now I would
  like to load it into mysql. Is there some command to do it, similar to
  load data infile for filling tables?
 
  thanks
 


 Sure,
 From command line:
  mysql -u username -p database < file-containing-sql

 -Mike




-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]






Re: indexing tables using my owns functions

2007-11-13 Thread Pau Marc Munoz Torres
As far as I can see, from mysql 5.0 and up it is possible to create an
index using functions.

http://www.faqs.org/docs/ppbook/r24254.htm

But I keep having problems with the example from the link. Is there any bug
in mysql 5.0.24a-log?
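
MySQL of this vintage cannot build an index on the result of a stored
function (the page linked above appears to describe PostgreSQL's functional
indexes). The usual workaround, sketched here with names from the thread plus
an assumed helper column, is to store the value and index that column:

ALTER TABLE precalc ADD COLUMN idr_val FLOAT;
UPDATE precalc
   SET idr_val = IDR(P1, P4, P6, P7, P9, 'HLA-DRB13');
CREATE INDEX AA ON precalc (idr_val);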

2007/11/13, Martijn Tonies [EMAIL PROTECTED]:

mysql> create index AA on precalc (IDR(P1,P4,P6,P7,P9,'HLA-DRB13'));
 
 But i Get the following error:
 
 ERROR 1064 (42000): You have an error in your SQL syntax; check the
 manual
 that corresponds to your MySQL server version for the right syntax to use
 near ''P1','P4','P6','P7','P9','HLA-DRB13'))' at line 1
 
 Some one knows where is the error?

 As far as I can see (
 http://dev.mysql.com/doc/refman/5.0/en/create-index.html )
 you can only use columns, not a function.

 Martijn Tonies
 Database Workbench - tool for InterBase, Firebird, MySQL, NexusDB, Oracle
 
 MS SQL Server
 Upscene Productions
 http://www.upscene.com
 My thoughts:
 http://blog.upscene.com/martijn/
 Database development questions? Check the forum!
 http://www.databasedevelopmentforum.com


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


loading scripts to mysql

2007-11-09 Thread Pau Marc Munoz Torres
Hi everybody

 I'm writing a function script in a flat file using vim; now I would like to
load it into mysql. Is there some command to do it, similar to load data
infile for filling tables?

thanks

-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


Problems with create procedure

2007-11-07 Thread Pau Marc Munoz Torres
Hi

 I'm working with mysql 5.0.24a-log, trying to create a procedure as
indicated on the mysql web page:

mysql> delimiter //

mysql> CREATE FUNCTION hello (s CHAR(20)) RETURNS CHAR(50)
    -> RETURN CONCAT('Hello, ',s,'!');
    -> //
Query OK, 0 rows affected (0.00 sec)

mysql> delimiter ;



and I get the following error before delimiter ;

ERROR 1418 (HY000): This function has none of DETERMINISTIC, NO SQL, or
READS SQL DATA in its declaration and binary logging is enabled (you *might*
want to use the less safe log_bin_trust_function_creators variable)

Can somebody tell me what I should do?
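
For reference, the two usual ways out, as a sketch; the first declares the
function's characteristics, the second relaxes the safety check named in the
error message and requires the SUPER privilege:

CREATE FUNCTION hello (s CHAR(20)) RETURNS CHAR(50) DETERMINISTIC
  RETURN CONCAT('Hello, ', s, '!');
-- or:
SET GLOBAL log_bin_trust_function_creators = 1;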

thanks

Pau


-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de BarcelonaE-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


mysqladmin claims password in crontab !

2006-08-17 Thread Marc MENDEZ

Hi,

I work under Mandrake 10.1
First, I have a .my.cnf, which works quite well, since I can enter mysql 
without entering any password.

Even mysqladmin works !
But, I have a batch process run by crontab as root, which tests if mysql is 
running (mysqladmin version --silent).

Unfortunately, this command fails:

/usr/bin/mysqladmin: connect to server at 'localhost' failed
error: 'Access denied for user: '[EMAIL PROTECTED]' (Using password: NO)'

So, I did a simple test: I just ran mysqladmin version using the batch 
command (it runs a task as cron does, but immediately): it works!


So, what now?! Help please!








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: load-file() doesn't work [SOLVED]

2006-07-17 Thread Marc Alff

Hi Fleet


[EMAIL PROTECTED] wrote:
 SHORT SYNOPSIS:
 The statement
 INSERT INTO table (blob_column) VALUES
 (load_file('/home/somebody/image.jpg'));

 Produces no error; but fails to load the image file.

 SOLUTION:
 The image file (or other binary file, I assume) MUST reside in / or /tmp,
 ie /image.jpg or /tmp/image.jpg. (At least in MySQL 3.23.36)

 I *hope* this is a bug!

   - fleet -

   

Just to check, is the /home partition actually mounted on the server,
and at the same place?

In other words, does /home/fleet/image.jpg exist:
- when seen from the host running mysql
- when seen from the host running mysqld
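
A quick check, as a sketch: LOAD_FILE() returns NULL rather than an error when
the server cannot read the file (wrong host, file permissions, or missing FILE
privilege), which matches the "no error but no data" symptom:

SELECT LOAD_FILE('/home/somebody/image.jpg');
-- NULL here means mysqld itself cannot read that path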

Marc.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Question on mysql-test-run sp failed

2006-07-14 Thread Marc Alff

Hi Jenny

Jenny Chen wrote:
 Hi,

 Recently, I built mysql5.0 on Solaris 10 amd64, but when running make
 test, the bundled sp test failed
 with the following error:

 mysqltest: At line 1527: query 'call fib(20)' failed: 1436: Thread stack
 overrun:  186944 bytes used of a 262144 byte stack, and 81920 bytes
 needed.  Use 'mysqld -O thread_stack=#' to specify a bigger stack.
 (the last lines may be the most important ones)


From what I understand (I am also building MySQL from the source),
there are memory constraints with the number of threads and the stack
size for each thread.

See the following (it's not for 5.0/Solaris, but might help anyway) :
http://dev.mysql.com/doc/refman/5.1/en/source-notes-linux.html

Marc Alff.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Slow query log

2006-04-05 Thread Mechain Marc
Hello,

 

Is there a way to enable the Slow Query Log on the fly, without having to
restart mysqld?

 

Regards,

 

Marc.



RE: Slow query log

2006-04-05 Thread Mechain Marc
Hi,

Thank you for your answer.
But is there a chance to be able to do it one day?
I think it could be a nice feature. 

Marc.

-----Original Message-----
From: Petr Chardin [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 5, 2006 13:06
To: Mechain Marc
Cc: MySQL
Subject: Re: Slow query log

On Wed, 2006-04-05 at 11:38 +0200, Mechain Marc wrote:
 Is there a way to enable the Slow Query Log on the fly without having to
 restart mysqld

No.

Petr
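
For what it's worth, this changed in later releases; from 5.1 on the slow log
can be toggled at runtime, as a sketch (not applicable to the 4.x/5.0 servers
discussed here):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;   -- threshold in seconds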


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Question about interactive timeout

2006-03-15 Thread Mechain Marc
I have a MySQL server (4.1.8) where some sessions stay connected for longer
than the interactive_timeout value.

 

Here is an abstract of the show processlist command:

 

| 129996 | fret | mtt04.back:33598   | fret | Sleep   |   61756 |

| 129998 | fret | mtt04.back:33599   | fret | Sleep   |   61759 |

| 12 | ets | mtt04.back:33600   | ets | Sleep   |   61759 |

| 13 | ets | mtt04.back:33601   | ets | Sleep   |   61759 |

| 130001 | tls  | mtt04.back:33602   | tls  | Sleep   |   61755 |

 

The show variables command gives me:

Interactive_timeout 28800

Wait_timeout 28800

 

Why do those connections still remain on the server with a Time value of
61700s while in a Sleep command?

It is rather strange to me; they should normally have disappeared after
28800s of inactivity.
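
A sketch of what to compare: a connection copies its wait_timeout from the
global (or interactive) value when it connects, so sessions opened before a
change, or given a different session value by their client library, can
outlive the 28800s figure:

SHOW GLOBAL VARIABLES LIKE '%timeout%';
SHOW SESSION VARIABLES LIKE 'wait_timeout';   -- run from one of the long-lived clients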

 

Could you give me a clue?

 

Marc.

 

 

 



install mysql on linux AMD64 processor

2005-12-30 Thread jean-marc Jegou
SEND-PR: -*- send-pr -*-
SEND-PR: Lines starting with `SEND-PR' will be removed automatically, as
SEND-PR: will all comments (text enclosed in `' and `').
SEND-PR:
From: root
To: mysql@lists.mysql.com
Subject: [50 character or so descriptive subject here (for reference)]






Description:
I tried to install mysql-standard-5.0.17_linux-x86_64-glibc23, but it did
not work!

When I run ./mysql_safe --user=mysql, the error message is cannot
execute binary file!

mysqld did not work either!




How-To-Repeat:
code/input/activities to reproduce the problem (multiple lines)
Fix:
how to correct or work around the problem, if known (multiple lines)

Submitter-Id:  submitter ID
Originator:root
Organization:
 organization of PR author (multiple lines)
MySQL support: [none | licence | email support | extended email
support ]
Synopsis:  synopsis of the problem (one line)
Severity:  [ non-critical | serious | critical ] (one line)
Priority:  [ low | medium | high ] (one line)
Category:  mysql
Class: [ sw-bug | doc-bug | change-request | support ] (one line)
Release:   mysql-5.0.17-standard (MySQL Community Edition - Standard
(GPL))

C compiler:gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-52)
C++ compiler:  gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-52)
Environment:
machine, os, target, libraries (multiple lines)
System: Linux localhost.localdomain 2.6.9-10.2.aur.2 #1 Thu Feb 10
04:34:27 EST 2005 i686 athlon i386 GNU/Linux
Architecture: i686

Some
paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc /usr/bin/cc
GCC: Lecture des spécification à partir de /usr/lib/gcc/i386-redhat-
linux/3.4.2/specs
Configuré avec: ../configure --prefix=/usr --mandir=/usr/share/man --
infodir=/usr/share/info --enable-shared --enable-threads=posix --
disable-checking --with-system-zlib --enable-__cxa_atexit --disable-
libunwind-exceptions --enable-java-awt=gtk --host=i386-redhat-linux
Modèle de thread: posix
version gcc 3.4.2 20041017 (Red Hat 3.4.2-6)
Compilation info: CC='gcc'  CFLAGS=''  CXX='gcc'  CXXFLAGS=''
LDFLAGS=''  ASFLAGS=''
LIBC: 
lrwxrwxrwx  1 root root 13 sep  8 10:40 /lib/libc.so.6 - libc-2.3.3.so
-rwxr-xr-x  1 root root 1504728 oct 28  2004 /lib/libc-2.3.3.so
-rw-r--r--  1 root root 2404716 oct 28  2004 /usr/lib/libc.a
-rw-r--r--  1 root root 204 oct 28  2004 /usr/lib/libc.so
Configure command: ./configure '--prefix=/usr/local/mysql' '--
localstatedir=/usr/local/mysql/data' '--libexecdir=/usr/local/mysql/bin'
'--with-comment=MySQL Community Edition - Standard (GPL)' '--with-extra-
charsets=complex' '--with-server-suffix=-standard' '--enable-thread-
safe-client' '--enable-local-infile' '--enable-assembler' '--disable-
shared' '--with-zlib-dir=bundled' '--with-big-tables' '--with-readline'
'--with-archive-storage-engine' '--with-innodb' 'CC=gcc' 'CXX=gcc'




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



install mysql-5.0.17 on linux AMD 64 processor

2005-12-30 Thread jean-marc Jegou
SEND-PR: -*- send-pr -*-
SEND-PR: Lines starting with `SEND-PR' will be removed automatically, as
SEND-PR: will all comments (text enclosed in `' and `').
SEND-PR:
From: root
To: mysql@lists.mysql.com
Subject: install mysql-5.0.17 on linux AMD 64 processor




Description:
I tried to install mysql-standard-5.0.17_linux-x86_64-glibc23, but it did
not work!

When I run ./mysql_safe --user=mysql, the error message is cannot
execute binary file!

mysqld did not work either!




How-To-Repeat:
code/input/activities to reproduce the problem (multiple
lines)
Fix:
how to correct or work around the problem, if known (multiple
lines)

Submitter-Id:  submitter ID
Originator:root
Organization:
 organization of PR author (multiple lines)
MySQL support: [none | licence | email support | extended email
support ]
Synopsis:  synopsis of the problem (one line)
Severity:  [ non-critical | serious | critical ] (one line)
Priority:  [ low | medium | high ] (one line)
Category:  mysql
Class: [ sw-bug | doc-bug | change-request | support ] (one
line)
Release:   mysql-5.0.17-standard (MySQL Community Edition -
Standard
(GPL))

C compiler:gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-52)
C++ compiler:  gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-52)
Environment:
machine, os, target, libraries (multiple lines)
System: Linux localhost.localdomain 2.6.9-10.2.aur.2 #1 Thu Feb 10
04:34:27 EST 2005 i686 athlon i386 GNU/Linux
Architecture: i686

Some
paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc /usr/bin/cc
GCC: Lecture des spécification à partir de /usr/lib/gcc/i386-redhat-
linux/3.4.2/specs
Configuré avec: ../configure --prefix=/usr --mandir=/usr/share/man --
infodir=/usr/share/info --enable-shared --enable-threads=posix --
disable-checking --with-system-zlib --enable-__cxa_atexit --disable-
libunwind-exceptions --enable-java-awt=gtk --host=i386-redhat-linux
Modèle de thread: posix
version gcc 3.4.2 20041017 (Red Hat 3.4.2-6)
Compilation info: CC='gcc'  CFLAGS=''  CXX='gcc'  CXXFLAGS=''
LDFLAGS=''  ASFLAGS=''
LIBC: 
lrwxrwxrwx  1 root root 13 sep  8 10:40 /lib/libc.so.6 - libc-2.3.3.so
-rwxr-xr-x  1 root root 1504728 oct 28  2004 /lib/libc-2.3.3.so
-rw-r--r--  1 root root 2404716 oct 28  2004 /usr/lib/libc.a
-rw-r--r--  1 root root 204 oct 28  2004 /usr/lib/libc.so
Configure command: ./configure '--prefix=/usr/local/mysql' '--
localstatedir=/usr/local/mysql/data' '--libexecdir=/usr/local/mysql/bin'
'--with-comment=MySQL Community Edition - Standard (GPL)' '--with-extra-
charsets=complex' '--with-server-suffix=-standard' '--enable-thread-
safe-client' '--enable-local-infile' '--enable-assembler' '--disable-
shared' '--with-zlib-dir=bundled' '--with-big-tables' '--with-readline'
'--with-archive-storage-engine' '--with-innodb' 'CC=gcc' 'CXX=gcc'







Re: How to insert CURDATE() as default

2005-12-30 Thread Marc
Thanks, Danny. This little info was all I needed. It works perfectly
now.

Have a Happy New Year!

--Marc


On Fri, 2005-12-30 at 10:28 +0100, Danny Stolle wrote:
 Marc,
 
 In MySQL (I am using 4.1.9 and 4.1.15, so I am not sure about 5) it is 
 not possible to use functions as default values; you could try
 create table tester (f_date date default curdate()); but this doesn't 
 work. You have to work out in knoda how to present the current 
 date. Be sure you have the field datatype set to DATE.
 Create your form and set the datasource to the table holding the 
 date field. Create your textbox and assign the field to it.
 Put %NOWDATE% in the 'default value' field of your textbox. The 
 current date will be shown after you run the form.
 
 Hope this little info helps you :-)
 
 Danny
 

-- 
Marc [EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: install mysql-5.0.17 on linux AMD 64 processor

2005-12-30 Thread jean-marc Jegou
Re: install mysql-5.0.17 on linux AMD 64 processor
http://lists.mysql.com/mysql/193352

dir Duncan Hill ,

thank you to answer to my question !

my processeur is AMD 64 2800+ ( 64 Bits processeurs!) and linux is
seeing an Athlon 32 bits processor (see  previous message).

My OS IS Linux Aurox ver 10.2 (OS 32 Bits ),therefore the trouble is
probably resolved !

What do you think ?

best regards



 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How to insert CURDATE() as default

2005-12-29 Thread Marc
I've got a MySQL table where I'd like to have the current date,
CURDATE(), as the default in a column. I'm using knoda to work with this
table. How do I use knoda to get this done? I can enter CURDATE() in the
default using the GridColumns button, but all that does is insert the
phrase CURDATE(). I've got the column set to Date for the ColumnType.

Thanks.


-- 
Marc [EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How to insert CURDATE() as default

2005-12-29 Thread Marc
I forgot to mention - I'm running MySQL 4.0.14 on Linux. And I'm not very 
knowledgeable on databases. I just use MySQL with Knoda to get the job done.

==
I've got a MySQL table where I'd like to have the current date,
CURDATE(), as the default in a column. I'm using knoda to work with this
table. How do I use knoda to get this done? I can enter CURDATE() in the
default using the GridColumns button, but all that does is insert the
phrase CURDATE(). I've got the column set to Date for the ColumnType.

Thanks.


-- 
Marc [EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: mysqld crash with archive engine 2gb

2005-11-21 Thread Marc Rossi
David -

Sorry for the delayed response.  ulimit is unlimited. 
I am using the RPMs for RH EL3 downloaded from a
mirror, so I would assume they have been built to
allow archives larger than 2GB.

I am most likely going a different route at this point,
as my query response time is much too slow even with a
1GB archive table.  Using a separate MyISAM table for
each day of data (~132 MB / 3.2M rows) gives me the
response time I need; plus, there are no scenarios in
which I will be performing cross-day queries (I would
guess I could use a MERGE table if necessary).
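
A sketch of that per-day MyISAM plus MERGE layout (table and column
definitions are assumptions, not taken from the thread):

CREATE TABLE trade_20051115 (
  ts  DATETIME,
  sym CHAR(8),
  px  DOUBLE,
  qty INT,
  KEY (ts)
) ENGINE=MyISAM;
CREATE TABLE trade_20051116 LIKE trade_20051115;

-- the MERGE table must match the definition of its underlying tables
CREATE TABLE trade_all (
  ts  DATETIME,
  sym CHAR(8),
  px  DOUBLE,
  qty INT,
  KEY (ts)
) ENGINE=MERGE UNION=(trade_20051115, trade_20051116) INSERT_METHOD=LAST;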

Thanks again,
Marc 

--- Logan, David (SST - Adelaide)
[EMAIL PROTECTED] wrote:

 Hi Marc,
 
 I would be a bit suspicious of the version of zlib
 or something similar.
 If it has only been compiled with a 32bit compiler,
 this could be
 causing an artificial limit of 2Gb on a pointer.
 
 The ARCHIVE engine uses the zlib for its
 compression, that comes with
 mysql. I am presuming if it has been compiled in
 32bit mode that a
 pointer or two maybe overflowing.
 
 These are just thoughts as I can't really find any
 reason in your logs
 as to why this should be happening. Do you have a
 ulimit set for the
 user? This could be constricting your file growth to
 2Gb.
 
 Regards
 
 David Logan
 Database Administrator
 HP Managed Services
 148 Frome Street,
 Adelaide 5000
 Australia
 
 +61 8 8408 4273 - Work
 +61 417 268 665 - Mobile
 +61 8 8408 4259 - Fax
 
 

 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqld crash with archive engine 2gb

2005-11-17 Thread Marc Rossi

 Is there any message in the log files? or any error
 message from the OS?

David - 

When the mysqld process restarts, there is nothing in
the logfile except the basic startup info.  I have
listed it below:

  051116 10:06:33 [Note] /usr/sbin/mysqld: ready for
connections.
  Version: '5.0.15-standard'  socket:
'/var/lib/mysql/mysql.sock'  port: 3306  MySQL
Community Edition - Standard (GPL)
  Number of processes running now: 0
  051116 20:33:05  mysqld restarted
  051116 20:33:05  InnoDB: Started; log sequence
number 0 43665
  051116 20:33:05 [Note] /usr/sbin/mysqld: ready for
connections.
  Version: '5.0.15-standard'  socket:
'/var/lib/mysql/mysql.sock'  port: 3306  MySQL
Community Edition - Standard (GPL)

As you can see from the log I am running 5.0.15.  I
installed using the provided RPMs on a RH ES3 box. 

Below is the tablestatus.

mysql> show table status like 'trade' \G
*************************** 1. row ***************************
   Name: trade
 Engine: ARCHIVE
Version: 10
 Row_format: Compressed
   Rows: 0
 Avg_row_length: 4137
Data_length: 2147483647
Max_data_length: 0
   Index_length: 0
  Data_free: 0
 Auto_increment: NULL
Create_time: 2005-11-16 20:33:05
Update_time: 2005-11-16 20:33:05
 Check_time: NULL
  Collation: latin1_swedish_ci
   Checksum: NULL
 Create_options: max_rows=4294967295
avg_row_length=4137
Comment:
1 row in set (0.01 sec)
 
Thanks for any assistance you can give.  I am also
looking at alternative solutions in which I use
multiple ARCHIVE dbs, each smaller than 2 GB.

Marc


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysqld crash with archive engine 2gb

2005-11-16 Thread Marc Rossi
I am trying to populate a table using the archive
engine that I estimate will take up ~ 8gb of disk
space when finished.  Right now I am crashing the
server as soon as my file gets to the 2gb mark.  OS is
linux and there are other files on the same filesystem
that are 30gb+ so I know the fs has support.

Any ideas?  I tried to do the ALTER TABLE x
AVG_ROW_LENGTH=x MAX_ROWS=y using inputs that would
exceed 2GB, same results.

TIA,
Marc

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Edit MySQL through MS Access?

2005-11-03 Thread Marc Pidoux

Eh, eh, sorry, stupid question for some of you, I'm sure...

I'm wondering if there is a way to edit a MySQL DB through MS Access 
like you can for an MSSQL DB? I want to edit a lot of data, tables 
etc... and doing it through phpMyAdmin just isn't very efficient.


Thanks,

Marc


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Edit MySQL through MS Access?

2005-11-03 Thread Marc Pidoux
Yes! That's so cool. It took me a little while to get it working, but 
it works.


I can retrieve a MySQL table in MS Access and even upload a new table 
from MS Access to MySQL, but I cannot update it from MS Access: when I 
try to upload an updated table, I get a Table already exists error, 
which makes sense, but I want to update/overwrite it. Something like 
downloading the table from MySQL, editing some of the fields and data in MS 
Access and re-uploading the whole thing. Is this possible too?


Thanks again!

Marc


Dan Nelson wrote:


In the last episode (Nov 03), Marc Pidoux said:
 


Eh, eh, sorry, stupid question for some of you, I'm sure...

I'm wondering if there is a way to edit a MySQL DB through MS Access
like you can for an MSSQL DB? I want to edit a lot of data, tables
etc... and doing it through phpMyAdmin just isn't very efficient.
   



Sure.  Just install the MySQL ODBC connector and link to the tables
same as you would for MS SQL.  There is a whole section in the manual
detailing this:

http://dev.mysql.com/doc/refman/5.0/en/msaccess.html

 



PHP/MySQL vs ASP/MSSQL?

2005-11-03 Thread Marc Pidoux
I've been using ASP for years, but I have to work on a bigger project 
involving many users and much data, so of course Access is not an option 
anymore. Since it's a project requiring thousands of files and several 
applications all linked together, I can't create it once and change it 
later. Basically, which option is better: PHP/MySQL or 
ASP/MSSQL? It's not a giant project, but it might include around 20,000 
members interacting daily through forums, blogs, messages etc...


Thanks,

Marc


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: UNIONS 'Got error 12 from storage engine'

2005-04-21 Thread Mechain Marc
Memory problem.

Error 12 = Cannot allocate memory 

Marc.

-----Original Message-----
From: Cliff Daniel [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 21, 2005 07:43
To: mysql@lists.mysql.com
Subject: UNIONS 'Got error 12 from storage engine'


Have a query that consists of 10 unions.  For a period of time it will
constantly return with an error 12.  Moments later it might actually
work once or twice.  I can find nothing in any logs or whatever.   The
odd thing is that when it isn't working, simply reducing the unions to 8
or less returns with no problem.

A) Very little to be read about on the net for Error 12
B) What little there is talks mostly of BDB, which is of no relevance.

I've tried reproducing the problem with lots of unions, even tried
joining some simple table to try to get the same error, but to no avail.
I'm guessing it is related to my specific schema.

I'm on 4.1.11, up from 4.1.10a but that didn't fix it.

Any debugging avenues suggested?

I know someone is going to want an explain, and it'll have to wait until
the morning.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Regarding the loading of data using load data infile

2005-04-20 Thread Mechain Marc
Maybe a clue: for the data records you load into the table, the values for 
numeric fields such as DEPARTMENT_ID and LOCATION_ID should not be enclosed in 
quotes. If they are, mysql has to make a translation from character to 
numeric.

Marc.
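
On top of that, the usual InnoDB bulk-load pattern is to switch the per-row
checks off around the load and commit once at the end; a sketch using the file
and table names from the original post (unique_checks and foreign_key_checks
are session variables on 4.1/5.0-era servers, an assumption worth verifying on
the exact version):

SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;

LOAD DATA INFILE 'teldir.lst' INTO TABLE teldir
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';

COMMIT;
SET foreign_key_checks = 1;
SET unique_checks = 1;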

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 20, 2005 11:59
To: mysql@lists.mysql.com
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Regarding the loading of data using load data infile




Hi,
   I had 99,990 records to be loaded into a table which has
unique constraints and foreign-key constraints, as below:

CREATE TABLE `teldir` (
  `NAME` varchar(21) default '',
  `PHONE_NO` varchar(26) default '',
  `PRIME_VALUE` char(1) default '',
  `COMMENT_TEXT1` varchar(30) default '',
  `COMMENT_TEXT2` varchar(30) default '',
  `DEPARTMENT_ID` int(4) default NULL,
  `LOCATION_ID` int(4) default NULL,
  `COMPONENT_ID` int(3) default NULL,
  `NAME_AND_PHONE_NO` varchar(48) NOT NULL default '',
  `HI_CASE_IND` int(11) default NULL,
  `LOW_CASE_IND` int(11) default NULL,
  `PRIVACY` char(1) default '',
  `COLLECTED` char(1) default '',
  `HOMENODE_ID` int(3) NOT NULL default '0',
  `CLUSTERID` int(3) NOT NULL default '0',
  `PLID_CABINET` int(3) default NULL,
  `PLID_SHELF` int(3) default NULL,
  `PLID_SLOT` int(3) default NULL,
  `PLID_CIRCUIT` int(3) default NULL,
  `DEVICE_TYPE` int(2) default NULL,
  `IDS_ID` varchar(255) default '',
  `ISIDSMANAGED` char(1) default '',
  `MACADDRESS` varchar(12) default '',
  `CESID` varchar(10) default '',
  `hvgPIN` varchar(8) default '',
  `tdUID` varchar(38) default '',
  PRIMARY KEY  (`NAME_AND_PHONE_NO`),
  UNIQUE KEY `TD_PHONE_KEY_IDX` (`PHONE_NO`,`NAME_AND_PHONE_NO`),
  UNIQUE KEY `TD_KEYS_IDX` (`NAME_AND_PHONE_NO`,`NAME`,`PHONE_NO`),
  UNIQUE KEY `TD_COMP_KEY_IDX` (`COMPONENT_ID`,`NAME_AND_PHONE_NO`),
  KEY `COMPONENT_ID` (`COMPONENT_ID`),
  KEY `HOMENODE_ID` (`HOMENODE_ID`),
  KEY `TD_COMP_PLID_IDX`
(`COMPONENT_ID`,`PLID_CABINET`,`PLID_SHELF`,`PLID_SLOT`,`PLID_CIRCUIT`),
  KEY `TD_IDSID_IDX` (`IDS_ID`),
  KEY `TD_IDSUNMGT_IDX` (`ISIDSMANAGED`),
  CONSTRAINT `FK_TELDIR_COMPONENT` FOREIGN KEY (`COMPONENT_ID`)
REFERENCES `component` (`ID`),
  CONSTRAINT `FK_TELDIR_HOMENODE` FOREIGN KEY (`HOMENODE_ID`) REFERENCES
`component` (`ID`)
) TYPE=InnoDB



I used load data infile 'teldir.lst' into table teldir. teldir.lst
contains 99,990 records (fields separated by tabs and
rows by newlines). Loading these records into the teldir
table, whose structure is shown above, takes around 100 minutes, i.e. too
much time.

If I drop the unique and foreign key constraints it takes around 25
minutes, which is still a long time.


Please advise me on a better solution so that the data loads
faster. According to the published load data figures for innodb it
should load 2000 records per second. Please explain the proper
solution for this.


Thanks,
Narasimha





Confidentiality Notice


The information contained in this electronic message and any attachments to 
this message are intended
for the exclusive use of the addressee(s) and may contain confidential or 
privileged information. If
you are not the intended recipient, please notify the sender at Wipro or [EMAIL 
PROTECTED] immediately
and destroy all copies of this message and any attachments.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



turning off replication

2005-03-24 Thread Marc Dumontier
Hi,
I'm working with mysql 4.1.
I had a slave machine (ralph) as backup to a master machine (barney). I 
then made ralph the production server and turned off barney. I now want 
to make barney a backup to ralph (so ralph would be the master).

I see in the logs that ralph is still trying to connect to barney 
because the values from CHANGE MASTER TO... are still there. How do 
I completely clear this, so I don't have any problems when I bring 
barney back to life?
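
A sketch of the usual sequence on ralph (4.1-era syntax; depending on the
exact version, the in-memory master settings may only disappear completely
after a server restart):

STOP SLAVE;
RESET SLAVE;   -- discards master.info and the relay logs on ralph
-- also remove any master-host / master-user lines from ralph's my.cnf;
-- later, to make barney a slave of ralph, run CHANGE MASTER TO ... on barney
-- with ralph's coordinates and then START SLAVE.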

Marc Dumontier
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Memory limit?

2005-02-10 Thread Marc Slemko
On Thu, 10 Feb 2005 10:19:32 +0900, Batara Kesuma
[EMAIL PROTECTED] wrote:
 Hi Tobias,
 
 On Wed, 9 Feb 2005 14:48:16 +0100 (CET)
 Tobias Asplund [EMAIL PROTECTED] wrote:
 
   I try to install MySQL 4.1.9 (official RPM from mysql.com). My machine
   is running linux 2.6.9, and it has 4GB of RAM. The problem is MySQL
   won't start if I set innodb_buffer_pool_size to = 2GB. Here is my
   ulimit.
 
  Are you trying this on a 32-bit cpu machine?
 
 Sorry I forgot to mention. Yes, it is a 32-bit CPU machine.

Yup, most Linux glibc's limit a malloc() to 2 gigs in a misplaced(?)
attempt to avoid errors due to signed/unsigned conversions.  Since
innodb just uses malloc() for things, getting above two gigs doesn't
work.

(the details can be a lot more complicated, ie. needing a kernel with
a 4G/4G split, changing the base address mmap()ed regions start at,
etc.)

I don't think it would be hard at all to change innodb to let you use
a 3-4 gig buffer pool on a 32 bit Linux box, but I've never had the
time to look into it that deeply.

It is unfortunate.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Replication

2005-02-10 Thread Marc Slemko
On Wed, 9 Feb 2005 22:07:19 +0100, Hannes Rohde [EMAIL PROTECTED] wrote:
 Hi all,
 
 We use MySQL as a database backend on a portal site. We have a two
 database server setup (one master, one slave). The master is a PIV 3.2 GHz,
 2 GB RAM and an 80GB RAID-1 system. The slave is a PIV 3.2 GHz, 4 GB RAM and
 an 80GB RAID-0 system. Both run MySQL 4.1.9 and only use InnoDB. Even
 though the slave is the bigger system and is quite fast with selects, it
 always falls behind in replication (Seconds behind the master keeps growing
 at high-load times).
 Is there any way to speed up the replication a little more? I have already
 tried a whole lot of things but have never been successful yet :-(

Your config settings suggest you are using innodb.

That can be problematic since innodb allows much higher concurrency
than myisam, although you can still have this issue with myisam.

What you have to realize is that due to how mysql replication works,
every transaction needs to be serialized.  The slave is only running a
single statement at once.  So if you have multiple CPUs on the server,
or multiple disks that can't be saturated by a single concurrent
operation ... then multiple simultaneous operations can get better
performance on the server than you can get in replication to the
client.

If most of your stuff is innodb, then setting the innodb option to not
sync to disk on every transaction may speed things up a lot ... if you
don't care about your data.  But, then again, I don't think mysql
replication is actually fully transactional yet anyway.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow Replication

2005-02-10 Thread Mechain Marc
I think he is talking about the innodb_flush_log_at_trx_commit parameter.
Try to put it at a value of 0.

innodb_flush_log_at_trx_commit = 0

If you have an IO bottleneck, this may help.

Marc.



-----Original Message-----
From: Hannes Rohde [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 10, 2005 11:46
To: 'Marc Slemko'
Cc: mysql@lists.mysql.com
Subject: AW: Slow Replication


I don't quite get what you mean by the second paragraph. Do you
mean increasing the thread concurrency to 6 or something like that? I have
already put it at 4 because we do have HT active on the CPU; on the other
hand it is just a single-processor P IV system. And I don't think it would
speed up the SQL thread on the slave a whole lot. It would be excellent if
you could run two or more SQL threads on the slave and prioritise them
somehow.

Anyway, I will try your last paragraph's suggestion. 

Thanks,

Hannes


-----Original Message-----
From: Marc Slemko [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 10, 2005 11:24
To: Hannes Rohde
Cc: mysql@lists.mysql.com
Subject: Re: Slow Replication

On Wed, 9 Feb 2005 22:07:19 +0100, Hannes Rohde [EMAIL PROTECTED] wrote:
 Hi all,
 
 We use MySQL as a database backend on a portal site. We have a two
 database server setup (one master, one slave). The master is a PIV 3,2
GHz.,
 2 GB Ram and a 80GB Raid-1 system. The slave is a PIV 3.2 GHz., 4 GB Ram
and
 a 80GB Raid-0 system. Both run on MySQL 4.1.9 and only use InnoDB. Even
 though the slave is a bigger system and is quite fast with selects, it
 always falls behind in replication (Seconds behind the server keeps
growing
 at high-load times).
 Is there any way to speed up the replication a little more? I have already
 tried a whole lot of things but have never been successful, yet :-(

That can be problematic since innodb allows much higher concurrency
than myisam, although you can still have this issue with myisam.

What you have to realize is that due to how mysql replication works,
every transaction needs to be serialized.  The slave is only running a
single statement at once.  So if you have multiple CPUs on the master,
or multiple disks that can't be saturated by a single concurrent
operation ... then multiple simultaneous operations can get better
performance on the master than the slave can match while replaying
them one statement at a time.

If most of your stuff is innodb, then setting the innodb option to not
sync to disk on every transaction may speed things up a lot ... if you
don't care about your data.  But, then again, I don't think mysql
replication is actually fully transactional yet anyway.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: performance on query with ORDER BY clause

2005-02-03 Thread Marc Dumontier
Thanks for your reply,
Just to be clear...performing my query without the order by clause will 
always return the list sorted by the primary identifier?
so that

SELECT SubmitId from BINDSubmit ORDER BY SubmitId == SELECT SubmitId from 
BINDSubmit
in this case

Marc
Dathan Pattishall wrote:
This tells the optimizer to do a table scan. If you used INNODB it's
already sorted by the primary key since INNODB supports clustered
indexes. Doing a table scan on innodb is very slow due to its MVCC
control.
It's going to take a long time.
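
(If the whole ordered list is not needed in one go, a common workaround,
offered here only as a sketch and not taken from the thread, is to walk the
primary key in chunks so each query streams a bounded slice of the index:

SELECT SubmitId FROM BINDSubmit WHERE SubmitId > 0 ORDER BY SubmitId LIMIT 10000;
-- for the next batch, replace 0 with the last SubmitId returned)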

DVP

Dathan Vance Pattishall http://www.friendster.com

 

-Original Message-
From: Marc Dumontier [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 02, 2005 12:02 PM
To: mysql@lists.mysql.com
Subject: performance on query with ORDER BY clause

Hi,
I have a simple query with an ORDER BY clause, and it's 
taking forever to run on this table. I hope I've included all
relevant information... it might just be one of the server
variables which needs adjustment.

the query is
SELECT SubmitId from BINDSubmit ORDER BY SubmitId
SubmitId is the primary Key, about 150,000 records table type 
is INNODB

mysql describe BINDSubmit;
+-----------------+---------------------+------+-----+---------------------+----------------+
| Field           | Type                | Null | Key | Default             | Extra          |
+-----------------+---------------------+------+-----+---------------------+----------------+
| SubmitId        | int(10) unsigned    |      | PRI | NULL                | auto_increment |
| BindId          | int(10) unsigned    |      | MUL | 0                   |                |
| UserId          | int(10) unsigned    |      | MUL | 0                   |                |
| Delegate        | int(10) unsigned    |      | MUL | 0                   |                |
| Visible         | tinyint(1)          |      |     | 1                   |                |
| Private         | tinyint(1)          |      |     | 0                   |                |
| Compressed      | tinyint(1)          |      |     | 0                   |                |
| Verified        | tinyint(1)          |      |     | 0                   |                |
| Status          | tinyint(3) unsigned |      | MUL | 0                   |                |
| CurationType    | tinyint(3) unsigned |      |     | 1                   |                |
| RecordType      | tinyint(3) unsigned |      | MUL | 0                   |                |
| DateCreated     | datetime            |      | MUL | 0000-00-00 00:00:00 |                |
| DateLastRevised | datetime            |      | MUL | 0000-00-00 00:00:00 |                |
| XMLRecord       | longblob            |      |     |                     |                |
+-----------------+---------------------+------+-----+---------------------+----------------+
14 rows in set (0.00 sec)

mysql select count(*) from BINDSubmit;
+--+
| count(*) |
+--+
|   144140 |
+--+
1 row in set (5.09 sec)
mysql explain select SubmitId from BINDSubmit ORDER BY SubmitId;
+------------+-------+---------------+---------+---------+------+--------+-------------+
| table      | type  | possible_keys | key     | key_len | ref  | rows   | Extra       |
+------------+-------+---------------+---------+---------+------+--------+-------------+
| BINDSubmit | index | NULL          | PRIMARY |       4 | NULL | 404947 | Using index |
+------------+-------+---------------+---------+---------+------+--------+-------------+
1 row in set (0.00 sec)


# The MySQL server
[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 128M
max_allowed_packet = 40M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size= 16M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 4
# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /usr/local/mysql/data/
innodb_data_file_path = ibdata1:100M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data/
innodb_log_arch_dir = /usr/local/mysql/data/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 512M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50


Any help would be appreciated, so far query has been running 
for 3000 seconds

Marc Dumontier
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

   

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


performance on query with ORDER BY clause

2005-02-02 Thread Marc Dumontier
Hi,
I have a simple query with an ORDER BY clause, and it's taking forever 
to run on this table. I hope I've included all relevant information... it
might just be one of the server variables which needs adjustment.

the query is
SELECT SubmitId from BINDSubmit ORDER BY SubmitId
SubmitId is the primary Key, about 150,000 records
table type is INNODB
mysql describe BINDSubmit;
+-----------------+---------------------+------+-----+---------------------+----------------+
| Field           | Type                | Null | Key | Default             | Extra          |
+-----------------+---------------------+------+-----+---------------------+----------------+
| SubmitId        | int(10) unsigned    |      | PRI | NULL                | auto_increment |
| BindId          | int(10) unsigned    |      | MUL | 0                   |                |
| UserId          | int(10) unsigned    |      | MUL | 0                   |                |
| Delegate        | int(10) unsigned    |      | MUL | 0                   |                |
| Visible         | tinyint(1)          |      |     | 1                   |                |
| Private         | tinyint(1)          |      |     | 0                   |                |
| Compressed      | tinyint(1)          |      |     | 0                   |                |
| Verified        | tinyint(1)          |      |     | 0                   |                |
| Status          | tinyint(3) unsigned |      | MUL | 0                   |                |
| CurationType    | tinyint(3) unsigned |      |     | 1                   |                |
| RecordType      | tinyint(3) unsigned |      | MUL | 0                   |                |
| DateCreated     | datetime            |      | MUL | 0000-00-00 00:00:00 |                |
| DateLastRevised | datetime            |      | MUL | 0000-00-00 00:00:00 |                |
| XMLRecord       | longblob            |      |     |                     |                |
+-----------------+---------------------+------+-----+---------------------+----------------+
14 rows in set (0.00 sec)

mysql select count(*) from BINDSubmit;
+--+
| count(*) |
+--+
|   144140 |
+--+
1 row in set (5.09 sec)
mysql explain select SubmitId from BINDSubmit ORDER BY SubmitId;
+------------+-------+---------------+---------+---------+------+--------+-------------+
| table      | type  | possible_keys | key     | key_len | ref  | rows   | Extra       |
+------------+-------+---------------+---------+---------+------+--------+-------------+
| BINDSubmit | index | NULL          | PRIMARY |       4 | NULL | 404947 | Using index |
+------------+-------+---------------+---------+---------+------+--------+-------------+
1 row in set (0.00 sec)


# The MySQL server
[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 128M
max_allowed_packet = 40M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size= 16M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 4
# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /usr/local/mysql/data/
innodb_data_file_path = ibdata1:100M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data/
innodb_log_arch_dir = /usr/local/mysql/data/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 512M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

Any help would be appreciated, so far query has been running for 3000 
seconds

Marc Dumontier
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Fixed with Fields

2005-01-28 Thread Marc Michalowski
I was wondering if there is a way to create fixed-width fields. Example:
the field is set to 18, but the data it contains is only 11 characters long.
I need the length to remain 18. Is there any way to do this? Thanks for your help.
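
(Purely as a sketch, since the names below are hypothetical and not from the
thread: a CHAR(18) column is stored at a fixed length of 18, and because MySQL
strips trailing spaces from CHAR values on retrieval, RPAD() can pad the value
back out to 18 characters when the full width matters on output.

CREATE TABLE example (code CHAR(18) NOT NULL);
SELECT RPAD(code, 18, ' ') AS code_fixed FROM example;)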

-Marc


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: MyODBC 3.5.9 and MySQL 4.1.8

2005-01-05 Thread Mechain Marc

Have a look at:

http://dev.mysql.com/doc/mysql/en/Old_client.html
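
The usual fix described there is either to start the server with
--old-passwords or to reset the account the ODBC connection uses to the
pre-4.1 password format, for example (user, host and password are
placeholders):

SET PASSWORD FOR 'someuser'@'localhost' = OLD_PASSWORD('newpwd');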

Marc.

-----Original Message-----
From: nikos [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 5 January 2005 10:09
To: mysql@lists.mysql.com
Subject: MyODBC 3.5.9 and MySQL 4.1.8



Hello list and happy new year.

Recently I installed MySQL 4.1 on Win 2000 with IIS 5 and it works
perfectly. My problem is that when I try to make a connection with
MyODBC (latest release) as localhost I get the following message:
Client does not support authentication protocol requested by server.
Consider upgrading MySQL client.

MyODBC works fine because I've already made a connection over the LAN to a
Linux RH-9 box with Apache and MySQL 4.0.22.

Any suggestions?
Thank you


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysql-python compiling error on Fedora Core 3 x86_64

2005-01-04 Thread Marc Petitmermet
Dear List Members
Can anybody give me a hint about how to compile mysql-python on a 64bit 
opteron system (see the error below) with mysql tarball?

- Fedora Core 3 x86_64
- python-2.3.4
- MySQL-python-1.1.8
- mysql-standard-4.1.8-unknown-linux-x86_64-glibc23.tar.gz
Thanks in advance
Marc
python setup.py build
running build
running build_py
running build_ext
building '_mysql' extension
gcc -pthread -shared build/temp.linux-x86_64-2.3/_mysql.o 
-L/usr/local/mysql/lib -lmysqlclient_r -lssl -lcrypto -lz -lcrypt -o 
build/lib.linux-x86_64-2.3/_mysql.so
/usr/bin/ld: /usr/local/mysql/lib/libmysqlclient_r.a(libmysql.o): 
relocation R_X86_64_32 against `a local symbol' can not be used when 
making a shared object; recompile with -fPIC
/usr/local/mysql/lib/libmysqlclient_r.a: could not read symbols: Bad 
value
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Sum on Float type

2004-12-24 Thread Mechain Marc

You can use the format() function:

select t.custcode, format(sum(t.cost),2) as Sub-Total
from customer c, transaction t
where c.custcode = t.custcode
group by t.custcode
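
(Alternatively, ROUND() gives a numeric rather than string result, and if
exact two-decimal arithmetic is wanted the column can be declared DECIMAL
instead of FLOAT. A sketch only, reusing the table names from the thread:

select t.custcode, round(sum(t.cost), 2) as sub_total
from customer c, transaction t
where c.custcode = t.custcode
group by t.custcode;

alter table transaction modify cost decimal(10,2);)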

Marc.

-----Original Message-----
From: sam wun [mailto:[EMAIL PROTECTED]
Sent: Friday, 24 December 2004 11:23
To: mysql@lists.mysql.com
Subject: Sum on Float type


Hi,

I created a Transaction table with a field Cost which is a Float type
with only 2 decimal places, e.g. 123.01.
When I use the following sql statement to make a sum of this field, it
returns a Float number with more than 2 decimal places, e.g. 456.92384933.

select t.custcode, sum(t.cost) as Sub-Total
from customer c, transaction t
where c.custcode = t.custcode
group by t.custcode

I don't know why the Sum function returns more than 2 decimal places.
If I should not use Float type for the Cost field, what type should I use?

Thanks
Sam


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



slow date query

2004-12-07 Thread Marc Dumontier
Hi,
I'd really appreciate any help in speeding up this type of query
SELECT BindId,RecordType from BrowseData WHERE 
DATE_FORMAT(DateLastRevised,'%Y-%m-%d') = '2004-12-07';

On a MYISAM table of 122,000 rows, this query takes very long, in the 
neighbourhood of 20 minutes.

i'm using mysqld 4.0.20.
I have an index on DateLastRevised
mysql show indexes from BrowseData;
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
| Table      | Non_unique | Key_name           | Seq_in_index | Column_name     | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
| BrowseData |          0 | PRIMARY            |            1 | BindId          | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_dlr            |            1 | DateLastRevised | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_bid_recordtype |            1 | BindId          | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_bid_recordtype |            2 | RecordType      | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+


mysql explain SELECT BindId,RecordType from BrowseData WHERE 
DATE_FORMAT(DateLastRevised,'%Y-%m-%d') = '2004-12-07';
+------------+------+---------------+------+---------+------+--------+-------------+
| table      | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+------------+------+---------------+------+---------+------+--------+-------------+
| BrowseData | ALL  | NULL          | NULL |    NULL | NULL | 122850 | Using where |
+------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

thanks,
Marc Dumontier
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: slow date query

2004-12-07 Thread Marc Dumontier
Thanks, works like a charm.
Marc
Dathan Pattishall wrote:
Well, first of all, DATE_FORMAT() doesn't allow the use of a key.
Do this.
SELECT ...
WHERE DateLastRevised >= '2004-12-07' AND DateLastRevised <
'2004-12-08';
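
(To confirm the rewritten range condition can use the bbs_dlr index on
DateLastRevised, a quick check, using only the schema shown above:

EXPLAIN SELECT BindId, RecordType FROM BrowseData
WHERE DateLastRevised >= '2004-12-07' AND DateLastRevised < '2004-12-08';)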


-Original Message-
From: Marc Dumontier [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 07, 2004 11:34 AM
To: [EMAIL PROTECTED]
Subject: slow date query

Hi,
I'd really appreciate any help in speeding up this type of query
SELECT BindId,RecordType from BrowseData WHERE
DATE_FORMAT(DateLastRevised,'%Y-%m-%d') = '2004-12-07';
On a MYISAM table of 122,000 rows, this query takes very long, in the
neighbourhood of 20 minutes.
i'm using mysqld 4.0.20.
I have an index on DateLastRevised
mysql show indexes from BrowseData;
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
| Table      | Non_unique | Key_name           | Seq_in_index | Column_name     | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
| BrowseData |          0 | PRIMARY            |            1 | BindId          | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_dlr            |            1 | DateLastRevised | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_bid_recordtype |            1 | BindId          | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_bid_recordtype |            2 | RecordType      | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+


mysql explain SELECT BindId,RecordType from BrowseData WHERE
DATE_FORMAT(DateLastRevised,'%Y-%m-%d') = '2004-12-07';
+------------+------+---------------+------+---------+------+--------+-------------+
| table      | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+------------+------+---------------+------+---------+------+--------+-------------+
| BrowseData | ALL  | NULL          | NULL |    NULL | NULL | 122850 | Using where |
+------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

thanks,
Marc Dumontier
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Server Configuration Help

2004-12-06 Thread Mechain Marc
In your my.cnf there is no:

Query_cache_size - 
http://dev.mysql.com/doc/mysql/en/Query_Cache_Configuration.html

Thread_cache_size - 
http://dev.mysql.com/doc/mysql/en/Server_system_variables.html
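
For example, lines like these could be added to the [mysqld] section (the
values are purely illustrative, not a recommendation from this thread):

query_cache_type = 1
query_cache_size = 32M
thread_cache_size = 16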

Marc.


-----Original Message-----
From: ManojSW [mailto:[EMAIL PROTECTED]
Sent: Monday, 6 December 2004 09:21
To: [EMAIL PROTECTED]
Subject: Server Configuration Help


Greetings,
I am running MySQL (version 4.0.15 max) database on Linux (RH9) box.
This linux box is a dedicated database server with following h/w
configuration:

CPU: 2 * 2.4 Ghz Xeon Processor, 512 K 533 FSB
Ram :6GB
Hdd:36GB * 5 raid config

Typically, this database has a small number of client connections, but those
who connect generally run highly analytical queries against the database. Also the
database size is pretty huge (around 40 GB). After reading through the
manuals, specifically some of the performance enhancement tips, I built the
my.cnf as shown below.

Now on to the real question: do you MySQL gurus think that, given all the
details, there is any way to enhance the my.cnf file for better
performance/speed?

Your kind help would be greatly appreciated.

Best Regards

Manoj

--- my.cnf file -

[client]

port=3306

socket=/tmp/mysql.sock

[mysqld]

user=mysql

port=3306

key_buffer=512M

table_cache=512

sort_buffer=2M

read_buffer_size=4M

read_rnd_buffer_size=4M

max_connection=100

max_allowed_packet= 1M

default-table-type=innodb

log_slow_queries=/home/mysql/log/slow.query.log

log_error=/home/mysql/log/mysqld.err.log

log_long_format



# innodb_options

innodb_data_home_dir=/usr/local/mysql

innodb_data_file_path=ibdata/ibdata1:3G;ibdata/ibdata2:3G:autoextend

innodb_mirrored_log_groups=1

innodb_log_group_home_dir=/usr/local/mysql/ibdata/log

innodb_log_arch_dir=ibdata/log

innodb_log_files_in_group=2

innodb_log_file_size=512M

innodb_log_buffer_size=8M

innodb_buffer_pool_size=1G

innodb_additional_mem_pool_size=4M

innodb_flush_log_at_trx_commit=0

innodb_flush_method=O_DIRECT


--- End of my.cnf file -



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How to retrieve constraints and links from MySQL tables?

2004-11-26 Thread Mechain Marc
You can use:

show create table employee;

or 

show table status like 'employee';
in the column comment you have the information you are looking for.
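
For example (note that the FOREIGN KEY clause is only enforced, and only shows
up as a real constraint, when the table uses the InnoDB storage engine; with
the default MyISAM table type the clause is parsed but ignored):

show create table employees;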

Marc.

-----Original Message-----
From: Varakorn Ungvichian [mailto:[EMAIL PROTECTED]
Sent: Friday, 26 November 2004 09:32
To: [EMAIL PROTECTED]
Subject: How to retrieve constraints and links from MySQL tables?


So, I'm running a MySQL database (version: 4.0.21) for
a personal project of mine, and I would like to learn
how to retrieve constraints and links from the table
after it's been created. The create statements read
thusly:

CREATE TABLE positions (  
position_id tinyint(4) DEFAULT '0' NOT NULL
AUTO_INCREMENT, 
position_name varchar(20) NOT NULL, 
position_salary float  NOT NULL,
PRIMARY KEY (position_id),  
UNIQUE position_id (position_id)
);

CREATE TABLE employees (  
employee_id tinyint(4) DEFAULT '0' NOT NULL
AUTO_INCREMENT,  
employee_first varchar(20) NOT NULL,
employee_last varchar(20) NOT NULL,  
employee_address varchar(255) NOT NULL,
position_id tinyint(4) NOT NULL default 1,
employee_start date,
employee_temp bool default 0,
FOREIGN KEY (position_id) references
positions(position_id), 
PRIMARY KEY (employee_id),  
UNIQUE employee_id (employee_id)
);

When I run show columns from employees, there is no
indication that the position_id field in employees
is linked to that of positions. This is the
resulting table:

+------------------+--------------+------+-----+---------+----------------+
| Field            | Type         | Null | Key | Default | Extra          |
+------------------+--------------+------+-----+---------+----------------+
| employee_id      | tinyint(4)   |      | PRI | NULL    | auto_increment |
| employee_first   | varchar(20)  |      |     |         |                |
| employee_last    | varchar(20)  |      |     |         |                |
| employee_address | varchar(255) |      |     |         |                |
| position_id      | tinyint(4)   |      |     | 1       |                |
| employee_start   | date         | YES  |     | NULL    |                |
| employee_temp    | tinyint(1)   | YES  |     | 0       |                |
+------------------+--------------+------+-----+---------+----------------+

Is there a command or something that will display what
constraints (or links) exist in a given table?

Varakorn Ungvichian



__ 
Do you Yahoo!? 
Yahoo! Mail - Helps protect you from nasty viruses. 
http://promotions.yahoo.com/new_mail

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: mysql optimizing large table

2004-10-22 Thread Mechain Marc
If you don't want to see those warning messages:

Increase the open files limit using the ulimit shell command.

Do a ulimit -a to see the current value
Do a ulimit -n newvalue to increase it

You can also modify the /etc/limits file

Marc.

-----Original Message-----
From: Richard Bennett [mailto:[EMAIL PROTECTED]
Sent: Friday, 22 October 2004 00:48
To: [EMAIL PROTECTED]
Subject: Re: mysql optimizing large table


Hi, 
On Thursday 21 October 2004 22:00, Dathan Vance Pattishall wrote:
 My.cnf.huge is not good enough for your system specs. Calculate the Key
 efficiency from the show status command. I bet the key efficiency is less
 than 90% or so.
By my count it is 99.88, the status output is:
Key_blocks_used 375052
Key_read_requests   1022090447
Key_reads   1219141
Key_write_requests  262155905
Key_writes  47437589
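
(That works out to a key efficiency of 1 - Key_reads / Key_read_requests =
1 - 1219141 / 1022090447, which is roughly 99.88%.)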

 In this case increase the key_buffer_size try 512M. A good 
 stat for a proper key_buffer_size in the sum of all index files block size.
 This would be optimal since the index remains in memory. Increase your
 tmp_table_size to 64 MB your prob going to tmp_table and mysql uses this
 buffer for some internal optimizations. Also try increasing range alloc
 block size  a little bit, you might see a 5% perf boost.

OK, I changed these settings.
When I restart mysqld I get some errors:
041021 0:09:05 Warning: setrlimit couldn't increase number of open files to
more than 1024 (request: 1134)
041021 0:09:05 Warning: Changed limits: max_connections 100 table_cache 457

Mysql runs normally though.
Would they be relevant here?

It seems to be quite a bit faster already actually...

Thanks for your help,

Richard .

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL doesn't startup anymore

2004-10-14 Thread Marc
I'm using MySQL on Gentoo Linux and today it stopped starting up. It was
working a few days ago, and I haven't played with anything in the system
- no new installs, config changes, etc. It just decided to go on
vacation. 

I run mysqld_safe and it quits right away with no error messages. I've
got a MySQL book and it doesn't help much. I also tried mysqld_safe
--debug, but no trace file is created.

Thanks for any help.


-- 
Marc [EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL doesn't startup anymore

2004-10-14 Thread Marc
I'm getting:


041014 08:55:53  mysqld started
Warning: Ignoring user change to 'mysql' because the user was set to
'mysql' earlier on the command line
041014  8:55:53  Can't start server: Bind on TCP/IP port: Address
already in use
041014  8:55:53  Do you already have another mysqld server running on
port: 3306 ?
041014  8:55:53  Aborting

041014  8:55:53  /usr/sbin/mysqld: Shutdown Complete

041014 08:55:53  mysqld ended
==

How do I check on what is binding port 3306? I don't see mysqld running
and running mysql gets a Can't connect failure.

--Marc




On Thu, 2004-10-14 at 09:43, Victor Pendleton wrote:
 The error log should be located in your data directory if you have not 
 specified another location. The name may be host.err.
 Marc wrote:
 
 Where is the error log? I'm searching for localhost.err, but nothing
 comes up.
 
 --Marc
 
 On Thu, 2004-10-14 at 08:00, Victor Pendleton wrote:
   
 
 What is written to ther error log?
 
 Marc wrote:
 
 
 
 I'm using MySQL on Gentoo Linux and today it stopped starting up. It was
 working a few days ago, and I haven't played with anything in the system
 - no new installs, config changes, etc. It just decided to go on
 vacation. 
 
 I run mysqld_safe and it quits right away with no error messages. I've
 got a MySQL book and it doesn't help much. I also tried mysqld_safe
 --debug, but no trace file is created.
 
 Thanks for any help.
 
 
  
 
   
 
-- 
Marc [EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL doesn't startup anymore

2004-10-14 Thread Marc
Thanks to all for the help. I did find a mysqld running using ps -aux
and killed that. Looks like I'm all set.

--Marc


On Thu, 2004-10-14 at 11:33, Melanie wrote:
 I had this problem too: I identified the pid with ps -aux and then kill 
 them manually. (kill -9 pid)
 hope it will help you.
 
 
 Marc wrote:
 
 I'm getting:
 
 
 041014 08:55:53  mysqld started
 Warning: Ignoring user change to 'mysql' because the user was set to
 'mysql' earlier on the command line
 041014  8:55:53  Can't start server: Bind on TCP/IP port: Address
 already in use
 041014  8:55:53  Do you already have another mysqld server running on
 port: 3306 ?
 041014  8:55:53  Aborting
 
 041014  8:55:53  /usr/sbin/mysqld: Shutdown Complete
 
 041014 08:55:53  mysqld ended
 ==
 
 How do I check on what is binding port 3306? I don't see mysqld running
 and running mysql gets a Can't connect failure.
 
 --Marc
 
 
 
 
 On Thu, 2004-10-14 at 09:43, Victor Pendleton wrote:
   
 
 The error log should be located in your data directory if you have not 
 specified another location. The name may be host.err.
 Marc wrote:
 
 
 
 Where is the error log? I'm searching for localhost.err, but nothing
 comes up.
 
 --Marc
 
 On Thu, 2004-10-14 at 08:00, Victor Pendleton wrote:
  
 
   
 
 What is written to ther error log?
 
 Marc wrote:
 

 
 
 
 I'm using MySQL on Gentoo Linux and today it stopped starting up. It was
 working a few days ago, and I haven't played with anything in the system
 - no new installs, config changes, etc. It just decided to go on
 vacation. 
 
 I run mysqld_safe and it quits right away with no error messages. I've
 got a MySQL book and it doesn't help much. I also tried mysqld_safe
 --debug, but no trace file is created.
 
 Thanks for any help.
 
 
 
 
  
 
   
 
-- 
Marc [EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Charset problem

2004-10-11 Thread Jean-Marc Fontaine
Hi,
Some time ago my boss imported a dump into a database using Cocoa MySQL on
a Mac. Unfortunately he switched the charset from ISO-8859-1 to something
wrong, probably UTF-8. Since then we have weird characters in
our fields: FerrandiËre instead of Ferrandière, CitÈ instead of Citée,
and so on.

The other problem is that he only noticed the mistake a few days after the
import, and he had trashed the correctly encoded dump in the meantime. So now
we have only a database with wrongly encoded field values.

What is the way to fix that, please?
Thanks in advance.
Jean-Marc
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


init.d - need 'status' option

2004-10-06 Thread Marc Knoop
I am implementing mysql on a Redhat Cluster for high availability and will 
be using replication.  I wanted to use 4.1.5, but learned that the cluster 
suite requires an init.d script that will answer to the 'status' command. 

'status' does not seem to be included anymore.  [RH will only support an 
older 3.* version on the CS.] 

1)  Is it possible to add it to the included mysql.server init.d script? 

2a)  What was the last version of mysql that included status?
2b)  Is replication stable on that version? 

--
../mk
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 Hi,
 
 We have a job that do 'select * from big-table' on a staging mysql database, then 
 dump to data warehouse, it is scheduled to run once a day, but may be run manually. 
 Also we have several other small OLTP database on the same server.
 When the big job run, it would use all the physical mem and swap, all other process 
 slow down because of this.
 
 I would like to limit the resource usage for each mysql client  so that they can use 
 only certain max amount of RAM, and don't select everything into memory before 
 display it to users. However, I couldn't find any parameter would let me implement 
 it.
 Anybody ever encounter the same issue before? Please share your experience.

How exactly are you running this select * from big-table?  From the
mysql command line client?  Is that what is using memory?  It isn't
clear from your post if it is the server or something else using
memory.

If it is the mysql command line client that is the issue, try adding a
-q parameter.  If you are using myisam tables, however, keep in mind
that table will be effectively locked for the whole duration... but if
it is the mysql command line client (which defaults to buffering
everything in memory), it may be faster to use -q anyway.
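
For example (the database and table names are placeholders only):

$ mysql -q -e "SELECT * FROM big_table" staging_db > /tmp/big_table.tsv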

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
Due to the nature of myisam tables, when you are doing a query then
the table will be locked for writes.  Reads will still be permitted
until another write request is made, at which time all further reads
and writes will be blocked until the query completes.

This, however, is already happening even without -q and adding the
-q will likely significantly shorten the time to execute, depending
on exactly how large this table is.

myisam is a very limiting table type as soon as you want to do
anything more than read from or write to a single row at a time using
indexed lookups.  innodb tables do not have this problem, although
they have limitations of their own.

On Thu, 2 Sep 2004 14:30:24 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 The command is issued from mysql command line. Is there any parameters or options I 
 can use without locking the table?
 
 
 
 
 -Original Message-
 From: Marc Slemko [mailto:[EMAIL PROTECTED]
 Sent: Thursday, September 02, 2004 2:24 PM
 To: Sun, Jennifer
 Cc: [EMAIL PROTECTED]
 Subject: Re: tuning suggestion for large query
 
 On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
 [EMAIL PROTECTED] wrote:
  Hi,
 
  We have a job that do 'select * from big-table' on a staging mysql database, then 
  dump to data warehouse, it is scheduled to run once a day, but may be run 
  manually. Also we have several other small OLTP database on the same server.
  When the big job run, it would use all the physical mem and swap, all other 
  process slow down because of this.
 
  I would like to limit the resource usage for each mysql client  so that they can 
  use only certain max amount of RAM, and don't select everything into memory before 
  display it to users. However, I couldn't find any parameter would let me implement 
  it.
  Anybody ever encounter the same issue before? Please share your experience.
 
 How exactly are you running this select * from big-table?  From the
 mysql command line client?  Is that what is using memory?  It isn't
 clear from your post if it is the server or something else using
 memory.
 
 If it is the mysql command line client that is the issue, try adding a
 -q parameter.  If you are using myisam tables, however, keep in mind
 that table will be effectively locked for the whole duration... but if
 it is the mysql command line client (which defaults to buffering
 everything in memory), it may be faster to use -q anyway.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
On Thu, 2 Sep 2004 15:19:44 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 Thanks Marc,
 
 What version of myisam table you are talking about? We are on 4.0.20, when I ran the 
 big table query, I tried to insert to it twice without any issues.
 The -q worked good for mysql client. Thanks.

There is an optimization that can allow inserts (note: not updates)
and selects to happen at the same time, which may be what you are
seeing.  There are lots of corner cases, etc. so your best bet is to
check out the documentation which does a reasonable job of explaining
them, in particular:

http://dev.mysql.com/doc/mysql/en/Internal_locking.html
http://dev.mysql.com/doc/mysql/en/Table_locking.html

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MIN and JOIN - USING TEMPORARY

2004-08-25 Thread Marc Debold
Hi there,
 
I have trouble with a SQL statement that puts too much load on our
server under heavy traffic. MySQL uses temporary files and filesort, so
I narrowed the problem down to this one here:
 
 
TABLE A:
ID  INTEGER   PRIMARY KEY
TEXTVARCHAR(10)
 
TABLE B:
ID  INTEGER   PRIMARY KEY
REF_ID  INTEGER
NUMBER  FLOAT(7,2)
 
No matter, how many entries I have in A and B and no matter what indexes
I create, I always get a USING TEMPORARY, USING FILESORT when EXPLAINing
the following simple statement:
 
SELECT a.id, MIN(b.number) AS low FROM a JOIN b ON (a.id = b.ref_id)
GROUP BY a.id ORDER BY low
 
What can I do to speed up this query? I need to get a list of rows from
table a with its lowest reference number from table b.
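
(One thing commonly tried for this shape of query, offered only as a sketch
and not as advice from the thread: a composite index covering the join column
and the value being MIN'd often lets MySQL resolve each group's minimum from
the index. The index name is made up:

CREATE INDEX ref_id_number ON b (ref_id, number);)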
 
Thanks in advance,
Marc


Re: InnoDB TableSpace Question

2004-08-03 Thread Marc Slemko
On Tue, 3 Aug 2004 10:07:25 -0400 , David Seltzer [EMAIL PROTECTED] wrote:
 Hi all,
 
 I've been searching the archives  mysql documentation for a while and I
 can't seem to find an answer to my question -
 
 Is there a way to force InnoDB to shrink its filesize? I just dropped a 7GB
 table, but it hasn't freed up the disk space and I need it back. From what
 I've been reading, a restart will cause this to happen, but I'm in a
 production environment, and I'm afraid that InnoDB will take its sweet time
 while my users are holding their breath.
 
 Does anyone have any experience with this?

No, a restart will not shrink it.

Currently the only option I can think of is to do a dump and restore,
using mysqldump (since innodb hot backup just copies the data file, it
won't be of any use in shrinking it).
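
In outline, with paths and options purely illustrative, that means something
like:

$ mysqldump --opt --all-databases > /backup/all.sql
  (shut mysqld down, move the ibdata and ib_logfile files aside so InnoDB
   creates a fresh, smaller tablespace on restart)
$ mysql < /backup/all.sql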

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB TableSpace Question

2004-08-03 Thread Marc Slemko
On Tue, 3 Aug 2004 12:42:03 -0400 , David Seltzer [EMAIL PROTECTED] wrote:
 Thanks Marc,
 
 Is there really no way to reclaim unused space in an InnoDB table space? If
 not, why is this not considered a tremendous limitation?

Some do consider it a tremendous limitation.  It all depends on how it
is being used.

Oh, and one thing I forgot... in newer 4.1 versions, if you set things
up so each table has its own file with innodb_file_per_table, then I
think if you do an optimize table it will end up shrinking the file
for that table since it will recreate it.  However that really is just
a workaround, and there are a lot of disadvantages to that method ...
especially the fact that free space is now per table instead of per
tablespace.
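
(With innodb_file_per_table in effect the rebuild is then just, for example:

OPTIMIZE TABLE some_table;   -- table name is a placeholder)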

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: using mysql in commercial software

2004-08-02 Thread Marc Slemko
On Mon, 2 Aug 2004 01:35:44 -0700, Jeremy Zawodny [EMAIL PROTECTED] wrote:
 On Mon, Jul 26, 2004 at 01:26:15PM -0500, gerald_clark wrote:
 
  Steve Richter wrote:
 
  exactly!  Is Linux distributed under the same type of license as MySql?  If
  I sell software that runs on linux I dont have to give away my code, right?
  To use my software you first have to install no charge Linux and MySql.  Why
  would that not be permitted?
 
  Because the MySQL license does not allow you to use it free with
  commercial software that requires MySQL.  If you are running
  commercial software that requires MySQL you must buy a license.
 
 And this is where the confusion start.  MySQL is covered by the GPL.
 So is Linux.

As far as the server goes, sure.

However there is a key difference in that APIs such as glibc on Linux
are licensed under the LGPL.  The mysql client libraries used to be
the same way, then they changed them to be under the GPL.  This means
that, according to the most common interpretation of the GPL, just
linking with them automatically requires your code be under the GPL. 
Does this still apply to, say, Java code where you are using the
standard JDBC interface?  How about if you use mysql specific SQL
calls?  I would suggest perhaps not, but it isn't a simple question.

So, for example, you could not sell an application under terms not
compatible with the GPL and include the mysql client drivers.  Even
selling an application that is linked against them, but requires the
user to get them themselves, is arguably not permitted.  You can go
read the FSF's FAQ for their interpretation, but that is just their
interpretation.

However, remember the GPL only covers copying, distribution, and
modification.  Not use.

Also note that MySQL AB allows an exception designed for the client
libraries to be more compatible with other open source licenses:
http://dev.mysql.com/doc/mysql/en/MySQL_FOSS_License_Exception.html

I believe that MySQL AB is deliberately vague and confusing on their
licensing page to try to get people to buy mysql licenses.  All their
words there don't matter though, what matters is the actual license. 
It would, however, be nice if their commentary were a bit closer to
the reality of what the GPL means.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



better performance with optimize!?? (jboss)

2004-08-01 Thread Marc
I use Mysql with JBOSS as applicationserver.

i have strange response-time differences, which i can't explain.

1) after rebooting the computer, it takes about 300ms to read 12 entities (CMP,
read ahead, 2 rows per entity). One entity is accessed by primary key, the
others by foreign key.

2) when i reboot the computer and run mysql optimize first, it takes only
about 80ms to read the 12 entities!!

3) when i do the same with pure java / jdbc (outside jboss), it takes only
30ms!!! it doesn't matter, if i run optimize or not!

does anybody know why there is a difference if I run optimize first (java/jdbc
is always fast with/without optimize)?

i use mysql 4.0, jconnector mysql-connector-java-3.0.14-production-bin.jar

(i hope you understand my english... :o))

Thanks for your help



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB 4.1.3: count(*) and number of rows does not match

2004-07-31 Thread Marc Slemko
On Sat, 31 Jul 2004 17:50:38 -0500, Keith Thompson [EMAIL PROTECTED] wrote:

 I just discovered that two of my tables (out of about 300)
 show a very unusual behavior.  This is that select count(*) ...
 and selecting all the rows and counting them do not produce
 the same number.
 
 This is on MySQL 4.1.3 on Solaris9.  Look at this:
 
 $ mysql -e select count(*) from pstat.plist
 +--+
 | count(*) |
 +--+
 |15315 |
 +--+
 $ mysql -e select * from pstat.plist | wc -l
15372
 
 Actually, these counts shouldn't quite be the same.  The second
 produces a header line that's getting counted, so it should be
 one more than the count(*).  But, it's off by 57!  The other bad
 table is off by 3.

First, have you verified there is no data in the table with embedded
newlines or some such?

Perhaps there is some index corruption..  Do an explain on the
count(*), it is likely doing an index scan.

Then try a select column_in_index_that_is_being_used from pstat.plist
and see if that returns the same as the count(*), or try doing the
select count(*) with an ignore index of whichever index it is using.

If it seems to be related to that one index, you could try dropping
and rebuilding the index.
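
For example (the index name below is a placeholder for whichever index the
EXPLAIN reports being used):

EXPLAIN SELECT COUNT(*) FROM pstat.plist;
SELECT COUNT(*) FROM pstat.plist IGNORE INDEX (some_index);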

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL 4.0.x charset

2004-07-29 Thread Jean-Marc PULVAR
You can see it by executing the SHOW VARIABLES query on your server. 
You will find the used charset in the 'character_set' variable.

You can also use the query SHOW VARIABLES LIKE 'character_set' which 
will directly match what you want.

Yves wrote:
Hello,
How can I see what char set is being used as the
default char set on the server or database?
Also, is there a way to change the default setting?
Thanks,
Yves
__ 
Post your free ad now! http://personals.yahoo.ca

--
---
 ___   _
 __ / __\ ___   ___  _ __ | |_ _   _
(__)   /__\/// _ \ / _ \| '_ \| __| | | |
 ||   / \/  \ (_) | (_) | | | | |_| |_| |
 ||   \_/\___/ \___/|_| |_|\__|\__, |
 ___||__.._  |___/
/\
\/~~~ Jean-Marc PULVAR (Web Programmer)
  Boonty SA
  4 bis villa du mont tonnerre
  75015 PARIS
---
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL 4.0.x charset

2004-07-29 Thread Jean-Marc PULVAR
It's possibly due to your ssh connection but I couldn't answer you about 
this because I only use mysql with phpmyadmin.

Regards
Yves wrote:
Thanks,
As it turns out, I was trying show variables like --
And had a syntax mistake
Does SHOW VARIABLES only show a certain number of
lines? When I ran this command, character_set was not
listed... the list seemed cut off just before it.
Unless it is an issue with SSH only showing x amount
of lines..
thanks,
Yves A
 --- Jean-Marc PULVAR [EMAIL PROTECTED] wrote: 

You can see it by executing the SHOW VARIABLES
query on your server. 
You will find the used charset in the
'character_set' variable.

You can also use the query SHOW VARIABLES LIKE
'character_set' which 
will directly match what you want.

Yves wrote:

Hello,
How can I see what char set is being used as the
default char set on the server or database?
Also, is there a way to change the default
setting?
Thanks,
Yves


__
Post your free ad now! http://personals.yahoo.ca
--
---
 ___   _
 __ / __\ ___   ___  _ __ | |_ _   _
(__)   /__\/// _ \ / _ \| '_ \| __| | | |
 ||   / \/  \ (_) | (_) | | | | |_| |_| |
 ||   \_/\___/ \___/|_| |_|\__|\__, |
 ___||__.._  |___/
/\
\/~~~ Jean-Marc PULVAR (Web
Programmer)
  Boonty SA
  4 bis villa du mont tonnerre
  75015 PARIS
---


__ 
Post your free ad now! http://personals.yahoo.ca

--
---
 ___   _
 __ / __\ ___   ___  _ __ | |_ _   _
(__)   /__\/// _ \ / _ \| '_ \| __| | | |
 ||   / \/  \ (_) | (_) | | | | |_| |_| |
 ||   \_/\___/ \___/|_| |_|\__|\__, |
 ___||__.._  |___/
/\
\/~~~ Jean-Marc PULVAR (Web Programmer)
  Boonty SA
  4 bis villa du mont tonnerre
  75015 PARIS
---
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: INSERT if record NOT EXISTS

2004-07-26 Thread Marc Slemko
On Mon, 26 Jul 2004 17:47:37 +0100, Adaikalavan Ramasamy
[EMAIL PROTECTED] wrote:
 This seems more like the solution I want. I am using perl-DBI and when
 there is an error (i.e. duplicate insert), the rest of the script is not
 executed. But this gives me the following error. What am I doing
 wrong?
 
 mysql desc tb;
 +---+-+--+-+-++
 | Field | Type| Null | Key | Default | Extra  |
 +---+-+--+-+-++
 | myID  | int(11) |  | PRI | NULL| auto_increment |
 | firstname | varchar(10) | YES  | MUL | NULL||
 | lastname  | varchar(10) | YES  | | NULL||
 +---+-+--+-+-++
 3 rows in set (0.00 sec)
 
 mysql INSERT INTO tb (firstname, lastname) VALUES ('Jack', 'Doe') ON
 DUPLICATE KEY UPDATE lastname = lastname;
 ERROR 1064: You have an error in your SQL syntax near 'ON DUPLICATE KEY
 UPDATE lastname = lastname' at line 1

You are probably running an older version of mysql that doesn't support this.

Try insert ignore.
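
With the table from your example that would be, as a sketch:

INSERT IGNORE INTO tb (firstname, lastname) VALUES ('Jack', 'Doe');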

 
 Alternatively, I am looking for 'try' equivalent in perl, so that if the
 insert is duplicate, the rest of the script is still run. Thank you.

eval.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SHOW INNODB STATUS

2004-07-26 Thread Marc Slemko
On Mon, 26 Jul 2004 18:03:25 -0700, Matt Solnit [EMAIL PROTECTED] wrote:


 How is it possible to have a hit rate of 1000/1000?  Doesn't the buffer
 get initialized by cache misses?

That is a number after rounding so it may not be exactly 100%, and
ISTR it is one of the stats that is either reset every time you read it
or every so many seconds, so any misses before then won't be included.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Consolidating Data

2004-07-23 Thread Marc Knoop
Greetings! 

I have several (~12) web servers which all record web metrics to their own 
local mysql database.  I would like to consolidate data from each web server 
database to one master DB to within the hour.  I wish to avoid running 
multiple instances of mysql on the master server, so replication is not an 
option. 

What are the best practices for managing the consolidation of data?  Is it 
best to export the data on each web server and perform frequent bulk loads 
on the master server?  Or, is it better to have a robust Perl script on 
the master server that is responsible for pulling records from each web 
server?  I estimate 10,000 to 30,000 records per web server, per day with 
the average row size of 100 Bytes.  The web servers are all in remote 
locations. 

The end goal is to have all web metrics available on *one* server, from which
a reporting server (M$ SQL server) can pull the data.

Lastly, are there any experts on this list willing and available to code and 
document this, given more details, of course? 

--
../mk 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: after upgrade unicode characters changed to question marks

2004-07-22 Thread Jean-Marc PULVAR
You were managing unicode characters with mysql 3.23?
It was just storing and retrieving the data then?
Because mysql only really manages unicode from version 4.1, doesn't it?
Stefan Klopp wrote:
Hello All,
We recently upgraded our mysql server from 3.23 to 4.0.18 and have found
that all of our Unicode characters are now being displayed as question marks
(?). Anyway this only happens when viewing over the web as when we view via
the shell mysql we can see the characters fine. In addition if we update our
information in the database or insert new Unicode data we have no problems
seeing it. Basically it is just the old data that is in the database that we
have problems seeing via the web. Any ideas would be great.
Stefan Klopp

---
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.725 / Virus Database: 480 - Release Date: 19/07/2004

--

   ___   _
  / __\ ___   ___  _ __ | |_ _   _
 /__\/// _ \ / _ \| '_ \| __| | | |  Jean-MARC PULVAR (Web Programmer)
/ \/  \ (_) | (_) | | | | |_| |_| |
\_/\___/ \___/|_| |_|\__|\__, |  Boonty SA
 |___/   4 bis villa du Mont Tonnerre
 75015 Paris

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Multi-languages sites with Mysql 4.0

2004-07-21 Thread Jean-Marc PULVAR
Hi,
I'm using a mysql 4.0 version and i wanted to know whether i can realize
a multi-language site with eastern languages like japanese.
I'm using a database which has already data encoded with the latin 
charset (iso-8859-1) and need to include japanese data.

How may I proceed to have comparisons, sorting and all that stuff working?
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Multi-languages sites with Mysql 4.0

2004-07-21 Thread Jean-Marc PULVAR
Thanks for your reply, that was really what I thought.
 My understanding is that you have to have MySQL 4.1 for this sort of 
thing
 to work at all. Which is why I am so keen to see 4.1 reach production
 ASAP. Pre-4.1, you can store UTF-8, but it will not sort correctly.

 Alec

Jean-Marc PULVAR [EMAIL PROTECTED] wrote on 21/07/2004 10:45:22:

I'm using a mysql 4.0 version and i wanted to know whether i can realize
a multi-language site with eastern languages like japanese.
I'm using a database which has already data encoded with the latin 
charset (iso-8859-1) and need to include japanese data.

How may I proceed to have comparisons, sorting and all that stuff working?


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread Marc Slemko
On Mon, 19 Jul 2004 18:13:36 +0200, Jan Kirchhoff [EMAIL PROTECTED] wrote:
 Hi,
 
 We are currently using a 4.0.16-replication-setup (debian-linux, kernel
 2.4.21, xfs) of two 2.4ghz Intel-Pentium4 systems with  3gig RAM each
 and SCSI-Hardware-Raid, connected via gigabit-ethernet. We are reaching
 the limit of those systems and are going to buy new hardware as well as
 upgrade to mysql 4.1.x. We will start testing our applications on 4.1.3
 within the next few weeks but our main problem is that we are not quite
 sure what hardware to buy...
 
 We are planning to buy something like a dual-xeon system with 10-16gb of
 RAM and hardware raid10 with 8 sata-disks and as much cache as possible.
 Will mysql be able to use the ram efficiently or are we hitting limits?
 AMD or Intel? 32bit or 64bit?

Whatever you do, get a 64 bit system.  Opteron recommended, if you
really prefer Intel and can get your hands on one of their 64-bit
Xeons that is acceptable, although it may take a little longer for
Linux to catch up.

Even if the software isn't there yet (it is, it may just be a bit of a
hassle to all get working), in the worst case you'll have to run it in
32-bit mode until you can figure that out.

You can't use more than 2 gig most of the time / close to 4 gig if you
hack things up right innodb cache on a 32 bit system.  The rest of the
memory will be used by the OS (less efficiently than on a 64-bit
system though), but that may or may not be as efficient as innodb
doing it.  That depends a lot on your application's data access
patterns.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Linux 2GB Memory Limit

2004-07-14 Thread Marc Slemko
On Tue, 13 Jul 2004 23:26:48 +0100, Marvin Wright
[EMAIL PROTECTED] wrote:
 Hi,
 
 I'm now running redhat AS 3.0 with kernel version 2.4 and have 8GB of RAM.
 
 If I set my innodb_buffer_pool to 2048M, it just will not start, I get this
 error.
 
 040713 22:10:24  mysqld started
 040713 22:10:24  Warning: Asked for 196608 thread stack, but got 126976
 InnoDB: Fatal error: cannot allocate 2147500032 bytes of
 InnoDB: memory with malloc! Total allocated memory

Now I remember what I tracked down the limit to be ...

2147500032  is just above 2 gigabytes of memory.  From what I have
seen, glibc (not sure if this is fixed in recent versions) just
refuses to allocate chunks of memory larger than 2 gigs in a single
call.  This seems a little odd given the library the malloc code is
based on, but I haven't dug deeper.

You can probably get around this if you do both of:

1. replace the call to malloc() in the innodb source with one that does a mmap()
2. run a kernel that has the 4G/4G patch, and possibly also moves
where mmap()ed regions start to be a bit lower than 1 gig (not sure
what the 4G/4G patch does with that).

A pain in the ass.  I strongly encourage people wanting larger innodb
buffers to consider 64-bit Opterons or, less desirably, Intel's xeons
w/64-bit support when they become generally available fairly soon.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replication - multiple masters

2004-07-13 Thread Marc Knoop
Jeremy Zawodny writes: 

The web servers record web metrics to local mysql databases.  I would 
like those local databases to be consolidated onto the DEPOT [as three 
separate DBs]. 

You cannot do that.
snip 
You'd need to run 3 instances of MySQL on DEPOT, one for each WWW
server you'd like to mirror.
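(For reference, the usual way to run several mysqld instances on one box is
mysqld_multi; a minimal my.cnf sketch, with group numbers, ports and paths
purely illustrative:

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port    = 3307
socket  = /var/lib/mysql/www1.sock
datadir = /var/lib/mysql/www1

[mysqld2]
port    = 3308
socket  = /var/lib/mysql/www2.sock
datadir = /var/lib/mysql/www2

The instances are then started with something like: mysqld_multi start 1,2)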
Can you, or anyone, comment on the practicality of doing so?  I estimate 
10,000 to 30,000 records per web server, per day using 3 remote web servers. 
The number of web servers would not likely grow to more than 12. 

My planned DEPOT server is a Dell PowerEdge - dual Xeon, 2GB memory and 
oodles of disk space. 

Could mysql, or Linux ES 3.0 for that matter, handle it?  Is there anyone on 
this list running several instances of mysql on the same box?  Any 
experiences to share? 

--
../mk 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Replication - multiple masters

2004-07-13 Thread Marc Knoop
Jeremy Zawodny writes: 

 Can you, or anyone, comment on the practicality of doing so?  I estimate 
10,000 to 30,000 records per web server, per day using 3 remote web 
servers. The number of web servers would not likely grow to more than 12. 

 That shouldn't be a problem at all.  I know of much larger instances
(millions of records) doing the same on similar (or less) hardware.
Jeremy - good to hear. 

Now that I know this is technically possible, which of the following 
possible solutions would be the cleanest or most efficient from a management 
perspective: 

1)  Use mysql replication to have mirror dbs on the DEPOT server.
  A job would regularly run on DEPOT to consolidate all data
  into one db so that an external system can query/report on.
2)  Do not use mysql replication and instead have a job on DEPOT
  regularly pull from each webserver and consolidate all data
  into one db so that an external system can query/report on.
3)  Same as #2, except the web servers would *push* to DEPOT
  instead of being *pulled* from. 

As another reader commented, #1 could be difficult to manage because of the 
number of DBs (N*2).  Plus, DEPOT is already a master to all web servers for 
read only data. 

#2 and #3 seem to be more appropriate, as long as the jobs are FAST and can 
be managed.  Would Perl be the ideal candidate for this?  Since the web 
servers are remote, performance of DEPOT updates is important - something 
replication was good at. 

It's nice to have different solutions to this puzzle.  Choosing the most 
elegant solution is tricky! 

--
../mk 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Replication - multiple masters

2004-07-12 Thread Marc Knoop
I have 4 servers in my environment: 

DEPOT - master server
WWW1  - web server #1
WWW2  - web server #2
WWW3  - web server #3 

The web servers record web metrics to local mysql databases.  I would like 
those local databases to be consolidated onto the DEPOT [as three separate 
DBs]. Is configuration as simple as the correct entries in my.cnf?  That is, 
can muliple entries for master-host, master-user... exist?  Any caveats with 
this configuration? 

Question #2 - A small databases exists on the DEPOT which I would like 
replicated to all web servers.  Is there any reason why this would not work 
with the above situation [DEPOT acting as a slave for multiple masters]? 

Many thanks; I hope I have not omitted any relevant information... 

--
../mk
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Linux 2GB Memory Limit

2004-07-09 Thread Marc Slemko
On Fri, 9 Jul 2004 15:46:37 +0100 , Marvin Wright
[EMAIL PROTECTED] wrote:
 Hi,
 
 Current Platform
  RH version is 7.3
  IBM Blade Server - 2 x Intel(R) Xeon(TM) CPU 3.20GHz
  32 GB SCSI
  4 GB Ram
 
 This is the platform we are moving to in a week or so
  RH Enterprise AS 2.1 or 3.0
  4 x Intel(R) Xeon(TM) MP CPU 2.70GHz
  128 GB SCSI Raid
  16 GB Ram
 
 So with the new platform I'll be able to have a much bigger InnoDB buffer

Note it will still be limited to something that is definitely no
bigger than 4 gigs, and may be smaller... I haven't had any luck with
~2 gig innodb buffer sizes even on systems with 3 or 3.5 gigs of
addess space available per process, but I never looked into that too
deeply so it may work fine with the right setup.

This is probably a bit late, but I would have definitely recommended
running 64-bit opterons in your configuration since then you could
have a larger innodb buffer.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Concurrency Question

2004-07-05 Thread Marc Slemko
On Mon, 5 Jul 2004 16:07:58 +0100 , Javier Diaz [EMAIL PROTECTED] wrote:
 
 We have changed all our tables to InnoDB and now the server is not able to
 handle the load, even when we are not running the SELECT statements against
 these tables yet.
 
 As I mentioned in my email we make a lots of INSERTS and UPDATES in these
 tables (more than 3000 per second). So far using MyISAM everything was OK,
 but now when we moved the tables to InnoDB (to be able to make Read/Write
 operations in these tables) the performance was down completely and the
 server can not handle it.
 
 Does anyone have a rough idea when you change from MyISAM to InnoDB how the
 performance is affected?

That all depends on how you are using transactions.  If you are trying
to do each of these operations in a separate transaction, then
definitely that will be a problem since transactions inherently have a
certain cost to them since they need to commit changes to durable
storage.

If this is the case, then a horribly ugly "now you don't have durability
any more in your transactions" hack you could try is setting
innodb_flush_log_at_trx_commit to 2; see the docs for details.
Be warned that doing so means you can lose committed transactions if
the machine crashes.
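If you do want to experiment with it, it is a one-line change in my.cnf plus
a restart of mysqld; the comment spells out the trade-off:

  [mysqld]
  # log is written out on each commit but only flushed to disk about once
  # a second; a crash can lose roughly the last second of committed work
  innodb_flush_log_at_trx_commit = 2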

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Concurrency Question

2004-07-05 Thread Marc Slemko
On Mon, 5 Jul 2004 18:48:50 +0100 , Javier Diaz [EMAIL PROTECTED] wrote:

 
 I really don't like the idea to set innodb_flush_log_at_trx_commit to 2, the
 information in these tables is important. On the other hand there is nothing
 I can do from the point of view of the number of transactions. Each process
 run its own set of INSERTs and UPDATEs statements, so I can not reduce the
 number of transactions being executed.
 
 Looking to the MySQL documentation:
Since the rotation speed of a disk is typically at most 167
 revolutions/second, that constrains the number of commits to the same
 167th/second
if the disk does not fool the operating system
 
 And given that we are doing a LOT MORE INSERTs per second, I'm afraid maybe the
 only solution is to go back to MyISAM :-(
 
 By the way this figure of 167 revolutions/second is based on what kind of
 hard disk?

Well, if you are using myisam you already have even fewer guarantees
about transactional integrity than innodb with
innodb_flush_log_at_trx_commit set to 2.  That is the only reason that
myisam can perform as it does in the manner you are using it.  So if
that is all that is worrying you, no reason not to try innodb setup
that way.

You may want to look more closely at how you may be able to re-architect
your system to not require so many transactions, such as by
having a middle tier that can aggregate information before committing
it.  Unfortunately, myisam tricks people into thinking disk based
databases can safely handle the sort of operation you are doing, then
leaves them in an unfortunate situation when they realize that myisam
has no durability guarantees.

A ballpark figure that applies to disk based databases is that you can
do approximately one write operation per rotation, which translates
into one transaction per rotation.  This logic makes some assumptions
and isn't exact with modern disks, but is a reasonable ballpark.

167 revolutions per second is a 10k RPM drive.  You can improve this
with the right type of RAID, you can improve it with faster disks, but
it is still a fairly small number.
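For reference, the arithmetic behind that figure:

  10,000 RPM / 60 = ~167 revolutions/second
  => roughly 167 synchronous commits/second per drive, at one write per rotation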

You can improve it further with a battery backed disk controller that
can cache writes, although the reliability of some of the cheaper
options there isn't great.  You can improve it with a database that
doesn't commit to disk, such as mysql cluster however that is a whole
different ballpark and a ways from being ready for prime time and has
all sorts of issues of its own.  Some databases can be smart and
coalesce commits from multiple connections into one write to disk, but
this is a fairly uncommon feature.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: strange table speed issue

2004-06-24 Thread Marc Slemko
On Thu, 24 Jun 2004, MerchantSense wrote:

 Seems ok to me...

 It seems to be checking all the rows in the explain for some reason too...

 mysql> show index from ip2org;
 +--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
 | Table  | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Comment |
 +--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
 | ip2org |          1 | ip_start |            1 | ip_start    | A         |     2943079 | NULL     | NULL   |         |
 | ip2org |          1 | ip_end   |            1 | ip_end      | A         |     2943079 | NULL     | NULL   |         |
 +--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+

mysql can only use one index from a particular table in any one
query.  So if you want to do a query that uses both ip_start and
ip_end, you would need to create a multicolumn index on ip_start,ip_end
or vice versa.

What you have is one index on ip_start, and another on ip_end.  So
it can use one of the indexes, but then it has to scan each row that
matches.
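A sketch of the multicolumn index being described, using only the columns
already shown above (the index name is made up):

  ALTER TABLE ip2org ADD INDEX idx_ip_range (ip_start, ip_end);

Whether ip_start or ip_end should come first depends on how your queries
filter; pick the order that matches the conditions you use most often.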

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ERROR 1045: Access denied for user: 'foo@host' (Using password: YES)

2004-06-18 Thread Marc Tardif
I'm using mysql-3.23.58-1.9 installed from RPM on Red Hat Linux release 
9 (Shrike). First, I create a database and user to connect to this 
database using the following commands:

  mysql> CREATE DATABASE foo;
  mysql> GRANT ALL PRIVILEGES ON foo.* TO foo@'%'
         IDENTIFIED BY 'password' WITH GRANT OPTION;
  mysql> GRANT ALL PRIVILEGES ON foo.* TO [EMAIL PROTECTED]
         IDENTIFIED BY 'password' WITH GRANT OPTION;
  mysql> FLUSH PRIVILEGES;
Then, I try to connect to the database using the host parameter but I 
get the following error message:

  # mysql --user=foo --password=password -h host foo
  ERROR 1045: Access denied for user: '[EMAIL PROTECTED]' (Using password: YES)
I've only managed to find a single thread about this issue which was a 
bug in the installation of mysql on RedHat 7. Many months have gone by 
since then so I wonder if this is still the same issue.
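In case it helps narrow things down, a couple of checks run as root will show
exactly which user/host rows and grants the server has (standard 3.23 syntax,
nothing specific to this setup). Stock RPM installs also ship anonymous-user
rows that can match before the wildcard entry and produce exactly this kind of
access-denied error, so the first query lists those too:

  mysql> SELECT Host, User FROM mysql.user WHERE User = 'foo' OR User = '';
  mysql> SHOW GRANTS FOR foo@'%';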

--
Marc Tardif
Sitepak
(514) 866-8883
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Replication is currupted ...Please help me...

2004-06-09 Thread Mechain Marc
Maybe the filesystem where the relay log file lives is full?

Marc.

-Message d'origine-
De?: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Envoye?: mercredi 9 juin 2004 09:46
A?: [EMAIL PROTECTED]
Objet?: Replication is currupted ...Please help me...


Hi all.


My company has three Mysql DB servers.

one  is master, the other are slaves.

master's version is 3.23.54-log.

two slaves's version is  4.0.17

and Os of all servers  is linux 7.3

2 days ago, one of the slaves' replication was corrupted unexpectedly, but the other 
slave was ok.

I don't know why this replication was corrupted.

Does anybody know what I should do?

Please let me know...

this is err_message at that time.
-
040609 13:09:14  Error in Log_event::read_log_event(): 'Event too big', data_len: 
1852795251, event_type: 110
040609 13:09:14  Error reading relay log event: slave SQL thread aborted because of 
I/O error

040609 13:09:14  Slave: Could not parse relay log event entry. The possible reasons 
are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' 
on the binary log), the slave's relay log is corrupted (you can check this by running 
'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or 
slave's MySQL code. If you want to check the master's binary log or slave's relay 
log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this 
slave. Error_code: 0
 
040609 13:32:16  Slave I/O thread exiting, read up to log 'www5-bin.001', position 
84904657


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Idea to speed up multiple jdbc connections?

2004-06-08 Thread Marc Slemko
On Mon, 7 Jun 2004, Haitao Jiang wrote:

 Marc

 mysqld runs on a very powerful Opteron machine with
 16GB memory and barely any other application process
 running; it is hard to believe that a simple select
 that runs under 2 seconds will utilize all the
 resources... that is why I tend to think there is
 something in the mysql setup that caused this... any
 idea where I should look?

How many processors?

If there is only one and the query is CPU bound (as it probably is if
everything is cached, given 16 gigs of ram), then why shouldn't it
use all the CPU?

Or, to phrase the question differently: why should the query take 2
seconds to run if there are free resources?

Now, on a multiprocessor box it clearly starts to get more complicated.
mysql has no capability to spread one query across multiple CPUs
in parallel, and while it can spread multiple queries across CPUs the
scalability has its limits.

The fact that it is a simple query is irrelevant (some of the simplest can
be the slowest if it has to do a full table scan).  From the fact
that it takes 2 seconds it is clear it is not an entirely trivial query.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Idea to speed up multiple jdbc connections?

2004-06-08 Thread Marc Slemko
On Tue, 8 Jun 2004, Haitao Jiang wrote:


 Each of the 4 individual queries only took 0.6 seconds;
 there are no other clients. It is hard to believe that
 mysql query performance will degrade 300% (from 0.6s
 to ~1.9s) if we have 4 concurrent connections...

 As far as I know, MySQL should be able to handle
 hundreds of connections on a single CPU box without
 degrading performance like above.

You are completely missing the point.

It is nothing to do with concurrent _connections_ it has to do with
running concurrent _queries_.

What you are saying is like asking "well, if you can sit down and solve
this equation in 10 minutes, why does it take you 40 minutes to
solve 4 different equations?"

There is no magic way for the machine to do a hundred things at
once on a single processor (assuming you don't yet have a quantum
computer), they all get run for brief periods interleaved with one
another.  If you are running 4 at once, then each will only run 1/4 of
the time.  The box is working as hard as it can to process one query,
do you think it should slow down how quickly it processes one concurrent
query just so that number will change less if you have more than one?

I'll repeat what I said before: a query that takes 600ms on such a machine
is not a trivial query.  If your real question is "why is my query so slow?"
then you should probably ask that instead of getting confused about
why your machine can't do 4 things at once.

P.S. Please do not go around reposting your same question on multiple
lists, it has already been answered.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Idea to speed up multiple jdbc connections?

2004-06-07 Thread Marc Slemko
On Mon, 7 Jun 2004, Haitao Jiang wrote:


 Yes. The time I measure like I said is purely around
 statement.execQuery() call. Connection creation is not
 a factor here at all.

 My database has 1.64 million rows and 4 queries are
 all selects, which are identical in both serial and
 parallel cases.

 In serial cases:
 Query 0 took 590
 Query 1 took 431
 Query 2 took 461
 Query 3 took 440

 In parallel cases:
 Queryer 3 query took 1552
 Queryer 1 query took 1632
 Queryer 2 query took 1783
 Queryer 0 query took 1923

 I don't understand why in 4 concurrent connection
 cases (already created not included in the timing) it
 takes more than 3 times longer to exec. a query.

Umh... if your queries are limited by some bottleneck on the server (such
as, for example, CPU) then why would running them in parallel make it any
faster?

It seems that in the sequential case they are taking a total of 1922
(whatever those units are) while in the parallel case they are taking
1923.  What this is telling you is that, in this case, a single query is
able to fully utilize the resources (likely CPU given these numbers,
although it is possible it could be disk) on the server.  If a single
query can fully utilize the server, all that adding more concurrency
can possibly do is slow the total throughput down.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Three quick questions about using MySQL

2004-06-04 Thread Mechain Marc
See answers in the message below.

Marc.

-Message d'origine-
De : Ben Clewett [mailto:[EMAIL PROTECTED]
Envoyé : vendredi 4 juin 2004 10:37
À : [EMAIL PROTECTED]
Objet : Three quick questions about using MySQL


Three quick questions, I hope you can help me.

Using InnoDB on version 4.0.18

Firstly.

It's possible to get information on a table, which includes the number
of rows.  This returns instantly.
* It is an estimated number of rows, not the real one.

However, if I do a SELECT COUNT(*) on the same table, this can take a
number of minutes to return.  (about 1 minute per 1,000,000 rows.)  I
was wondering why this is?  Is the count from the table information
accurate?
* Yes, This is the real number of rows.

Secondly, the table stats return the size of the records and the size of
the index.  Is this the true size of the table in bytes?
* Yes it is.

Lastly and most important.

I want to store and access a large amount of sequential binary data.
Fixed record size.  Is it faster to access one data item per row.  Or is
it faster to access many data items (120+) aggregated into one row?
* I would personally choose one row per data item, instead of an aggregate.

Thanks in advance,

Ben.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Is MySQL 4.1 ready?

2004-05-25 Thread Marc Greenstock
I would like to hear from people who have tested version 4.1 to determine if
I should upgrade now or wait until it's in production release?

Marc.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Is MySQL 4.1 ready?

2004-05-25 Thread Marc Greenstock
"Wait until 4.1.2 is out in the next few days." - Do you know where I can
find any indication of when 4.1.2 is expected to be released?

Marc.


Donny Simonton [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Wait until 4.1.2 is out in the next few days.  I've been using it since
 4.1.0; besides a few little bugs here and there, which almost every version
 has, it's very stable.  I won't install any other version on any of my
 machines.

 Donny

  -Original Message-
  From: Marc Greenstock [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, May 25, 2004 11:27 PM
  To: [EMAIL PROTECTED]
  Subject: Is MySQL 4.1 ready?
 
  I would like to hear from people who have tested version 4.1 to
determine
  if
  I should upgrade now or wait until it's in production release?
 
  Marc.
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
 






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Replication Issue

2004-05-05 Thread Mechain Marc
Hello,

Why is an SQL request like the following, which runs fine on the master, not correctly
replicated on the slave?

set @providerId='012345';
insert into DATA_TYPE values (1,@providerId,'DATA_TYPE',1);

Here is an extract of the Slave Logfile:

MYBCK.log.1:ERROR: 1048  Column 'PROVIDER_ID' cannot be null
MYBCK.log.1:040503 17:44:18  Slave: error 'Column 'PROVIDER_ID' cannot be null' on 
query 'insert into DATA_TYPE values (1,@providerId,'DATA_TYPE',1)', error_code=1048

The question is: why is the value of @providerId not properly replicated ?

Is it a bug ?
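For what it's worth, a workaround sketch while the variable handling is in
question: build the literal into the statement instead of routing it through
a user variable (the values are just the ones from the example above):

  INSERT INTO DATA_TYPE VALUES (1, '012345', 'DATA_TYPE', 1);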

Regards,
Marc.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How to determine how fragmented a innodb table is ?

2004-04-29 Thread Mechain Marc

Is there a simple way to determine how fragmented an InnoDB table is?

Thanks,
Marc.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB Table Locking Issue

2004-04-28 Thread Marc Slemko
On Tue, 27 Apr 2004, Scott Switzer wrote:

 Hi,

 I am having a difficult time with a query.  My environment is MySQL
 v4.0.16 (InnoDB tables) running on Linux (latest 2.4 kernel).
 Basically, I am running a query of the form:

 INSERT INTO temp_tbl
 SELECT c1,c2...
 FROM t1,t2,t3,t4
 WHERE ...

 It is a very complex query, which looks at millions of rows of data to
 produce results.  The issue is this:

 When running from the MySQL command line:
 Normally, when the query is run, it takes about 5 minutes to complete.
 When I LOCK t1,t2,t3,t4 before running, it takes 8 seconds to run.

Are you sure it is actually _working_ when it takes 8 seconds to run?
You say it is a very complex query that looks at millions of rows ...
unless those are all cached (and they could be, depending on your setup),
8 seconds would probably be too fast for it to run no matter how mysql
optimized it.

Triple check that if you start with an empty temp_tbl it actually
inserts everything it should into temp_tbl.

If you aren't locking temp_tbl, I wouldn't expect the query to actually
work.
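For reference, a sketch of the locking variant being discussed - with LOCK
TABLES in effect, every table the statement touches has to be in the list,
and the one being written to needs a WRITE lock:

  LOCK TABLES t1 READ, t2 READ, t3 READ, t4 READ, temp_tbl WRITE;
  INSERT INTO temp_tbl
  SELECT c1, c2 ...
  FROM t1, t2, t3, t4
  WHERE ...;
  UNLOCK TABLES;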

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: INNODB SHOW STATUS

2004-04-21 Thread Marc Slemko
On Tue, 20 Apr 2004, Emmett Bishop wrote:

 Howdy all,

 Quick question about what I'm seeing in the BUFFER
 POOL AND MEMORY section...

 I've configured the innodb_buffer_pool_size to be 128M
 and when I do a show variables like 'innodb%' I see

 | innodb_buffer_pool_size | 134217728  |

 So that looks good. However, I see the following in
 the BUFFER POOL AND MEMORY section of the output from
 the innodb monitor:

 --
 BUFFER POOL AND MEMORY
 --
 Total memory allocated 152389988; in additional pool
 allocated 1048576
 Buffer pool size   8192
 Free buffers   0
 Database pages 7947
 Modified db pages  0
 Pending reads 0
 Pending writes: LRU 0, flush list 0, single page 0
 Pages read 20345325, created 9857, written 763089
 0.00 reads/s, 0.00 creates/s, 0.00 writes/s
 Buffer pool hit rate 1000 / 1000

 Why does it say the buffer pool size is only 8M?
 Shouldn't it be 128M? Also, could someone explain the
 hit rate? I remember seeing in someone's recent post
 that the 1000/1000 is good, but I don't know what that
 means. Can someone suggest a good resource that
 explains the contents of Innodb show status in detail.
 The page on www.mysql.com gives a very cursory
 overview of the output.

Buffer pool size, free buffers, database pages, and modified database
pages are in 16k pages.

The buffer pool hit rate simply says the fraction of page reads satisfied
from the innodb buffer cache, in this case 1000/1000 == 100%.
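Working through the numbers above:

  Buffer pool size:  8192 pages x 16 KB = 131072 KB = 128 MB  (matches the 128M setting)
  Database pages:    7947 pages x 16 KB = ~124 MB of data currently cached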

Unfortunately, I'm not really aware of a better reference.  Perhaps some
of this is explained in High Performance MySQL, but I don't have a
copy yet.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Replication problem

2004-04-21 Thread Mechain Marc
Add this in your my.cnf slave file:

slave-skip-errors = 1062

Marc.

-Message d'origine-
De : Georg Horn [mailto:[EMAIL PROTECTED]
Envoyé : mercredi 21 avril 2004 17:31
À : [EMAIL PROTECTED]
Objet : Replication problem


Hi,

i'm new to this list, but i have used mysql for years and am very happy with it.
However, today i ran into a problem that i couldn't find a solution for:

I set up database replication with a master and one slave, and it works
fine so far. I rewrote my application (web based, written in php) so that
all queries that insert, delete or update rows are executed
on the master, and all other queries on the slave. Fine.

But what if the master fails? I want users to be able to continue working
on the slave, and this works fine for webpages that just do select statements.
I thought that, in case of the master being down, i could execute all
data-modifying queries on the slave, and also store these queries in a special
table or file, and re-execute them later on the master if the master becomes
available again. The problem is that i then may get "Duplicate entry ... for
key ..." errors on the slave, if a record was already inserted into a table
with unique keys, and that the sql-thread on the slave then exits.

Is it possible to make the slave ignore such errors (i found no option for this
in the docs) and just stupidly continue replication, or does anyone have a
better idea how to set up such a scenario?

Bye,
Georg


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Help with complex statement

2004-04-21 Thread Marc Greenstock
Hi all,

This is my first post on MySQL.

Wondering if someone could help me with an sql statement

MySQL version 4.0.18

I have three tables:
1. visitor_data (Visitor_ID, IP Address)
2. visit_data (Visit_ID, Visitor_ID)
3. page_data (Page_ID, Visit_ID, Visitor_ID, URI, Page_Time)

If you look closely you can see the relationship in these tables and you
might guess I'm tracking what a user is doing on this site.

visitor_data obviously records only unique visitors, I'm collecting a few
other things as well as what I've written, such as employing a cookie,
tracking their current local time, etc to build an effective method of
knowing who is coming back.

visit_data records each visit to the site and references from the
visitor_data.Visitor_ID. This is only done once a session.

page_data records every move the user makes.

I need to pull data out of these tables to make a graphical table
displaying: Visitor ID, Entry Page, Exit Page and Session Duration. There
will be one row per visit.

a Statement like:

SELECT *
FROM visitor_data, visit_data, page_data
WHERE visitor_data.Visitor_ID = visit_data.Visitor_ID
AND visit_data.Page_ID = page_data.Page_ID
GROUP BY visit_data.Page_ID

would return all the rows, whereas I want to limit the pages that appear
first and last in each visit by getting the minimum and maximum Page_Time,
which incidentally is a UNIX timestamp.

So far I've made multiple statements pulling out all the data and using PHP
to formulate the information. But I know there is a better way using mysql.
Only thing is I haven't got the faintest idea how to do it in one query and
keep overheads low. I start to get lost when I'm doing select statements
within select statements.
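A hedged sketch of the aggregate part, using only the columns listed above
(the entry and exit page URIs would still need a join back to page_data on
the min/max Page_Time, which is the fiddly bit):

  SELECT vd.Visit_ID,
         vd.Visitor_ID,
         MIN(pd.Page_Time) AS entry_time,
         MAX(pd.Page_Time) AS exit_time,
         MAX(pd.Page_Time) - MIN(pd.Page_Time) AS session_duration
  FROM   visit_data vd, page_data pd
  WHERE  pd.Visit_ID = vd.Visit_ID
  GROUP  BY vd.Visit_ID, vd.Visitor_ID;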



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: InnoDB Load Problem

2004-04-20 Thread Mechain Marc
 This query is to see if there are duplicate records, I'm not sure how else I
could do this and it only runs once every 10 minutes.

Why don't you put a unique index on locale,ggd,from,to,supplier,date so you won't have 
to run that query any more ...
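A sketch of that index, assuming the table is the cache table from the slow
query shown further down; from, to and date are quoted with backticks to be
safe, the index name is made up, and column prefixes may be needed if the
combined key turns out to be long:

  ALTER TABLE cache
    ADD UNIQUE INDEX ux_cache_key (locale, ggd, `from`, `to`, supplier, `date`);

Note the ALTER will refuse to complete if duplicates already exist, which in
itself answers the question the count(*) query is asking.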

 How can I determine if the problem is disk bound ?

iostat -x 1

The right column of the display gives you the busy state (in percent) of the disk.

 If I can get some hard evidence of this ...

Try to set innodb_flush_log_at_trx_commit to 0 (the default value is 1)

Marc.

-Message d'origine-
De : Marvin Wright [mailto:[EMAIL PROTECTED]
Envoyé : mardi 20 avril 2004 12:45
À : Dathan Vance Pattishall; Marvin Wright; [EMAIL PROTECTED]
Objet : RE: InnoDB Load Problem


Hi,

Thanks Dathan for your response.

So far I have upgraded the mysql to 4.0.18, this supports O_DIRECT as my
4.0.13 did not.
I increased my buffer pool by another 256 Meg and so far I have not seen any
change in performance.

I've looked at the SHOW INNODB STATUS (pasted further down), my buffer pool
hit rate is constantly at 1000 / 1000, what does this tell you ?

Regarding your other points, here is a slow query - only 1 second, though
you did mention count(*) as bad.

# Time: 040420 11:10:09
# [EMAIL PROTECTED]: web[web] @  [10.168.78.207]
# Query_time: 1  Lock_time: 0  Rows_sent: 3310  Rows_examined: 185723
select locale,ggd,from,to,supplier,date,count(*) as count from cache group
by locale,ggd,from,to,supplier,date having count > 1;

This query is to see if there are duplicate records, I'm not sure how else I
could do this and it only runs once every 10 minutes.

The indexes look fine on all other queries.

The disk layout is probably not good; unfortunately these are standard-build
single-drive machines from our tech services department and it's too big a
hassle for them to do something different for me :(

The disk layout is 

Filesystem   1k-blocks  Used Available Use% Mounted on
/dev/hda2 34218636  22979948   9500456  71% /
/dev/hda1   101089 19894 75976  21% /boot
none   2005632 0   2005632   0% /dev/shm

The data being on hda2.

How can I determine if the problem is disk bound ?
If I can get some hard evidence of this then I can go to our tech department
and get them to build me a decent box.

Many Thanks.

Marvin.


=
040420 11:15:32 INNODB MONITOR OUTPUT
=
Per second averages calculated from the last 15 seconds
--
SEMAPHORES
--
OS WAIT ARRAY INFO: reservation count 245688, signal count 208343
Mutex spin waits 10498150, rounds 63549544, OS waits 69764
RW-shared spins 113294, OS waits 50819; RW-excl spins 83135, OS waits 26235

TRANSACTIONS

Trx id counter 0 464704697
Purge done for trx's n:o  0 464704447 undo n:o  0 0
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 22455, OS thread id 19320851
MySQL thread id 38508, query id 2467002 localhost root
SHOW INNODB STATUS
---TRANSACTION 0 0, not started, process no 17652, OS thread id 36874
MySQL thread id 1, query id 2431662 192.168.35.181 web
---TRANSACTION 0 464704694, ACTIVE 0 sec, process no 27189, OS thread id
38580247 inserting, thread declared inside InnoDB 500
mysql tables in use 1, locked 1
3 lock struct(s), heap size 320, undo log entries 39
MySQL thread id 73569, query id 2467003 10.168.77.231 web update
insert into negotiated_classes_cache set id=108245613219642041, route_id=3,
segment_id=1, class='C', num='4'

FILE I/O

I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
4831 OS file reads, 41891 OS file writes, 40144 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 20.53 writes/s, 18.13 fsyncs/s
-
INSERT BUFFER AND ADAPTIVE HASH INDEX
-
Ibuf for space 0: size 1, free list len 0, seg size 2,
364 inserts, 364 merged recs, 5 merges
Hash table size 5312557, used cells 469072, node heap has 483 buffer(s)
43.86 hash searches/s, 3380.24 non-hash searches/s
---
LOG
---
Log sequence number 295 378143163
Log flushed up to   295 378138460
Last checkpoint at  295 42216323
0 pending log writes, 0 pending chkp writes
39690 log i/o's done, 17.27 log i/o's/second
--
BUFFER POOL AND MEMORY
--
Total memory allocated 1469851048; in additional pool allocated 4584832
Buffer pool size   81920
Free buffers   56299
Database pages 25138
Modified db pages  18737
Pending reads 0 
Pending writes: LRU 0, flush list 0, single page

Re: Gripe with MySQL

2004-04-19 Thread Marc Slemko
On Mon, 19 Apr 2004, Stormblade wrote:

 Ok. Love MySQL and I will be using it for my personal use and recommending
 it to clients as a lower cost alternative. I've only been using it for a
 very short time but there is one major gripe I have with it, and I believe it's
 just a design thing.

 MySQL seems to put the burden of error checking and such on the client.

 - All fields have a default value even when I don't tell it to?
 - Allow Null will only result in an error if I explicitly try to set the
 field to null.

 These are the two things that I really dislike. I think it's a poor design
 to rely on clients only for error checking. MySQL supports foreign keys.
 This is good because the database handles referential integrity. But it
 shouldn't stop there. I should also be able to tell the database not to
 allow a field to be empty/null and it should not put anything in there that
 I have not told it to.

Yup, it is a bad idea, and thankfully it seems to be slowly improving
as more and more people try to use mysql as a more serious database.

However, keep in mind that it is not an ad hoc, arbitrary decision,
but is based on a very traditional mysql design fundamental:
not to support transactions because they aren't required most of
the time, or so the claim goes.  While there are a few mysql storage
engines now that do support transactions, and at least one is in
widespread use, this history explains why it is the way it is.

If you don't support transactions, what do you do if you are running
a statement that updates multiple rows and get an error with one
row? If you just abort the whole statement, it is really ugly since
then you leave the statement half executed.  If you try to be able
to undo the entire statement, it is really ugly because without
transactions you are unlikely to have the backend support for doing
that or for avoiding dirty reads, etc since that is one of the
fundamentals of what a transaction is.  So ... you bravely soldier
on.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql/innodb configuration

2004-04-17 Thread Marc Slemko
On Fri, 16 Apr 2004, mayuran wrote:

 I would like to optimize the configuration settings
 for this beast of a machine, here are the specs:

 Quad Xeon 3ghz (4x2 = 8 cpus), 512 cache
 16 gigs ram
 running Redhat Enterprise 3.0 AS
 All tables are InnoDB.

 I read this warning in the MySQL documentation:
 *Warning:* On GNU/Linux x86, you must be careful not to set memory
 usage too high. |glibc| will allow the process heap to grow over
 thread stacks, which will crash your server.

 But at the same time it says:

 # Set buffer pool size to 50-80% of your computer's memory,
 # but make sure on Linux x86 total memory usage is < 2GB

 Does this mean that MySQL won't make use of the 16GB it has in total?

 I had to set the value to 1G to make it even start up.

You should be able to get higher than 1 gig ... a bit ... 1.5 gigs perhaps.

But yes, unfortunately mysql and innodb can't directly use most of the
memory.

innodb has support on windows for using Intel's paged address
extensions (PAE) to have paged access to more memory using the AWE
interface, with a bit of a performance hit for doing so.  However,
that feature of innodb isn't available on Linux, plus it disables
innodb's adaptive hashing support, which can be annoying especially
considering mysql doesn't otherwise support anything like a hash
join.

The memory will still be used by your OS for caching files, which will
help... but that isn't really as good as if innodb could use it, since
multilevel caching can be a bit sketchy and some features of innodb
(again, adaptive hashing...) can only be done if innodb has the data
in its cache.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: InnoDB

2004-04-15 Thread Mechain Marc
 How do I know the table is configured by InnoDB instead of normal table ?
Do a show table status or a show create table name_of_my_table

 Should I at least see some entry in the /etc/my.cnf to indicate that InnoDB
 is configured?
No, but if you don't want to use innodb you can add skip-innodb in the /etc/my.cnf
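A quick sketch of what to look for (the table name is just a placeholder):

  mysql> SHOW TABLE STATUS LIKE 'name_of_my_table';
         (check the Type column: InnoDB vs MyISAM)
  mysql> SHOW CREATE TABLE name_of_my_table;
         (on a 4.0 server the definition ends in TYPE=InnoDB)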

Marc. 

-Message d'origine-
De : Chen, Jenny [mailto:[EMAIL PROTECTED]
Envoyé : jeudi 15 avril 2004 18:23
À : '[EMAIL PROTECTED]'
Objet : InnoDB


MySQL Experts:

I am new for MySQL database.  
We have a 4.0.18 MySQL sit on Linux box.

I am reading on InnoDB. And having a question.

How do I know the table is configured by InnoDB instead of normal table ?
Should I at least see some entry in the /etc/my.cnf to indicate that InnoDB
is configured?

Thanks in advance.
Jenny




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Cluster

2004-04-15 Thread Marc Slemko
On Wed, 14 Apr 2004, Tim Cutts wrote:


 On 14 Apr 2004, at 10:57 pm, Adam Erickson wrote:

  (This is probably not the best place for this post, but here goes...)
 
  The (soon to be released) MySQL cluster software docs use a sample
  cluster node configured with Dual Xeons and 16GB of ram.  MySQL  has
  never been able to use more than 2 gigs of system memory (on 32 bit
  platforms.)  With MySQL Cluster, will MySQL finally start using the
  memory paging trick Oracle and others have been using for years?
  Otherwise, what is the point of having 16 gigs of ram for one MySQL
  server?

 Disk cache.  Tables which MySQL doesn't have in its own buffers but
 which nevertheless are frequently accessed will already be in RAM, and
 therefore faster to access.

Well ... that doesn't tie in with what I'm reading about mysql cluster,
namely it being a main memory database where all data is kept in memory.

I guess you can probably run multiple instances of the cluster node
on one machine, having the data split across them in a fairly transparent
manner.

However, there is ... very minimal technical information available
on mysql.com about exactly what mysql cluster (ie. mysql on top
of NDB) is and what it is really designed for.  I looked at the
NDB API docs in the bitkeeper tree, which help a bit ... but not
all that much.

It doesn't look like the current ndb code has any PAE support ...
at least on Unix.  It does some AWE-ish calls on windows but I don't
think those actually allow more than somewhere between 2 and 4
gigs per process the way it is being used, unless I am missing
something.

My overview so far is that it is designed for very high transaction rate
systems, with a large number of fairly simple transactions, and also
possibly systems with a large amount of read activity.  All of this needs
to be on a moderately sized data set, since the design is based on it being
an in memory database.

In any case, since the NDB storage engine is used in place of myisam or
innodb... even if it could address more memory using PAE, that wouldn't
mean other storage engines could.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Strange Index Usage: select ... where foo = 90 on a varchar

2004-04-15 Thread Marc Slemko
On Thu, 15 Apr 2004, Max Campos wrote:

 On Apr 13, 2004, at 2:01pm, Michael Stassen wrote:

  You shouldn't be surprised.  This is normal behavior.
  interchangeStatus is a varchar, so
 
select fileName from outDocInterchange where interchangeStatus = 91;
 
  requires that interchangeStatus be converted to an int for each row so
  it can be compared to 91, rendering the index useless.  On the other
  hand,
 
select fileName from outDocInterchange where interchangeStatus =
  '91';
 
  compares interchangeStatus to a string, which the index is designed to
  do. In general, an index on a column won't help if the column is input
  to a function.

 Shouldn't MySQL just cast the constant integer to a string instead?
 Perhaps this optimization isn't done.  Also, I'm not completely sure,
 but I think this type of query was indexed in 3.23.  Or more precisely,
 these queries didn't become slow until after I upgraded to 4.0.18
 (from 3.23.40).   Granted, ultimately I needed (and did) change the
 column type, but I'm curious to see if  why the behavior changed.

Except there are multiple ways that something that is numerically
equal to 91 can be represented as a string, e.g. "91.0", " 91", etc.
So using the index would result in different behaviour in some situations.

I think that if things worked how I would like them, mysql wouldn't
automatically do the cast at all so it would be obvious that something
that is possibly unexpected is happening.
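A quick way to see the difference, assuming the index on interchangeStatus
discussed earlier in the thread:

  mysql> EXPLAIN SELECT fileName FROM outDocInterchange WHERE interchangeStatus = 91;
  mysql> EXPLAIN SELECT fileName FROM outDocInterchange WHERE interchangeStatus = '91';

The first will typically show type: ALL (a full scan); the second should show
ref, with the index on interchangeStatus listed under key.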

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


