Re: Backup question.

2003-11-18 Thread Joseph Bueno
Bernard Clement wrote:
rsync will do the job correctly if and only if the mysql servers on both 
sides are not running.  rsync does not deal with files opened for writing 
and my guess is that MySQL opens the database files for writing.

It will work without stopping mysql server if you make sure that all 
data in memory are flushed to disc and mysql server does not modify 
these files while you read them.
This can be done with:
FLUSH TABLES WITH READ LOCK

NB: I assume that you run mysql server on main server only (not on 
backup server). If you have mysql server on both systems you can also
use mysql built-in replication : set your main server as master and
the backup server as a slave.

As for LVM (AKA Logical Volume Manager) I really do not see how it could be 
used to take a snapshot in MySQL.  However, I could be wrong here.

MySQL manual describes how to make a snapshot with Veritas filesystem
(vxfs). I am sure that it could be done in the same way with LVM (but I
haven't tried it myself):
"If you are using a Veritas filesystem, you can do:
   1. From a client (or Perl), execute: FLUSH TABLES WITH READ LOCK.
   2. From another shell, execute: mount vxfs snapshot.
   3. From the first client, execute: UNLOCK TABLES.
   4. Copy files from snapshot.
   5. Unmount snapshot. "
(from : http://www.mysql.com/doc/en/Backup.html)
Bernard

Hope this helps,
Joseph Bueno
On Tuesday 18 November 2003 05:12, Simon Green wrote:

Has anyone used LVM to take a snapshot of MySQL and used it to back up
data?
Simon

-Original Message-
From: Paco Martinez [mailto:[EMAIL PROTECTED]
Sent: 18 November 2003 10:00
To: Christensen, Dave; 'Richard Reina'; [EMAIL PROTECTED]
Subject: Re: Backup question.
Is there any problem executing "rsync /var/lib/mysql/data/" from one host
to another host and therefore having the same files on both machines??
Is this method unsafe??

Could the data be corrupted while transferring??

Thanks !!!

- Original Message -
From: "Christensen, Dave" <[EMAIL PROTECTED]>
To: "'Richard Reina'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, November 12, 2003 10:20 PM
Subject: RE: Backup question.

Yes, you can do it like this:

Prompt> mysqldump --add-drop-table --host=source.IP.addr.spec -uuser
-ppassword databasename | mysql -uuser -ppassword


I've found that it helps things if you add --no-data to the source side
on the first pass, then remove that clause and run it again.
Dave

-Original Message-
From: Richard Reina [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2003 3:54 PM
To: [EMAIL PROTECTED]
Subject: Backup question.
I would like to back up databases from a linux MySQL server to another
linux machine on the same private network, but I don't see in the docs how
I can do this with mysqlhotcopy or mysqldump.  Is there any way to do
this besides using ftp?
Any help would be appreciated.

Richard

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]







Re: Problem about the maximum size of MYISAM table

2003-11-16 Thread Joseph Bueno
MyISAM tables can be larger than 4Gb without using merge tables.
This topic has been discussed several times on this list.
You will find all details in "Google groups" archive:
http://groups.google.com/groups?q=myisam+4Gb+limit+group:mailing.database.mysql&scoring=d
Hope this helps,
Joseph Bueno
Peter Sap wrote:
You could try to use MERGE tables to get around this problem.
Put all your data in several MyISAM tables and create a merge table on top
of them.
Peter Sap.

- Original Message -
From: "Niran Angkawattanawit" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, November 16, 2003 2:14 PM
Subject: Problem about the maximum size of MYISAM table


Hi,

  I've run into a problem with the maximum size of a MyISAM table.
  The error occurred while I was trying to insert data into the table
using the
"LOAD DATA" command.
  I got an error message like this:
  " ERROR 1114 at line 1 in file: 'table.list': The table 'table.list' is
full "
  The maximum amount of data that MySQL will insert into the table is 4GB.
  However, I think the table size limit can be changed, but I don't know
how.
  If anyone knows how to fix this problem, please tell me.
  Thank you very much.
Niran Angkawattanawit










Re: inconsistent query times for same query

2003-10-29 Thread Joseph Bueno
[EMAIL PROTECTED] wrote:
Thanks Joseph.

I just took another look at the query and it actually takes 65 seconds on the
first run, not 10 seconds, so this is important for me to understand.
So my next questions are:
Is there any way to tune the OS file system cache?
This is very OS dependent and I don't know the answer for W2K (I
am a Unix/Linux guy ;). However, it seems that the basic rule in almost
any OS is: use all memory not used by running processes for file
caching.
Is there any way to flush it or examine what's in it?
I don't think you can examine it.
On Unix/Linux systems, I use a trick to flush it : I read a file bigger
than RAM size. (Unix command: cat big_file > /dev/null)
Seems like if I wait a while, the long query happens again.  Looks like it
flushes itself after  some time - how often does it flush itself?
It is probably flushed because the OS needed some memory to read another
file. On Linux, files stay in the cache as long as the system don't need
to reclaim memory; least recently accessed memory is then reused.
I suppose that W2K has the same behavior.
If those long queries are a problem, you should consider adding more RAM
to your server.
Thanks again,
-Bob
Hope this helps
Joseph Bueno


Re: inconsistent query times for same query

2003-10-29 Thread Joseph Bueno
There is caching but it is at OS level: on first query, data are
fetched from disk; other queries read directly from file system cache.
Hope this helps
Joseph Bueno
[EMAIL PROTECTED] wrote:
Hi everyone. I'm using MySQL 4.0.16 with W2K/SP4 on a 512MB RAM/1.2GHz
Athlon machine (my dev workstation).
I'm having a strange problem: when I execute a query via the MySQL Control
Center, it takes 10 seconds to return. About three seconds into the query,
I issue a "mysqladmin processlist" and see the State as "sending data". To
me, that means the query is done and the server is sending the data to the
client (I'm running both on same machine). The CPU is only at about 4%, but
the hard drive light is flashing like an XMas tree on crack.
I examined the query using "Explain" and it's using the correct index as I
specified.
If I re-execute the query, it returns in .13 seconds!!!
I thought maybe it was cached by the client so I kill the client and
execute it again - .13 seconds. I restart the server and do it again - .13
seconds. So it doesn't look like anything's being cached. With the variables
query_cache_size=0 and query_cache_type=DEMAND, it seems like no caching
should happen.  This happens to other queries as well.
I just don't get it. Why the difference?
The table "test" has 1,000,000 records, so I'm definitely not complaining
about a subsecond response - but if the public hits the production server
and I get the 10 second version for every query, I'm in trouble.  Since
users dynamically create the queries, I can't count on them being cached -
so any moderately complex query seems to exhibit this behavior.  I really
feel like I need to understand why it's happening so I can fix any
underlying problems.
Any ideas
Here's the query - because the users choose several options to build the
query, it's dynamically built via a Java class (don't think that's
relevant, but..):
SELECT
UID,UserName,Gender,City,State,Zip,Country,Age,Photo,OnLine,LastLogon FROM
test USE INDEX (big) WHERE Status=2 AND Viewable=1 AND Age >=18 AND Age <=
99 AND Photo!='' AND Height <=66 AND gender IN ('b') AND Weight >=100 AND
Height >=60 AND Weight <=150 LIMIT 250
The following columns are in the "big" multi-column index:
Status, Viewable, Online, Age, Height.  See the table definition below.
Here are the relevant columns:
+----------+------------+------+-----+----------------------+-------+
| Field    | Type       | Null | Key | Default              | Extra |
+----------+------------+------+-----+----------------------+-------+
| UID      | int(11)    |      | PRI | 0                    |       |
| UserName | char(20)   |      | UNI |                      |       |
| Status   | tinyint(1) |      | MUL | 0                    |       |
| Viewable | tinyint(1) |      |     | 0                    |       |
| City     | char(30)   | YES  |     | some city            |       |
| State    | char(2)    | YES  |     | CA                   |       |
| Country  | char(3)    | YES  |     | USA                  |       |
| Zip      | char(10)   | YES  | MUL | 90210                |       |
| Age      | tinyint(2) | YES  | MUL | NULL                 |       |
| Gender   | char(1)    | YES  |     | NULL                 |       |
| Height   | tinyint(3) | YES  |     | NULL                 |       |
| Weight   | char(3)    | YES  | MUL | NULL                 |       |
| OnLine   | tinyint(1) | YES  |     | NULL                 |       |
| Photo    | char(30)   | YES  |     | /path/path/photo.jpg |       |
+----------+------------+------+-----+----------------------+-------+
Thanks for any insight!
-Bob






Re: MySQL with OpenMosix

2003-10-13 Thread Joseph Bueno
Dicky 'Mas Saint' wrote:
Hi !

Does anybody have experience running MySQL with OpenMosix ?
How is the performance increase ?
Thank you.

According to this paper:
http://cedar.intel.com/cgi-bin/ids.dll/content/content.jsp?cntKey=Generic+Editorial%3a%3axeon_openmosix&cntType=IDS_EDITORIAL&catCode=BMB&path=3
"OpenMosix, like an SMP system, cannot execute a single process on 
multiple physical CPUs at the same time. This means that openMosix won't 
be able to speed up a single process such as Mozilla, except to migrate 
it to a node where it can execute most efficiently. In addition, 
openMosix doesn't currently offer support for allowing multiple 
cooperating threads to be separated from one another. "

Since mysql is a single multithreaded process, you should not expect any 
performance increase :(

--
Joseph Bueno


Re: sorting/grouping

2003-10-06 Thread Joseph Bueno
Have you tried:
select commentref,max(commentid)
  from comments
 group by commentref
Regards,
Joseph Bueno
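To see why MAX(commentid) per commentref returns the newest comment of each story — and how ordering by that maximum gives "most recently commented first", which the original question also asks for — here is the same logic sketched in Python over made-up (commentid, commentref) pairs:

```python
# Made-up sample data: commentid is ever-increasing, commentref is the story.
comments = [(1, 10), (2, 20), (3, 10), (4, 30), (5, 20)]

# Equivalent of GROUP BY commentref with MAX(commentid):
newest = {}
for commentid, commentref in comments:
    newest[commentref] = max(newest.get(commentref, 0), commentid)
print(newest)  # {10: 3, 20: 5, 30: 4}

# Equivalent of ORDER BY MAX(commentid) DESC: stories, most recent first.
stories = sorted(newest, key=newest.get, reverse=True)
print(stories)  # [20, 30, 10]
```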
Cummings, Shawn (GNAPs) wrote:
This doesn't seem to be making sure that the "newest" commentid is the 
result...  Seems to be random.

At 09:11 AM 10/6/2003 -0400, you wrote:

try

 select commentid, commentref
from comments
 ORDER by
 commentref ,commentid DESC;
> -Original Message-
> From: Cummings, Shawn (GNAPs) [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 02, 2003 2:37 PM
> To: [EMAIL PROTECTED]
> Subject: sorting/grouping
>
>
>
>
> I have a very simple table.
>
> commentid, commentref  (each field is an INT)
>
> Each record is a comment... commentid is the ID (each new record is a
> higher #).. and the commentref field is the "story" the
> comment refers to.
>
> I want to be able to list the stories in order from "most
> recent comment"
> onward.
>
> I've tried
>
> select commentid, commentref from comments GROUP BY
> commentref ORDER by
> commentid DESC;
>
> However, the results are NOT showing me the stories in order
> from most
> active comments onward...  Maybe too much soda and sugar -
> but any thoughts?
>
>
>
>




Re: Information required

2003-09-26 Thread Joseph Bueno
Waseem Ahmad Zia wrote:
Hello hi
I would like to know whether the latest version (4.0.15) of MySQL provides
support for executing subqueries and prepared statements. If that is
possible in this new version, I would also like information about drivers
for prepared statements and how to use subqueries in MySQL.
bye
Waseem Ahmad Zia
Subqueries and prepared statements are not available in 4.0.15.
They will be in 4.1.
Check the manual for details : 
http://www.mysql.com/doc/en/ANSI_diff_Subqueries.html
http://www.mysql.com/doc/en/C_API_Prepared_statements.html

Regards,
Joseph Bueno


Re: Slow query join problem

2003-09-18 Thread Joseph Bueno
Allen wrote:
I have two tables.  One table is 13 million rows, the other is about 
250,000.  I am trying to join the two to find users.  The tables are not 
indexed and I know that will affect the speed, but the join never 
completes.  I let it run for about 10 hours and the process was stuck in 
the "sending" state, which I don't really understand.  What is taking so 
long?  The join size should be no greater than 250,000, which is the size 
of the second table.  Yes??? 
I am using the default join_buffer size, which seems too low.  Might this 
be the problem?


Well, running a join on tables without indexes is like running a simple
select on a table that is the cartesian product of both tables.
In your case, you are trying to run a select on a table with:
13 million x 250,000 = 3,250,000,000,000 rows !
I am afraid that you won't get any result soon, even with a big join
buffer :(
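The arithmetic above is easy to check, along with what it implies for runtime (the comparison rate below is a made-up ballpark figure, not a measurement):

```python
# Cost of a join with no usable indexes: every row of the big table is
# compared against every row of the small one (the cartesian product).
rows_big = 13_000_000
rows_small = 250_000

comparisons = rows_big * rows_small
print(comparisons)  # 3250000000000 -> 3.25 trillion row combinations

# Even at an assumed 10 million comparisons per second, that is days:
seconds = comparisons / 10_000_000
print(seconds / 86_400)  # roughly 3.8 days
```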
Regards,
Joseph Bueno


Re: negative decimal problem

2003-09-17 Thread Joseph Bueno
Wakan wrote:
I beg your pardon if I post this question again,
but I can't find a clear answer.
I have MySQL versions 3.23.41 and 4.0.12, and the manual says
that for versions > 3.23 an extra space is reserved for the - sign and 
the decimal point.
I have a decimal column: DECIMAL(5,2), and my problem (maybe I 
misunderstand the manual)
comes up on both installed versions.

If I put 8000  ---> it writes correctly 8000,00.
If I put -8000 ---> it writes -999,99
why this???
thanks
Carlo


According to the manual
(http://www.mysql.com/doc/en/Numeric_types.html):
"The NUMERIC and DECIMAL types are implemented as the same type by 
MySQL, as permitted by the SQL-92 standard. They are used for values for 
which it is important to preserve exact precision, for example with 
monetary data. When declaring a column of one of these types the 
precision and scale can be (and usually is) specified; for example:

salary DECIMAL(5,2)

In this example, 5 (precision) represents the number of significant 
decimal digits that will be stored for values, and 2 (scale) represents 
the number of digits that will be stored following the decimal point. In 
this case, therefore, the range of values that can be stored in the 
salary column is from -99.99 to 99.99. (MySQL can actually store numbers 
up to 999.99 in this column because it doesn't have to store the sign 
for positive numbers) "

My understanding is that the range of allowed values for DECIMAL(5,2) is
-99.99 to 99.99 (although mysql MAY accept some values beyond these
limits).
Since your values are clearly out of this range, you should review your
field specification.
Maybe DECIMAL(7,2) would be better ?
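As a quick sanity check of the clamping Carlo observed (-8000 stored as -999.99), here is the largest magnitude that fits in `precision` total digits with `scale` of them after the point. This is a sketch only: the manual quote above states a narrower guaranteed range, and exact clamping behaviour varies by MySQL version.

```python
def decimal_max(precision, scale):
    """Largest magnitude storable with `precision` total digits,
    `scale` of them after the decimal point."""
    return (10 ** precision - 1) / (10 ** scale)

print(decimal_max(5, 2))  # 999.99 -> why -8000 was clamped to -999.99
print(decimal_max(7, 2))  # 99999.99 -> wide enough to hold -8000.00
```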
Hope this helps
Joseph Bueno


Re: Select distinct speed on an indexed column

2003-09-15 Thread Joseph Bueno
Since your primary key is used, maybe you should consider adding an
index on PostedZpdi field only.
You may also check 'key_buffer_size' value (show variables like 'key%').
Since your primary key is more than 200 MB big, allocating a "big" key
buffer (> 256MB) may help.
Joseph Bueno

Nathan Cassano wrote:
mysql> explain select distinct AccountLevelId from PostedZpdi;
+------------+-------+---------------+---------+---------+------+---------+-------------+
| table      | type  | possible_keys | key     | key_len | ref  | rows    | Extra       |
+------------+-------+---------------+---------+---------+------+---------+-------------+
| PostedZpdi | index | NULL          | PRIMARY |      60 | NULL | 3447290 | Using index |
+------------+-------+---------------+---------+---------+------+---------+-------------+
1 row in set (0.01 sec)

Hi,

What do you get when you do an EXPLAIN of your query ?

--
Joseph Bueno






Re: Select distinct speed on an indexed column

2003-09-15 Thread Joseph Bueno
Nathan Cassano wrote:
Hey everyone,
I have a question about the speed of selecting distinct values on an
indexed column. I have a table with a five column primary key and 3,215,540
records. I want to select all of the distinct values of the first column in
the primary key. This column only has 549 distinct values. To execute this
query takes about a minute and a half on a P4 2.4G. I assume that mysql is
doing a complete table scan thus making the query slow.
My question follows. Shouldn't the distinct values of the first
column in an index be pulled from the index itself instead of from the actual
data? Thanks for your insights!
Nathan

show table status\G
*************************** 1. row ***************************
           Name: PostedZpdi
           Type: InnoDB
     Row_format: Dynamic
           Rows: 3215540
 Avg_row_length: 678
    Data_length: 2180988928
Max_data_length: NULL
   Index_length: 530350080
      Data_free: 0
 Auto_increment: NULL
    Create_time: NULL
    Update_time: NULL
     Check_time: NULL
 Create_options:
        Comment: InnoDB free: 142368768 kB
Hi,

What do you get when you do an EXPLAIN of your query ?

--
Joseph Bueno


Re: Joining tables from two different databases

2003-09-15 Thread Joseph Bueno
Jeff McKeon wrote:
I have an existing database with a lot of information, I need to create
a new database to record inventory information that pertains to records
in the first database.  I'd like to keep these two databases separate.
Is it possible to relate a record in one database to a record in another
and do queries that pull from both databases?
Thanks,

Jeff

Hi,

Yes you can do that. Just prefix table name with database name in your 
query:

select *
  from table1,database2.table2
 where table1.field1=database2.table2.field2
Hope this helps,
Joseph Bueno


Re: Can't open...

2003-09-11 Thread Joseph Bueno
Deependra b. Tandukar wrote:
Dear all,

what is the meaning of

Can't open file: 'products_description.MYI'. (errno: 145)

in mysql?

Regards,
_DT

$  perror 145
145 = Table was marked as crashed and should be repaired
Hope this helps
--
Joseph Bueno


Re: Selecting only numbers from a string or vice versa

2003-09-04 Thread Joseph Bueno
Aki Karjalainen wrote:
Hi.

I've got a problem where I need to drop everything but numbers from a string and evaluate against it.

E.g. SELECT * FROM MYDATA WHERE DROP_ALL_LETTERS(MYSTRING) = '1254'
or
SELECT * FROM MYDATA WHERE REMOVESUBSTRINGS('A-Z', MYSTRING) = '1254'
10 points to the one who knows how to do a query like that.




select * from mydata
where mystring regexp '^[A-Z]*1[A-Z]*2[A-Z]*5[A-Z]*4[A-Z]*$';
should do it (the ^ and $ anchors keep strings with extra digits, such
as '12541', from matching).
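The pattern can be generated from the digit string rather than written by hand. Below is a sketch in Python of both that construction and the equivalent "strip the letters and compare" check; the ^ and $ anchors are an addition to keep strings containing extra digits from matching:

```python
import re

def digits_only_regexp(digits):
    """Match strings whose digits, read in order, are exactly `digits`."""
    letters = "[A-Z]*"
    return "^" + letters + letters.join(digits) + letters + "$"

pattern = digits_only_regexp("1254")
print(pattern)  # ^[A-Z]*1[A-Z]*2[A-Z]*5[A-Z]*4[A-Z]*$

assert re.match(pattern, "AB12C54D")       # digits are 1254 -> matches
assert not re.match(pattern, "AB12C54D9")  # trailing 9 -> no match

# Equivalent check: drop the letters and compare what is left.
assert re.sub("[A-Z]", "", "AB12C54D") == "1254"
```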

Regards
Joseph Bueno


Re: Dates

2003-09-01 Thread Joseph Bueno
Mixo Shiburi wrote:
I am in the process of porting some python scripts to Mysql.
Currently, the scripts I have queries a Pg database for entries (made)
between certain dates.
The python script snippet of the query

mainQuery = """
   select distinct 
id,queue,owner,created,now(),age(now(),%s) from Tickets where
   ( (%s >= '%s'::timestamp) and  ( %s <= '%s'::timestamp + 
'%s'::interval)  )
   and %s and %s(status='%s') order by id
 """ 
%(queryField,queryField,date,queryField,date,age,typeId,statusModifier,status) 



To make this more readable, in plain sql, the query for 7 day old 
entries would be:

 select distinct 
id,queue,owner,created,now(),age(now(),resolved) from Tickets where
   ( (resolved >= '2003-08-26'::timestamp) and  ( resolved 
<= '2003-08-26'::timestamp + ' 7 days'::interval)  )
   and queue=3 and (lastupdatedby=2) and (status='resolved') 
order by id


The problem with this is that it uses the 'age', 'timestamp' and 'interval'
functions, which are specific to Pg or just not implemented in MySQL
(version 3.23.56). What is the MySQL equivalent query? More than that,
any query that is SQLsomeYear compliant would be great.
I am considering the dates to be:
 resolved>='somedate1 sometime1' and resolved<='somedate2 sometime2'
But, I still have a problem with:
 age(now(),resolved)
More info (from a Pg session) to show what the Pg functions do
+++
rt3=# select '2003-08-26'::timestamp + ' 7 days'::interval;
  ?column?
-
 2003-09-02 00:00:00
mysql> select date_format( date_add('2003-08-26', interval 7 day),
  "%Y-%m-%d %H:%i:%s") as mydate;
+---------------------+
| mydate              |
+---------------------+
| 2003-09-02 00:00:00 |
+---------------------+

rt3=# select '2003-08-26'::timestamp;
  timestamp
-
 2003-08-26 00:00:00


mysql> select from_unixtime(unix_timestamp('2003-08-26'));
+---------------------------------------------+
| from_unixtime(unix_timestamp('2003-08-26')) |
+---------------------------------------------+
| 2003-08-26 00:00:00                         |
+---------------------------------------------+
rt3=# select age(now(),'2003-08-26'::timestamp);
  age

 6 days 10:14:43.421275
+++


Time to read mysql manual ? ;)

http://www.mysql.com/doc/en/Date_and_time_functions.html

Regards,
Joseph Bueno


Re: Fixing Replication

2003-08-29 Thread Joseph Bueno
Keith Bussey wrote:
Hi all,

I was wondering if someone has a better way of re-syncing a slave when
replication fails.
Here's my setup:

1 - MASTER
5 - SLAVES
Now sometimes, one of the slaves will fail. Usually when it's the sql that stops
running, I can fix it and get it back. However sometimes, I also get the "master
binlog corruption" error and haven't found a way to fix that.
What I do is:

- stop mysql on the master and all the slaves
- remove the server-bin files on the master
- remove the mysql data dir on each slave
- copy the master's datadir to each slave
- restart mysql everywhere and everything is fine
Now while this method works, it is extremely time-consuming and I was wondering
if anyone knows a better solution to only fix the affected slave??
I have tried stopping just the affected slave and one other good slave, taking
its datadir and renaming the bin files to the affected server's name. That
method doesn't seem to work though.
Thanks in advance!

Hi,

My setup is: 1 master, 6 slaves.

I have a different approach:

On master:
- "FLUSH TABLES WITH READ LOCK"
- mysqldump relevant tables (we don't replicate all tables)
- "SHOW MASTER STATUS" and write down binary log file and offset
- "UNLOCK TABLES"
On broken slave:
- "SLAVE STOP" (just in case)
- rebuild tables using mysqldump data
- "CHANGE MASTER TO   MASTER_LOG_FILE=xxx, MASTER_LOG_POS=yyy"
- "SLAVE START"
It has one big advantage: it doesn't stop the master or the other slaves.
There is only a slowdown while tables are locked on the master, but I
do it early in the morning when database activity is low (I can wait
to repair the broken slave: it is removed from the pool of servers
used by the applications when it fails).
I am sure that it is possible to further optimize this procedure
but, since I have had to use it only 3 times since I set up replication
(18 months ago), I haven't invested more time in it.
However, I like your idea to use another slave instead of the master;
it may be possible to adapt my approach to do it.
PS: I use mysql 3.23, mysql 4.x has a "LOAD DATA FROM MASTER"
that makes it much easier ;)
Hope this helps,
Joseph Bueno


Re: External Access

2003-08-22 Thread Joseph Bueno
Christensen, Dave wrote:
This may seem like a stupid question, but I haven't been able to track
down an answer yet.  We're going to try to access a remote MySQL server
that is protected behind a firewall.  I know that we need to open a port,
such as 3306, to access the server.  My question is whether that port
requires TCP, UDP or both to be open.
Thanks!

David Christensen
Brokers International, Ltd.
1200 E Main St
Panora, IA  50216
(641) 755-2775
[EMAIL PROTECTED]


TCP only

Joseph Bueno



Re: Database name

2003-08-18 Thread Joseph Bueno
select database();

Regards,
Joseph Bueno


Vinay wrote:
Is there an SQL command that I can perform which will return the name
of the database I'm currently connected to?
Thanks for your help.
Vinay
--
http://www.uptoten.com - The fun place to learn on-line
UpToTen Kids : 600 educational games and activities.
UpToTen Parents : Practical ideas and advice.
UpToTen Toyshop : Great gifts to give. Great gifts to receive.
http://toyshop.uptoten.com






Re: New Group By, order by question

2003-07-24 Thread Joseph Bueno
select ... order by abs(field) desc;

Hope this helps,
Joseph Bueno
Martin Moss wrote:
All,

I have a question about grouping numbers.

Let's say I have 10 records, each containing a numeric value:-

1
2
3
5
10
-1
-2
-3
-4
-5
What I wish to do is to select the records from the database but group them
like this :-
e.g. by the highest value (ASC or DESC) regardless of whether the value is
positive or negative?
10
5
-5
-4
3
-3
2
-2
1
-1
Can I do this in one query, or do I need to make two queries?

Regards

Marty
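The suggested ORDER BY ABS(field) DESC produces the requested ordering; it can be sketched with Python's sort (one caveat: within a tie such as 5 vs -5, SQL gives no guaranteed order unless you add a secondary sort key):

```python
values = [1, 2, 3, 5, 10, -1, -2, -3, -4, -5]

# Sort by absolute value, largest first.  Python's sort is stable, so
# each positive lands before its negative twin, matching the example.
ordered = sorted(values, key=abs, reverse=True)
print(ordered)  # [10, 5, -5, -4, 3, -3, 2, -2, 1, -1]
```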








Re: Frequent Table Corruption - Please Help

2003-07-16 Thread Joseph Bueno
It is a Redhat 7.2 with a 2.4.7-10smp kernel. It was upgraded last
September and mysqld has never been restarted since then (mysqld
uptime is now 316 days).
Hope this helps
Joseph Bueno
Richard Gabriel wrote:
I am running 2.4.18-smp.  You said you upgraded to a 2.4 smp kernel and it
solved the problem?  What 2.4 version exactly did you run and was it a
RedHat kernel?  Thanks.
Richard Gabriel
Director of Technology,
CoreSense Inc.
(518) 306-3043 x3951
- Original Message - 
From: "Joseph Bueno" <[EMAIL PROTECTED]>
To: "Tom Roos" <[EMAIL PROTECTED]>
Cc: "Richard Gabriel" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, July 16, 2003 4:10 AM
Subject: Re: Frequent Table Corruption - Please Help



Have you tried to use a non SMP kernel ?
I have also experienced table corruptions 2 years ago with
mysql 3.23.xx and a 2.2.14smp kernel. I had several servers
with the same configuration but only the most loaded had this
problem. I had no more corruptions in single CPU mode.
I upgraded later to a newer smp kernel (2.4) that solved the
problem.
It may be a temporary fix until you schedule an upgrade.

Hope this helps
Joseph Bueno
Tom Roos wrote:

hi guys

i am running the rh 2.4.20-18.7smp kernel with mysql 4.0.13 and i
sometimes experience problems with table corruption when volumes become
high.
i'm trying different parameters for mysqld to see if the problem goes
away.

tom

-Original Message-
From: Richard Gabriel [mailto:[EMAIL PROTECTED]
Sent: 15 July 2003 22:35
To: Heikki Tuuri; [EMAIL PROTECTED]
Subject: Re: Frequent Table Corruption - Please Help
Thank you very much for the help.  I will schedule the upgrade and see
if

helps.  I have 2 other machines running 2.4.18 without problems, but
they

also do not run the volume that the problematic machine has and they do
not

have RAID.  Take care.

Richard Gabriel
Director of Technology,
CoreSense Inc.
(518) 306-3043 x3951
- Original Message - 
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 15, 2003 4:08 PM
Subject: Re: Frequent Table Corruption - Please Help




Richard,

- Original Message - 
From: "Richard Gabriel" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, July 15, 2003 10:53 PM
Subject: Re: Frequent Table Corruption - Please Help




Thanks for the tip.  I'll see about upgrading, but it won't be a small
task.


Any reason why 2.4.18 problems wouldn't have affected MySQL 3.23?  I'm
trying to search for a solution that does not involve upgrading kernels
on


20 machines that are in production use right now.  Thanks again!
it may be worthwhile to test a new kernel in one of those problematic
computers.
We believe corruption problems in RH 2.4.18/drivers are random. Then any
small change can provoke them. But we will probably never know what
exactly


was wrong in some 2.4.18 computers.



Richard Gabriel
Director of Technology,
CoreSense Inc.
(518) 306-3043 x3951
Regards,

Heikki




- Original Message - 
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 15, 2003 3:45 PM
Subject: Re: Frequent Table Corruption - Please Help




Richard,

you are running a Red Hat kernel 2.4.18? Kernels 2.4.20 seem to be
much


more


reliable.

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Transactions, foreign keys, and a hot backup tool for MySQL
Order MySQL technical support from https://order.mysql.com/
- Original Message - 
From: "Richard Gabriel" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.mysql
Sent: Tuesday, July 15, 2003 9:52 PM
Subject: Frequent Table Corruption - Please Help




Hi everyone,

Since we upgraded to MySQL 4.0.13 from 3.23, we have been getting
table


corruption often.  It happens about twice per week (with about 500
queries per second average).  I have even set up a cron to run
mysqlcheck every hour to try to do some damage control.  The biggest
problem is that once the table is corrupted, it seems to be locked.
Well, no clients can read from it.  Once repaired, just one record
is


usually lost for each time the corruption occurs.  I am not sure if
this


is a MySQL bug or even how to reproduce it, but I was hoping that
someone here could help.  I have included all the information that I
have about this below.  Any insight is greatly appreciated!
Here is the mysqlbug information:



Release: mysql-4.0.13 (Official MySQL RPM)

C compiler:2.95.3
C++ compiler:  2.95.3
Environment:
System: Linux *.com 2.4.18-14smp #1 SMP Wed Sep 4 12:34:47 EDT
2002


i686 i686 i386 GNU/Linux
Architecture: i686
Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc
/usr/bin/ccGCC: Reading specs from
/usr/lib/gcc-lib/i386-redhat-linux/3.2/specs
Configured with: ../configu

Re: Frequent Table Corruption - Please Help

2003-07-16 Thread Joseph Bueno
Have you tried to use a non SMP kernel ?
I have also experienced table corruptions 2 years ago with
mysql 3.23.xx and a 2.2.14smp kernel. I had several servers
with the same configuration but only the most loaded had this
problem. I had no more corruptions in single CPU mode.
I upgraded later to a newer smp kernel (2.4) that solved the
problem.
It may be a temporary fix until you schedule an upgrade.

Hope this helps
Joseph Bueno
Tom Roos wrote:
hi guys

i am running the rh 2.4.20-18.7smp kernel with mysql 4.0.13 and i sometimes experience problems with table corruption when volumes become high. 

i'm trying different parameters for mysqld to see if the problem goes away.

tom

-Original Message-
From: Richard Gabriel [mailto:[EMAIL PROTECTED]
Sent: 15 July 2003 22:35
To: Heikki Tuuri; [EMAIL PROTECTED]
Subject: Re: Frequent Table Corruption - Please Help
Thank you very much for the help.  I will schedule the upgrade and see if it
helps.  I have 2 other machines running 2.4.18 without problems, but they
also do not handle the volume that the problematic machine has and they do
not have RAID.  Take care.
Richard Gabriel
Director of Technology,
CoreSense Inc.
(518) 306-3043 x3951
- Original Message - 
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 15, 2003 4:08 PM
Subject: Re: Frequent Table Corruption - Please Help



Richard,

- Original Message - 
From: "Richard Gabriel" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, July 15, 2003 10:53 PM
Subject: Re: Frequent Table Corruption - Please Help



Thanks for the tip.  I'll see about upgrading, but it won't be a small
task.

Any reason why the 2.4.18 problems wouldn't have affected MySQL 3.23?  I'm
trying to find a solution that does not involve upgrading kernels on
20 machines that are in production use right now.  Thanks again!
it may be worthwhile to test a new kernel in one of those problematic
computers.
We believe the corruption problems in RH 2.4.18/drivers are random, so any
small change can provoke them. But we will probably never know what exactly
was wrong in some 2.4.18 computers.


Richard Gabriel
Director of Technology,
CoreSense Inc.
(518) 306-3043 x3951
Regards,

Heikki



- Original Message - 
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 15, 2003 3:45 PM
Subject: Re: Frequent Table Corruption - Please Help



Richard,

you are running a Red Hat kernel 2.4.18? Kernels 2.4.20 seem to be much more
reliable.

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Transactions, foreign keys, and a hot backup tool for MySQL
Order MySQL technical support from https://order.mysql.com/
- Original Message - 
From: "Richard Gabriel" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.mysql
Sent: Tuesday, July 15, 2003 9:52 PM
Subject: Frequent Table Corruption - Please Help



Hi everyone,

Since we upgraded to MySQL 4.0.13 from 3.23, we have been getting table
corruption often.  It happens about twice per week (with about 500
queries per second average).  I have even set up a cron to run
mysqlcheck every hour to try to do some damage control.  The biggest
problem is that once the table is corrupted, it seems to be locked.
Well, no clients can read from it.  Once repaired, just one record is
usually lost for each time the corruption occurs.  I am not sure if this
is a MySQL bug or even how to reproduce it, but I was hoping that
someone here could help.  I have included all the information that I
have about this below.  Any insight is greatly appreciated!
Here is the mysqlbug information:


Release: mysql-4.0.13 (Official MySQL RPM)

C compiler:2.95.3
C++ compiler:  2.95.3
Environment:
System: Linux *.com 2.4.18-14smp #1 SMP Wed Sep 4 12:34:47 EDT 2002 i686 i686 i386 GNU/Linux
Architecture: i686
Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc /usr/bin/cc
GCC: Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--disable-checking --host=i386-redhat-linux --with-system-zlib
--enable-__cxa_atexit
Thread model: posix
gcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7)
Compilation info: CC='gcc'  CFLAGS='-O6 -fno-omit-frame-pointer
-mpentium'  CXX='g++'  CXXFLAGS='-O6 -fno-omit-frame-pointer
-felide-constructors -fno-exceptions -fno-rtti -mpentium'
LDFLAGS=''  ASFLAGS=''
LIBC:
lrwxrwxrwx    1 root     root           14 Nov  1  2002 /lib/libc.so.6 -> libc-2.2.93.so
-rwxr-xr-x    1 root     root      1235468 Sep  5  2002 /lib/libc-2.2.93.so
-rw-r--r--    1 root     root      2233342 Sep  5  2002 /usr/lib/libc.a
-rw-r--r--    1 root 

Re: error 1130 : "Host xxx.xxx.xxx.xxx not allowed to connect this server" ?

2003-07-09 Thread Joseph Bueno
fatih olcer wrote:
i got an error when i try to connect to the mysql server with mysqlcc:

error 1130 : "Host xxx.xxx.xxx.xxx not allowed to connect this server" .

does anyone know any solution ?

thanX.


Have you looked at the manual ?
http://www.mysql.com/doc/en/Access_denied.html
Regards,
Joseph Bueno
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


Re: specific records

2003-07-02 Thread Joseph Bueno
Maciej Bobrowski wrote:
Let's say I have 1000 records in a 'table'. I want to select rows from 6
to 11. How can I do this?


SELECT * FROM tablename where column>5 AND column<12;


No, no. I have no numerical fields in the table, so your example is not good.
Even if I could add an 'id' column to the table, when I remove
some records from the middle part of the table I will have holes, and
the next select will give me wrong data.
Regards,
Maciej Bobrowski
Maybe:
SELECT * FROM tablename LIMIT 5,5;
See:
http://www.mysql.com/doc/en/SELECT.html
for details
Regards,
Joseph Bueno
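The LIMIT offset,count behaviour can be sketched with SQLite, which accepts the same syntax as MySQL here (the table and its rows are made up for illustration):

```python
import sqlite3

# Hypothetical in-memory table standing in for 'tablename'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (name TEXT)")
conn.executemany("INSERT INTO tablename VALUES (?)",
                 [("row%d" % i,) for i in range(1, 21)])

# LIMIT 5,5 skips the first 5 rows and returns the next 5,
# i.e. rows 6 through 10 in insertion order.
rows = conn.execute(
    "SELECT name FROM tablename ORDER BY rowid LIMIT 5,5").fetchall()
print([r[0] for r in rows])  # ['row6', 'row7', 'row8', 'row9', 'row10']
```

Note that LIMIT 5,5 returns five rows (6 through 10); to get rows 6 through 11 as originally asked, the offset form would be LIMIT 5,6.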




Re: Problem when using month and from_unixtime.

2003-06-30 Thread Joseph Bueno
[EMAIL PROTECTED] wrote:
Hello,
I built this query in mysql:
mysql> select timestamp from lastauth where month(from_unixtime(timestamp) = 6);
Empty set (0.00 sec)
The query returns an empty set, but:
my timestamp table:
+------------+
| timestamp  |
+------------+
| 1056981323 |
+------------+
1 row in set (0.00 sec)
mysql> select from_unixtime(1056981323);
+---------------------------+
| from_unixtime(1056981323) |
+---------------------------+
| 2003-06-30 16:55:23       |
+---------------------------+
1 row in set (0.01 sec)
and:
mysql> select month(from_unixtime(1056981323));
+----------------------------------+
| month(from_unixtime(1056981323)) |
+----------------------------------+
|                                6 |
+----------------------------------+
1 row in set (0.00 sec)

Anybody know where my mistake is?

Regards,
Condor
___
Vereya Cabel, http://www.vcable.net


Move the parenthesis:
select timestamp from lastauth where month(from_unixtime(timestamp))=6;
Regards,
Joseph Bueno
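The parenthesis fix can be sanity-checked outside MySQL; here is a sketch using SQLite, where `strftime('%m', ..., 'unixepoch')` plays the role of `month(from_unixtime(...))` (the table name comes from the thread, everything else is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lastauth (timestamp INTEGER)")
conn.execute("INSERT INTO lastauth VALUES (1056981323)")  # 2003-06-30 16:55:23 UTC

# Equivalent of month(from_unixtime(timestamp)) = 6: the comparison sits
# *outside* the function call's parentheses.
rows = conn.execute(
    "SELECT timestamp FROM lastauth "
    "WHERE CAST(strftime('%m', timestamp, 'unixepoch') AS INTEGER) = 6"
).fetchall()
print(rows)  # [(1056981323,)]
```

With the misplaced parenthesis, the month() function would instead be applied to the boolean result of the inner comparison, which is why the original query matched nothing.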


Re: How to drop unique key

2003-06-30 Thread Joseph Bueno
MaFai wrote:
Hello, mysql,

p_asset CREATE TABLE `p_asset` (\
  `ano` int(11) NOT NULL auto_increment,\
  `asset_name` varchar(50) NOT NULL default '',\
  `asset_type` int(11) default NULL,\
  `asset_format` varchar(50) default NULL,\
  `asset_keywords` varchar(200) default NULL,\
  `asset_duration` bigint(20) default NULL,\
  `asset_status` varchar(10) default NULL,\
  `asset_channel` varchar(20) default NULL,\
  PRIMARY KEY  (`ano`),\
  UNIQUE KEY `asset_name_2` (`asset_name`),\
  KEY `p_asset_name_idx01` (`asset_name`)\
) TYPE=InnoDB   
According to the manual, there is no 'drop unique key' in the alter command.
I tried 'alter table p_asset drop index asset_name', but it doesn't work.
'alter table p_asset drop unique key (asset_name)' obviously doesn't take effect either.
How can I drop the unique key in the p_asset table?

Best regards. 

MaFai
[EMAIL PROTECTED]
2003-06-30


Try:
alter table p_asset drop index asset_name_2;
Regards,
Joseph Bueno
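MySQL drops the index with ALTER TABLE ... DROP INDEX; the effect of removing a named unique index can be sketched with SQLite, which uses a bare DROP INDEX instead (table and data are simplified, hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE p_asset (ano INTEGER PRIMARY KEY, asset_name TEXT)")
# A named unique index, analogous to UNIQUE KEY `asset_name_2` above.
conn.execute("CREATE UNIQUE INDEX asset_name_2 ON p_asset (asset_name)")
conn.execute("INSERT INTO p_asset (asset_name) VALUES ('clip1')")

# With the unique index in place, a duplicate insert fails.
try:
    conn.execute("INSERT INTO p_asset (asset_name) VALUES ('clip1')")
    duplicate_ok = True
except sqlite3.IntegrityError:
    duplicate_ok = False

# Dropping the index *by its name* lifts the uniqueness constraint.
conn.execute("DROP INDEX asset_name_2")
conn.execute("INSERT INTO p_asset (asset_name) VALUES ('clip1')")
print(duplicate_ok)  # False
```

The point either way: the index is dropped by its name (`asset_name_2`), not by the column it covers, which is why 'drop index asset_name' failed above.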


Re: MySQL/INNODB speed on large databases

2003-06-28 Thread Joseph Bueno
OK, the picture is now a little bit simpler:
1. Your performance problem is coming mainly from one BIG database.
2. Although your servers have a lot of RAM (at least 4Gb), InnoDB
   can only use a small fraction because of the addressing limitations of
   a 32-bit CPU.
Some random ideas:
1. Switch to a 64-bit system so that you can efficiently use more RAM
2. Do all tables in this BIG database need to be transaction safe ?
   If not, you can convert some (all ?) tables to MyISAM format. In this
   case, file caching will be done by the kernel outside of the mysqld
   process, allowing more efficient use of RAM.
3. Sponsor Innobase so that InnoDB can use AWE memory on Linux/x86 (up to
   64Gb). This has already been discussed on this mailing list:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&th=cddc8bdca9e4304a&seekm=bbkmji%2417qt%241%40FreeBSD.csie.NCTU.edu.tw
4. Make sure that you are really limited by RAM and disk I/O, not CPU,
   otherwise you won't gain anything with more RAM.
5. If you have more selects than updates/inserts you should consider
   replication (I use it, it rocks !)
6. If you have more selects than updates/inserts and identical selects
   are run several times, you could consider upgrading to MySQL 4.x.
   Its query cache may speed up your server (I have not tried it
   myself but there have been positive reports on this list).
regards,
Joseph Bueno
Wendell Dingus wrote:
Ahh, there's something I could have explained better. Each of the databases
represents a remote office of a different customer. I specifically want/need
to keep the data separate; it doubles as an offsite backup for these offices,
and the data would conflict if it were merged. I currently use replication to
get the data from those offices up to my web server, but not exactly in the
way it was intended. I just turn on binary logging, and a process every 30
minutes or so checks the latest /var/lib/mysql/systemname-bin.nnn to see if
it contains anything, starts a new binary log file, uses mysqlbinlog
to grab the contents of that one, and sends them to the web server, which
integrates them with its offsite copy. Works great since some offices have
live internet connections and others are dialup, etc...
Now then, I could divvy things up across multiple servers, except that the
one largest database is almost as slow at the customer's office with live
local (non-web) usage, and that server has over 5GB of RAM. Similar specs to
the web server I described otherwise, and it has only that ONE database, not
the ones of all the other customers.
Anyway, beyond that, the LVS approach would still involve having 1 master
write server and all the backend "real" servers being readonly (to the
application) and kept updated via replication slaving from the master. Just
across multiple actual databases and not one... From what I've read so far
that is!
Thanks!

-Original Message-
From: Joseph Bueno [mailto:[EMAIL PROTECTED]
Sent: Saturday, June 28, 2003 2:45 PM
To: Wendell Dingus
Cc: [EMAIL PROTECTED]
Subject: Re: MySQL/INNODB speed on large databases
Maybe my question is too obvious but, since you have several databases
instead of only one, have you considered moving some of them to a
separate server ?
Also, have you checked MySQL replication capabilities ?
It is a very efficient way to distribute the load across several
servers unless you do a lot of insert/updates and very few selects.
Regards,
Joseph Bueno
Wendell Dingus wrote:

Hello, I've got a situation I'm hoping someone here can help me out with. We
have a web server serving up data via some PHP programs pulling from MySQL
(3.23.56 currently) in INNODB tables. There are 40+ (active) databases
totalling a bit over 28 Gigabytes at this point. As we add more users, speed
is beginning to drop...








Re: INSERT weirdness

2003-06-28 Thread Joseph Bueno
Dan Bowkley wrote:
Hey there everyone,

I'm trying to get a table of computer parts manufacturers populated with
a list culled from my distributor and I'm having a devil of a time
getting anything to work.  I'm totally new to MySQL and don't really
know my SELECT from a hole in the ground just yet...which is probably
the entire problem. ;)  Anyways.
I'm trying to do this

INSERT INTO `manufacturers` (`manufacturers_name`,
`manufacturers_image`) VALUES (`CREATIVE_LABS`,
`manufacturer_CREATIVE_LABS.gif`);
to this table

CREATE TABLE `manufacturers` (
  `manufacturers_id` int(11) NOT NULL auto_increment,
  `manufacturers_name` varchar(32) NOT NULL default '',
  `manufacturers_image` varchar(64) default NULL,
  `date_added` datetime default NULL,
  `last_modified` datetime default NULL,
  PRIMARY KEY  (`manufacturers_id`),
  KEY `IDX_MANUFACTURERS_NAME` (`manufacturers_name`)
) TYPE=MyISAM AUTO_INCREMENT=11 ;
and keep getting this:

Error

SQL-query :  

INSERT INTO `manufacturers` ( `manufacturers_name` ,
`manufacturers_image` ) 
VALUES ( 

`CREATIVE_LABS` , `manufacturer_CREATIVE_LABS.gif` 
) 

MySQL said: 

Unknown column 'CREATIVE_LABS' in 'field list'

I know 'CREATIVE_LABS' isn't a column, and I didn't tell anyone that it
was either.  Why is this thing convinced that it is?  Help???
Back quotes are used for table and field names.
Use single quotes for values:
INSERT INTO `manufacturers` ( `manufacturers_name` ,
`manufacturers_image` )
VALUES (
'CREATIVE_LABS' , 'manufacturer_CREATIVE_LABS.gif'
)
Regards,
Joseph Bueno
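SQLite happens to accept MySQL-style backticks as identifier quotes too, so the failure mode above can be reproduced in a small sketch (simplified, assumed table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE manufacturers "
             "(manufacturers_name TEXT, manufacturers_image TEXT)")

# Backticks quote identifiers, so a backticked "value" is parsed as a
# column reference -- exactly the "Unknown column" failure above.
try:
    conn.execute("INSERT INTO manufacturers (manufacturers_name) "
                 "VALUES (`CREATIVE_LABS`)")
    backticks_worked = True
except sqlite3.OperationalError:
    backticks_worked = False

# Single quotes make it a string literal, which inserts cleanly.
conn.execute("INSERT INTO manufacturers (manufacturers_name) "
             "VALUES ('CREATIVE_LABS')")
print(backticks_worked)  # False
```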


Re: MySQL/INNODB speed on large databases

2003-06-28 Thread Joseph Bueno
Maybe my question is too obvious but, since you have several databases
instead of only one, have you considered moving some of them to a
separate server ?
Also, have you checked MySQL replication capabilities ?
It is a very efficient way to distribute the load across several
servers unless you do a lot of insert/updates and very few selects.
Regards,
Joseph Bueno
Wendell Dingus wrote:
Hello, I've got a situation I'm hoping someone here can help me out with. We
have a web server serving up data via some PHP programs pulling from MySQL
(3.23.56 currently) in INNODB tables. There are 40+ (active) databases
totalling a bit over 28 Gigabytes at this point. As we add more users speed
is beginning to drop...
Currently all this is on a single dual P3 1.26Ghz IBM eServer with 4GB of
RAM. 4 x 15K RPM drives RAID5. It's hitting load averages >12 during peak
times and database query speeds are really starting to drag...
I've talked some with the consulting folks at MySQL and have the intention
of contracting them to help analyze our database layout, structure, and
queries to help optimize things. Dealing with belt-tightening management on
the other end though I want to get rolled out some new software and hardware
before doing that. I've been spending time working with a test setup using
the load-balancer routines at www.linuxvirtualserver.com and like what I see
so far.
My quandry is this though.. The existing server has 4GB but I can't get
innodb_buffer_pool_size above about 1.2GB  I'd buy tons more RAM and go that
route if it could be utilized, I've not figured out how though and assume
that's the deal with a 32-bit CPU. Reading some about 64-bit Opterons and a
64-bit address space sounds good.
I'm sure I can get the LVS approach with a load balancer and 3 or 4 back-end
"real" servers going without too much trouble. Where do I spend the money
most wisely though? More and more and more RAM in each backend server or is
anything over 2GB going to mostly not be used (Probably P4 2.4Ghz XEON
boxes). Drive speed? I'm thinking I'll test out using fast SCSI drives and
maybe some SATA drives RAID-0 striped versus non-striped to see the speed
difference. My whole goal is speed, speed, and more speed out of MySQL! The
actual web applications place a very small load on the server, viewing top
or mytop show that database access is where all the time is going and
primarily on the 2 largest databases. A query against one of the other
databases when the server is mainly idle completes in a few seconds or
tenths of a second. The same query (data layout is basically identical across all the
databases) takes > 2 minutes sometimes on either of the largest 2 databases.
So you see my concern? A 2 minute query on an _unloaded_ server is not going
to improve no matter how many servers I put behind a load balancer. Again
I'll pay the MySQL folks to analyze and hopefully come up with lots of
braindead things we're doing and help performance a lot. I have the bosses
ear for hardware _now_ though and want to move forward before that
opportunity passes. I'll post the database specs at the end of this. If you
were hosting that much data without a ton of inserts all day long but lots
of selects where would you concentrate your efforts? Speed of CPU or
speed/quantity of RAM or speed of drives or what? Would you load balance a
quantity of servers or lean more towards a couple of much larger 64-bit
servers? Running entirely out of RAM is very appealing but somehow I suspect
that will be cost prohibitive :)  Just buy reasonable hardware, load-balance
it, and then get the MySQL folks to help optimize things? I sure don't want
to spend a ton and then be told later that it would have been much better
had I gone with a different setup, if you know what I mean :)  Thanks in
advance! If any additional statistics would help, let me know!
PS. Any pointers to any approaches to what I think I read about Google,
_striping_ database data across multiple servers for speed? Also, I'm
unfamiliar with them, but do PostgreSQL or Oracle or DB2 or something like
that have anything that would specifically work better in this scenario?
Mytop output after about 28 hours of uptime so this is just Friday during
business hours access primarily:
MySQL on localhost (3.23.56-Max-log)
up 1+04:56:30 [13:29:15]
 Queries Total: 1,783,317  Avg/Sec: 17.12  Now/Sec: 0.40  Slow: 1,046
 Threads Total: 33Active: 1 Cached: 0
 Key Efficiency: 99.99%  Bytes in: 779,059,153  Bytes out: 543,489,706
The databases themselves from phpMyAdmin:
(This one is the main killer: 40 tables, 1.8 GB data + 5.5 GB indexes = 7.3 GB total)

  Database   Table(s)   Data      Indexes    Total
  xxx        0          0 Bytes   0 Bytes    0 Bytes
  xx         44         88.4 KB   182.0 KB   270.4 KB
  x 

Re: distributed database architecture for a large database

2003-06-27 Thread Joseph Bueno
Sorry to reply to myself, but after reading your post again,
I think you can use replication to keep the member table in sync:
it is possible to restrict replication to some tables within a
database; check the 'replicate-do-table' option.
Hope this helps
Joseph Bueno
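A minimal sketch of what the slave's my.cnf might look like under this scheme (MySQL 3.23/4.0-era syntax; the host, user, password, and database/table names are hypothetical):

```ini
# my.cnf on each server that should carry a replicated copy of the
# member table only:
[mysqld]
server-id          = 2
master-host        = master.example.com
master-user        = repl
master-password    = secret
# Replicate just this one table, not the whole database:
replicate-do-table = memberdb.member
```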
Joseph Bueno wrote:
I don't have direct answers to your questions, but you should consider
adding an integer 'userid' to the member table and using it as a foreign key
in the member_log table instead of username. It will make selects and joins
faster, and data and index sizes smaller.
Also, I don't know what you mean by 'medint', but if it is a mediumint,
you will be in trouble since its range is [-8388608..8388607].
You should use at least an "unsigned int", and maybe an "unsigned
bigint" if you suspect that you will have more than 4 billion rows.
Hope this helps
Joseph Bueno
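The ranges quoted above follow directly from the column storage widths (MEDIUMINT is 3 bytes, INT 4, BIGINT 8); a quick check:

```python
# Signed MySQL integer ranges follow from their storage width.
def signed_range(num_bytes):
    bits = num_bytes * 8
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(signed_range(3))  # (-8388608, 8388607) -- MEDIUMINT
print(signed_range(4))  # (-2147483648, 2147483647) -- INT
# An unsigned INT tops out at 2**32 - 1, i.e. about 4.29 billion rows:
print(2 ** 32 - 1)  # 4294967295
```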
Aodhan Cullen wrote:

I've got an interesting problem for you all. I'd love to hear what you 
think.

I've simplified the database design of my web application to its root 
problem, so it'll be very easy to see my difficulty, and if you're 
feeling generous possibly offer a solution.

I have two tables.

member table (circa 1,000,000 rows - 100 meg - easily fit in a single 
table)
username varchar(30) pk
password varchar(30)
settings varchar(30)

member_log table (circa 3,000,000,000 rows - a few 100 gigs - a few 
100 very simple updates and selects a second, with some quite long 
selects every minute or so - when the update is being done it needs to 
select the settings for the user from the member table before it does 
the update to the member_log table)

logid medint pk
fk_username varchar(30) fk
description varchar(200)
My read/update ratio would be something along the lines of 1:3, 3 
updates for every read. So it is highly unusual, and more or less 
rules replication out of the picture.

Now ehm... what's the most efficient way of doing this?

What I would like to do is:

Is have a copy of the member table on every server, then break up the 
member_log based on the username, and spread it across multiple servers.

database server a

full member table
1/4 member_log table
database server b

full member table
1/4 member_log table
database server c

full member table
1/4 member_log table
database server d

full member table
1/4 member_log table
In the future, if the servers start to slow down then I'll just add

database server e

full member table
member_log table
Well that's what I'd like to do, but I don't know how to do this.

My main problem is keeping the full member table in sync.

I can't use replication because my read/update ratio just isn't right 
for it. And I only want to keep one table in sync, not the whole 
database.

So I don't know what to do. How do I do this, and do this efficiently?

Any ideas anyone?

regards,

Aodhan.










Re: distributed database architecture for a large database

2003-06-27 Thread Joseph Bueno
I don't have direct answers to your questions, but you should consider
adding an integer 'userid' to the member table and using it as a foreign key
in the member_log table instead of username. It will make selects and joins
faster, and data and index sizes smaller.
Also, I don't know what you mean by 'medint', but if it is a mediumint,
you will be in trouble since its range is [-8388608..8388607].
You should use at least an "unsigned int", and maybe an "unsigned
bigint" if you suspect that you will have more than 4 billion rows.
Hope this helps
Joseph Bueno
Aodhan Cullen wrote:
I've got an interesting problem for you all. I'd love to hear what you think.

I've simplified the database design of my web application to its root problem, 
so it'll be very easy to see my difficulty, and if you're feeling generous 
possibly offer a solution.

I have two tables.

member table (circa 1,000,000 rows - 100 meg - easily fit in a single table)
username varchar(30) pk
password varchar(30)
settings varchar(30)
member_log table (circa 3,000,000,000 rows - a few 100 gigs - a few 100 very 
simple updates and selects a second, with some quite long selects every minute 
or so - when the update is being done it needs to select the settings for the 
user from the member table before it does the update to the member_log table)

logid medint pk
fk_username varchar(30) fk
description varchar(200)
My read/update ratio would be something along the lines of 1:3, 3 updates for 
every read. So it is highly unusual, and more or less rules replication out of 
the picture.

Now ehm... what's the most efficient way of doing this?

What I would like to do is:

Is have a copy of the member table on every server, then break up the member_log 
based on the username, and spread it across multiple servers.

database server a

full member table
1/4 member_log table
database server b

full member table
1/4 member_log table
database server c

full member table
1/4 member_log table
database server d

full member table
1/4 member_log table
In the future, if the servers start to slow down then I'll just add

database server e

full member table
member_log table
Well that's what I'd like to do, but I don't know how to do this.

My main problem is keeping the full member table in sync.

I can't use replication because my read/update ratio just isn't right for it. 
And I only want to keep one table in sync, not the whole database.

So I don't know what to do. How do I do this, and do this efficiently?

Any ideas anyone?

regards,

Aodhan.







Re: Simple Question: MySQL and Shell Scripts

2003-06-24 Thread Joseph Bueno
Zach wrote:
I am writing a shell script using the Bourne shell. I am trying to get the
result of an SQL statement into a variable. 

For example:

/usr/bin/mysql -uroot -prootpass BOB << EOF
SELECT * FROM Bobstable WHERE Name=1
EOF
How do I get the result into a variable?

Thanks in advance!

Maybe:
myvar=`/usr/bin/mysql -uroot -prootpass -e 'SELECT * FROM Bobstable WHERE Name=1;' BOB`

But this is more a shell than a mysql question, isn't it ?

Joseph Bueno
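The backtick-capture pattern itself can be sketched with a stand-in command in place of the mysql client, so it runs without a server:

```shell
#!/bin/sh
# Capture a command's output into a Bourne-shell variable with backticks
# (same pattern as the mysql one-liner above; echo stands in for mysql).
myvar=`echo "row1 row2"`
echo "$myvar"

# A multi-column result can then be split into positional parameters:
set -- $myvar
echo "first field: $1"
```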



Re: Mysql replication with 2 masters and 1 slave

2003-06-22 Thread Joseph Bueno
Maybe I am misunderstanding your goals, but it seems that you want to
have a backup server for two independent masters (with different
databases). Something like:

   M1  -->    S    <--  M2
   DB1     DB1,DB2      DB2

If this is the case, you can run two mysqld instances on S, each one
replicating from one master:

   M1  -->  S1          M2  -->  S2
   DB1      DB1         DB2      DB2
Hope this helps
Joseph Bueno
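One way to run the two instances on S is mysqld_multi; a sketch of the config, with hypothetical ports, paths, server ids, and host names:

```ini
# /etc/my.cnf on the backup host S: one [mysqldN] group per instance,
# each slaving from a different master.
[mysqld_multi]
mysqld      = /usr/bin/mysqld_safe

[mysqld1]
port        = 3306
socket      = /tmp/mysql1.sock
datadir     = /var/lib/mysql1
server-id   = 11
master-host = m1.example.com

[mysqld2]
port        = 3307
socket      = /tmp/mysql2.sock
datadir     = /var/lib/mysql2
server-id   = 12
master-host = m2.example.com
```

Start them with `mysqld_multi start 1,2`; each instance keeps its own datadir, so the two masters' databases never collide on S.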
Erik Olsen wrote:
Yes.  The idea was to back up 2 masters, 1 that is ours and 1 that belongs to
a customer. The slave's job is just to keep a synchronised copy of both
servers' databases. The plan was to have it in a different place in case of
fire. 

But I must find another solution then.
 

Erik Olsen wrote:

Is it possible for slave to connect to 2 different masters and have
synchronized database from both?


So you would have updates on 2 masters M1 and M2 which would be 
replicated to the read-only slave S1 ?

The point of MySQL's replication is that after a replication event 
there is always a moment in time where slave == master.

In your model you'd build :   M1  -->  S  <-- M2

An update of M1 would be carried to S but wouldn't be transferred to M2, 
so S would have no chance to reach the state S == M2 anymore.
I suppose that'd break the replication process and it'd stop.

You have to do it in a circle as the manual describes :

... -->  M1  -->  M2  -->  M3  -->  M1 ...

I'd rather have it like a star formation, but that seems to be 
impossible, too.
Updates on all machines with replication to a central supermaster SM

M1  <-->  SM  <-->  M2

This way I wouldn't rely on all hosts in the replication circle to stay 
up and do their job, since one is off site and hanging on a slow dial-up 
line.







Re: from_days/to_days

2003-03-31 Thread Joseph Bueno
[EMAIL PROTECTED] wrote:
Can someone provide a link to the use of these functions in queries in the
online MySQL manual? I did find them a few weeks ago, but I've been lurking
around the manual for an hour or so now and can't find them at all.
TIA,

Ed


http://www.mysql.com/doc/en/Date_and_time_functions.html

(TIP: type 'from_days' in 'Search the MySQL manual:' at the top-left
of online manual page and click 'GO' ;) )
Regards,
Joseph Bueno


Re: AW: multiple mysqld-max running ?

2003-03-28 Thread Joseph Bueno
Hi,

mysqld creates a few threads at startup (3, I think) plus
one thread per active connection.
The manual explains it better than myself:
http://www.mysql.com/doc/en/MySQL_threads.html
Chances are that you have 8 clients connected.
Once again, your mysql server TOTAL memory usage is 40Mb.
If this is too much, consider changing some parameters:
http://www.mysql.com/doc/en/Server_parameters.html
Also, if you don't use InnoDB, make sure that you have
skip-innodb option set in my.cnf. (There were a few
mails on that topic a few hours ago, check mailing list archive).
Regards,
Joseph Bueno
Gerald Fehringer wrote:
hi joseph,

thank you, but still, where is it defined that it should start 11 threads ??
i don't want to allocate so much memory for a small server !
thanks
/geri
 

-----Original Message-----
From: Joseph Bueno [mailto:[EMAIL PROTECTED]
Sent: Friday, March 28, 2003 15:44
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: multiple mysqld-max running ?
Hi,

ps,top,.. on Linux show threads as processes. What you see is ONE
process (mysqld) with 11 threads sharing 40Mb of memory.
Check http://www.mysql.com/doc/en/Memory_use.html

Regards,
Joseph Bueno
Gerald Fehringer wrote:

hi list fellows,

os: suse linux 8.1
mysqld: mysql-max-4.0.12-pc-linux-i686
i'm using innoDB and a standard my.cnf (medium) config file.
every time i start mysqld, there are over 11 daemons running, each
one with 40MB !
the same constellation on freebsd 4.6.2, same mysqld version, same
config file, shows only two (like it's supposed to, right ?)
how can i prevent this, is there any config value i could set ?

thanks in advance
geri










Re: multiple mysqld-max running ?

2003-03-28 Thread Joseph Bueno
Hi,

ps,top,.. on Linux show threads as processes. What you see is ONE
process (mysqld) with 11 threads sharing 40Mb of memory.
Check http://www.mysql.com/doc/en/Memory_use.html

Regards,
Joseph Bueno
Gerald Fehringer wrote:
hi list fellows,

os: suse linux 8.1
mysqld: mysql-max-4.0.12-pc-linux-i686
i'm using innoDB and a standard my.cnf (medium) config file.
every time i start mysqld, there are over 11 daemons running, each
one with 40MB !
the same constellation on freebsd 4.6.2, same mysqld version, same
config file, shows only two (like it's supposed to, right ?)
how can i prevent this, is there any config value i could set ?

thanks in advance
geri





Re: Speed of SELECT ... LIMIT #,#?

2003-03-25 Thread Joseph Bueno
If you want to speed it up, you have to make it use an index.
You need to add a WHERE or an ORDER BY clause.
Have you tried :
SELECT Message_ID, Body FROM Body_etc ORDER BY Message_ID LIMIT N,M
Of course, I assume that Message_ID is indexed ;)

Regards,
Joseph Bueno
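The same effect is easy to see in SQLite: with the ORDER BY column indexed, the query plan walks the index instead of sorting the whole table before applying LIMIT (a sketch with made-up data; column names borrowed from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE body_etc (message_id TEXT, body TEXT)")
conn.executemany("INSERT INTO body_etc VALUES (?, ?)",
                 [("msg%04d" % i, "text") for i in range(1000)])
conn.execute("CREATE INDEX idx_message_id ON body_etc (message_id)")

# The plan should scan via idx_message_id, with no temp sort step.
plan = " ".join(row[-1] for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT message_id, body FROM body_etc "
    "ORDER BY message_id LIMIT 100, 10"))

rows = conn.execute(
    "SELECT message_id FROM body_etc "
    "ORDER BY message_id LIMIT 100, 10").fetchall()
print(rows[0][0])  # msg0100
```

Without the index, the engine must materialize and sort all rows just to throw most of them away, which is why deep LIMIT offsets get slower and slower.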
Nick Arnett wrote:
-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]
Sent: Monday, March 24, 2003 9:17 PM


Please post the query and the output of running it thru EXPLAIN.

It is likely sorting the results without an index and having to weed
thru more and more records the farther back you look in the list.


I'm fairly certain that's true.  The query is very simple: "SELECT
Message_ID, Body FROM Body_etc LIMIT N,M".  Those are the only fields in the
table; Message_ID is a CHAR field, Body is TEXT.
Here's what EXPLAIN gives:

+----------+------+---------------+------+---------+------+---------+-------+
| table    | type | possible_keys | key  | key_len | ref  | rows    | Extra |
+----------+------+---------------+------+---------+------+---------+-------+
| body_etc | ALL  | NULL          | NULL |    NULL | NULL | 1586994 |       |
+----------+------+---------------+------+---------+------+---------+-------+
Sorry if I didn't make that clear in my first posting.  I can't quite see
how to speed things up, though it dawned on me that perhaps I could add a
sequence field, index it, then use it, rather than the LIMIT clause, to
select the records.  It was quite fast at the beginning and caught me by
surprise that it slowed down so much.
I'm getting there, slowly, by bumping up the max_packet_size and doing three
INSERTs of 5,000 for every SELECT, so I'm grabbing about 15,000 records at a
time, greatly reducing the number of SELECTs from when I was only getting
1,000.  Still taking hours, but I'll be letting it run all night.
By the way, I noticed that in one of your presentations, you noted that
Handler_read_rnd_next is better if it's smaller.  It's at 5.7 million now,
after doing about 900K records.  I'm assuming that's an indication that
there's some inefficiency, but I'm still stumped as to how to address it.
The machine I'm running MySQL on has 1 GB of memory, so I have lots of
headroom to play with.
And since people tend to start asking questions when they realize I'm
indexing large amounts of e-mail, etc., I'll explain a little.  I'm doing
traffic and content analysis to track what's going on in certain large
software development communities.
Thanks!

Nick




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: disabling version number

2003-03-24 Thread Joseph Bueno
Florian Effenberger wrote:
No, why?


Part of my security concept, I generally disable all version numbers.


You can patch mysql source and recompile ;)

However, if someone has enough access rights on your system to run
"select version();", showing mysql version number should be the least
important of your problems.
Regards,
Joseph Bueno


Re: Replication freezing

2003-03-24 Thread Joseph Bueno
Hi,

Have you tried changing the value of 'slave_net_timeout' on the slave
side? By default it is 3600s; if you make it shorter than
10 minutes, I think the slave will start a new read (and
generate some TCP/IP traffic) before the firewall closes the
connection.
Regards,
Joseph Bueno
Andrew L. Davydov wrote:
Hi!

MySQL replication freezes when the MASTER and SLAVE servers are
connected through a firewall and no data is sent for 10 minutes.
The firewall closes the connection because the TCP/IP timeout for
idle connections has expired, but MySQL doesn't know about it.
MySQL can't send keepalive packets to keep the connection up when
no data is being sent to the SLAVE.

It would be a good idea to enable keepalive packets when MySQL is in
replication mode, but how can I do that?




Re: AW: replication on single server

2003-03-24 Thread Joseph Bueno
Hi,

With MySQL, you can join tables that are in different databases as long
as they are on the same server. In your case, you could run:
SELECT s.name FROM A.bouncebacks b, subcriber s where s.email = b.email
Regards,
Joseph Bueno
Brandl, Thomas wrote:
The thing is that this particular table in database 'A' contains
bounceback email addresses, all other databases are customer databases
containing pretty large subscriber tables. Now I need to find matches
between this bounceback table and all other customer subscriber tables.
So I'd like to do a join over these, which is not possible because they
reside in different databases 
(Like SELECT s.name FROM bouncebacks b, subcriber s where s.email =
b.email).
So if I could replicate this table, I could perform the above join.
Right now I find matches between the bounceback and the subscriber lists
by scripting, which is slow, with ca. 10,000 bouncebacks and
up to 15,000 subscribers per customer.

I don't really like this concept, but I can't think of a better one...

Thanks again for suggestions!
Tom 



-----Original Message-----
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Friday, March 14, 2003 19:19
To: Brandl, Thomas; [EMAIL PROTECTED]
Subject: Re: replication on single server
At 10:49 +0100 3/14/03, Brandl, Thomas wrote:

> Hey everybody,

I want to replicate several tables from one database to all other
databases on the same machine; a table-level replication, so to speak.
Simply put, imagine this scenario:
A MySQL server with about 10 databases: A and 1 to 9.
Database A contains a 'special' table that is supposed to be
replicated to all other databases on the same machine, so it is kinda
like the master database. The table has the same name in every
database.
Is such a thing possible and if yes, how?
Thanks very much for your help!

> Thomas
I don't understand why you'd bother.  Put it in one database and grant
everybody permission to access it.  Wouldn't that accomplish the same
thing, without all the headache of trying to copy it everywhere?
--
Paul DuBois, [EMAIL PROTECTED]
http://www.kitebird.com/






List-ID Header

2003-03-23 Thread Joseph Bueno
Hello,

It seems that emails coming from mysql mailing list
don't include 'List-ID: ' header anymore.
Is this going to be fixed or should we change our email
filtering rules ?
Thank you
Joseph Bueno


Re: Mysql And Backup

2003-03-21 Thread Joseph Bueno
Jerry M. Howell II wrote:
sunil sharma said:

Dear Friends

I am new to this list.
I am using MySQL on Linux.
My problem is about backing up MySQL databases.

We have 50 databases, each between 300 MB and 2 GB in size.
I am planning to take incremental backups of the databases.
For that I am planning to use mysqldump, but I am a bit
doubtful about mysqldump because the databases are very large.
I tried it, and it takes a lot of time.

Funny you asked; my question is similar, so I'll
group it with this one. Hope you don't mind.
Hello all,

   I have been using mysqldump -a to back up my clients' sites.
I have discovered a problem with this approach. I decided to
try to restore a dump with mysqlimport on my local hd, and it
restores up to a certain point and then stops. What I'm looking
for is any other recommendations for backing up and restoring 71
databases. Any scripts for backing it all up will of course
be welcome as well. Second, how do I wipe these old
databases off my computer :)
thnx,

Hi,

You should not use mysqlimport to restore a backup made with
mysqldump; use mysql instead.
Check:
http://www.mysql.com/doc/en/mysqldump.html
I am quite surprised that you could restore anything with mysqlimport :(

Regards,
Joseph Bueno
-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)
To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php


Re: Process Limit on Linux ?

2003-03-19 Thread Joseph Bueno
Hi,

You may get a signal 11 because the mysqld process is running out of
virtual memory (instead of physical memory).
If you want to run 1500 simultaneous connections, you have to be very
careful about how mysqld allocates its memory:
since you are on a 32-bit system, a process is limited to 4 GB.
On Linux, these 4 GB are split into 3 GB user space / 1 GB kernel space,
and user space is split into 1 GB code segment / 2 GB data segment.
So you have "only" 2 GB for all mysqld data.
You can get more info on how mysqld allocates memory at:
http://www.mysql.com/doc/en/Memory_use.html
and http://www.mysql.com/doc/en/InnoDB_start.html if you
have InnoDB.
As a rule of thumb,
execute 'show variables',
evaluate:
 key_buffer_size
   + innodb_additional_mem_pool_size
   + innodb_buffer_pool_size
   + innodb_log_buffer_size
   = global_data_size
 thread_stack
   + 2 * net_buffer_length (1 connection buffer, 1 result buffer)
   + record_buffer OR record_rnd_buffer (depends on table access method)
   + sort_buffer_size (if resultset needs to be sorted)
   = per_thread_data_size
and make sure that:
global_data_size + per_thread_data_size * nb_of_connections < 2Gb
Although I have never run mysql with more than ~600 connections,
I have used this formula (a simplified version since I don't use
InnoDB) to allocate a key_buffer as big as possible and make sure
that I won't run out of memory.
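As a sketch, the rule of thumb above can be checked with a few lines of arithmetic. The variable values below are placeholders (read yours from SHOW VARIABLES), and the formula is the post's simplified model, not exact mysqld accounting:

```python
# All sizes in bytes; placeholder values, not real server settings.
global_data_size = (
    256 * 1024 * 1024    # key_buffer_size
    + 8 * 1024 * 1024    # innodb_additional_mem_pool_size
    + 512 * 1024 * 1024  # innodb_buffer_pool_size
    + 8 * 1024 * 1024    # innodb_log_buffer_size
)
per_thread_data_size = (
    192 * 1024           # thread_stack
    + 2 * 16 * 1024      # 2 * net_buffer_length (connection + result buffer)
    + 512 * 1024         # record_buffer (or record_rnd_buffer)
    + 1024 * 1024        # sort_buffer_size
)
nb_of_connections = 600
limit_2gb = 2 * 1024 ** 3

# The check from the post: global + per-thread * connections < 2 GB.
total = global_data_size + per_thread_data_size * nb_of_connections
fits = total < limit_2gb
print(total, fits)
```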
Hope this helps,
Joseph Bueno
Philipp wrote:
Hi Dan,

I just talked about openMosix because I read about
shared memory segments. My only desire is to make mysql
able to spawn 1000-1500 connections and *not* tell
my client "can't create new thread, perhaps out of memory"
while 1.5 GB of RAM is only used for caching. Can it really
be that mysql is not able to handle this number of connections?
My best Regards,
Philipp
On Wed, 19 Mar 2003 10:01:38 -0600
Dan Nelson <[EMAIL PROTECTED]> wrote:

In the last episode (Mar 19), Philipp said:

thank you for your answer. While i was reading about clustering some
weeks ago i read the openmosix FAQ claiming that openmosix would not
work with apache, because apache was using shared memory to
communicate with its threads, and i always thought mysql is designed
the same way. Please someone correct me if i am wrong.
SYSV shared memory (shm*) is a block of memory that one process
creates, and depending on the access flags, multiple processes can
attach to and see each other's changes.
Threads use shared memory by definition, because a threaded application
is still one process.  No SYSV shm tricks are needed.
Openmosix won't be able to balance mysql threads, because even if it
were possible to synchronize shared memory between machines with Mosix,
synching thread mutexes would be horrendously slow.  Better to just get
a multi-CPU box.
--
Dan Nelson
[EMAIL PROTECTED]







Re: Process Limit on Linux ?

2003-03-19 Thread Joseph Bueno
Hi,

Since shm (shared memory) in Unix/Linux is only used to share memory
between independent processes, and the mysql server is a single
(multi-threaded) process, I am convinced that the shmmax value has no
effect on mysql. (OTOH Oracle has a multiprocess architecture and uses
shared memory).
Hope this helps,
Joseph Bueno
Philipp wrote:
Dear Walt, dear List,

thank you for your reply. Finally a suggestion!
I checked both
/proc/sys/kernel/threads-max
/proc/sys/kernel/shmmax
I don't think threads-max will be a problem, because the value is 14336,
and I don't think my system will ever have to handle that many threads.
But researching shmmax on Google I got several hits. Most deal with
postgres, but perhaps it's the same with mysql.
shmmax is 32 MB; on one page the author suggests raising this value to
128 MB.
What are your suggestions for the values:

shmall  shmmax  shmmni ?

Thank you very much,
Yours Philipp
- Original Message -
From: "walt" <[EMAIL PROTECTED]>
To: "Philipp" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, March 17, 2003 6:48 PM
Subject: Re: Process Limit on Linux ?


Philipp wrote:

Hi there,

I wrote several times to the list asking for help with a problem
regarding process limits on Linux, but never got an answer.
Today I found this story:
http://www.mysql.com/press/user_stories/handy.de.html

Here are the relevant sentences:

"We had some process limit problems on our Linux Systems,
but thanks to your support we were able to patch the Linux boxes
and move the limit to a size that meets our needs (we've got an average
of about 1600 concurrent threads per server)."

These people use 2.2 kernels, so I don't know if the mentioned kernel
and glibc patching is also relevant for me, as I am using 2.4 kernels
only.

Here is my problem in detail:

I am using mysql-3.23.55 binary packages on Linux 2.4.20, and I raised
the ulimit values and the configuration in my.cnf to allow more than
1500 threads. But when there are around 750 threads, a newly connecting
client gets something like this (I don't have the errno at the moment;
I think it's 11):

"can't create new thread, perhaps you are out of memory or there is an
OS-dependent bug"

The machine only runs Apache and MySQL and is a 2x2 Xeon at 2.4 GHz
with 2 GB of RAM.
cat /proc/meminfo says that more than 1 GB is used for caching, so
memory should be no problem.

Please, if you have any ideas, let me know. If it is a kernel issue,
tell me to go to the Linux mailing lists, or if it's some kind of
"secret issue" that only support will be able to answer, let me know
that.

Thanks in advance,
Philipp
Philipp,
Did you check /proc/sys/kernel/threads-max? I know with oracle 8i, you
are supposed to increase  /proc/sys/kernel/shmmax as well as some other
values. You might check into that and see if changing those values will
help.
Does your syslog say anything when these problems occur?
walt




-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)
To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php


Re: error

2003-03-18 Thread Joseph Bueno
Hi,

You should run mysqladmin from DOS prompt, not within mysql client.
If you want to create a database from mysql client, you should use:
create database database01;
Regards,
Joseph Bueno
Patrick Geary wrote:
Hi 
I am a new user of Mysql and have received an error.
I enter my C drive using the MS-DOS prompt. I then access the mysql\bin
directory.
I then type "mysql", which welcomes me to the MySQL Monitor.
I then try to create a database called "database01" by typing "mysqladmin
create database01\g" but I get the following error:
  Error 1064: You have an error in your SQL syntax near
'mysqladmin create database01' at line 1.

I would be very grateful if you could get back to me about this as I am
using it for a college project.
Thanks
Patrick Geary




Re: store an array of double to mysql, How?

2003-03-18 Thread Joseph Bueno
Check BLOB data type:
http://www.mysql.com/doc/en/BLOB.html
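One way to store an array of doubles in a BLOB column is to serialize it to a fixed binary layout first. A sketch in Python (struct mirrors what a C program would do with memcpy; the little-endian '<%dd' format here is an assumption that must match on write and read):

```python
import struct

features = [0.5, 1.25, -3.0, 42.0]

# Pack the doubles into a byte string suitable for a BLOB column.
blob = struct.pack("<%dd" % len(features), *features)
assert len(blob) == 8 * len(features)  # 8 bytes per IEEE-754 double

# Unpack after reading the BLOB back from the database.
restored = list(struct.unpack("<%dd" % (len(blob) // 8), blob))
print(restored)  # [0.5, 1.25, -3.0, 42.0]
```

Pinning the byte order (the `<` prefix) matters if the writer and reader may run on machines with different endianness.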
Regards,
Joseph Bueno
Lai wrote:
Hi.

I am writing a program to do pattern recognition. I decided to use mysql to store the calculated features. The features are stored as an array of doubles in my C program.

But now that I think about it, I don't know how. I've searched the internet; nothing really fits my problem.

Is there a way to do this?

Thank you






Re: automatically incrementing an int value

2003-03-12 Thread Joseph Bueno
Hi,

If you want to get 3,1,1 instead of 3,4,5, you should declare
your table this way:
create table virus (
 n int auto_increment not null,
 name char(128) not null,
 primary key(name,n),
 unique(name(100))
);
You can find a very good example in the manual:
http://www.mysql.com/doc/en/example-AUTO_INCREMENT.html
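Why the composite (name, n) key yields 3,1,1 can be illustrated outside MySQL with a small simulation (this is a sketch of the per-group counter behavior, not MySQL itself): each distinct name gets its own sequence starting at 1.

```python
from collections import defaultdict

counters = defaultdict(int)  # one auto_increment counter per name

def insert(name):
    # Mimics MyISAM AUTO_INCREMENT on a secondary column of a
    # composite key: n restarts at 1 for each new name.
    counters[name] += 1
    return counters[name], name

rows = [insert(n) for n in
        ["VBS/LoveLet-E", "VBS/LoveLet-E", "VBS/LoveLet-E",
         "VBS/LoveLet-G", "WM97/Myna-C"]]
print(rows[-3:])  # [(3, 'VBS/LoveLet-E'), (1, 'VBS/LoveLet-G'), (1, 'WM97/Myna-C')]
```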
Regards,
Joseph Bueno
Douglas B. Jones wrote:
Hi,

I understood replace to only increment n when it matches the
name value. There are 122,111 statements, but when you add
up the numbers in the n column, they exceed 122,111. They should
sum up (when you add all them up including the ones I did not
show) to 122,111. If you add up the ones I show, you get way
more than 122,111. I did a little test on:
replace into virus values(NULL,"VBS/LoveLet-E");
replace into virus values(NULL,"VBS/LoveLet-E");
replace into virus values(NULL,"VBS/LoveLet-E");
replace into virus values(NULL,"VBS/LoveLet-G");
replace into virus values(NULL,"WM97/Myna-C");
The results were:

INSERT INTO virus VALUES (3,'VBS/LoveLet-E');
INSERT INTO virus VALUES (4,'VBS/LoveLet-G');
INSERT INTO virus VALUES (5,'WM97/Myna-C');
I would have expected 3,4,5 to be 3,1,1. I was expecting it to start
from zero each time it got a new name, it looks like it takes the last
n value and then start from there with the new name. Does this make
sense what I am asking? Is there a way of doing what I want? Yes, I
could
insert and do a count, but I was looking for another way so that the
select would not be so resource expensive.
Thanks,
Cheers,
Douglas
-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 12, 2003 11:29 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Douglas B Jones
Subject: RE: automatically incrementing an int value

At 10:22 -0500 3/12/03, Douglas B. Jones wrote:

Hi,

I just tried the below:

create table virus (
   n int auto_increment not null,
   name char(128) not null,
   primary key(n),
   unique(name(100))
);
with a data file that has 122,111 sql commands like:

replace into virus values(NULL,"VBS/LoveLet-E");
replace into virus values(NULL,"VBS/LoveLet-E");
replace into virus values(NULL,"VBS/LoveLet-E");
replace into virus values(NULL,"VBS/LoveLet-G");
replace into virus values(NULL,"WM97/Myna-C");
replace into virus values(NULL,"VBS/LoveLet-G");
replace into virus values(NULL,"WM97/Myna-C");
replace into virus values(NULL,"VBS/LoveLet-G");
replace into virus values(NULL,"VBS/LoveLet-G");
replace into virus values(NULL,"W32/Sircam-A");
Now when I do a:

grep VBS/LoveLet-G sqlfile | wc

I get:

  123   492  6027

123 entries for VBS/LoveLet-G in the file. When I do a mysqldump of
the data file and just grep for VBS:
mysqldump virus|grep VBS

INSERT INTO virus VALUES (3,'VBS/LoveLet-E');
INSERT INTO virus VALUES (111009,'VBS/LoveLet-G');
INSERT INTO virus VALUES (55841,'VBS/Stages-A');
INSERT INTO virus VALUES (121521,'VBS/LoveLet-AS');
INSERT INTO virus VALUES (1208,'VBS/SST-A');
INSERT INTO virus VALUES (85602,'VBS/VBSWG-X');
INSERT INTO virus VALUES (1215,'VBS/VBSWG-Z');
INSERT INTO virus VALUES (5846,'VBS/LoveLet-CL');
INSERT INTO virus VALUES (5996,'VBS/VBSWG-Fam');
INSERT INTO virus VALUES (83835,'VBS/Haptime-Fam');
INSERT INTO virus VALUES (55356,'VBS/LoveLet-F');
INSERT INTO virus VALUES (55546,'VBS/FreeLinks');
INSERT INTO virus VALUES (91207,'VBS/Kakworm');
INSERT INTO virus VALUES (117623,'VBS/Redlof-A');
As you can see, the numbers (n field) are way too high. Is this a bug


Why do you say that?  You indicated that the data file has 122,111
statements
in it.  I see no values for the n column that are larger than that
value.
REPLACE will increment the value of n when you specify a value of NULL
for that column.  It's behaving as it's supposed to.  Perhaps you should
use INSERT instead of REPLACE and process the file with mysql --force
to ignore duplicate key errors.  Specify the statements like this:
INSERT INTO virus (name) VALUES('VBS/xxx');


in mysql or in the sql? Even if I say unique(name) instead of
unique(name(100)),
I get the same results. Please note that I have tried destroying the
table
as well as the db, still get the same results.
Any ideas?

Thanks,
Cheers,
Douglas





Re: list current db

2003-03-12 Thread Joseph Bueno
select database();

Regards,
Joseph Bueno
Jonathan Li wrote:
A few days ago I asked about how to know which database I am currently
in, I got an answer and I also replied to thank the send but the email
was returned. Any way I forgot to document it. I remember the answer
listed three methods to do it, they are "mysql> \s", "mysql> status".
Could you (or someone) point me the third method?

Thanks in advance
Jonathan


Re: Process pb

2003-03-09 Thread Joseph Bueno
Hi,

MySQL server is single process, multi-threaded, and creates one thread
per client connection (more details in the manual:
http://www.mysql.com/doc/en/MySQL_threads.html)
What you see happens only if you run a single client at a time.
Is it your case ?
Regards,
Joseph Bueno
Julien Metayer wrote:
Hi everybody

I have a Redhat 8 box with mysql 3.23 max (binary). My problem is simple: I have
a bi-Xeon server, and mysql only launches one Linux
process. Under high load, the process uses 100% of one processor only
instead of spreading work across the other processors. I have not found
how to make mysql launch more than one child process at startup, or
how to make mysql launch more processes when needed.
Please excuse my bad english :)

Julien

--

Julien METAYER
Administration système & réseau - Gestion de projet
SOPRANE - 36a Rue des 20 toises - 38950 Saint Martin le Vinoux
Tel : 04 38 02 39 10 - Fax : 04 38 02 39 39 - GSM : 06 70 61 31 42
"He who wants to do something has against him: those who would have wanted
to do the same, those who would have wanted to do the opposite, and the
vast majority of those who never do anything"





Re: Fwd: data-retrieving udf

2003-03-05 Thread Joseph Bueno
hi,

There is an example (udf_example.cc) in the 'sql' sub-directory of
the mysql sources.
You can code your functions in C++, but you must declare your UDF entry
points as 'extern "C"'.
Regards,
Joseph Bueno
Maurizio Oristanio wrote:
hi everybody,

I'm trying to add a UDF to mysql that itself accesses the database.
Has anybody ever tried that?
I want a function to dynamically create a database, symbolic links and
tables.
I thought it could be as easy as just mixing up a "normal" UDF with my
table-creating code,
but when I try to create the function mysql says: errno: 2
/usr/lib/myso.so: undefined symbol _Z10mysql_initP8st_mysql
(it works fine if I pack it into a main function and compile/link without
the -shared option)
Or is it the wrong library? (-L'/usr/lib/mysql' -lmysqlclient)

Or is it just evil to load a shared client into the server? :-)

...btw, does anyone have an example of a working UDF written in C++? I
had to write everything in C :-|
nobab
northern baltic blue gmbh
[EMAIL PROTECTED]
www.nobab.com
Stammhaus: Maklerstraße 6 - 24159 Kiel
Telefon 0431 33 983 0
Niederlassung West: Siemensring 27 - 47877 Willich
Telefon 02154 48129 00
Büro Haiger (im Hause Pracht): Hansastr. 10 - 35708 Haiger
Niederlassung Südwest (im Hause DPD): Carl-Benz-Straße 15 - 71634 
Ludwigsburg
Telefon 07141 300371 980
Büro Freising bei München (im Hause trans-o-flex): Adolf-Kolping-Str. 10 
- 85356 Freising
Zentralfax 0431 33983 20





Re: Load Balanced Mysql Cluster

2003-03-05 Thread Joseph Bueno
Hi,

We had a similar problem and we have taken a different approach:
Instead of having dedicated slave servers, we configure a slave mysql 
server on each webserver. Each webserver connects only to 'localhost'
slave mysql server. Each slave server only deals with local connections 
(200 in your case) and it doesn't increase when we add more webservers.

With this setup, you only deal with load-balancing at HTTP level which
is much easier than SQL load-balancing.
Hope this helps
Joseph Bueno
Jared Saul wrote:
 I have a database heavy site that utilizes a master mysql server which 
processes all updates and replicates to 4 slave servers that handle the 
queries.   Right now I have a simple perl randomized selection that 
determines which of the four database slaves will receive any given 
query.  This works but is very limited and not fault tolerant at all.  
In particular I have periodic (and unexplained) server load spikes and 
continued requests to a spiking server results in a rapid site wide 
downward spiral of hung connections another topic for another post.  
But even a simple hardware failure would not be tolerated in the current 
scheme.

Anyway, my hope was to utilize one of our Alteon Ace Director 3 load
balancers (AD3) to perform intelligent database load balancing. I have
found that this doesn't work in an obvious manner, with the following
complications:

1) There are no suitable health checks, and TCP health checks result in
many half-open connections (they seem to hang on authentication) that
ultimately max out the connections on the DB servers. ICMP works better
but also causes sporadic hung connections and other bizarre behaviors.
I have considered scripted HTTP health checks against a web page running
a script that monitors the database, but that seems convoluted.

2) The use of persistent database connections (via mod_perl) becomes a
nightmare, with each server receiving (total_apache_processes) x (number
of apache servers) persistent connections. With 3 apache servers running
100-200 processes each, each of the 4 mysql servers maintains 300-600
persistent connections. This quickly gets out of control. Note, this
issue exists regardless of load balancing methodology.

Well, I've contemplated this situation for months and it's really 
driving me rather nuts.  Any help is much appreciated.

Thanks-



Jared







Re: Problem with registers import

2003-02-27 Thread Joseph Bueno
Have you by any chance defined a unique index on a SMALLINT field?
In this case have a look at:
http://www.mysql.com/doc/en/Numeric_types.html
Regards
Joseph Bueno
Paulino Michelazzo wrote:
People

I have a txt file with 250,000 lines, but only 32,767 lines get imported.
The database doesn't report any errors.

I'm using MySQL 3.23.53 on the Slackware Linux 8 system.

Can anyone help me solve this?

Regards


Paulino Michelazzo
[EMAIL PROTECTED]
ICQ: 2911392
NASA = Need Another Seven Astronauts






Re: Replication Questions 2

2003-02-02 Thread Joseph Bueno
Hi,


Guddack Thorsten ICM MP SCM GO 21 wrote:

Hi List,

again some questions about replication in mysql:

I test the replication with following setup:

1 Server in Europe as the master, 2 servers in Asia and south america as
slaves.

Both slaves replicate just fine.
Every insert, update or delete is replicated correctly, even with binary
data stored in blobs.
But if I try to stop the slaves with STOP SLAVE,
it takes a very long time until the shell comes back.
Any ideas about that?

No idea.



Next Questions:
The connection to Asia is very slow. Is there a way to find out when the
slave has replicated all the data?



You can run "SHOW MASTER STATUS" on the master, "SHOW SLAVE STATUS" on the
slave and compare 'Log_File' and 'Pos' fields.


Thanks

thorsten guddack
	



Regards,
Joseph Bueno






Re: # of connected user

2003-02-01 Thread Joseph Bueno
SHOW PROCESSLIST
Check http://www.mysql.com/doc/en/SHOW_PROCESSLIST.html for details

Hope this helps,
Joseph Bueno


Mustafa Yelmer wrote:

How do I list the users currently connected (active) to a mysql server?

It is important for me to know the connected users.
Mysql runs in a client-server setup, and each client connects from a
different host (I assume).


Mustafa Yelmer
Software Engineer
+90 212 2128200-1228
[EMAIL PROTECTED]









Re: partitionong MyISAM tables??

2003-02-01 Thread Joseph Bueno
Hi,

Have you looked at MERGE tables (http://www.mysql.com/doc/en/MERGE.html) ?

Hope this helps,
Joseph Bueno

Prasanth Krishna wrote:

Is there any way to partition MyISAM tables in mysql? I have a huge
table and want to partition it.
Do InnoDB tables support partitioning?
thanks.
Prasanth







Re: TIMESTAMP field is updated unintentionally

2003-01-31 Thread Joseph Bueno
Hi,

It is a feature: the first TIMESTAMP field is automatically
updated each time you update the record. Check the manual
for details:
http://www.mysql.com/doc/en/DATETIME.html

If you want mysql to automatically set it at creation time only,
your workaround is OK.

You can also convert ERSTELL_DATUM to the DATETIME type and set it
explicitly to NOW() when you insert a new record.

Hope this helps
--
Joseph Bueno

Marco Deppe wrote:

Hi,

I was already questioning my sanity, but the problem below is
reproduceable:

This is how my table looks:
mysql> describe T_ORDH;
+---------------+----------------------+------+-----+---------+----------+
| Field         | Type                 | Null | Key | Default | Extra    |
+---------------+----------------------+------+-----+---------+----------+
| PK_ID         | int(10) unsigned     |      | PRI | NULL    | auto_inc |
| ERSTELL_DATUM | timestamp(14)        | YES  |     | NULL    |          |
| STATUS        | smallint(5) unsigned |      |     | 0       |          |
+---------------+----------------------+------+-----+---------+----------+

If I do
mysql> update T_ORDH set STATUS=2 where PK_ID=26272;
ERSTELL_DATUM is set to the current date. I know that a timestamp
takes the current time if I set it to NULL, but since I'm not touching
it, it shouldn't change, should it?

A quick workaround is
mysql> update T_ORDH set STATUS=2, ERSTELL_DATUM=ERSTELL_DATUM
-> where PK_ID=26272;

The big question: Is it a bug or a feature?
(mysql  Ver 11.18 Distrib 3.23.51, for pc-linux-gnu (i686))

  







Re: [ mysqlimport ]

2003-01-31 Thread Joseph Bueno
Hi,

You should not use mysqlimport but mysql:
mysql my_db < file.sql

See http://www.mysql.com/doc/en/mysqldump.html
for details.

Regards,
Joseph Bueno

Elby Vaz wrote:

I created a "file.sql" with mysqldump.

mysqldump my_db my_table > file.sql

What I do to get this file with the mysqlimport?

mysqlimport ??

Thanks,
e.





_
MSN Messenger: converse com os seus amigos online.  
http://messenger.msn.com.br









Re: file-descriptor limits for linux

2003-01-29 Thread Joseph Bueno
Hi,

I think you misunderstood the meaning of /proc/sys/fs/file-max: it IS
an overall machine limit, not a per-process one.
From the Linux kernel documentation
(/usr/src/linux/Documentation/sysctl/fs.txt):
"file-max & file-nr:

The kernel allocates file handles dynamically, but as yet it
doesn't free them again.

The value in file-max denotes the maximum number of file-
handles that the Linux kernel will allocate. When you get lots
of error messages about running out of file handles, you might
want to increase this limit.

The three values in file-nr denote the number of allocated
file handles, the number of used file handles and the maximum
number of file handles. When the allocated file handles come
close to the maximum, but the number of actually used ones is
far behind, you've encountered a peak in your usage of file
handles and you don't need to increase the maximum."

Whatever value you set with ulimit, you won't get any error
message until the system actually reaches the file-max limit.
It will then refuse to open any new files (not just those coming
from mysqld).

Hope this helps,
Joseph Bueno


Martin Waite wrote:

Hi,

Does anyone know what the story is for file-descriptor limits
on Linux ?

I read in

http://www.xenoclast.org/doc/benchmark/HTTP-benchmarking-HOWTO/node7.html

that root needs to set /proc/sys/fs/file-max to a high value 
in order that "ulimit -n " will work.  

However, on a Debian Woody box (2.4 kernel), this doesn't seem to 
be necessary.  As root, I can set the value as high as I want 
(up to about 63000) regardless of the value in /proc/sys/fs/file-max.

Also, these are per-process limits.  What is the overall machine 
limit - assuming there is one ?

These questions are related to MySQL, SQL, etc. Honest.

==
Martin









Re: how to get the Nth record of a result ?

2003-01-27 Thread Joseph Bueno
Hi,
SELECT * FROM table WHERE <condition> LIMIT N,1
More details at:
http://www.mysql.com/doc/en/SELECT.html
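
As a concrete sketch (table and column names are hypothetical): LIMIT offsets are 0-based, so the Nth row corresponds to offset N-1, and an ORDER BY is needed for "Nth" to be well defined:

```sql
-- Fetch the 5th row (offset 4) in id order:
SELECT *
FROM my_table
ORDER BY id
LIMIT 4, 1;
```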

Regards,
Joseph Bueno

alx wrote:

HI all
I'm trying to get the Nth record of a query where I know nothing
but the table name.

I mean something like select * from table where  . "give
me the Nth result";

I hope someone can help me.

TIA
Bye
Alx










Re: insert query

2003-01-24 Thread Joseph Bueno
Are you using a field of type CHAR or VARCHAR ?
You should use a TEXT field instead.
According to the documentation (http://www.mysql.com/doc/en/BLOB.html) :
"There is no trailing-space removal for BLOB and TEXT columns when values
are stored, as there is for VARCHAR columns."
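
A minimal sketch of the fix, assuming a hypothetical table t with a VARCHAR column c:

```sql
-- VARCHAR strips trailing spaces on storage; TEXT does not.
ALTER TABLE t MODIFY c TEXT;
INSERT INTO t (c) VALUES ('   ');  -- the three spaces are preserved
```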

Hope this helps,
-- 
Joseph Bueno

Murthy wrote:
> how to insert only blank spaces into a field using insert query. Its getting
> trimmed off.
> 
> 
> Regards,
> Murthy
> 






Re: Recreating indexes on large tables

2003-01-20 Thread Joseph Bueno
Hi,


Instead of using separate "CREATE INDEX" statements, you can build all
your indexes at once with "ALTER TABLE":
ALTER TABLE my_table
  ADD INDEX ...,
  ADD INDEX ... ,
  ADD INDEX ... ;
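
For example (index and column names are hypothetical, not from the original post), a single statement builds all indexes in one pass over the table instead of one rebuild per index:

```sql
-- One table rebuild instead of three:
ALTER TABLE transactions
  ADD INDEX idx_date (trans_date),
  ADD INDEX idx_account (account_id),
  ADD INDEX idx_amount (amount);
```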

Hope this helps,
-- 
Joseph Bueno

Salvesen, Jens-Petter wrote:
> Hello, everyone
> 
> I have the following situation:
> 
> After "enjoying" problems related to deleting a large portion of a table,
> subsequent slow selects and such, I decided to do an alternate route when
> removing data from a table:
> 
> The table had transactions for one year, and the table really only needs
> transactions for 6 months. So, I did a mysqldump without table def that
> extracted the data I really want. Then, I dumped the table, created a new one
> without indexes, and loaded the data. All fine. 64 million or so records.
> Having read a bit about mysql and used it for a few years, I understood that
> first loading data, and then creating indexes is faster than vice versa. All
> fine, still. Then, I started creating indexes. First, it seems that mysql
> makes a complete copy of the table, and then in reindexes it, and then drops
> the original table, replacing it with the newly indexed table. Is this
> correct? If so, how do I make sure mysql makes one copy, and then performs
> all the reindexing on that table, rather than making a temp table for each
> index? Obviously, 64 million rows will take days no matter, but how do I
> really speed this up? My current hypothesis is that the temp table copy
> operation is my bottleneck, and that I should only _need_ to perform this
> operation once.
> 
> [mysqld]
> port  = 3306
> socket= /var/lib/mysql/mysql.sock
> skip-locking
> datadir = /data/mysql
> set-variable  = key_buffer=384M
> set-variable  = max_allowed_packet=1M
> set-variable  = table_cache=512
> set-variable  = sort_buffer=2M
> set-variable  = record_buffer=2M
> set-variable  = thread_cache=8
> # Try number of CPU's*2 for thread_concurrency
> set-variable  = thread_concurrency=8
> set-variable  = myisam_sort_buffer_size=3000M
> set-variable= myisam_max_sort_file_size=16000M
> set-variable= myisam_max_extra_sort_file_size=2500M
> log-bin
> server-id = 1
> 
> is the relevant portion of the my.cnf file.
> 
> The table is MyISAM, and the mysql version is 3.23.41-log from red hat 7.2
> (yes, I know, 7.3 is better).
> 
> We are performing the reindexing as a series of create index calls.
> 
> Please do not hesitate if you have further comments.
> 
> Thanks in advance!
> 
> Mvh / Best Regards
> 
> Jens-Petter Salvesen
> Lead Developer, Risk Management, Europay Norway
> 
> Phone : +47 2332 5119
> Mobile : +47 9829 7319
> 
> 







Re: Perl DBI secret command?

2003-01-18 Thread Joseph Bueno
No conspiracy, read it again ! ;)
http://www.perldoc.com/cpan/DBI.html#rows

Joseph Bueno

Jeff Snoxell wrote:
> Hi,
> 
> just stumbled across the following piece of code:
> 
> my $rowcount = $sth->rows();
> 
> And I checked it out compared to a count of the fetched rows from an SQL
> QUERY. AND It matched exactly!
> 
> BUT, I can't find the command "rows()" in the DBI documentation.
> 
> What's the story? Have I unearthed a conspiracy?
> 
> 
> Jeff
> 
> 
> 







Re: MySQL Server Crashes under heavy load

2003-01-17 Thread Joseph Bueno
We have a site with a similar architecture:
- 6 load balanced front-ends
- 1 dedicated database server (Dual P3 1.4GHz, 2Gb RAM, RH7.2)
We serve around 3 million pages/day (all pages are dynamically generated;
each page needs an average of 15 SQL queries).

What we have done:
- audit ALL SQL queries and make sure that they are optimised (all selects
  use indexes,...)
- optimise mysqld configuration (tune key_buffer_size, ..)
- setup replication : each front-end is a mysql slave server and replicate
  most used tables.
- modify the site so that heaviest SELECT queries are run by each front-end
  on local slave.
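
The replication step above can be sketched as follows (MySQL 3.23 syntax; host and credentials are placeholders, not from the original post). The master also needs log-bin and a unique server-id in my.cnf:

```sql
-- On each front-end slave, point at the master and start replicating:
CHANGE MASTER TO
  MASTER_HOST='db-master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret';
SLAVE START;  -- MySQL 3.23 syntax (later versions use START SLAVE)
```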

Result:
- on main database server: 300 queries/s average (~700 q/s peak)
   load : 0.2 average, ~0.7 peak
- on each front-end : 30 q/s average (60 q/s peak)

We could run more requests on the slaves but since the master server load
is so low, we have postponed those optimisations.

I think you really should audit your queries first. From my experience and what
other users have reported on this list, you should expect to be able to
run several hundred queries/s with the kind of hardware you are
using.

Hope this helps
-- 
Joseph Bueno


Chavvon Smith wrote:
> We are hosting a high volume site that gets about 1 million page views a day
> on RedHat 7.3.  We currently have 3 load balanced servers on the front end
> accessing a MySQL server on the back end.  The MySQL servers is dual P3 1ghz
> with 1 GB of RAM and when the MySQL queries hit about 50 per second, the DB
> crashes and the servers is useless unless you reset the DB.  Memory is only
> at about 50% usage, but the CPU skyrockets to 100%.
> 
> The only solution we can think of is to throw a huge server at the backend
> (i.e. 4-8 processor Compaq 8500) and keep RH 7.3 or switch to Windows 2000
> Advanced Server and cluster a few dual P3 servers together.
> 
> Any other solutions to make MySQL handle a high volume site?
> 
> CS
> 






Re: Loading a database into RAM

2003-01-14 Thread Joseph Bueno
From my experience: don't do anything.
The Linux filesystem cache is really efficient, and as soon as all your
database is in the cache, you won't get any disk reads anymore; only
writes when you INSERT/UPDATE data.

Our main database server has 2Gb RAM and the total size of all databases
is around 1.2 Gb. It is currently serving more than 300 queries/s and 'vmstat 1'
output is:

   procs  memoryswap  io system cpu
 r  b  w   swpd   free   buff  cache  si  sobibo   incs  us  sy  id
 0  0  0  0  83044  22216 1189764   0   0 0 01 0   1   1   1
 0  0  0  0  83044  22216 1189764   0   0 0   312  633   613   1   3  96
 0  0  0  0  83044  22216 1189768   0   0 0 0  927   928   6   6  88
 1  0  0  0  83044  22216 1189768   0   0 0 0  853   910   7   4  89
 0  0  0  0  83044  22216 1189768   0   0 0 0  625   672   0   1  99
 0  0  0  0  83044  22216 1189768   0   0 0 0 1067  1106  13   5  83
 0  0  0  0  82788  22216 1189768   0   0 092  952   935  11   3  85
 0  0  0  0  82788  22216 1189768   0   0 0 0  861   878   4   6  90
 0  1  0  0  82788  22216 1189832   0   0 0   136 2811  4834   8   9  83
 0  0  0  0  82788  22216 1189832   0   0 0 0  822   833   5   4  91
 0  0  0  0  82788  22216 1189832   0   0 0 0  822   877   2   2  96
 0  0  0  0  82788  22216 1189832   0   0 0   324 1036  1055   4   4  92
 0  0  0  0  82788  22216 1189836   0   0 0 0 1059  1124   7   3  90
 0  0  0  0  82788  22216 1189836   0   0 0 0 1115  1137   7   4  89
 2  0  0  0  82548  22216 1189836   0   0 0 0  975  1099  28  13  59
 1  0  0  0  83100  22216 1189840   0   0 0 0 1359  1768  25  12  63

As you can see, no disk reads (bi) and a few disk writes (bo)

Hope this helps,
Joseph Bueno

Steve Quezadas wrote:
> I have a mySQL database that is about 240 megabytes. I am loading it on
> a Linux server with 2 gigs of RAM. I would like to have the whole table
> reside in memory to save time from disk access. Is there any way to load
> the tables into RAM on startup? I am thinking about creating a heap
> table, but I need the heap table to be loaded when mysql gets loaded. Is
> there anyway to do this? Or perhaps it is best to put the table in a RAM
> disk or something?
> 
> Anyone know a generally recommended solution?
> 
> -Steve
> 
> 
> 







Re: MySQL Select SUM Function

2003-01-13 Thread Joseph Bueno
SELECT SUM(colA+colB) FROM TABLE ?
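
One caveat worth noting: if either column can be NULL, the whole expression is NULL for that row and it is skipped by SUM. A sketch with hypothetical table and column names:

```sql
-- NULL-safe variant: treat NULL as 0 so every row contributes.
SELECT SUM(COALESCE(colA, 0) + COALESCE(colB, 0)) FROM my_table;
```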

Stevens, Ryan wrote:
> I understand the SQL statement in MySQL to be "SELECT sum(column name) FROM
> table;" but I would like to get a SUM of multiple columns.  Is this possible??
> 
> Thanks,
> 
> Ryan
> 
> 






Re: The diference about raw devices and disk partitions

2003-01-13 Thread Joseph Bueno
Hi,

Since you don't specify your OS, I assume you are using Linux;
if you are using another UNIX like system (BSD, Solaris,...),
my explanations below may be useless :(

A disk partition (such as /dev/sda1) is a "block device"; this is not the
same as a "raw device". You can get more info in the Linux HOWTO:
http://www.linux.org/docs/ldp/howto/SCSI-2.4-HOWTO/rawdev.html

The main difference is that a "block device" is accessed through
the filesystem cache while accesses to a "raw device" bypass
the cache.

However, I have not yet tried raw devices with MySQL/InnoDB, so
I can't give you more MySQL specific advice.

Hope this helps,
Joseph Bueno
NetClub

Dyego Souza do Carmo wrote:
> 
> I'm confused , what is the difference and raw device and a disk
> partition ?
> 
> ex: on innodb I'm using /dev/sda1 for raw device, is it correct ?
> 
> 
> tnks.
> 
> 
> 
> 
> 






Re: Ram Usage, and processes

2003-01-11 Thread Joseph Bueno
> | low_priority_updates| OFF |
> | lower_case_table_names  | 0   |
> | max_allowed_packet  | 1047552 |
> | max_binlog_cache_size   | 4294967295  |
> | max_binlog_size | 1073741824  |
> | max_connections | 300 |
> | max_connect_errors  | 10  |
> | max_delayed_threads | 20  |
> | max_heap_table_size | 16777216|
> | max_join_size   | 4294967295  |
> | max_sort_length | 1024|
> | max_user_connections| 0   |
> | max_tmp_tables  | 32  |
> | max_write_lock_count| 4294967295  |
> | myisam_max_extra_sort_file_size | 256 |
> | myisam_max_sort_file_size   | 2047|
> | myisam_recover_options  | 0   |
> | myisam_sort_buffer_size | 67108864
> | net_buffer_length   | 523264  |
> | net_read_timeout| 30  |
> | net_retry_count | 10  |
> | net_write_timeout   | 60  |
> | open_files_limit| 0   |
> | pid_file| /serve/mysql//fbi.pid   |
> | port| 3306|
> | protocol_version| 10  |
> | record_buffer   | 2093056 |
> | record_rnd_buffer   | 2093056 |
> | query_buffer_size   | 0   |
> | safe_show_database  | OFF |
> | server_id   | 1   |
> | slave_net_timeout   | 3600|
> | skip_locking| ON  |
> | skip_networking | OFF |
> | skip_show_database  | OFF |
> | slow_launch_time| 2   |
> | socket  | /tmp/mysql.sock |
> | sort_buffer | 2097144 |
> | sql_mode| 0   |
> | table_cache | 512 |
> | table_type  | MYISAM  |
> | thread_cache_size   | 8   |
> | thread_stack| 65536   |
> | transaction_isolation   | READ-COMMITTED  |
> | timezone| GMT |
> | tmp_table_size  | 33554432|
> | tmpdir  | /tmp/   |
> | version | 3.23.52-log |
> | wait_timeout| 28800   |
> +-+-+
> 
> -end show variables output ---
> 
> Just a side question. I noticed in the show variables output the variable
> have_raid? I do have the data directory stored on an external raid device,
> however the variable is set to NO. Will this cause me any problems...

have_raid has nothing to do with hardware RAID. It was mainly used to
break the 2Gb-per-file limit of some filesystems by spreading the data
of a MyISAM table across several files. This option is not very useful
nowadays since the 2Gb limit has been removed in all recent OSes.

> 
> 
> Thank you all so much for your help, MySQL is a great app, and I 
> apreciate all the hard work you have done...
> 

Thanks, but I haven't done that much, I am just a user ;)

> 
> 
> -James...
> 
> 

Hope this helps
Joseph Bueno






Re: QUERY question (group by) - please help

2003-01-11 Thread Joseph Bueno
Damir Dezeljin wrote:
> Hi again ;)
> 
> 
>>SELECT COUNT(*)
>>  FROM test
>>  WHERE (kid=1) OR (kid=2) OR (kid=4)
>>  GROUP BY cid,aid;
> 
> This query returns:
> +--+
> | COUNT(*) |
> +--+
> |1 |
> |1 |
> |2 |
> |1 |
> |1 |
> |1 |
> +--+
> 6 rows in set (0.03 sec)
> 
> Instead of this I want to get back '6', i.e. the number of rows in the
> resulting query;
> 
> The same query WITHOUT 'GROUP BY' returns:
> mysql> SELECT COUNT(*) FROM test WHERE (kid=1) OR (kid=2) OR (kid=4);
> +--+
> | COUNT(*) |
> +--+
> |7 |
> +--+
> 1 row in set (0.00 sec)
> 
> So two rows are grouped because of the same cid,aid
> 
> Any sugestion?
> 
> Regards,
> Dezo
> 
> 

SELECT COUNT(DISTINCT cid,aid)
  FROM test
  WHERE (kid=1) OR (kid=2) OR (kid=4)

Should give you the number of groups.

Hope this helps
Joseph Bueno






Re: QUERY question (group by) - please help

2003-01-11 Thread Joseph Bueno
Damir Dezeljin wrote:
>>>
>>>SELECT 
>>>FROM test
>>>WHERE (kid=1) OR (kid=2) OR (kid=4)
>>>GROUP BY cid,aid;
>>>
>>>Is it possible to get the number of rows with such a query from MySQL v3.23.49?
>>
>>You want both a list of rows, and a count of the number of rows -- with
>>a single query.  That doesn't make sense, because those two things are
>>incompatible.
> 
> I made a mistake by not specifying my problem in more detail. I will try to
> be more specific:
> 
> I'm writing my own search engine for my web site. I want to output for
> each result on each page (20 results per page) the no_of_current_result /
> no_of_total_results . So with a first query I want to get a count of the
> number of rows. Then I will execute a query with LIMIT  to get
> results for the current page.
> 
> Is it possible to get the number of all results without getting all results
> in a temporary table and then counting them? If yes, how?

SELECT COUNT(*)
  FROM test
  WHERE (kid=1) OR (kid=2) OR (kid=4)
  GROUP BY cid,aid;

Should do it, or am I missing something ?

> 
> Thanks and regards,
> Dezo
> 
> 


Hope this Helps
Joseph Bueno






Re: Why "unsigned" doesn't work?

2002-12-29 Thread Joseph Bueno
Octavian Rasnita wrote:
> Hi all,
> 
> I've tried the following SQL line in MySQL 4.05 for Windows and it told me
> that there is an error starting from "unsigned...".
> 
> mysql> create table aaa(id int not null unsigned, name text);
> 
> Can you tell me why doesn't it work?
> 

Sure ! Your syntax is wrong. It should be:
create table aaa(id int unsigned not null, name text);

Please see the manual for more details:
http://www.mysql.com/doc/en/CREATE_TABLE.html

> Thank you.
> 
> Teddy,
> Teddy's Center: http://teddy.fcc.ro/
> Email: [EMAIL PROTECTED]
> 
Regards,
Joseph Bueno






Re: need index date help

2002-12-21 Thread Joseph Bueno
Instead of:
TO_DAYS(adate) >= TO_DAYS(NOW()) - 6
you can try:
adate >= unix_timestamp(now()) - 6 * 24 * 3600
This way, you don't need to apply a function to 'adate'.

You should also note that these expressions are not exactly
equivalent since yours compares day numbers while mine compares
seconds.
If you really need to work on day numbers, it would
be more efficient to use a separate column where you explicitly
insert TO_DAYS(NOW()). This way you can index it and use it
efficiently in SELECTs.
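
The same idea can be sketched with MySQL's interval arithmetic (the table name is hypothetical); the key point is that the indexed column stays bare on the left-hand side of the comparison, so an index on adate can be used:

```sql
-- Sargable: the constant is computed once, and adate is not
-- wrapped in a function.
SELECT COUNT(*)
FROM access_log
WHERE adate >= DATE_SUB(NOW(), INTERVAL 6 DAY);
```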

Hope this helps
Joseph Bueno

John Hinton wrote:
> OK.. no takers the first time... I'll try to give more/better
> information..
> 
> I am running into a system wall here. I have at the moment about 2600
> rows of data totaling 650K. I expect this to grow at a rate of about an
> additional 1200-1500 rows per week. I am using PHP to format the returns
> into webspace.
> 
> I have a field named 'adate' which is a mysql 14 character timestamp
> (yes, I need HHMMSS data for other stuff). I am creating an array based
> on a distinct return from the database. I then am in turn looping
> through that array of about 25 entries, (which will remain at about 25
> with time) and running each through 10 queries all based on date.
> The queries are really only two, with the exception of choosing separate
> intervals of time to return, one having distinct fields parsed, the
> other all rows parsed. The following are the two snippets of code which
> get repeated five more times with only the time interval changed.
> 
> $table, $user_net are PHP variables and $page[$i] is the array of 25
> entries.
> 
> SELECT TO_DAYS(adate), mask FROM $table WHERE mask NOT LIKE '$user_net'
> AND page LIKE '$page[$i]' AND TO_DAYS(adate) >= TO_DAYS(NOW()) - 6
> SELECT adate, mask FROM $table WHERE mask NOT LIKE '$user_net' AND page
> = '$page[$i]' AND TO_DAYS(adate) >= TO_DAYS(NOW()) - 6
> 
> All I need is the count from each query.
> 
> So, these ten queries are being run 25 times on 2600 rows of data and it
> is taking about 4-6 seconds. I plan to collect data up to a limit of
> about 70,000 rows. If I can expect the query time to grow linearly, it
> would take about 2 minutes to generate this data. I need to get that
> down to maybe 15 seconds  or as little as possible.
> 
> I have indexed 'adate', but don't think the index really works within
> the functions? Maybe I'm stuck thinking inside of a box here? Perhaps
> there is one blindingly great solution which I have not considered. This
> is the first time I have ever created anything that really taxed a
> system... therefore I am new at thinking in many of these terms. Perhaps
> I should be rolling the data off into a temp file or something and
> running the results using PHP? I really don't know what direction to
> take, but I do see what appears to be a lot of repeating work, with only
> little changes in time chunks. 
> 
> Should I perhaps create a 'date' field, grabbing only MMDD and
> working from there? What am I not thinking about here?
> 
> Any suggestions are very much welcome.
> 
> John Hinton - Goshen, VA.
> http://www.ew3d.com
> 
> Those who dance are considered insane 
> by those who can't hear the music
> 






Re: Storage Requirements Not Equal to Actual Disk Space Used...

2002-12-19 Thread Joseph Bueno
mysql needs a few more bytes per record for its internal use.
Since you use a VARCHAR, you have dynamic-length rows;
this is described in the manual:
http://www.mysql.com/doc/en/Dynamic_format.html
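
To see the actual per-row overhead rather than computing it by hand, SHOW TABLE STATUS is handy (a sketch; the table name is hypothetical):

```sql
-- Data_length is the real on-disk size; Avg_row_length includes the
-- dynamic-row overhead. OPTIMIZE TABLE reclaims space left by
-- deleted rows.
SHOW TABLE STATUS LIKE 'my_table';
OPTIMIZE TABLE my_table;
```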

Regards,
Joseph Bueno

Andrew Kuebler wrote:
> I have a table with 17,168,035 records. I have the following column
> types and I read about the following storage requirements for each
> column:
> 
> (1) INT Column- Should take up 4 bytes each
> (1) MEDIUMINT Column  - Should take up 3 bytes each
> (1) DATE Column   - Should take up 3 bytes each
> (1) VARCHAR Column- Should take up Length + 1 bytes each
> 
> The INT Column should take up 4 * 17,168,035 = 68,672,140 bytes
> The MEDIUMINT Column should take up 3 * 17,168,035 = 51,504,105 bytes
> The DATE Column should take up 3 * 17,168,035 = 51,504,105 bytes
> I ran a query for the VARCHAR column and it has a total of 141,485,442
> characters plus the additional 17,168,035 characters to store the string
> length. I see this column should be taking up 158,653,477 bytes.
> 
> Total I would think my table should be somewhere around 330,333,827
> byes, but rather it is 444,669,952 bytes. To be over 100 meg off, I must
> be doing something wrong. I did try repairing the table also to make
> sure I was reading the right file size, but it stays the same. Can
> anyone tell me what I am doing wrong?
> 
> Thank you in advance,
> 
> Andrew
> 






Re: How can I duplicate a mysql template database?

2002-12-17 Thread Joseph Bueno
You cannot execute several SQL statements within a single query.
You are trying to reinvent the 'mysql' client's batch processing; if you want
to do that, you have to write an SQL parser in PHP that splits
your input into independent SQL statements and executes them one by one :(


Daevid Vincent wrote:
>>You mentioned a
>>  echo "".$sql."";
>>in your first post; what do you see?
> 
> 
> I see exactly what I expected to see. Basically the contents of a db.sql
> file read in that looks like this roughly:
> 
> CREATE TABLE table1 (
>blah int(10),
>foo char(5)
> );
> 
> CREATE TABLE table2 (
>blee int(10),
>fee char(5)
> );
> 
> Etc.. Everything looks exactly like the .sql file does and the .sql file
> works perfectly if I redirect it in via the command line or even via an
> system() call.
> 
> 
>>That's good, and we'll assume for the moment that your script doesn't
>>error through to die but instead makes a successful connection.
> 
> 
> Yes. All that stuff works great.
> 
> 
>>By the way, I see you quoting around your variables like
>>  print "This is " . $var . " here" ;
> 
> 
> Yes, its' for readability in HomeSite. It color codes things, and that's
> a nice way to see it proper.
> 
> 
>>How does it fail?  Give us more detail.
> 
> 
> With the error I posted earlier:
> 
> 
>>1064: You have an error in your SQL syntax near ';
>>CREATE TABLE Departments (
>>  DeptID int(10) unsigned NOT NULL auto_increment,' at line 4
> 
> 
> It's always on the second table. Order is irrelevant.
> Notice the ";" that it chokes on. That only happens when I separate the
> CREATE commands of course. But there is no other way to deliminate them.
> 
> 
>>% I just have an awful feeling that PHP/mySQL won't let me 
>>stack commands
>>% like that because if I just do one table, like this: 
>>% 
>>% $sql = <<<SQL
>>% CREATE TABLE Schedule (
>>...
>>% SQL;
>>% 
>>% It "works". However I'm dreading doing this one table at a 
>>time. Grr.
>>
>>No, that doesn't make sense.  It shouldn't matter how you create and
>>populate the variable, as long as the end result content is the same.
>>Since it works when you lay it out manually, it seems very 
>>probable that your filling it out in steps has some problems.
> 
> 
> I think I'm right ;-)
> 
> Try if for yourself and see what I mean. Just try to create a db and two
> tables and you'll see it choke.
> 
> d 
> 
> 






Re: How can I duplicate a mysql template database? [hack]

2002-12-17 Thread Joseph Bueno
I am not a PHP expert, but I think you can run external commands from PHP,
so it should be possible to run mysqladmin, mysqldump and mysql from PHP.
This will work even if your database is on a remote server; just specify
the hostname (ex: "mysqladmin -h mysqlhost create db2").


Daevid Vincent wrote:
> Unfortunately, command line is not an option for me. As I said before,
> the php scripts are on a web server and the database is on another
> server.
> 
> 
>>-Original Message-
>>From: Joseph Bueno [mailto:[EMAIL PROTECTED]] 
>>Sent: Tuesday, December 17, 2002 2:47 AM
>>To: [EMAIL PROTECTED]
>>Subject: Re: How can I duplicate a mysql template database? [hack]
>>
>>
>>Hi,
>>
>>There is an easier way to duplicate a database; from the comand
>>line, you can run:
>>
>>mysqladmin create db2
>>mysqldump db1 | mysql db2
>>
>>If you want to duplicate the schema only (without the data):
>>mysqladmin create db2
>>mysqldump --no-data db1 | mysql db2
>>
>>(You may have to add host/user/password options but I just wanted to
>>show the idea).
>>
>>Hope this helps,
>>Joseph Bueno
>>
>>Daevid Vincent wrote:
>>
>>>Seems to me there should be a built in SQL command to duplicate a
>>>database. Jeepers. Or to read in a .sql file from PHP and create a
>>>database out of it (which was the original intent).
>>>
>>>Anyways, here's a 'hack'. I'd still love to hear anyone else's more
>>>elegant solution.
>>>
>>>$V2DB = "V2_SL".$CompanyID;
>>>
>>>$result = mysql_create_db($V2DB, $linkI);
>>>if (!$result) $errorstring .= "Error creating ".$V2DB."
>>>database\n".mysql_errno($linkI).": 
>>
>>".mysql_error($linkI)."\n"; 
>>
>>>mysql_select_db ($V2DB, $linkI) or die ("Could not select ".$V2DB."
>>>Database");
>>>/*
>>>//TODO: None of these below here work. Ugh! so frustrating!!
>>>//$filename = "/mypath/todb/V2_DB.sql";
>>>//$fd = fopen ($filename, "r");
>>>//$sql = fread ($fd, filesize ($filename));
>>>//fclose ($fd);
>>>
>>>//$lines = file($filename);
>>>//foreach ($lines as $line_num => $line) { $sql .= $line; }
>>>
>>>//$sqlTables = explode(";",$sql);
>>>//foreach ($sqlTables as $table)
>>>//{
>>>//   echo "$table\n";
>>>//   $result = mysql_query($sql,$linkI);
>>>//   if (!$result) $errorstring .= "Error creating ".$V2DB."
>>>".$table." table\n".mysql_errno($linkI).":
>>>".mysql_error($linkI)."\n"; 
>>>//}
>>>*/
>>>
>>>//You must have already created the "V2_Template" database. 
>>>//This will make a clone of it, including data.
>>>
>>>$tableResult = mysql_list_tables ("V2_Template");
>>>while ($row = mysql_fetch_row($tableResult)) 
>>>{
>>> $tsql = "CREATE TABLE ".$V2DB.".".$row[0]." AS SELECT * FROM
>>>V2_Template.".$row[0];
>>> echo $tsql."\n";
>>> $tresult = mysql_query($tsql,$linkI);
>>> if (!$tresult) $errorstring .= "Error creating
>>>".$V2DB.".".$row[0]." table\n".mysql_errno($linkI).":
>>>".mysql_error($linkI)."\n"; 
>>>}
>>>
>>>
>>>
>>>
>>>>-Original Message-
>>>>From: Daevid Vincent [mailto:[EMAIL PROTECTED]] 
>>>>Sent: Tuesday, December 17, 2002 1:16 AM
>>>>To: [EMAIL PROTECTED]
>>>>Cc: 'Bill Lovett'
>>>>Subject: RE: How can I duplicate a mysql template database?
>>>>
>>>>
>>>>Thanks for the reply Bill, however this did not solve the problem. I
>>>>still receive the same error with your method as well.
>>>>
>>>>1064: You have an error in your SQL syntax near ';
>>>>CREATE TABLE Departments (
>>>> DeptID int(10) unsigned NOT NULL auto_increment,' at line 4
>>>>
>>>>
>>>>
>>>>>Are you first creating a new database first with mysql_create_db() 
>>>>>before trying to run your schema? What error message do you 
>>>>
>>>>get? The 
>>>>
>>>>
>>>>>code you've posted is only creating a table.
>>>>
>>>&

Re: How can I duplicate a mysql template database? [hack]

2002-12-17 Thread Joseph Bueno
Hi,

There is an easier way to duplicate a database; from the command
line, you can run:

mysqladmin create db2
mysqldump db1 | mysql db2

If you want to duplicate the schema only (without the data):
mysqladmin create db2
mysqldump --no-data db1 | mysql db2

(You may have to add host/user/password options but I just wanted to
show the idea).
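The two-command pipeline above can be sketched programmatically. This is a minimal
illustration in Python (not from the original post): the hypothetical helper
`duplicate_db_commands` just builds the command strings, so you can see how the
host option and `--no-data` fit in before running them with a process launcher.

```python
import shlex

def duplicate_db_commands(src, dst, host=None, user=None, schema_only=False):
    """Build the mysqladmin/mysqldump/mysql command strings that would
    clone database `src` into a new database `dst` (hypothetical helper;
    add password handling before using anything like this for real)."""
    opts = []
    if host:
        opts += ["-h", host]
    if user:
        opts += ["-u", user]
    create = ["mysqladmin"] + opts + ["create", dst]
    dump = ["mysqldump"] + opts + (["--no-data"] if schema_only else []) + [src]
    load = ["mysql"] + opts + [dst]
    # Render the dump|load pair as it would appear on a shell command line
    pipeline = "%s | %s" % (" ".join(shlex.quote(a) for a in dump),
                            " ".join(shlex.quote(a) for a in load))
    return " ".join(shlex.quote(a) for a in create), pipeline

create_cmd, copy_cmd = duplicate_db_commands("db1", "db2", host="mysqlhost")
print(create_cmd)   # mysqladmin -h mysqlhost create db2
print(copy_cmd)     # mysqldump -h mysqlhost db1 | mysql -h mysqlhost db2
```

Passing `schema_only=True` reproduces the `--no-data` variant from the post.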

Hope this helps,
Joseph Bueno

Daevid Vincent wrote:
> Seems to me there should be a built in SQL command to duplicate a
> database. Jeepers. Or to read in a .sql file from PHP and create a
> database out of it (which was the original intent).
> 
> Anyways, here's a 'hack'. I'd still love to hear anyone else's more
> elegant solution.
> 
> $V2DB = "V2_SL".$CompanyID;
> 
> $result = mysql_create_db($V2DB, $linkI);
> if (!$result) $errorstring .= "Error creating ".$V2DB."
> database\n".mysql_errno($linkI).": ".mysql_error($linkI)."\n"; 
> 
> mysql_select_db ($V2DB, $linkI) or die ("Could not select ".$V2DB."
> Database");
> /*
> //TODO: None of these below here work. Ugh! so frustrating!!
> //$filename = "/mypath/todb/V2_DB.sql";
> //$fd = fopen ($filename, "r");
> //$sql = fread ($fd, filesize ($filename));
> //fclose ($fd);
> 
> //$lines = file($filename);
> //foreach ($lines as $line_num => $line) { $sql .= $line; }
> 
> //$sqlTables = explode(";",$sql);
> //foreach ($sqlTables as $table)
> //{
> //echo "$table\n";
> //$result = mysql_query($sql,$linkI);
> //if (!$result) $errorstring .= "Error creating ".$V2DB."
> ".$table." table\n".mysql_errno($linkI).":
> ".mysql_error($linkI)."\n"; 
> //}
> */
> 
> //You must have already created the "V2_Template" database. 
> //This will make a clone of it, including data.
> 
> $tableResult = mysql_list_tables ("V2_Template");
> while ($row = mysql_fetch_row($tableResult)) 
> {
>   $tsql = "CREATE TABLE ".$V2DB.".".$row[0]." AS SELECT * FROM
> V2_Template.".$row[0];
>   echo $tsql."\n";
>   $tresult = mysql_query($tsql,$linkI);
>   if (!$tresult) $errorstring .= "Error creating
> ".$V2DB.".".$row[0]." table\n".mysql_errno($linkI).":
> ".mysql_error($linkI)."\n"; 
> }
> 
> 
> 
>>-Original Message-
>>From: Daevid Vincent [mailto:[EMAIL PROTECTED]] 
>>Sent: Tuesday, December 17, 2002 1:16 AM
>>To: [EMAIL PROTECTED]
>>Cc: 'Bill Lovett'
>>Subject: RE: How can I duplicate a mysql template database?
>>
>>
>>Thanks for the reply Bill, however this did not solve the problem. I
>>still receive the same error with your method as well.
>>
>>1064: You have an error in your SQL syntax near ';
>>CREATE TABLE Departments (
>>  DeptID int(10) unsigned NOT NULL auto_increment,' at line 4
>>
>>
>>>Are you first creating a new database first with mysql_create_db() 
>>>before trying to run your schema? What error message do you 
>>
>>get? The 
>>
>>>code you've posted is only creating a table.
>>
>>Yes. I am:
>>
>>$V2DB = "V2_SL".$CompanyID;
>>$result = mysql_create_db($V2DB, $linkI);
>>mysql_select_db ($V2DB, $linkI) or die ("Could not select ".$V2DB."
>>Database");
>>
>>
>>>Also, rather than opening and reading in an external file or 
>>>doing all 
>>>those $sql .= lines, you might have an easier time using 
>>>heredoc syntax, 
>>>which would let you do something like
>>>
>>>$sql = <<<SQL
>>>CREATE TABLE IP_Dept (
>>>   IP_Addr int(10) unsigned NOT NULL default
>>>   DeptID int(10) unsigned NOT NULL default
>>>
>>>   ...etc
>>>
>>>SQL;
>>>
>>>$result = mysql_query($sql,$linkI);
>>
>>Thanks for that tip, however it fails the same as opening the file and
>>same as "$sql .="
>>
>>
>>>(hopefully $linkI is your mysql connection) This way you 
>>>don't have to bother with all that quoting.
>>
>>Yes, $linkI is my connection identifier.
>>
>>I just have an awful feeling that PHP/mySQL won't let me 
>>stack commands
>>like that because if I just do one table, like this: 
>>
>>$sql = <<<SQL
>>CREATE TABLE Schedule (
>>  ScheduleID int(10) unsigned NOT NULL auto_increment,
>>  TargetRange char(255) default NULL,
>>  ScannerID int(10) unsigned NOT 

Re: MySQL optimization

2002-12-17 Thread Joseph Bueno
Hi,

It seems that you don't have any index on your tables.
You should at least create an index on flObjectID in all tables.
You should also use 'explain' on your query to make sure that
indexes are properly used.
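The advice above amounts to a handful of ALTER TABLE statements. As a sketch
(not from the original post; the `idx_` index-name prefix is a hypothetical
convention, and the table/column names are taken from the question below),
a small Python helper can generate them:

```python
def index_ddl(tables, column):
    """Generate ALTER TABLE statements adding an index on `column` to
    each table. Index names are per-table in MySQL, so reusing the same
    idx_<column> name across tables is fine."""
    return ["ALTER TABLE {0} ADD INDEX idx_{1} ({1});".format(t, column)
            for t in tables]

for stmt in index_ddl(["tblStock", "tblntQuotes", "tblntItems"], "fldObjectID"):
    print(stmt)
```

Running these statements (and then re-checking the query with EXPLAIN) is the
change being suggested.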

Hope this helps
Joseph Bueno

John Glenn wrote:
> 
> 
> I'm looking for recommendations on improving system performance.  Using
> the SQL command below on a MySQL server I find result times averaging
> three seconds.  Being very inexperienced with database programming I
> would appreciate any comments on whether this is expected behaviour, or
> where my design might improve.  The details of my system are below.
> 
> 
> The data I'm looking for starts with tblItems which holds a list of
> Items we're looking to buy.  I want the name of the item from tblStock,
> and statistics on prices we've found in our history (if they exist).
> I've never done a multi table join before and this is what I've come up
> with:
> 
> 
> 300 Mhz, 32 MB RAM
> CPU: 65% idle, RAM: 12MB Free
> OS: Slackware Linux 8.1 (2.4.18)
> Mysqld Ver 3.23.53a for pc-linux-gnu on i686
> 
> mysql> select fldName, fldQuantity, fldTotalCost,
> tblntItems.fldObjectID, avg(fldQuote), count(fldQuote), max(fldQuote),
> min(fldQuote) from tblStock, tblntItems left join tblntQuotes on
> tblntItems.fldObjectID = tblntQuotes.fldObjectID where
> tblntItems.fldObjectID = tblStock.fldObjectID group by fldObjectID;
> 
> 
> 
> mysql> show fields from tblStock;
> +-------------+--------------+------+-----+---------+-------+
> | Field       | Type         | Null | Key | Default | Extra |
> +-------------+--------------+------+-----+---------+-------+
> | fldObjectID | int(11)      | YES  |     | NULL    |       |
> | fldName     | varchar(255) | YES  |     | NULL    |       |
> | fldCost     | int(11)      | YES  |     | NULL    |       |
> | fldRetail   | int(11)      | YES  |     | NULL    |       |
> | fldUpdate   | varchar(255) | YES  |     | NULL    |       |
> +-------------+--------------+------+-----+---------+-------+
> 5 rows in set (0.00 sec)
> aprox 7000 items.
> 
> 
> 
> mysql> show fields from tblntQuotes;
> +-------------+---------+------+-----+---------+-------+
> | Field       | Type    | Null | Key | Default | Extra |
> +-------------+---------+------+-----+---------+-------+
> | fldObjectID | int(11) | YES  |     | NULL    |       |
> | fldDate     | date    | YES  |     | NULL    |       |
> | fldSource   | int(11) | YES  |     | NULL    |       |
> | fldQuote    | double  | YES  |     | NULL    |       |
> +-------------+---------+------+-----+---------+-------+
> 4 rows in set (0.00 sec)
> approx 130 records.
> 
> 
> mysql> show fields from tblntItems;
> +--------------+---------+------+-----+---------+-------+
> | Field        | Type    | Null | Key | Default | Extra |
> +--------------+---------+------+-----+---------+-------+
> | fldObjectID  | int(11) | YES  |     | NULL    |       |
> | fldQuantity  | int(11) | YES  |     | NULL    |       |
> | fldTotalCost | double  | YES  |     | NULL    |       |
> +--------------+---------+------+-----+---------+-------+
> 3 rows in set (0.00 sec)
> approx 100 records.
> 
> 
> Regards,
> 
> John Glenn
> 
> 






Re: How to backup without blocking?

2002-12-06 Thread Joseph Bueno
Hi,

You can also use mysql replication: you install a slave server and
run mysqlhotcopy on it. This way, you never freeze the master server.

Hope this helps
Joseph Bueno

Philip Mak wrote:
> sql, query
> 
> Right now, every day I run the equivalent of a mysqlhotcopy on my
> database (read-lock the tables, copy the files, unlock the tables).
> 
> This freezes up everything (websites, etc.) that uses the database for
> 10 minutes. That is too long.
> 
> Is there a way to get a snapshot of the database without locking the
> tables? Do I need to convert from MyISAM to InnoDB perhaps?
> 








Re: Multiple Inserts and updates

2002-12-01 Thread Joseph Bueno
If you don't have any error message, what makes you think it doesn't
work?

FYI, I have run the following commands from mysql client:

mysql> use test
Database changed
mysql> create table Table1 (Id int not null auto_increment primary key,
Product varchar(80), Qty int);
Query OK, 0 rows affected (0.00 sec)

mysql> Insert Into Table1 (Product, Qty) Values ("56-56", 5),
("27-01", 1), ("15-02", 2);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> select * from Table1;
+----+---------+------+
| Id | Product | Qty  |
+----+---------+------+
|  1 | 56-56   |    5 |
|  2 | 27-01   |    1 |
|  3 | 15-02   |    2 |
+----+---------+------+
3 rows in set (0.00 sec)

It really seems to work ;)
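The multi-row VALUES syntax demonstrated above can also be generated from grid
data, which is what the original question asked for. Here is a minimal sketch
(not from the post; `multi_row_insert` is a hypothetical helper, and the naive
escaping is for illustration only: real code should use a driver's placeholders):

```python
def multi_row_insert(table, columns, rows):
    """Build one multi-row INSERT statement from a list of row tuples.
    Strings get naive backslash escaping; numbers pass through as-is."""
    def quote(v):
        if isinstance(v, str):
            return "'%s'" % v.replace("\\", "\\\\").replace("'", "\\'")
        return str(v)
    tuples = ", ".join("(%s)" % ", ".join(quote(v) for v in row)
                       for row in rows)
    return "INSERT INTO %s (%s) VALUES %s;" % (table, ", ".join(columns), tuples)

sql = multi_row_insert("Table1", ["Product", "Qty"],
                       [("56-56", 5), ("27-01", 1), ("15-02", 2)])
print(sql)
# INSERT INTO Table1 (Product, Qty) VALUES ('56-56', 5), ('27-01', 1), ('15-02', 2);
```

Note the row tuples are wrapped in one set of parentheses each, with no outer
parentheses around the whole list, which is exactly the syntax issue discussed below.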

Regards,
Joseph Bueno

Ed Reed wrote:
> I created a new table called Table1 and did a test with that command. My
> test table does have all three fields that I mentioned. The ID field is
> an autoincrement. The other fieldnames match what's in my command. 
> 
> I'm running the command from MySQL-Front and it doesn't return any
> error message it just doesn't insert the new records. If I include the
> outer parens then it tells me there's an error in my syntax.
> 
> - Thanks
> 
> 
>>>>Joseph Bueno <[EMAIL PROTECTED]> 12/1/2002 12:00:21 PM >>>
> 
> According to your first post, your fields are named ID, PartNum, Qty.
> Maybe replacing 'Product' with 'PartNum' will help.
> You should only remove outer parens:
> 
> Insert Into Table1 (Product, Qty)
> Values ("56-56", 5), ("27-01", 1), ("15-02", 2);
> 
> If it does not work, you should also post the error message you
> are getting from mysql.
> 
> Regards,
> Joseph Bueno
> 
> Ed Reed wrote:
> 
>>Thanks for the response. I tried your example but it doesn't work,
>>Here's my SQL, can you tell what I'm doing wrong?
>>
>>Insert Into Table1 (Product, Qty)
>>Values (("56-56", 5), ("27-01", 1), ("15-02", 2));
>>
>>I've tried it with and without the outer parens.
>>
>>Thanks
>>
>>
>>
>>>>>"Adolfo Bello" <[EMAIL PROTECTED]> 12/1/2002 5:42:34 AM >>>
>>
>>INSERT INTO your_table(fields_list) VALUES
>>(record_1),(record_2),...,(record_n);
>>
>>
>>
>>>-Original Message-
>>>From: Ed Reed [mailto:[EMAIL PROTECTED]] 
>>>Sent: Saturday, November 30, 2002 3:47 PM
>>>To: [EMAIL PROTECTED] 
>>>Subject: Multiple Inserts and updates
>>>
>>>
>>>Can anyone give me an example of how to insert and/or update 
>>>multiple records in the same table simultaneously?
>>>
>>>I have a table with three fields; ID, PartNum, Qty. The user 
>>>enters PartNum and Qty data in an unbound grid UI. When the 
>>>user hits "Save" I'd like to build a single SQL statement 
>>
>>>from the data in the grid and have that one statement update 
>>
>>>or insert the rows in the table as necessary. I don't wanna 
>>>have to build a separate SQL statement for each row of the 
>>>grid (which is the only way I know how at this moment).
>>>
>>>Any help would be appreciated.
>>>
>>>
>>






Re: Multiple Inserts and updates

2002-12-01 Thread Joseph Bueno
According to your first post, your fields are named ID, PartNum, Qty.
Maybe replacing 'Product' with 'PartNum' will help.
You should only remove outer parens:

Insert Into Table1 (Product, Qty)
Values ("56-56", 5), ("27-01", 1), ("15-02", 2);

If it does not work, you should also post the error message you
are getting from mysql.

Regards,
Joseph Bueno

Ed Reed wrote:
> Thanks for the response. I tried your example but it doesn't work,
> Here's my SQL, can you tell what I'm doing wrong?
> 
> Insert Into Table1 (Product, Qty)
> Values (("56-56", 5), ("27-01", 1), ("15-02", 2));
> 
> I've tried it with and without the outer parens.
> 
> Thanks
> 
> 
>>>>"Adolfo Bello" <[EMAIL PROTECTED]> 12/1/2002 5:42:34 AM >>>
> 
> INSERT INTO your_table(fields_list) VALUES
> (record_1),(record_2),...,(record_n);
> 
> 
>>-Original Message-
>>From: Ed Reed [mailto:[EMAIL PROTECTED]] 
>>Sent: Saturday, November 30, 2002 3:47 PM
>>To: [EMAIL PROTECTED] 
>>Subject: Multiple Inserts and updates
>>
>>
>>Can anyone give me an example of how to insert and/or update 
>>multiple records in the same table simultaneously?
>>
>>I have a table with three fields; ID, PartNum, Qty. The user 
>>enters PartNum and Qty data in an unbound grid UI. When the 
>>user hits "Save" I'd like to build a single SQL statement 
>>from the data in the grid and have that one statement update 
>>or insert the rows in the table as necessary. I don't wanna 
>>have to build a separate SQL statement for each row of the 
>>grid (which is the only way I know how at this moment).
>>
>>Any help would be appreciated.
>>
>>
> 






Re: automatically inserting/updating timestamp

2002-12-01 Thread Joseph Bueno
According to the manual, 'timestamp' does exactly what you are asking
for:

"The TIMESTAMP column type provides a type that you can use to
automatically mark INSERT or UPDATE operations with the current date and
time. If you have multiple TIMESTAMP columns, only the first one is
updated automatically.

Automatic updating of the first TIMESTAMP column occurs under any of the
following conditions:

* The column is not specified explicitly in an INSERT or LOAD DATA
INFILE statement.
* The column is not specified explicitly in an UPDATE statement and
some other column changes value. (Note that an UPDATE that sets a column
to the value it already has will not cause the TIMESTAMP column to be
updated, because if you set a column to its current value, MySQL ignores
the update for efficiency.)
* You explicitly set the TIMESTAMP column to NULL. "

(from http://www.mysql.com/doc/en/DATETIME.html)

Or am I missing something ?

Regards,
Joseph Bueno

[EMAIL PROTECTED] wrote:
> I've created a table that contains a column called
> "recordLastModified" and that column uses "timestamp"
> as the data type.  Is there a way in MySQL to
> automatically update timestamp fields as records are
> modified?  I want it to be nearly automatic in the
> same way that using "auto_increment" increases int
> data types, since that is faster than remembering to
> type numbers each time you add a new record. If not,
> how can I structure my insert or
> update statements to automatically capture the
> server's sytem time into the timestamp field, or am I
> required to manually include the date/time when I
> update a record?
> 
> Thanks.
> 
> __
> Do you Yahoo!?
> Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
> http://mailplus.yahoo.com
> 






Re: Wont use Index when data is not evenly distributed

2002-11-29 Thread Joseph Bueno
Claus Reestrup wrote:
> Cant figure out why MySQL wont use index on a big table.
> Ok, the data is not evenly distributed which might be the problem.
> 
> Look here:
> 
> I have a table with 1 million records, with the following fields:
> IdUser, int
> X, int
> Y, int
> Z, int
> C, char(10)
> 
> no, varchars, text, or blobs.
> 
> IdUser is a user identity.
> As things are right now, only 3 users are registered.
> iduser=2, 34, 39
> User 39 owns 99.999 % of the data in the table.
> 
> When using EXPLAIN, MySQL tells me that when querying the table with
> IdUser=39, MySQL will not use the index.
> Querying the table with any iduser other than 39 causes MySQL to use the index.
> 
> Has any of you seen this behaviour before?
> 
> /Claus
> 
Yes.

It is even described in the manual:

"Note that in some cases MySQL will not use an index, even if one would
be available. Some of the cases where this happens are:

* If the use of the index would require MySQL to access more than
30% of the rows in the table. (In this case a table scan is probably
much faster, as this will require us to do much fewer seeks.) Note that
if such a query uses LIMIT to only retrieve part of the rows, MySQL will
use an index anyway, as it can much more quickly find the few rows to
return in the result."

(see http://www.mysql.com/doc/en/MySQL_indexes.html)

Regards,
Joseph Bueno
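The manual's rule quoted above can be stated as a simple selectivity check.
This is an illustrative sketch only (the 30% figure is the manual's heuristic
for this MySQL version, and `likely_uses_index` is a hypothetical function,
not anything the optimizer exposes):

```python
def likely_uses_index(matching_rows, total_rows, threshold=0.30):
    """Rough model of the quoted rule: if a lookup would touch more than
    ~30% of the table's rows, MySQL prefers a full table scan instead of
    the index, because sequential reads need far fewer seeks."""
    return matching_rows / total_rows <= threshold

total = 1_000_000
print(likely_uses_index(10, total))       # rare iduser: index lookup expected
print(likely_uses_index(999_900, total))  # user 39 owns ~all rows: table scan
```

This matches the behaviour described: queries for the two small users are
selective enough to use the index, while user 39's rows are not.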







Re: mysql 3.23.53a-Max crashes on large memory machine

2002-11-21 Thread Joseph Bueno
Even if you have more than 4 Gb of physical RAM, a single process
cannot address more than 4 Gb of memory (a P4-Xeon is still a
32-bit processor :( ). You also have to take into account that
a process cannot use these 4 Gb for data since some memory is
used for code, stack,... The exact amount available for
data depends on some compile time kernel parameters; I don't know
the values used by Redhat for their Advance Server kernel but I
suspect that the limit is somewhere between 2 Gb and 3.5 Gb.

You should reduce your key_buffer_size (6 Gb is really off limits).
You can try with a "small" value (1 Gb) and increase it until
you run into trouble again.
Or you can contact Redhat support to find out what is the real
limit and consult MySQL manual on memory usage:
http://www.mysql.com/doc/en/Memory_use.html

Hopefully the remaining memory will still be used by the system for
file caching, and it will still speed up your database accesses.

Hope this helps
-- 
Joseph Bueno

Johannes Ullrich wrote:

> I am having 'issues' with MySQL running on Redhat Advanced Server
> on a 8 Gigbyte machine (dual P4-Xeon).
>
> After large imports ('load data infile', file size about 1 Gigbyte) into
> a large table (20-30 GByte, > 100 Million rows), the database crashes.
>
> I did try several key_buffer_size settings. The example below had the
> key_buffer_size set to 6 Gig (but as you see, it shows -50MByte ?).
> A key buffer size of a bit less then 4 Gig didn't help. I did try a couple
> of other versions, including 4.0.4 and all of them crashed.
>
> The machine is running only one mysql instance at the time. The main issue
> I try to solve is how to optimize lookups against this one big table.
> Its index alone is about 10-15 gig...
>
> From the log file:
>
> -
>
> mysqld got signal 11;
> This could be because you hit a bug. It is also possible that this binary
> or one of the libraries it was linked against is corrupt, improperly built,
> or misconfigured. This error can also be caused by malfunctioning
> hardware.
> We will try our best to scrape up some info that will hopefully help
> diagnose
> the problem, but since we have already crashed, something is definitely
> wrong
> and this may fail
>
> key_buffer_size=-50335744
> record_buffer=209711104
> sort_buffer=209715192
> max_used_connections=2
> max_connections=100
> threads_connected=2
> It is possible that mysqld could use up to
> key_buffer_size + (record_buffer + sort_buffer)*max_connections =
> 3161707 K
> bytes of memory
> Hope that's ok, if not, decrease some variables in the equation
>
> Attempting backtrace. You can use the following information to find out
> where mysqld died. If you see no messages after this, something went
> terribly wrong...
> Stack range sanity check OK, backtrace follows:
> 0x806eeb4
> 0x82d9b38
> 0x830592f
> 0x80a194f
> 0x807598a
> 0x80742e7
> Stack trace seems successful - bottom reached
> Please read http://www.mysql.com/doc/U/s/Using_stack_trace.html and follow
> instructions on how to resolve the stack trace. Resolved
> stack trace is much more helpful in diagnosing the problem, so please do
> resolve it
> Trying to get some variables.
> Some pointers may be invalid and cause the dump to abort...
> thd->query at (nil)  is invalid pointer
> thd->thread_id=9
>







Re: MySQL max memory usage on linux/intel

2002-11-21 Thread Joseph Bueno
You can expect that your operating system will use the remaining
memory as a file cache that will speed up your database accesses.

We don't have that much memory on our dedicated database server: it
has only 2 Gb. mysqld itself is using around 600 Mb; the rest is used
by the OS, mainly for file caching. Since the total size of all
databases is around 1.5 Gb they almost fit in memory and the
server is happily serving around 300 queries/s with 200 concurrent
connections and almost no disk I/O (no reads, just a few write
every 5 seconds).

Hope this helps
-- 
Joseph Bueno

[EMAIL PROTECTED] wrote:

> Apologies if this is off topic ... please let me know what a more
> appropriate forum is if so.
>
> Quick question:  since there is normally a per-process limit of 4GB on
> 32-bit linux/intel platforms is there any point to installing more than
> 4gb in an sql server that is only running running 1 instance of the DB
> daemon?
> The total size of the DB is > 30 GB and several tables are > 4GB.
> --
> -Chris Beck - Coradiant, Inc - Research & Development - +1.514.908.6314-
> -- http://www.coradiant.com - Leaders in Web Performance Optimization --
> --- This email represents my opinion, not that of Coradiant. ---
> "We do not inherit the Earth from our Ancestors,
> we borrow it from our Children." -- Indian Proverb
>







Re: Index using..

2002-11-20 Thread Joseph Bueno
Hi,

From the manual (http://www.mysql.com/doc/en/MySQL_indexes.html):

Fabrizio Nesti wrote:
> Hello, it may be a simple question, but I can't get to it:
> 
>   Why this simple query does not use the PRIMARY key (which is oai_ui)?
> 
> mysql> explain select * from idb_metadata where oai_ui like 'doi:10.1088/0%';
> +--------------+------+---------------+------+---------+------+--------+------------+
> | table        | type | possible_keys | key  | key_len | ref  | rows   | Extra      |
> +--------------+------+---------------+------+---------+------+--------+------------+
> | idb_metadata | ALL  | PRIMARY       | NULL | NULL    | NULL | 178035 | where used |
> +--------------+------+---------------+------+---------+------+--------+------------+
> 
"If the use of the index would require MySQL to access more than 30% of
the rows in the table. (In this case a table scan is probably much
faster, as this will require us to do much fewer seeks.)"

>   While this is correctly using it
> 
> mysql> explain select oai_ui from idb_metadata where oai_ui like 'doi:10.1088/0%';
> +--------------+-------+---------------+---------+---------+------+--------+-------------------------+
> | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                   |
> +--------------+-------+---------------+---------+---------+------+--------+-------------------------+
> | idb_metadata | range | PRIMARY       | PRIMARY | 50      | NULL | 137387 | where used; Using index |
> +--------------+-------+---------------+---------+---------+------+--------+-------------------------+
> 

"In some cases a query can be optimised to retrieve values without
consulting the datafile. If all used columns for some table are numeric
and form a leftmost prefix for some key, the values may be retrieved
from the index tree for greater speed:"


> The change is just that I select * instead of the column itself.
> 
> Thanks in advance to whoever can help..
> running crazy in tuning,
> Thanks
> 
> Fabrizio Nesti -- MediaLab -- SISSA -- IT
> 
> 
> 
> 
> PS: System info:
> 
> Server:  3.23.38-max-log
> Table desc (it has 178035 records):
> mysql> desc idb_metadata;
> +-------------+---------------+------+-----+---------+-------+
> | Field       | Type          | Null | Key | Default | Extra |
> +-------------+---------------+------+-----+---------+-------+
> | oai_ui      | varchar(50)   |      | PRI |         |       |
> | download    | date          | YES  |     | NULL    |       |
> | datestamp   | date          | YES  |     | NULL    |       |
> | updated     | timestamp(14) | YES  |     | NULL    |       |
> | submission  | date          | YES  | MUL | NULL    |       |
> | replacement | date          | YES  | MUL | NULL    |       |
> | title       | text          | YES  |     | NULL    |       |
> | abstract    | text          | YES  |     | NULL    |       |
> | comment     | text          | YES  |     | NULL    |       |
> | journal     | tinytext      | YES  |     | NULL    |       |
> | url         | tinytext      | YES  |     | NULL    |       |
> | type        | tinytext      | YES  |     | NULL    |       |
> | language    | tinytext      | YES  |     | NULL    |       |
> | rights      | tinytext      | YES  |     | NULL    |       |
> | publisher   | tinytext      | YES  |     | NULL    |       |
> | contributor | tinytext      | YES  |     | NULL    |       |
> | format      | tinytext      | YES  |     | NULL    |       |
> | source      | tinytext      | YES  |     | NULL    |       |
> | relation    | tinytext      | YES  |     | NULL    |       |
> | coverage    | tinytext      | YES  |     | NULL    |       |
> | profiles    | tinyint(4)    | YES  |     | 1       |       |
> +-------------+---------------+------+-----+---------+-------+
> 
> 
>
Hope this Helps
-- 
Joseph Bueno






Re: Mysql & Encryption

2002-11-18 Thread Joseph Bueno
Have you considered putting your databases on an encrypted filesystem ?
You don't need to add anything to mysql and those filesystems
already exist (although I have not yet used them).

Regards,
Joseph Bueno

mos wrote:

> At 08:15 AM 11/18/2002, you wrote:
>
> > * Alexandre Aguiar
> > > On 14 Nov 2002 Fraser Stuart shaped the electrons to write something
> > > about [Mysql & Encryption]
> > >
> > > > We are about to embark on a project that requires data encryption -
> > > > mainly to stop sensitive information being viewed accidentally (ie
> > >
> > > Isn't it possible to tunnel MySQL connections through ssl?
> >
> > Yes, but how would that prevent users from viewing the content of the
> > tables? The sentence you cut off continues like this: "(ie viewing
> tables
> > directly through odbc connections or standard mysql clients)."
> >
> > Encrypting the client/server communication will not help.
> >
> > What's wrong with using GRANT?
>
>
>
> Roger,
> Grant becomes almost useless if someone breaks in and steals
> your computer(s). Once he gains root access to your machine he can
> eventually get to your data. This is more common that you might think.
> How many servers are really physically secure? Are they locked up in a
> concrete room with round the clock security? A smash and grab can net
> someone all your databases in less than 2 minutes. He's out the door
> long before the police show up. It is even worse if you have sensitive
> MySQL data running on a laptop. Laptops have a tendency of growing legs
> and walking off when you turn your back. Am I being paranoid? You bet I
> am! That disgruntled employee you fired last week may sell your customer
> list to your competitor. He probably had time to copy  your database
> files on CD-Rom when he was working late one night.
>
> There is also the security problem of sharing a MySQL database
> with an ISP on a shared database server. You have to trust your ISP
> administrator, everyone who works there, and every other company who is
> using the same database server. Would it surprise you to learn your
> major competitor just signed up with your ISP and is running his
> "application" on the same shared database server as you are?
>
> The best way to easily secure your data would be to use table
> wide encryption. Several databases use this technique to encrypt
> everything in the table (data, indexes, blobs). Even with a file editor,
> everything remains scrambled. From the tests I've done with these
> databases, this DOES NOT add a performance hit to the database, at least
> none that I could measure. This password is known only to the developer.
> The ISP would not need to know it. Other companies sharing the same db
> server (ISP) would see only scrambled data if they tried to access one
> of your tables without the password.
>
> I'm sure MySQL could have table wide encryption if a client is
> willing to fund the project. So if you have some cash and want better
> security, talk to MySQL-AB and get the ball rolling.
>
> Mike
>







Re: mysql from redhat8 will only run one process and remote connectionsdie.

2002-11-18 Thread Joseph Bueno
Try to upgrade glibc:
https://rhn.redhat.com/errata/RHSA-2002-197.html



[EMAIL PROTECTED] wrote:

> >Description:
>
> I just installed redhat8 this weekend and am using the rpm's provided
> by them.  when mysql runs it will only run a single process.  and even
> worse... when I try and connect to port 3306 (even via telnet) the
> process is immediately killed and safe_mysqld restarts it.  it logs
> the following to /var/log/mysqld.log
>
> Number of processes running now: 1
> mysqld process hanging, pid 32456 - killed
> 021118 11:32:44  mysqld restarted
> /usr/libexec/mysqld: ready for connections
>
>
>
> >How-To-Repeat:
>
> any mysql command from a remote host will "hang" the process immediately.
> even if I telnet to 3306 it immediately closes and reports the above
> error.
>
> >Fix:
>
>
>
> >Submitter-Id:
> >Originator:  Kevin Quinn
> >Organization:  BitWrench Incorporated
> >MySQL support: none
> >Synopsis:redhat8 mysql install not allowing remote connections
> >Severity:critical
> >Priority:
> >Category:mysql
> >Class:   
> >Release: mysql-3.23.52 (Source distribution)
>
>
> >Environment:
>
>   
> System: Linux lug 2.4.18-14smp #1 SMP Wed Sep 4 12:34:47 EDT 2002 i686
> i686 i386 GNU/Linux
> Architecture: i686
>
> Some paths:  /usr/local/bin/perl /usr/bin/make /usr/bin/gmake
> /usr/bin/gcc /usr/bin/cc
> GCC: Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2/specs
> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
> --infodir=/usr/share/info --enable-shared --enable-threads=posix
> --disable-checking --host=i386-redhat-linux --with-system-zlib
> --enable-__cxa_atexit
> Thread model: posix
> gcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7)
> Compilation info: CC='gcc'  CFLAGS='-O2 -march=i386 -mcpu=i686
> -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE'  CXX='g++'
> CXXFLAGS='-O2 -march=i386 -mcpu=i686 -D_GNU_SOURCE
> -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-rtti -fno-exceptions'
>  LDFLAGS=''
> LIBC:
> lrwxrwxrwx1 root root   14 Nov 16 22:13 /lib/libc.so.6
> -> libc-2.2.93.so
> -rwxr-xr-x1 root root  1235468 Sep  5 19:12
> /lib/libc-2.2.93.so
> -rw-r--r--1 root root  2233342 Sep  5 18:59 /usr/lib/libc.a
> -rw-r--r--1 root root  178 Sep  5 18:50 /usr/lib/libc.so
> Configure command: ./configure --prefix=/usr --exec-prefix=/usr
> --bindir=/usr/bin --datadir=/usr/share --libdir=/usr/lib
> --mandir=/usr/share/man --infodir=/usr/share/info --without-readline
> --without-debug --enable-shared --with-extra-charsets=complex
> --with-bench --localstatedir=/var/lib/mysql
> --with-unix-socket-path=/var/lib/mysql/mysql.sock
> --with-mysqld-user=mysql --with-extra-charsets=all --with-innodb
> --enable-local-infile --enable-large-files=yes --enable-largefile=yes
> --with-berkeley-db --with-thread-safe-client 'CFLAGS=-O2 -march=i386
> -mcpu=i686 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE'
> 'CXXFLAGS=-O2 -march=i386 -mcpu=i686 -D_GNU_SOURCE
> -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-rtti -fno-exceptions'
>







Re: Important Restriction in Query Language?

2002-11-18 Thread Joseph Bueno
This query uses a "subselect" that is not currently supported
by MySQL.
Check the manual for details:
http://www.mysql.com/doc/en/ANSI_diff_Sub-selects.html

Regards,
Joseph Bueno

jorge machado wrote:

> #Hello
> #I'm using hibernate a O/R model from sourceforge and it makes the #bridge
> #between sql and a query languange from hibernate.
> #Why this query dont work
> #at mysql. this query is generated by hibernate but
> #it works in all databases I know:
>
> #You just need to run this in the command
> #prompt of mysql.exe
> #the tables
>
> CREATE TABLE mae(
>  ID int(11) NOT NULL AUTO_INCREMENT,
>  STRINGSET varchar(32) NOT NULL default '',
>  PRIMARY KEY (ID)
> ) TYPE=InnoDB;
>
> CREATE TABLE stringset(
>  ID varchar(32) NOT NULL ,
>  ELEMENT varchar(100) NOT NULL default '',
>  PRIMARY KEY (ID)
> ) TYPE=InnoDB;
>
>
> insert into mae values(1,'key');
> insert into stringset values('key','the string for search');
>
> #the query:
>
> SELECT m.ID AS x0_0_
> FROM mae AS m
> WHERE ('the string for search' in
> (SELECT str0_.ELEMENT
>  FROM stringset str0_
>  WHERE m.STRINGSET=str0_.ID));
>
>
> #it gives an error in the:
> #(SELECT str0_.ELEMENT FROM stringset str0_ WHERE #m.STRINGSET=str0_.ID));
> #I have done this in all versions of mysql and didn't work
>
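Since the MySQL versions of that era lack subselects, the IN-subselect quoted above can be rewritten as a plain join, which is what the manual page suggests. A sketch using Python's sqlite3 (which does support subselects, so both forms can be compared); table contents are taken from the quoted message:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mae (ID INTEGER PRIMARY KEY, STRINGSET TEXT NOT NULL);
CREATE TABLE stringset (ID TEXT PRIMARY KEY, ELEMENT TEXT NOT NULL);
INSERT INTO mae VALUES (1, 'key');
INSERT INTO stringset VALUES ('key', 'the string for search');
""")

# The original correlated IN-subselect (not available in MySQL < 4.1):
sub = con.execute("""
    SELECT m.ID FROM mae AS m
    WHERE 'the string for search' IN
          (SELECT s.ELEMENT FROM stringset AS s WHERE m.STRINGSET = s.ID)
""").fetchall()

# An equivalent join that runs on subselect-less MySQL versions:
join = con.execute("""
    SELECT m.ID FROM mae AS m
    JOIN stringset AS s ON m.STRINGSET = s.ID
    WHERE s.ELEMENT = 'the string for search'
""").fetchall()

print(sub, join)  # both return [(1,)]
```

Both queries return the same single row, which is the point of the rewrite.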






Re: restricting mysql

2002-11-15 Thread Joseph Bueno
You can also restrict port 3306 to localhost with:
bind-address = 127.0.0.1
in /etc/my.cnf
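A sketch of the relevant my.cnf section (option names as in the 3.23/4.0 manuals; the commented alternative is the --skip-networking option mentioned below, which disables TCP entirely and leaves only the Unix socket):

```ini
[mysqld]
bind-address = 127.0.0.1
# skip-networking
```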

Hope this helps
-- 
Joseph Bueno

David Lubowa wrote:

> port 3306 is the default port for mysql , if you do "grep -i 3306
> /etc/services " you will see what service runs on that port. As for
> restricting access to it without a firewall i guess you would have to
> set up
> specific users who can access it and throw out the rest. Have a look
> at the
> mysql docs and see how you can set up users on your dbases. i hope this
> helps
>
> cheers
> David Ziggy Lubowa
> Network Engineer
> One2net (U)
> web: www.one2net.co.ug
> Tel: +256 41 345466
>
> -Original Message-
> From: Admin-Stress [mailto:meerkapot@;yahoo.com]
> Sent: Friday, November 15, 2002 11:28 AM
> To: [EMAIL PROTECTED]
> Subject: restricting mysql
>
>
> Hi,
>
> After I installed mysql, I noticed there is an open port now, port 3306.
> Can I restrict this so only from localhost can connect? I meant to close
> port 3306.
> But without using firewall :( I dont have knowledge about this.
>
> If there is no way to do this without firewall, then I will learn it.
>
> I tried --skip-networking, but then mysqld could not be started. I got
> this
> from google.com. Is it
> the correct way?
>
> Thanks.
> kapot
>
> __
> Do you Yahoo!?
> Yahoo! Web Hosting - Let the expert host your site
> http://webhosting.yahoo.com
>






Re: min() in where clause

2002-11-13 Thread Joseph Bueno
That's right, I should have checked before clicking on Send ;)

gerald_clark wrote:

> That should not be DESC.
> That will give the max value.
>
> Joseph Bueno wrote:
>
> > Erwin Ulreich wrote:
> >
> >
> >
> >> I want to code the following in one statement:
> >>
> >> select min(field1) from table1;
> >> $res=result of the query
> >> select * from table1 where field1=$res;
> >>
> >> does anyone know?
> >>
> >>
> >
> > select * from table1 order by field1 desc limit 1
> >
> >
> >
>
>
>







Re: min() in where clause

2002-11-13 Thread Joseph Bueno
Erwin Ulreich wrote:

> I want to code the following in one statement:
>
> select min(field1) from table1;
> $res=result of the query
> select * from table1 where field1=$res;
>
> does anyone know?
>
select * from table1 order by field1 desc limit 1
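As the follow-up in this thread points out, DESC here returns the maximum; for min(field1) the sort must be ascending (the default). A quick sqlite3 sketch of both, with made-up sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (field1 INTEGER, note TEXT);
INSERT INTO table1 VALUES (5, 'five'), (2, 'two'), (9, 'nine');
""")

# ORDER BY ... ASC LIMIT 1 returns the row holding min(field1);
# DESC would return the row holding max(field1) instead.
row_min = con.execute(
    "SELECT * FROM table1 ORDER BY field1 ASC LIMIT 1").fetchone()
row_max = con.execute(
    "SELECT * FROM table1 ORDER BY field1 DESC LIMIT 1").fetchone()
print(row_min)  # (2, 'two')
print(row_max)  # (9, 'nine')
```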







Re: Using AUTO_INCREMENT like SEQUENCE - Resolved

2002-11-07 Thread Joseph Bueno
Hi all,

Sorry to jump in the middle of this thread but there is a much simpler
way to generate sequence numbers:

You create an auxiliary table with one row:
CREATE TABLE sequence (
  code int(11) DEFAULT '0' NOT NULL
);
INSERT INTO sequence VALUES (0);

And, each time you need a new sequence number:
UPDATE sequence SET code=LAST_INSERT_ID(code+1);
SELECT LAST_INSERT_ID();

We use this method in our applications and it works well.
(and I didn't invent it; it is described in Paul DuBois' book ;))
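The UPDATE/SELECT pair above can be sketched outside MySQL as well. SQLite has no LAST_INSERT_ID(expr), so this Python sqlite3 sketch substitutes an explicit transaction for the per-connection atomicity that MySQL's trick provides; the helper name is illustrative only:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sequence (code INTEGER NOT NULL DEFAULT 0)")
con.execute("INSERT INTO sequence VALUES (0)")

def next_seq(con):
    # MySQL version:
    #   UPDATE sequence SET code=LAST_INSERT_ID(code+1);
    #   SELECT LAST_INSERT_ID();
    # LAST_INSERT_ID(expr) is per-connection, so no race between the
    # UPDATE and the read. sqlite3 lacks it; a transaction gives the
    # same guarantee here.
    with con:
        con.execute("UPDATE sequence SET code = code + 1")
        return con.execute("SELECT code FROM sequence").fetchone()[0]

print(next_seq(con), next_seq(con), next_seq(con))  # 1 2 3
```

Unlike the auxiliary AUTO_INCREMENT table discussed below, this single-row table never grows.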

Hope this helps
-- 
Joseph Bueno

Mike Hillyer wrote:
> I was thinking of a similar problem as yours (having the select max(id) type
> query get duplicate results) and what I realized was that I could do a
> select max(id) and specify a where clause to a column that had the user id
> of the creator of the row (which I already had in place for auditing) and
> then there was never duplication because the odds of you yourself inserting
> two rows simultaneously was nil. So, that may work for you as long as each
> user is logged in separately, and you can accept putting a field in your row
> referring to the row creator.
> 
> Mike Hillyer
> 
> -Original Message-
> From: Chuck Tomasi [mailto:Chuck.Tomasi@;plexus.com]
> Sent: Monday, October 21, 2002 7:19 AM
> To: [EMAIL PROTECTED]
> Subject: RE: Using AUTO_INCREMENT like SEQUENCE - Resolved
> 
> 
> This past weekend the solution came to me.  Simple, but elegant (well not as
> elegant as using the auto_increment feature natively, but a lot less code to
> write in my case.)
> 
> I'm already used to managing a separate sequence so I took the AUTO_INCREMENT
> feature out of the main table (ex: mytable) and created a second table
> 
> Create table mytable_seq (ID int not null primary key auto_increment);
> 
> To 'seed' the table I put in the highest value from mytable (ex: select
> max(ID) from mytable).  Now when I do this:
> 
> Insert into mytable_seq values (NULL);
> Select LAST_INSERT_ID() from mytable_seq limit 1;
> 
> I get what I want, what I need to create the file, populate mytable, etc.
> LAST_INSERT_ID() takes care of managing contentions (which I tested to no
> end.) That's the basics of my GetCounter($table) routine!
> 
> My only gripe now is that mytable_seq continues to grow and grow (hence the
> need for LIMIT 1).  I know it isn't a large table, but I may consider
> trimming it from time to time once I've verified its usefulness.
> 
> --Chuck
> 
> -Original Message-
> From: Peter Brawley [mailto:peter.brawley@;artfulsoftware.com]
> Sent: Friday, October 18, 2002 11:39 AM
> To: Chuck Tomasi; [EMAIL PROTECTED]
> Subject: Re: Using AUTO_INCREMENT like SEQUENCE
> 
> Chuck,
> 
> last_insert_id() is per-connection. How do you see two users getting the
> same auto_increment value?
> 
> PB
> 
> -
> - Original Message -
> From: "Chuck Tomasi" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, October 18, 2002 9:07 AM
> Subject: Using AUTO_INCREMENT like SEQUENCE
> 
> 
> 
>>General Info:
>>
>>OS: Mac OS X
>>MySQL version: 3.23.52 (Jaguar)
>>PERL: 5.6.0
>>DBI: 1.30
>>
>>I've got an application that was written to use Oracle sequences, as such
>>the program pulls a sequence number then creates a file based on that
>>number and populates a database record with infomation.  I've read
>>everything I can find out AUTO_INCREMENT values, but I can't seem to
>>reproduce the same functionality without causing some possible conflict
>>condition where two people may get the same numbers.
>>
>>At the heart of the matter is a Perl sub I call "GetCounter()".  It
>>accepts a sequence name, pulls a value from the sequence and returns that
>>value.  Originally I was doing this with mSQL sequences.  With a minor
> >>change to the code it ran fine in Oracle also.  Now I'm looking at porting
>>my stuff to MySQL, but can't seem to match the functionality.
>>
>>Note, I'm importing a lot of these records from the Oracle DB to MySQL so
>>the counter would not be starting at 0.
>>
>>Thoughts so far:
>>
>>I've thought about "select MAX(ID+1) from mytable", but if two people
>>happen to be doing this at the same time, they'll get the same number -
>>bad.
>>
>>"select LAST_INSERT_ID(ID+1)" seems like a similar solution since two
>>people could get the same LAST_INSERT_ID().
>>
>>I'd like some way to pull the number and make sure it is not pulled again.
>>By the time I need the number, the information has b

Re: find longest string in column

2002-11-06 Thread Joseph Bueno
Mertens Bram wrote:
> Hi,
> 
> How can I find the longest string in a column?
> 
> I can probably write a PHP-script that checks the str_length
> recursively, but I would like to be able to do this within MySQL...
> 
> I would like to have the value and the length of the string if this is
> possible.  Is this possible or should I try to write the PHP-script?
> 
> I couldn't find anything relevant in the documentation or the archives,
> but I hope I just missed it...
> 
> TIA,
> 
> Bram
> [sql, query]

Hi,

You could try:

SELECT string,length(string) as len
  FROM mytable
ORDER BY len DESC
LIMIT 1
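The query can be checked with Python's sqlite3, which supports the same length()/ORDER BY/LIMIT combination; sample rows are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (string TEXT);
INSERT INTO mytable VALUES ('hi'), ('a longer value'), ('mid one');
""")

# Value and length of the longest string, in SQL alone.
row = con.execute("""
    SELECT string, length(string) AS len
      FROM mytable
  ORDER BY len DESC
     LIMIT 1
""").fetchone()
print(row)  # ('a longer value', 14)
```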

regards,
Joseph Bueno






Re: select using regexp

2002-11-04 Thread Joseph Bueno
Hi,

REGEXP is much more powerful than LIKE; you can match full words
with this syntax:

SELECT *
FROM TABLE
WHERE field REGEXP "[[:<:]]cat[[:>:]]";

(Easy, isn't it? ;))

You can find more examples in the manual:
http://www.mysql.com/doc/en/Regexp.html
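The word-boundary idea can be illustrated with Python's re module, where MySQL's [[:<:]] and [[:>:]] markers both correspond to \b; the sample lines are from the original question:

```python
import re

# \b matches a word boundary, so "cat" must stand as a whole word.
pattern = re.compile(r"\bcat\b", re.IGNORECASE)

lines = [
    "The cat sat on the mat",      # match
    "Dog. Cat. Fish.",             # match (punctuation counts as a boundary)
    "in a catatonic state",        # no match
    "scattergun approach",         # no match
]
hits = [s for s in lines if pattern.search(s)]
print(hits)
```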

Regards,
Joseph Bueno
NetClub

gerald_clark wrote:

> There are too many exceptions for this to be useful.
> What about lines ending in cat. or cat,
> Your example won't match them.
> Perhaps "% cat.%" and "% cat,%" patterns might be more helpful,
> but what about lines that begin with cat?
>
> Peter Lovatt wrote:
>
> > Hi
> >
> > You could use either normal or fulltext searches. Regexp may well be a
> > good
> > answer (not used it myself). The following should also work
> >
> > SELECT *
> > FROM table
> > WHERE
> > field LIKE "% cat %"
> > OR field LIKE "% cat. %"
> > OR field LIKE "% cat, %"
> >
> > (note the spaces to make sure you get only complete words)
> >
> > or if you have a mysql version that supports FULLTEXT
> >
> > SELECT * FROM table
> > WHERE MATCH (field) AGAINST ('cat');
> >
> > http://www.mysql.com/doc/en/Fulltext_Search.html
> >
> > which is much more elegant
> >
> > HTH
> >
> > Peter
> >
> > ---
> > Excellence in internet and open source software
> > ---
> > Sunmaia
> > Birmingham
> > UK
> > www.sunmaia.net
> > tel. 0121-242-1473
> > International +44-121-242-1473
> > ---
> >
> > -Original Message-
> > From: Mark Goodge [mailto:mark@;good-stuff.co.uk]
> > Sent: 04 November 2002 11:21
> > To: [EMAIL PROTECTED]
> > Subject: select using regexp
> >
> >
> > Hi,
> >
> > I've got a problem that I'm hoping someone can help with. I need to do
> > a query against a text column in order to extract entries that will
> > match whole words only - for example, a search for "cat" should match
> > any of:
> >
> >  The cat sat on the mat
> >  It was a large cat.
> >  Cat food is interesting.
> >  Dog. Cat. Fish.
> >
> > but not match
> >
> >  in a catatonic state
> >  it was a catastrophe
> > scattergun approach
> >
> > It looks as if the MySQL REGEXP function is what I need here, but I
> > can't work out from the documentation how to get what I want.
> >
> > Any suggestions?
> >
> > Mark
> >
> > 








Re: Problems with "ORDER BY" from C API (works from command-line tool mysql.exe)

2002-11-04 Thread Joseph Bueno
Christer Holmström (at Home) wrote:

> Hi!
>
> This SQL works perfect when running the command line tool mysql.exe
>
> SELECT exe, sum(duration) FROM data GROUP BY exe ORDER BY 2
>
> BUT when running from the C API, the "ORDER BY" part doesn't seem
> active, why?
>
> Please help.
>
> /Christer
>
>
Hi,

What do you expect from "ORDER BY 2" ?
If you want to order by "sum(duration)" you should use:
SELECT exe, sum(duration) AS s FROM data GROUP BY exe ORDER BY s
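A quick check of the aliased form with Python's sqlite3, using made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data (exe TEXT, duration INTEGER);
INSERT INTO data VALUES ('a.exe', 5), ('b.exe', 1), ('a.exe', 2), ('b.exe', 1);
""")

# Ordering by the aliased aggregate rather than by column position "2".
rows = con.execute("""
    SELECT exe, SUM(duration) AS s FROM data GROUP BY exe ORDER BY s
""").fetchall()
print(rows)  # [('b.exe', 2), ('a.exe', 7)]
```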

Hope this helps
-- 
Joseph Bueno






Re: Identical entries

2002-10-31 Thread Joseph Bueno
Create a unique index.
See: http://www.mysql.com/doc/en/CREATE_INDEX.html
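A sketch of the effect using Python's sqlite3 (MySQL behaves the same way: a plain INSERT of a duplicate fails, and INSERT IGNORE silently skips it); table and column names are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entries (email TEXT)")
con.execute("CREATE UNIQUE INDEX idx_email ON entries (email)")

con.execute("INSERT INTO entries VALUES ('a@example.com')")
try:
    # Duplicate value: the unique index makes this insert fail.
    con.execute("INSERT INTO entries VALUES ('a@example.com')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# INSERT OR IGNORE (MySQL: INSERT IGNORE) skips the duplicate silently.
con.execute("INSERT OR IGNORE INTO entries VALUES ('a@example.com')")
count = con.execute("SELECT COUNT(*) FROM entries").fetchone()[0]
print(duplicate_rejected, count)  # True 1
```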

Regards,
Joseph Bueno

FlashGuy wrote:

> Hi,
>
> I have a database which has identical entries. Is there a way to tell
> mySQL to throw out duplicate entires when un insert/update is done via
> ColdFusion?
>
>
> ---
> Colonel Nathan R. Jessop
> Commanding Officer
> Marine Ground Forces
> Guatanamo Bay, Cuba
> ---







Re: How much data can MySQL push out?

2002-10-31 Thread Joseph Bueno
Since your requests are mainly selects and each select returns
a lot of data (more than 500K), you could eliminate most of your
network load with MySQL replication:
install a slave server on your web frontend and use it for all
your selects: no need to upgrade your network anymore!
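A minimal sketch of the two my.cnf files involved (option names from the 3.23/4.0 manuals; hostname and credentials are placeholders):

```ini
# master (database server)
[mysqld]
server-id = 1
log-bin

# slave (web frontend)
[mysqld]
server-id = 2
master-host     = master.example.com
master-user     = repl
master-password = secret
```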

Regards,
Joseph Bueno

Benji Spencer wrote:

> Been waiting for someone to ask. The entire database is actually
> slightly over 30K (35K is a closer number...small nonetheless)
>
> How does 12 MBytes/s come from 35K rows? Part of the output is a large
> text field. Each chunk of large text is no more than 100K and not each
> has a large text with it (and some of the large texts which are returned
> are essentially empty). It is possible for more than one select to
> return a couple of large text blocks, though the majority do not.
>
> Combine that data with 15-25 queries/sec (I am assuming in MSSQL that a
> transaction = query of some sort), and it is a chunk of data. Assume that
> you also return more than one row at a time (select * from table where
> start_date < now and end_date > now).
>
> The bandwidth numbers are something which PerfMon and MRTG are telling us.
>
> The underlying code is (currently) (ASP, so it is using ADO to do the
> connection.)
>
> I also wouldn't disagree with some of the application elements being
> no-very-good and that fixes are needed there. We can even make some code
> changes which will help for the short term. However we are trying to
> build a case for what is really needed both short term and long term.
> Time and Money are also both issues which we are trying to work with.
>
> > I hate to jump into this, but I have to ask if you have only 30k
> rows why
> > are you producing such large amounts of data?  Are you trying to store
> > blobs
> > or large text data types?  I don't think you will ever find a database
> > vendor that wants to compete with a local filesystem under those
> kinds of
> > conditions.  How do you connect to get the data ODBC, JDBC, DBD:DBI,
> PHP?
> > none of these are designed to move large amounts of data.  I think the
> > problem you have is with the application not the database, I would
> > encourage
> > you to move away from MSSQL if possible, but in this case I don't think
> > changing the RDBMS will fix the problem.
>
>
>
> ---
> Ben Spencer
> Web Support
> [EMAIL PROTECTED]
> x 2288
>







Re: How much data can MySQL push out?

2002-10-31 Thread Joseph Bueno
Hi,

Just a silly question: are you pushing those 12.5 Mbytes/s over the
network? If so, you have hit the limit of Fast Ethernet
(12.5 x 8 = 100 Mbit/s) and no database (not even MySQL ;) ) will be
faster! Maybe an upgrade to Gigabit Ethernet would help...

Regards
-- 
Joseph Bueno


Benji Spencer wrote:

> We are experiencing some issues with performance on a non-MySQL box and
> are looking for alternatives (and alternative methods). Once of the
> issues that we seem to be facing, is that the pure volume of data which
> needs to be pushed out. The other database is pushing out (at peak) 12.5
> megabytes per second and is being hit with 30-45 queries per second. If
> we rework the application, we end up with one of two solutions:
>
> 1) move to MySQL for the database engine (it currently is MSSQL)
> 2) Rework the application, so that the application still talked to
> MSSQL, but we generate static pages (this is for a website) and store
> them in MySQL, which are then served. This will reduce both bandwidth
> and queries per second. Bandwidth is unknown, but the queries per second
> are estimated at 15-25 queries per second. The select statements would
> be very generic though (select * from table where ID='abc123')
>
> This leaves one major question. How much data can MySQL push out? Can
> MySQL handle 12.5 megabytes (not megabits) per second of data? Will
> MySQL handle 20 queries per second?
>
> I know a lot of this also determined by OS/hardware. MySQL would be
> running on a 2-CPU Sun box.
>
> Any information with regards to this would be of use.
>
> If anyone also has such information on MSSQL (what is the Application
> Limit of MSSQL) it would also be helpful.
>
> thanks
> benji
>
> ---
> Ben Spencer
> Web Support
> [EMAIL PROTECTED]
> x 2288
>
>






Re: mysql service on linux

2002-10-29 Thread Joseph Bueno
Hi,

Make sure that your system starts at runlevel 3 on
boot. If you start your machine with X11, chances are
that its runlevel is 5.

Check your /etc/inittab file and look for a line
with "initdefault" keyword:
id:3:initdefault:

If you start at runlevel 5, you should create the symbolic
link in /etc/init.d/rc5.d instead.

Hope this helps,
-- 
Joseph Bueno

Natale Babbo wrote:

> Many Thanks 
>
> No problems with mysql.server script ... it works fine
> if i start it manual like this:
> shell> mysql.server start
> or
> shell> mysql.server stop
> ... also symbolic link in /etc/init.d/rc3.d is ok.
>
> The only problem is that MySQL service doesn't start
> at boot.
>
> Natale Babbo.
>
>
>
>
 --- Iikka Meriläinen
wrote: > On Tue, 29 Oct 2002, Natale Babbo wrote:
>
> >>Hi all,
> >>
> >>I'm using Linux Suse 8.0.
> >>I copied the file mysql.server located in
> >>/usr/local/mysql/support-files to /etc/init.d
> >>directory.
> >>I made a simbolic link to mysql.server script in
> >>/etc/init.d/rc3.d but MySQL doesn't appear to
> >
> >start at
> >
> >>the boot time.
> >>
> >>Anyone can help me?
> >
> >Hi!
> >
> >Try starting mysqld with --console --standalone
> >parameters and see if it
> >starts at all. If yes, you've got a problem with
> >your startup scripts. Check
> >any paths in those and check that your symbolic link
> >works. Also try running
> >the script in /etc/init.d/rc3.d (the symbolic link
> >in this case) manually
> >after the system startup.
> >
> >Regards,
> >Iikka
> >
> >**
> >* Iikka Meriläinen   *
> >* E-mail: [EMAIL PROTECTED] *
> >* Vaala, Finland *
> >**
> >







Re: Is it possible to replicate just a couple of tables instead of the whole database

2002-10-28 Thread Joseph Bueno
Ralf Koellner wrote:

> Hi,
>
> Is it possible to replicate just some tables instead of a whole mysql
> database? A replication of the whole database would be not necessary in
> my case.  It would be a one-way replication (means slave won't update
> master).

You can use 'replicate-do-table' on slave side to restrict replication
to some tables.
BTW, replication is ALWAYS one way: slaves receive their updates from
the master.
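A sketch of the slave-side my.cnf entries (database and table names here are placeholders; the option is repeated once per table, as documented for 3.23/4.0):

```ini
[mysqld]
server-id = 2
replicate-do-table = mydb.customers
replicate-do-table = mydb.orders
```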

>
>
> Furthermore:
> Is there a cheep tool taking care of that (without changing the design
> of the master database)?
>
> Thank you in advance,
>
> Ralf Koellner
>
Hope this helps
-- 
Joseph Bueno






Re: self relation query help

2002-10-26 Thread Joseph Bueno
Franklin Williams Jr. wrote:

> I have a table with names in it where the names can function as 1 of 2
> things -
> either an artist or a group. In order to make searching easy I wished
> to keep
> these names in one table. EG: searching for artist name, whether a
> group or
> soloist you would have to search only one table. The problem arises if the
> person wishes to get the names of the artists in the group or an
> artist also
> performs in multiple groups. I thought a self relationship of sorts
> solved this
> but then I could not figure out how to write the query. For instance the
> simplest query is: show me all the groups and artists in each group,
> or Show me
> the artists names in group U2. Easy if groups and artist names were in
> 2 tables,
> however...
>
> A table  B table
>  --
> idNameid grp_id art_id
>  --
> 1 Joe  1   2  1
> 2  U2  2   2  3
> 3 Mike 3   2  4
> 4 John   --
> 5 Jane
> ---
>
> In the above sample data, table B is a "join" of sorts where the A.id
> functions
> as both the B.grp_id and B.art_id. EG:  B.grp_id = A.id AND B.art_id =
> A.id
> I thought I had come up with a nifty little trick (using the same id in
> different fields to designate a distinction ) but alas.
> I can only select one column to get the names of the group and artists. I
> thought nested selects would solve this - maybe not - but no nested
> selects in
> mysql anyway.
> So..can this be done with this design or must I split the artists and
> groups
> into separate tables, forcing a multiple table search? I hoped column
> alias
> would solve this but I could not make it work using them. Confused?
> Good...so am
> I! Thanks in advance for any help
>
> Franklin Williams
> [EMAIL PROTECTED]
>
>
Hi,

select A1.Name as GroupName, A2.Name as Artist
 from A as A1,
  B,
  A as A2
where A1.id = B.grp_id AND B.art_id = A2.id

should solve your problem.

(Don't forget to create indexes on A(id), B(art_id) and B(grp_id) ;) )
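The self-join can be verified with Python's sqlite3 using the sample data from the question (an ORDER BY on B.id is added here only to make the output order deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE B (id INTEGER PRIMARY KEY, grp_id INTEGER, art_id INTEGER);
INSERT INTO A VALUES (1,'Joe'), (2,'U2'), (3,'Mike'), (4,'John'), (5,'Jane');
INSERT INTO B VALUES (1,2,1), (2,2,3), (3,2,4);
""")

# Table A joined to itself through B: alias A1 plays the group,
# alias A2 plays the artist.
rows = con.execute("""
    SELECT A1.Name AS GroupName, A2.Name AS Artist
      FROM A AS A1, B, A AS A2
     WHERE A1.id = B.grp_id AND B.art_id = A2.id
     ORDER BY B.id
""").fetchall()
print(rows)  # [('U2', 'Joe'), ('U2', 'Mike'), ('U2', 'John')]
```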

Hope this helps
-- 
Joseph Bueno






Re: subselect solution

2002-10-23 Thread Joseph Bueno
Eric Guesdon wrote:
> Hi everyone,
>  
> I’m a new user on this list as well as Mysql.
>  
> I use it with coldfusion 
>  
> My first problem is the following :
>  
> I have a table where I created a timestamp column named
> date_creation
>  
> When I try to execute this sql syntax it doesn’t work !
>  
> select fname, lname, date_creation
>from tb_person
>where date_creation = (select max(date_creation) from tb_person)
>  
> I know subselect is limited but what are alternatives
> solutions ?
>  
> Thanks
>  
> Sorry for my English
>  
> Eric
> 

Hi,

You should try:

select fname, lname, date_creation
  from tb_person
order by date_creation desc limit 1;

and look at : http://www.mysql.com/doc/en/ANSI_diff_Sub-selects.html
for other examples.
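The ORDER BY/LIMIT rewrite can be checked with Python's sqlite3; the rows below are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tb_person (fname TEXT, lname TEXT, date_creation TEXT);
INSERT INTO tb_person VALUES
  ('Ann', 'Lee', '2002-10-01 09:00:00'),
  ('Bob', 'Ray', '2002-10-20 12:30:00'),
  ('Cid', 'Poe', '2002-10-15 08:15:00');
""")

# Most recently created person without a subselect:
# sort descending on the timestamp and keep one row.
row = con.execute("""
    SELECT fname, lname, date_creation
      FROM tb_person
     ORDER BY date_creation DESC LIMIT 1
""").fetchone()
print(row)  # ('Bob', 'Ray', '2002-10-20 12:30:00')
```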

Regards,

Joseph Bueno





