Re: empty_blob() equivalent

2004-07-20 Thread Jeyabalan Murugesan Sankarasubramanian
Hi,
Actually I need to add a wave file, as a byte[], to a table with a column named
blob_col of type BLOB.  In Oracle, empty_blob() is inserted into the table; when
retrieving, OracleResultSet supports getBLOB(), which returns an oracle.sql.BLOB.
Through this I am able to write the byte[] with the following code.

oracle.sql.BLOB myblob = ((OracleResultSet)rs).getBLOB(blob_col);
OutputStream os = myblob.getBinaryOutputStream();
os.write(byteArray);

This works in Oracle, which I am migrating to MySQL. I need the equivalent,
so that I can insert byteArray into the column blob_col.
regards
msjeyabalan



 In the last episode (Jul 20), Jeyabalan Murugesan Sankarasubramanian said:
  Is there any equivalent of empty_blob() in mysql. Kindly guide us in
  this regard. Thanks in advance.

 You didn't tell us what empty_blob() is supposed to do.  If
 empty_blob() checks to see whether a blob is empty:

   SELECT ... WHERE myblob = '';

 If empty_blob() zeroes out a blob:

   UPDATE ... SET myblob = '';

 blobs are just large varchars, basically.

 --
   Dan Nelson
   [EMAIL PROTECTED]
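For the original poster's migration question: MySQL has no empty_blob() because no LOB-locator step is needed; the bytes can be inserted directly. A sketch using the thread's column name (the table name mytable and the id value are assumptions):

```sql
-- No placeholder locator is needed in MySQL: insert the bytes directly.
-- From JDBC this is PreparedStatement.setBytes() on the ? parameter.
INSERT INTO mytable (blob_col) VALUES (?);

-- If an Oracle-style two-step flow must be kept, an empty string is the
-- closest equivalent of empty_blob():
INSERT INTO mytable (blob_col) VALUES ('');
UPDATE mytable SET blob_col = ? WHERE id = 42;
```

Note that the server's max_allowed_packet setting must be larger than the blob being sent.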



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem with DATE_ADD - Michael Stassen, July 16 2004 7:48am

2004-07-20 Thread Michael Stassen
Gerard Gilliland wrote:
Michael:
  Again, Thank you for your excellent suggestions and quick response !!
  I agree that Two types of data means you should use two columns.
  There is no excuse for bad design.
  (I inherited the database and moved it to MySQL -- I should have converted
then.)
  I will split the Source table into N and Period (probably with better
names.)
  (I am indeed working with a temp table.
  Only for debug purposes and to test before I sent the original problem.)
  Concerning:
$qry = "SELECT PlantName, Unit, TagName, CalDate, nPeriod,
   TRIM(MID(nPeriod,1,INSTR(nPeriod,' ')-1)) AS n,
   TRIM(MID(nPeriod,INSTR(nPeriod,' ')+1)) AS Period,
   CASE
 WHEN Period = 'Month' THEN DATE_ADD(CalDate, INTERVAL n MONTH)
 WHEN Period = 'Year' THEN DATE_ADD(CalDate, INTERVAL n YEAR)
   END
   AS DueDate
   FROM tblTemp";
  Did you try it?  (the above)
  -- Yes
  It should work.
   -- I agree.  However, it fails.
  It doesn't work with Temporary Calculated columns
snip
I think it fails for the same reason that I have trouble using criteria
against DueDate
snip
Fails (in the WHERE clause):
 $qry = "SELECT PlantName, Unit, TagName, CalDate, CalPeriod, CalPeriodType,
 CASE
   WHEN CalPeriodType = 'Month' THEN DATE_ADD(CalDate, INTERVAL CalPeriod MONTH)
   WHEN CalPeriodType = 'Year' THEN DATE_ADD(CalDate, INTERVAL CalPeriod YEAR)
 END
 AS DueDate
 FROM tblTemp
 WHERE DueDate < '2005-01-01'";
(Note: DueDate is a TemporaryCalculated field)
Ahh, that's different.  You didn't mention trying to use DueDate in the 
WHERE clause before.  You can't do that.  The purpose of the WHERE clause is 
to indicate which rows should be used for your calculations, but you are 
asking mysql to choose rows based on the result of the calculations.  You 
see the problem?  You are asking mysql to pick the rows to operate on based 
on the result of the operation.  In other words, DueDate doesn't exist yet 
when the WHERE clause is evaluated, so you can't use it there.

Fortunately, there's a solution.  The HAVING clause filters rows near the 
end, after the calculations have been done, so it is the place to use 
DueDate to limit your results.  Thus, this should work:

  SELECT PlantName, Unit, TagName, CalDate, CalPeriod, CalPeriodType,
CASE
  WHEN CalPeriodType = 'Month'
THEN DATE_ADD(CalDate, INTERVAL CalPeriod MONTH)
  WHEN CalPeriodType = 'Year'
THEN DATE_ADD(CalDate, INTERVAL CalPeriod YEAR)
END
AS DueDate
  FROM tblTemp
  HAVING DueDate < '2005-01-01';
Works:
$qry = "SELECT PlantName, Unit, TagName, CalDate, CalPeriod, CalPeriodType,
  CASE
    WHEN CalPeriodType = 'Month' THEN DATE_ADD(CalDate, INTERVAL CalPeriod MONTH)
    WHEN CalPeriodType = 'Year' THEN DATE_ADD(CalDate, INTERVAL CalPeriod YEAR)
  END
  AS DueDate
FROM tblTemp
WHERE
  CASE
    WHEN CalPeriodType = 'Month' THEN DATE_ADD(CalDate, INTERVAL CalPeriod MONTH)
    WHEN CalPeriodType = 'Year' THEN DATE_ADD(CalDate, INTERVAL CalPeriod YEAR)
  END
  < '2005-01-01'";
(Note: Where Criteria does not contain any TemporaryCalculated field.)
Well, it does, sort of.  Here you force the calculation in the WHERE clause 
by explicitly doing the calculations.  This will work, but it will be less 
efficient unless the optimizer is smart enough not to do the calculations twice.
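If the server is MySQL 4.1 or later (an assumption; the thread doesn't state the version), a derived table is one way to compute DueDate only once and still filter on it with a plain comparison. A sketch, with the column list abbreviated:

```sql
-- Compute DueDate once in a derived table, then filter on the alias.
-- Requires subquery support (MySQL 4.1+); this is a sketch, not the
-- poster's original query.
SELECT t.*
FROM (SELECT PlantName, Unit, TagName, CalDate,
             CASE
               WHEN CalPeriodType = 'Month' THEN DATE_ADD(CalDate, INTERVAL CalPeriod MONTH)
               WHEN CalPeriodType = 'Year'  THEN DATE_ADD(CalDate, INTERVAL CalPeriod YEAR)
             END AS DueDate
      FROM tblTemp) AS t
WHERE t.DueDate < '2005-01-01';
```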

A simpler case of the above (but continuing the TemporaryCalculation field
discussion.)
Fails:
 $qry = "SELECT PlantName, Unit, TagName, CalDate, CalPeriod, CalPeriodType,
   (CalPeriod + Unit) AS CalNum
   FROM tblTemp
  WHERE CalNum < 7";
Works:
 $qry = "SELECT PlantName, Unit, TagName, CalDate, CalPeriod, CalPeriodType,
   (CalPeriod + Unit) AS CalNum
   FROM tblTemp
  WHERE (CalPeriod + Unit) < 7";
Right.  This is the same as above.  The second works by forcing the 
calculation in the WHERE clause.  The first would work if you changed WHERE 
to HAVING.

I do appreciate the thoroughness of your effort in converting to Months.
However, the potential for using Days looms ahead, and the complexity of
Days in combination with Month lengths and leap years
causes more confusion than the well-defined CASE of Day, Month, Year.
Agreed.
I should also point out that because we are selecting rows based on DueDate, 
and DueDate is a calculated value, there is no index to help us out.  In 
other words, even if there is an index on CalDate, we render it useless (for 
the purposes of this query) when we feed CalDate into a function to get 
DueDate.  The result is that we are asking for an inefficient full-table 
scan.  MySQL must calculate DueDate for *every* row, then compare the 
results (in HAVING) to decide which results to show us.  Depending on the 
size of your table and how often this query is run, that may or may not matter.
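One way around the full-table scan, sketched here as a suggestion rather than anything from the thread: materialize DueDate as a real column, maintained whenever CalDate/CalPeriod change, so an ordinary index can serve the range test (the index name is illustrative):

```sql
-- Hypothetical: store the computed due date so it can be indexed.
ALTER TABLE tblTemp ADD COLUMN DueDate DATE;
UPDATE tblTemp
   SET DueDate = CASE
                   WHEN CalPeriodType = 'Month' THEN DATE_ADD(CalDate, INTERVAL CalPeriod MONTH)
                   WHEN CalPeriodType = 'Year'  THEN DATE_ADD(CalDate, INTERVAL CalPeriod YEAR)
                 END;
ALTER TABLE tblTemp ADD INDEX idx_duedate (DueDate);

-- The range test can now use the index instead of scanning every row:
SELECT PlantName, Unit, TagName, DueDate
FROM tblTemp
WHERE DueDate < '2005-01-01';
```

The trade-off is that application code (or the periodic update script) must keep the column in sync on every write.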

Re: Relational Integrity

2004-07-20 Thread Martijn Tonies
Hi Roy,

If this is a business application, don't go without
transactions and foreign keys. Plain and simple.

Use InnoDB.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com



 I need some general guidance on relational integrity.
 I'm setting up a reasonably small DB with 30 or so
 tables for a machine control application. Several of
 the tables will have referential links to each other
 (e.g. a finished part table will link to a master
 part type table via the product ID number).
 None of my tables will ever contain more than a few
 hundred thousand records.

 This database is a conversion from an existing MS SQL7
 system in which I made extensive use of foreign keys.
 SQL7 has worked out well in the past, but Windows and
 VB.NET have ceased to be an efficient machine control
 development environment. We have decided
 to migrate to Linux on all of our new systems where
 practical.

 My first stab at a MySQL implementation is to use the
 MyISAM table structure and not the InnoDB structure,
 foregoing the use of explicit foreign keys and letting
 my apps take care of the relational integrity. I gathered
 from reading DuBois that this is not an uncommon approach
 to a MySQL implementation. Question: Are the advantages
 of MyISAM tables vs. InnoDB tables sufficient for me
 to continue this approach or am I better off setting
 up InnoDB tables throughout?





Re: Relational Integrity

2004-07-20 Thread Singer Wang
RAM is cheap, and so is a faster processor (InnoDB requires more RAM and CPU
than the simpler MyISAM).

But your data and downtime are probably a lot more expensive. It's well worth
going with InnoDB.

For most of what I do, I use a combination of InnoDB and HEAP tables.


On Tue, Jul 20, 2004 at 09:43:40AM +0200, Martijn Tonies ([EMAIL PROTECTED]) wrote:
 Hi Roy,
 
 If this is a business application, don't go without
 transactions and foreign keys. Plain and simple.
 
 Use InnoDB.
 
 With regards,
 
 Martijn Tonies
 Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
 Server.
 Upscene Productions
 http://www.upscene.com
 
 
 
  I need some general guidance on relational integrity.
  I'm setting up a reasonably small DB with 30 or so
  tables for a machine control application. Several of
  the tables will have referential links to each other
  (e.g. a finished part table will link to a master
  part type table via the product ID number).
  None of my tables will ever contain more than a few
  hundred thousand records.
 
  This database is a conversion from an existing MS SQL7
  system in which I made extensive use of foreign keys.
  SQL7 has worked out well in the past but Windows and
  VBNet has ceased to be an efficient machine control
  development environment. We have decided
  to migrate to Linux on all of our new systems where
  practical.
 
  My first stab at a MySQL implementation is to use the
  MyISAM table structure and not the InnoDB structure,
  foregoing the use of explicit foreign keys and letting
  my apps take care of the relational integrity. I gathered
  from reading DuBois that this is not an uncommon approach
  to a MySQL implementation. Question: Are the advantages
  of MyISAM tables vs. InnoDB tables sufficient for me
  to continue this approach or am I better off setting
  up InnoDB tables throughout?
 
 

-- 
Singer X.J. Wang, Ph.D. Candidate
Faculty of Computer Science, Dalhousie University
6050 University Avenue, Halifax, NS, Canada, B3H 1W5
  Email:[EMAIL PROTECTED]
  Fax:  (902) 492-1517
  WWW:  http://www.singerwang.com
  Schedule: http://schedule.singerwang.com
---
  Tobor, its Robot spelled backwards.
 - Product slogan for a toy called Tobor (circa 1978)




join subquerie rand problem

2004-07-20 Thread Gerske, Sebastian
Hello list,

I still need a solution for my 3-table join RAND() problem. I'm using version
4.1 now, so subqueries are possible:
explanation:

I have 3 tables, let's say:

fruits:

id   name
1    banana
2    apple
3    strawberry

fruits_animals:

id   id_fruits   id_animals
1    1           2
2    1           3
3    3           1
4    2           4

animals:

id   name
1    cat
2    dog
3    mouse
4    elephant


My problem is that when I join the tables and order them by RAND() I always get
one result, something like:
strawberry, cat (fruit id = 3, animal id = 1)
or
banana, dog (fruit id = 1, animal id = 2)
but never
banana, mouse (fruit id = 1, animal id = 3)

I need to select 100 different relations without redundancies. How is that
possible?
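For reference, a sketch of the join being described, assuming the schema above (the poster's actual query isn't shown, so whether this reproduces the symptom is an open question). Each fruits_animals row is already a distinct relation, so shuffling the link table and joining outward should yield varied, non-redundant pairs:

```sql
-- One random pairing per link-table row; LIMIT caps it at 100 relations.
SELECT f.name AS fruit, a.name AS animal
FROM fruits_animals fa
JOIN fruits  f ON f.id = fa.id_fruits
JOIN animals a ON a.id = fa.id_animals
ORDER BY RAND()
LIMIT 100;
```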

regards sebastian gerske






MaxDB and cluster

2004-07-20 Thread jschung
I am new to MaxDB and MySQL Cluster. It seems that both products provide
enterprise features and high availability.
Is MySQL Cluster replacing MaxDB?
What are the differences between them?
Are they targeting different users?

Thanks.






Re: RE: Upgrade 3.23 to 4.1.x

2004-07-20 Thread MaFai
Dear Victor Pendleton: 


It contains both ISAM and InnoDB tables.

 At 2004-07-14, 12:29:39 you wrote: 

What table types are you currently using?

-Original Message-
From: MaFai
To: [EMAIL PROTECTED]
Sent: 7/13/04 10:09 PM
Subject: Upgrade 3.23 to 4.1.x

Dear all:

Has anyone tried to upgrade MySQL from 3.23 to 4.1?

The docs don't recommend upgrading across major series. Has anyone done it
successfully?

Do I just need to copy the data files over to the new MySQL?


Best regards. 

MaFai
[EMAIL PROTECTED]
2004-07-14


= = = = = = = = = = = = = = = = = = = = = =
Best regards. 

MaFai
[EMAIL PROTECTED]
2004-07-20


Fw: Confirm 2 question in replication.

2004-07-20 Thread MaFai
Dear [EMAIL PROTECTED]: 


Dear, [EMAIL PROTECTED],

I'd like to confirm 2 questions on replication, as follows:

1. Could we avoid using root on MySQL for setting up replication? (I think no)
2. Can the password used for replication be changed? (I think yes)

Best regards. 

MaFai
[EMAIL PROTECTED]
2004-07-20








= = = = = = = = = = = = = = = = = = = = 
  
Best regards. 

MaFai
[EMAIL PROTECTED]
2004-07-20


Re: Help! Nasty big table efficiency issues with GROUP BY

2004-07-20 Thread Chris Elsworth
On Tue, Jul 20, 2004 at 10:39:00AM +1000, Lachlan Mulcahy wrote:
 
 Chris,
 
 Have you checked your following server configurables:
 
 sort_buffer_size:
  - This is the size of the cache created by _each_ thread that requires
 ORDER BY or GROUP BY in a query.
 If you are doing a lot of large ordered queries you will need to increase
 this value otherwise MySQL will use _disk_ for sorting (this is very slow
 and largely undesirable).

This was at its default of 2MB, so I've raised it to 64MB; it doesn't seem to
have helped much in the short term though :(

 
 tmp_table_size:
  - This is the maximum size of an in memory or HEAP temporary table. If a
 GROUP BY query causes the server to exceed this limit, an on disk table
 will be used. Once again, this is slow and undesirable.

This was 100MB, so that should be sufficient. I've altered
sort_buffer_size so I'll see how it goes over the next few hours.

Cheers for the pointer!
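For anyone repeating this experiment: on reasonably recent servers the buffer can be raised for a single connection first, so the effect is measurable without editing my.cnf and restarting. A sketch (the SET SESSION syntax assumes MySQL 4.0.3 or later):

```sql
-- Try a bigger sort buffer for this connection only, before committing
-- the change server-wide in my.cnf.
SET SESSION sort_buffer_size = 64 * 1024 * 1024;
SHOW VARIABLES LIKE 'sort_buffer_size';
```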

-- 
Chris




Re: join subquerie rand problem

2004-07-20 Thread Gerske, Sebastian
help me please :/


Gerske, Sebastian [EMAIL PROTECTED] schrieb im Newsbeitrag
news:[EMAIL PROTECTED]
 Hello list,

 i still need a solution for my 3 table join rand problem, im using version
 4.1 now so subqueries are possible:

 explanation:

 i have 3 tables lets say:

 fruits:

 id,   name
 1banana
 2apple
 3strawberry

 fruits_animals

 id,   id_fruits,   id_animals
 11   2
 21   3
 33   1
 42   4

 animals

 id,   name
 1cat
 2   dog
 3   mouse
 4   elephant


 My problem is that if i join the tables and order them by rand i always
get
 one result something like:
 strawberry, cat (fruit id = 3, animal id = 1 )
 or
 banana, dog (fruit id = 1, animal id = 2)
 but never
 banana, mouse( fruit id = 1, animal id = 3 )

 and need to select 100 different relations without redundancies hows that
 possible ?

 regards sebastian gerske








Updating a field from multiple rows.

2004-07-20 Thread Stuart Grimshaw
Further to my full-text question the other day, I'm trying to add the
parametric data to the field that gets searched.

We have a script that runs periodically to update this table. I can do
what I want in that script no problem, but it would be more elegant if
I could achieve the same results with one query.

As an example, let's take a random product and look at the parametric
data for it.

SELECT es.ProdID, ev.Text 
FROM especee as es 
INNER JOIN evocee as ev ON  (es.BodyID=ev.ID) 
WHERE es.ProdID = 45607;

+--------+---------------------------+
| ProdID | Text                      |
+--------+---------------------------+
|  45607 | Limited warranty - 1 year |
|  45607 | 1 year warranty           |
|  45607 | External                  |
|  45607 | POTS filter               |
+--------+---------------------------+


I want to concat all the values of Text into 1 row in another table.
I have this so far:

UPDATE X_Search.text_search as ts 
INNER JOIN especee as es ON (ts.product_uid = es.ProdID) 
INNER JOIN evocee as ev ON (es.BodyID=ev.ID) 
SET ts.txt = CONCAT(ts.txt, ev.Text) 
WHERE ts.product_uid = 45607;

but all that gets appended is the first row, i.e. Limited warranty - 1 year.

Is what I'm trying to do possible, and if it is, am I going down the right road?
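One possible approach (not from the thread): on MySQL 4.1+, GROUP_CONCAT() can collapse all matching Text rows into one string, and a correlated subquery can feed that into the UPDATE. A sketch using the poster's table names:

```sql
-- Aggregate every Text value for the product into one string, then append
-- it in a single statement (requires GROUP_CONCAT and subqueries, 4.1+).
UPDATE X_Search.text_search AS ts
SET ts.txt = CONCAT(ts.txt,
                    (SELECT GROUP_CONCAT(ev.Text SEPARATOR ' ')
                     FROM especee AS es
                     INNER JOIN evocee AS ev ON es.BodyID = ev.ID
                     WHERE es.ProdID = ts.product_uid))
WHERE ts.product_uid = 45607;
```

One caveat: GROUP_CONCAT truncates its result at group_concat_max_len (1024 bytes by default), so that variable may need raising for products with many attributes.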

-- 
-S




Re: i cant connect to mysql from other machine

2004-07-20 Thread [EMAIL PROTECTED]
(excuse my English)

Thanks for your interest, Lachlan.

The problem was the bind_address attribute in my.cnf.

:)

thanks
d2clon
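For the archives, the kind of change being described (the address shown is an example): mysqld only listens on the interface given by bind-address, so remote clients are refused unless the directive is broadened or removed.

```ini
# my.cnf, [mysqld] section -- a sketch; 10.205.171.16 is an example LAN IP
[mysqld]
# Either bind to the externally reachable interface...
bind-address = 10.205.171.16
# ...or comment the directive out to listen on all interfaces.
# Also make sure skip-networking is NOT set, or TCP connections are
# disabled entirely.
```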


On Tuesday 20 July 2004 02:59, Lachlan Mulcahy wrote:
 d2,

 By the looks of it, the client machine you are trying to connect from can
 resolve the hostname of your 'linuxready' machine perfectly fine... and
 your host (according to nmap) is running the MySQL server daemon listening
 on TCP port 3306. I would suggest the problem lies at the network level.

 It may be possible that your distribution by default sets up a firewall
 that blocks external MySQL access (this is not an uncommon default).

 What linux distribution are you running on the host (linuxready)?

 Also what kind of network lies between the two machines. Are they simply
 two machines plugging into the same switch/hub? Are they connected via
 crossover cable?

 The connection refused message suggests a firewall somewhere between the
 two machines, I would guess it's most likely running on your host and you
 don't know it.

 Lachlan

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, 20 July 2004 2:18 AM
 To: Victor Pendleton
 Cc: '[EMAIL PROTECTED] '; '[EMAIL PROTECTED] '
 Subject: Re: i cant connect to mysql from other machine


 (first: excuse for my english)

 On Monday 19 July 2004 15:03, Victor Pendleton wrote:
  Can you telnet port 3306 on linuxready from the remote machine?

 Nope.. :/ .. I can't telnet to linuxready 3306!
 -
 [EMAIL PROTECTED] $ telnet linuxready 3306
 Trying 10.205.171.16...
 telnet: Unable to connect to remote host: Connection refused
 -


 But I do have visibility to linuxready.. I can open an ssh session, or an ftp
 session..

 I have not configured any firewall.

 Why can't I telnet to linuxready 3306?
 I have opened the port:
 --
 linuxready root # nmap localhost

 Starting nmap 3.50 ( http://www.insecure.org/nmap/ ) at 2004-07-19 18:24
 CEST
 Interesting ports on localhost (127.0.0.1):
 (The 1654 ports scanned but not shown below are in state: closed)
 PORT STATE SERVICE
 22/tcp   open  ssh
 80/tcp   open  http
 3306/tcp open  mysql
 8009/tcp open  ajp13
 8080/tcp open  http-proxy
 ---

 thanks
 d2clon

  -Original Message-
  From: [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Sent: 7/19/04 4:27 AM
  Subject: i cant connect to mysql from other machine
 
  hello people..
 
  i have a curious problem with mysql.
  I configured the root user account to enable access from other machines,
  but I can't connect..
 
  my user table is:
  mysql> SELECT * FROM user;
  +-----------+------+------------------+-------------+-----+------------+
  | Host      | User | Password         | Select_priv | ... | Alter_priv |
  +-----------+------+------------------+-------------+-----+------------+
  | localhost | root | 1595fd346d9734fe | Y           | ... | Y          |
  | %         | root | 1595fd346d9734fe | Y           | ... | Y          |
  +-----------+------+------------------+-------------+-----+------------+
  2 rows in set (0.00 sec)

  (Wide output condensed: every privilege column is Y for both rows, except
  Grant_priv, which is N for the '%' row.)
 
 
  I executed the FLUSH PRIVILEGES statement.

  I try to connect from another machine and this is the error:
  $ mysql --host linuxready -uroot -p
  Enter password:
  ERROR 2003: Can't connect to MySQL server on 'linuxready' (111)
 
 
  Is it possible that I have to configure some attribute in my.cnf?
 
  thanks in advance
  d2clon






ARGH - Mysql won't stop eating RAM and finally it SEGV's

2004-07-20 Thread Richard Clarke
Mysql won't stop eating RAM!! :( 
Machine is a quad xeon 2.4 with 4 gigs of RAM.

Linux db2 2.6.7-rc3 #1 SMP Thu Jun 17 12:51:21 UTC 2004 i686 Intel(R)
Xeon(TM) CPU 2.40GHz GenuineIntel GNU/Linux

Mysql is 4.1.3-beta

Compile options are,
./configure --enable-thread-safe-client --enable-assembler
--with-unix-socket-path=/tmp/mysql-4.1.3.sock
--prefix=/usr/local/mysql-4.1.3 --without-extra-tools --without-docs
--without-bench --with-innodb --without-berkley-db

My startup variables are not especially high for a 4 gig box (see end
of e-mail).

I have 3 UDF functions which I wrote to do some simple things:
extracting the domain portion of a web URL, and access to some libgeoip
routines. I have run these 3 functions through
benchmark(100,function()) to see if the rate of RAM consumption
increases, but I don't really see any change; or at least if I do, I
think it's probably the placebo effect.

I really can't think where to look to figure this problem out. I would
like mysql to run with 3.5-4 gigs of RAM and stay that way, not start
off there and end up leaving the system with 1 MB of RAM.
There is also the problem that within about 72 hours of the server being
started, it dies with signal 11.

040720  6:04:15  Out of memory; check if mysqld or some other process
uses all available memory; if not, you may have to use 'ulimit' to
allow mysqld to use more memory or you can add more swap space
mysqld got signal 11;

key_buffer_size=134217728
read_buffer_size=2093056
max_used_connections=46
max_connections=500
threads_connected=45
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size +
sort_buffer_size)*max_connections = 2177068 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
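For the archives, a rough accounting of the figures in this message (a sketch; the crash report's formula covers only the key buffer and per-connection buffers, while the big fixed consumers here are InnoDB and the query cache, from the my.cnf quoted below):

```
Fixed allocations:
  key_buffer                       128 MB
  query_cache_size                 128 MB
  innodb_buffer_pool_size         1400 MB
  innodb_additional_mem_pool_size   64 MB
  innodb_log_buffer_size             8 MB
  subtotal                      ~ 1728 MB

Per-connection, worst case (from the crash report):
  (read_buffer_size 2 MB + sort_buffer_size 2 MB) x max_connections 500
                                ~ 2000 MB

Worst case total                ~ 3.7 GB   -- plus read_rnd_buffer_size
                                              (8 MB) per connection doing
                                              sorted reads, on a 4 GB box
```

So even without a leak, the configuration can legitimately exhaust the machine once enough connections are active, which is consistent with the "Out of memory" message.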

Resolving the stack dump of one of the errors gives:

0x814f0dd handle_segfault + 669
0x40038d5d _end + 935216941
0x4020c79a _end + 937132394
0x8181d7a _Z11open_tablesP3THDP13st_table_listPj + 202
0x8182055 _Z20open_and_lock_tablesP3THDP13st_table_list + 37
0x8164b64 _Z21mysql_execute_commandP3THD + 9748
0x81681bc _Z11mysql_parseP3THDPcj + 268
0x816139d _Z16dispatch_command19enum_server_commandP3THDPcj + 1021
0x8160f56 _Z10do_commandP3THD + 134
0x81606f8 handle_one_connection + 872
0x40033dfb _end + 935196619
0x40255f8a _end + 937433434

Another time the stack dump resolves to,

0x814a0cd _ZN13Protocol_prep10store_dateEP13st_mysql_time + 13
0x40038d5d _end + 935216941
0x80d53db _Z7lex_endP6st_lex + 75
0x8175c86 _Z7yyparsePv + 43830
0x8161c5a _Z16dispatch_command19enum_server_commandP3THDPcj + 3258
0x815b42d _ZN26sys_var_slave_skip_counter6updateEP3THDP7set_var + 13
0x815afe6 _ZN23sys_var_key_buffer_size6updateEP3THDP7set_var + 438
0x815a768 _ZN21sys_var_character_set5checkEP3THDP7set_var + 408
0x40033dfb _end + 935196619
0x40255f8a _end + 937433434


But always the error is 'Out of Memory'.

My database uses roughly 20 HEAP tables, but they are simply used as
caches for incoming data and never grow past 2megs before I perform a
truncate table_name.

It is really not appropriate for me to recompile MySQL for debugging,
so I hope that someone can give some advice.

Richard.

#START -- my.cnf - START
[client]
#password   = your_password
port= 3306
socket  = /tmp/mysql-4.1.3.sock

# The MySQL server
[mysqld]
port= 3306
socket  = /tmp/mysql-4.1.3.sock

skip-locking
key_buffer  = 128M
max_allowed_packet  = 16M
table_cache = 128
sort_buffer_size= 2M
read_buffer_size= 2M
read_rnd_buffer_size= 8M
myisam_sort_buffer_size = 64M
thread_cache= 8

query_cache_size= 128M
thread_concurrency  = 8
#skip-networking

log-slow-queries
log-long-format
long_query_time = 1

log-bin
server-id   = 2

replicate-do-table  = statistics.navmap
master-host = xxx.xxx.xxx.xxx
master-user = slave
master-password = 


max_heap_table_size = 134217728
max_connections = 500

# Point the following paths to different dedicated disks
tmpdir  = /tmp/
#log-update = /path-to-dedicated-directory/hostname

innodb_data_home_dir= /usr/local/mysql-4.1.3/var/
innodb_data_file_path   = innodb/ibdata1:1M:autoextend
innodb_log_group_home_dir   = /usr/local/mysql-4.1.3/var/innodb_log
innodb_log_arch_dir = /usr/local/mysql-4.1.3/var/innodb_log

innodb_buffer_pool_size = 1400M
innodb_additional_mem_pool_size = 64M

innodb_log_files_in_group   = 2
innodb_log_file_size= 256M
innodb_log_buffer_size  = 8M
innodb_flush_log_at_trx_commit  = 1
innodb_lock_wait_timeout= 50
#innodb_thread_concurrency  = 8

[mysqldump]
quick
max_allowed_packet = 16M

[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M


Re: Fw: Confirm 2 question in replication.

2004-07-20 Thread Egor Egorov
MaFai [EMAIL PROTECTED] wrote:

 I'd like to confirm 2 questions on replication, as follows:
 1. Could we avoid using root on MySQL for setting up replication? (I think no)

No. 

 2. Can the password used for replication be changed? (I think yes)

mysqladmin  -uusername -ppassword password newpassword

This could be done remotely of course. 
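For reference (this is an addition, not part of the original exchange): on MySQL 4.0.2 and later, the slave's connection account can be a dedicated non-root user created on the master roughly like this; the account name, host mask, and password are examples:

```sql
-- Hypothetical replication account; run on the master.
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'slavepass';
FLUSH PRIVILEGES;
-- If the password is changed later, the slave must be given the new one
-- via its master-password setting (or CHANGE MASTER TO on 4.x).
```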





-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com







Re: data design question

2004-07-20 Thread Egor Egorov
Justin French [EMAIL PROTECTED] wrote:

 Should I worry at 40,000? 100,000?  Or will the indexing of the siteID 
 keep everything extensible?

Indexes are designed to keep SELECTs fast no matter how much data there is.
You should ensure that the siteID index is actually used (check with EXPLAIN
SELECT). If, say, half of the records belong to a particular siteID, the index
won't help; a full table scan will be done to find the needed article.

You may consider creating a compound index as well - say, (siteID, articleID).
Then to fetch the list of all articles of a site the siteID prefix of the index
will be used, and to fetch a particular article the full compound index will be used.
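A sketch of that suggestion; the table name articles is illustrative (the original thread doesn't name the table):

```sql
-- Hypothetical table; one compound index serves both access patterns.
ALTER TABLE articles ADD INDEX idx_site_article (siteID, articleID);

-- Uses the (siteID) prefix of the index:
EXPLAIN SELECT articleID, title FROM articles WHERE siteID = 42;

-- Uses the full compound index:
EXPLAIN SELECT title FROM articles WHERE siteID = 42 AND articleID = 7;
```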













MYSQL ROWS

2004-07-20 Thread Remember14a
Dear Friends,

I have 7000 rows in a MySQL table and it crashes often. Is the number of rows
the reason, or can there be another reason? I also want to increase the number
of rows and prevent further crashes. How do I do that? Please advise.
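7,000 rows is tiny, so the row count itself is very unlikely to be the cause. A usual first diagnostic step, assuming a MyISAM table (mytable is a placeholder name):

```sql
-- Look for corruption and repair it (MyISAM tables); then watch the
-- server's error log for the real cause of the crashes.
CHECK TABLE mytable;
REPAIR TABLE mytable;
```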


Re: Erro:Got error 28 from table handler!

2004-07-20 Thread Egor Egorov
[EMAIL PROTECTED] wrote:

   What is the meaning of the following MySQL error message:
 Got error 28 from table handler?

[EMAIL PROTECTED] egor]$ perror 28
Error code  28:  No space left on device













Re: Optimize problem on BIG table

2004-07-20 Thread Egor Egorov
Aman Raheja [EMAIL PROTECTED] wrote:

 $ perror 28
 Error code  28:  No space left on device
 
 I agree that there is not much disk space.
 
 THE ISSUE: I want to reclaim the disk space that should be freed because 
 of the millions of records I deleted. I need disk space to reclaim disk 
 space? At least that's what it seems to be.

When you delete records from a MyISAM data file, you only mark them as
deleted; the disk space they occupy is not physically freed. OPTIMIZE TABLE
works by building a temporary copy of the table, which can be as big as the
original (including indexes).

So you can't OPTIMIZE TABLE unless you have about 78+16G to spare.













Re: how the redhat 9.0 adds the mysql user:

2004-07-20 Thread Egor Egorov
j.rabbit [EMAIL PROTECTED] wrote:

 This is how the redhat 9.0 mysql.spec file adds the mysql user:

 ' useradd -M -o -r -d /var/lib/mysql -s /bin/bash -c "MySQL Server" -u 27 mysql '

 Anybody know why the shell is '/bin/bash' instead of '/sbin/nologin'
 like other daemon users? Is this simply an oversight?

I can't speak about Red Hat 9, but for MySQL itself there is no need to have
a valid shell for the 'mysql' user.












Re: query locking up the system

2004-07-20 Thread Egor Egorov
Aman Raheja [EMAIL PROTECTED] wrote:


 Has anyone experienced this kind of load?
 The hardware is not an issue - it is a dual processor, 1GB RAM etc.
 Suggestions?

Tell us your MySQL server version and OS version, describe the structure of the
table, and if possible show EXPLAIN SELECT output for the statements provided.
Then we can help.












RE: MYSQL ROWS

2004-07-20 Thread Victor Pendleton
Is anything being logged to the error log? What messages are your receiving?

-Original Message-
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 7/20/04 7:45 AM
Subject: MYSQL ROWS

Dear Friends,

I have 7000 rows in a MySQL table and it crashes often. Is the number of
rows the reason, or can there be some other reason? I want to increase the
number of rows and prevent further crashes. How do I do that? Please advise.




Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread Egor Egorov
Jan Kirchhoff [EMAIL PROTECTED] wrote:

 Money is not really an issue but of course we don't want to waste it for 
 scsi-hardware if we can reach almost the same speed with hardware 
 sata-raids.

'Almost' is the key word. Some SCSI disks run at 15k RPM, which will give
you a HUGE MySQL performance boost compared to 10k disks. 

AFAIR, there are no 15k RPM SATA disks yet. 


 We'd like to stay with x86 because all our hardware is intel/amd and all 
 our servers are running debian-linux. Can we expect better performance 
 or problems using kernel 2.6.x?

You can expect better performance on kernel 2.6.x of course, especially on
multiple requests. 









Re: ARGH - Mysql won't stop eating RAM and finally it SEGV's

2004-07-20 Thread Egor Egorov
Richard Clarke [EMAIL PROTECTED] wrote:

I think that is possibly related to some known glibc/gcc issues. Please
download the official MySQL binaries from http://www.mysql.com/, install them,
and try the same workload on them. It should work smoothly. 

 Mysql won't stop eating RAM!! :( 
 Machine is a quad xeon 2.4 with 4 gigs of RAM.
 
 Linux db2 2.6.7-rc3 #1 SMP Thu Jun 17 12:51:21 UTC 2004 i686 Intel(R)
 Xeon(TM) CPU 2.40GHz GenuineIntel GNU/Linux
 
 Mysql is 4.1.3-beta
 
 Compile options are,
 ./configure --enable-thread-safe-client --enable-assembler
 --with-unix-socket-path=/tmp/mysql-4.1.3.sock
 --prefix=/usr/local/mysql-4.1.3 --without-extra-tools --without-docs
 --without-bench --with-innodb --without-berkley-db
 
 My startup variables are not especially high for a 4 gig box (see end
 of e-mail).
 
 I have 3 UDF functions which I wrote to do some simple things;
 extracting domain portion of a web url, and access to some libgeoip
 routines. I have run these 3 functions through
 benchmark(100,function()) to see if rate of RAM consumption
 increases, but I don't really see any change or at least if I do i
 think it's probably the placebo effect.
 
 I really can't think where to look to figure this problem out. I would
 like mysql to run with 3.5/4 gigs of RAM and stay that way. Not start
 off there and end up leaving the system with 1meg of RAM.
 I also add the problem that within about 72 hours of the server being
 started it dies with signal 11.
 
 040720  6:04:15  Out of memory; check if mysqld or some other process
 uses all available memory; if not, you may have to use 'ulimit' to








Re: MYSQL ROWS

2004-07-20 Thread Egor Egorov
[EMAIL PROTECTED] wrote:

 I have 7000 rows in a MySQL table and it crashes often. Is the number of
 rows the reason, or can there be some other reason? I want to increase the
 number of rows and prevent further crashes. How do I do that? Please advise.

Please tell your MySQL version, your OS version, describe the hardware and
describe the table.








Re: join subquerie rand problem

2004-07-20 Thread Gerske, Sebastian
I think it's not even possible with subqueries.



Gerske, Sebastian [EMAIL PROTECTED] schrieb im Newsbeitrag
news:[EMAIL PROTECTED]
 help me please :/


 Gerske, Sebastian [EMAIL PROTECTED] schrieb im Newsbeitrag
 news:[EMAIL PROTECTED]
  Hello list,
 
  I still need a solution for my 3-table join rand problem; I'm using
  version 4.1 now, so subqueries are possible.
 
  Explanation:
 
  I have 3 tables, let's say:
 
  fruits:
 
  id,   name
  1     banana
  2     apple
  3     strawberry
 
  fruits_animals
 
  id,   id_fruits,   id_animals
  1     1            2
  2     1            3
  3     3            1
  4     2            4
 
  animals
 
  id,   name
  1     cat
  2     dog
  3     mouse
  4     elephant
 
  My problem is that if I join the tables and order them by RAND() I always
  get one result, something like:
  strawberry, cat (fruit id = 3, animal id = 1)
  or
  banana, dog (fruit id = 1, animal id = 2)
  but never
  banana, mouse (fruit id = 1, animal id = 3)
 
  I need to select 100 different relations without redundancies. How is
  that possible?
 
  regards sebastian gerske
 
 








Re: join subquerie rand problem

2004-07-20 Thread SGreen
Sebastian,

I don't think we completely understand your question... or someone would 
have responded long before now.  Let me see if I can rephrase the 
situation and maybe we can get a response.

You have two tables that contain objects (your example: fruits and 
animals) and a table that relates them (example: fruits_animals) in a 
many-to-many relationship.

Is your situation:
A) You want to return 100 un-duplicated random combinations of the 
objects, regardless of if they have an entry in the relationship table? 
- OR -
B) You want a list of 100 un-duplicated random rows from the relationship 
table?
- OR - 
C) Each time you run your query containing ORDER BY RAND() , you get the 
exact same set of records back?

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Gerske, Sebastian [EMAIL PROTECTED] wrote on 07/20/2004 06:18:11 AM:

 help me please :/
 
 
  [earlier quoted text snipped]
 


Re: join subquerie rand problem

2004-07-20 Thread Gerske, Sebastian
Well, neither A, B, nor C :P

I have, let's say, 1000 different fruits
and 1000 different animals
with many-to-many relations.
Now I want to extract 100 different fruits held by 100 different animals,
without dupes of fruit or animal.


[EMAIL PROTECTED] schrieb im Newsbeitrag
news:[EMAIL PROTECTED]
 Sebastian,

 I don't think we completely understand your questionor someone would
 have responded long before now.  Let me see if I can rephrase the
 situation and maybe we can get a response.

 You have two tables that contain objects (your example: fruits and
 animals) and a table that relates them (example: fruits_animals) in a
 many-to-many relationship.

 Is your situation:
 A) You want to return 100 un-duplicated random combinations of the
 objects, regardless of if they have an entry in the relationship table?
 - OR -
 B) You want a list of 100 un-duplicated random rows from the relationship
 table?
 - OR -
 C) Each time you run your query containing ORDER BY RAND() , you get the
 exact same set of records back?

 Yours,
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine


  [earlier quoted text snipped]
 







Business Objects and MYSQL

2004-07-20 Thread janie . downie

Can anyone tell me if it's possible to use the reporting tool Business Objects
with a MySQL database?

Thanks
Janie Downie








Re: Updating a field from multiple rows.

2004-07-20 Thread SGreen
Have you looked at the GROUP_CONCAT() function? 
(http://dev.mysql.com/doc/mysql/en/GROUP-BY-Functions.html)

It would make your update look like:

CREATE TABLE tmpList
SELECT ts.product_uid as UID, Group_Concat(ev.Text) as newtext
FROM X_Search.text_search as ts
INNER JOIN especee as es
ON ts.product_uid = es.ProdID
INNER JOIN evocee as ev
ON es.BodyID = ev.ID
GROUP BY ts.product_uid

UPDATE X_Search.text_search as ts
INNER JOIN tmpList tl
on tl.uid = ts.product_uid
SET ts.txt = tl.newtext

However, you **MUST** ensure that X_Search.text_search has a large enough 
txt column to take all of the values. If this is not possible then you 
only want to insert/update with your maximum # of characters to prevent 
field overflow. You probably also need to predeclare the temp table to 
make sure the text column will be wide enough to take the concatenated 
results. 

CREATE TABLE tmpList (
UID bigint,
newtext _correctly size this field_
)
INSERT tmpList (UID, newtext)
SELECT (same as above)

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
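[Editor's note] A runnable sketch of the two-step approach above, using SQLite from Python as a stand-in for MySQL: SQLite's group_concat() plays the role of MySQL's GROUP_CONCAT() here. Table and column names are taken from the thread; the sample data is invented for illustration.

```python
import sqlite3

# In-memory SQLite stands in for MySQL; sample data mimics the
# thread's product 45607 example.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE especee (ProdID INTEGER, BodyID INTEGER);
CREATE TABLE evocee  (ID INTEGER, Text TEXT);
CREATE TABLE text_search (product_uid INTEGER PRIMARY KEY, txt TEXT);
INSERT INTO especee VALUES (45607, 1), (45607, 2);
INSERT INTO evocee  VALUES (1, 'Limited warranty - 1 year'), (2, 'External');
INSERT INTO text_search VALUES (45607, '');
""")

# Step 1: build the concatenated text per product (the tmpList step).
rows = con.execute("""
    SELECT es.ProdID, group_concat(ev.Text, ' ')
    FROM especee AS es
    JOIN evocee AS ev ON es.BodyID = ev.ID
    GROUP BY es.ProdID
""").fetchall()

# Step 2: copy the aggregated text into the search table.
con.executemany("UPDATE text_search SET txt = ? WHERE product_uid = ?",
                [(txt, uid) for uid, txt in rows])

result = con.execute(
    "SELECT txt FROM text_search WHERE product_uid = 45607").fetchone()[0]
print(result)
```

The point of the temp-table step is the same as in the message above: aggregate once, then update, rather than hoping a joined UPDATE concatenates more than one row.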

Stuart Grimshaw [EMAIL PROTECTED] wrote on 07/20/2004 06:35:17 
AM:

 Further to my Full Text question the other day, I'm trying to add the
 parametric data to the field that gets searched.
 
 We have a script that runs periodically to update this table. I can do
 what I want in that script, no problem, but it would be more elegant if
 I could achieve the same results with 1 query.
 
 As an example, let's take a random product and look at the parametric
 data for it.
 
 SELECT es.ProdID, ev.Text 
 FROM especee as es 
 INNER JOIN evocee as ev ON  (es.BodyID=ev.ID) 
 WHERE es.ProdID = 45607;
 
 ++---+
 | ProdID | Text  |
 ++---+
 |  45607 | Limited warranty - 1 year |
 |  45607 | 1 year warranty   |
 
 |  45607 | External  |
 |  45607 | POTS filter   |
 ++---+
 
 
 I want to concat all the values of Text into 1 row in another table.
 I have this so far:
 
 UPDATE X_Search.text_search as ts 
 INNER JOIN especee as es ON (ts.product_uid = es.ProdID) 
 INNER JOIN evocee as ev ON (es.BodyID=ev.ID) 
 SET ts.txt = CONCAT(ts.txt, ev.Text) 
 WHERE ts.product_uid = 45607;
 
 but all that gets appended is the 1st row, ie Limited warranty - 1 
year
 
 Is what I'm trying to do possible, and if it is, am I going down the
 right road?
 
 -- 
 -S
 
 


Re: ARGH - Mysql won't stop eating RAM and finally it SEGV's

2004-07-20 Thread Richard Clarke
Egor,
  But my UDFs are very important for my statistics processing. I
need 3 functions:
webdomain(val) = convert http://www.google.com/blah into google.com
geoip_lookup(123.123.123.123.) = country code
geoip_lookup_isp(123.123.123.123) = isp

I will try the mysql binaries and disable statistical operations
depending on the above functions temporarily.

I am using
gcc-3.3.3-r6  and glibc-2.3.3.20040420 gentoo packages.

Richard.

On Tue, 20 Jul 2004 15:57:17 +0300, Egor Egorov [EMAIL PROTECTED] wrote:
 Richard Clarke [EMAIL PROTECTED] wrote:
 
 I think that is possibly related to some known glibc/gcc issues.  Please
 download official MySQL binaries from http://www.mysql.com/ install them
 and try the same on it. Should work smoothly.
 
  [nested quoted text snipped]
 





(U) Chris Rock Lights it up !!!!

2004-07-20 Thread Johnson, Michael
CLASSIFICATION: UNCLASSIFIED

http://www.laserp.com/chris_rockfp.htm


Classification: UNCLASSIFIED





Re: join subquerie rand problem

2004-07-20 Thread SGreen
OK, This is a similar solution to a problem posted last month (he was 
trying to match debits to credits). Here's how it works:

Create a temporary table to match your fruits_animals table except you 
want to put UNIQUE INDEXES on both columns individually. Then you run an 
INSERT IGNORE to copy the rows from fruits_animals into your temp table. 
What you will have when the INSERT IGNORE completes is a list in which 
each animal appears at most once and each fruit appears at most once, 
drawn only from animal/fruit combinations that already existed.

CREATE TEMPORARY TABLE tmpDedupe(
animal_id int not null,
fruit_id int not null,
UNIQUE INDEX (animal_id),
UNIQUE INDEX (fruit_id)
)

INSERT IGNORE tmpDedupe (animal_id, fruit_id)
SELECT id_animals, id_fruits
FROM fruits_animals

SELECT *
FROM tmpDedupe
ORDER BY RAND()
LIMIT 100

DROP TABLE tmpDedupe

Make sense?

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
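[Editor's note] A minimal runnable check of the trick above, again with SQLite from Python standing in for MySQL: INSERT OR IGNORE is SQLite's spelling of MySQL's INSERT IGNORE, and the data is the thread's four-row example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fruits_animals (id INTEGER, id_fruits INTEGER, id_animals INTEGER);
INSERT INTO fruits_animals VALUES (1,1,2), (2,1,3), (3,3,1), (4,2,4);

CREATE TEMPORARY TABLE tmpDedupe (
    animal_id INTEGER NOT NULL UNIQUE,
    fruit_id  INTEGER NOT NULL UNIQUE
);

-- Rows violating either UNIQUE index are silently skipped, so each
-- animal and each fruit survives at most once. ORDER BY id makes the
-- scan order (and therefore which rows win) deterministic.
INSERT OR IGNORE INTO tmpDedupe (animal_id, fruit_id)
SELECT id_animals, id_fruits FROM fruits_animals ORDER BY id;
""")

pairs = con.execute(
    "SELECT animal_id, fruit_id FROM tmpDedupe ORDER BY random() LIMIT 100"
).fetchall()
print(pairs)
```

With the sample data, the pair (fruit 1, animal 3) is dropped because fruit 1 was already taken; which combinations survive depends on the scan order, just as with MySQL's INSERT IGNORE.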


Gerske, Sebastian [EMAIL PROTECTED] wrote on 07/20/2004 09:22:30 AM:

 well neither a,b or c :P
 
 i have lets say 1000 different fruits
 and 1000 different animals
 with many to man relations
 now i want to extract 100 differnt fruits held by 100 different animals
 without dupes of fruit or animal
 
 
 [EMAIL PROTECTED] schrieb im Newsbeitrag
 
news:[EMAIL PROTECTED]
  Sebastian,
 
  I don't think we completely understand your questionor someone 
would
  have responded long before now.  Let me see if I can rephrase the
  situation and maybe we can get a response.
 
  You have two tables that contain objects (your example: fruits and
  animals) and a table that relates them (example: fruits_animals) in a
  many-to-many relationship.
 
  Is your situation:
  A) You want to return 100 un-duplicated random combinations of the
  objects, regardless of if they have an entry in the relationship 
table?
  - OR -
  B) You want a list of 100 un-duplicated random rows from the 
relationship
  table?
  - OR -
  C) Each time you run your query containing ORDER BY RAND() , you get 
the
  exact same set of records back?
 
  Yours,
  Shawn Green
  Database Administrator
  Unimin Corporation - Spruce Pine
 
 
  Gerske, Sebastian [EMAIL PROTECTED] wrote on 07/20/2004 06:18:11 
AM:
 
   help me please :/
  
  
   Gerske, Sebastian [EMAIL PROTECTED] schrieb im Newsbeitrag
   news:[EMAIL PROTECTED]
Hello list,
   
i still need a solution for my 3 table join rand problem, im using
  version
4.1 now so subqueries are possible:
   
explanation:
   
i have 3 tables lets say:
   
fruits:
   
id,   name
1banana
2apple
3strawberry
   
fruits_animals
   
id,   id_fruits,   id_animals
11   2
21   3
33   1
42   4
   
animals
   
id,   name
1cat
2   dog
3   mouse
4   elephant
   
   
My problem is that if i join the tables and order them by rand i
  always
   get
one result something like:
strawberry, cat (fruit id = 3, animal id = 1 )
or
banana, dog (fruit id = 1, animal id = 2)
but never
banana, mouse( fruit id = 1, animal id = 3 )
   
and need to select 100 different relations without redundancies 
hows
  that
possible ?
   
regards sebastian gerske
   
   
  
  
  
   -- 
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
  
 
 
 
 
 


Re: ARGH - Mysql won't stop eating RAM and finally it SEGV's

2004-07-20 Thread Egor Egorov
Richard Clarke [EMAIL PROTECTED] wrote:

  But my UDF's are very important for the process of my statistics. I
 need 3 functions
 webdomain(val) = convert http://www.google.com/blah into google.com
 geoip_lookup(123.123.123.123.) = country code
 geoip_lookup_isp(123.123.123.123) = isp
 
 I will try the mysql binaries and disable statistical operations
 depending on the above functions temporarily.

Check with the official binaries. If everything is OK on them, then you will
need either to prepare a correct build farm with gcc 2.95 and a patched glibc
2.2.5 (or hire someone to do it for you - us, for example) or buy primary,
enhanced or premium support from MySQL AB and ask them for a custom build. 








Re: join subquerie rand problem

2004-07-20 Thread Gerske, Sebastian
That brings the same result as

select
*
from
fruit,
fruit_animal,
animal
where
fruit.id = fruit_animal.id_fruit
AND
fruit_animal.id_animal = animal.id
order by rand()

or have I got something wrong?

The next thing is that the tables are huge, like 3 million rows (and growing).

thanks btw :O

[EMAIL PROTECTED] schrieb im Newsbeitrag
news:[EMAIL PROTECTED]
 OK, This is a similar solution to a problem posted last month (he was
 trying to match debits to credits). Here's how it works:

 Create a temporary table to match your fruits_animals table except you
 want to put UNIQUE INDEXES on both columns individually. Then you run an
 INSERT IGNORE to copy the rows from fruits_animals into your temp table.
 What you will have when the INSERT IGNORE completes is a list that
 contains all of your animals listed only once and all of the fruits listed
 only once but only if that animal/fruit combination already existed.

 CREATE TEMPORARY TABLE tmpDedupe(
 animal_id int not null,
 fruit_id int not null,
 UNIQUE INDEX (animal_id),
 UNIQUE INDEX (fruit_id)
 )

 INSERT IGNORE tmpDedupe (animal_id, fruit_id)
  SELECT id_animals, id_fruits
 FROM fruits_animals

 SELECT *
 FROM tmpDedupe
 ORDER BY RAND()
 LIMIT 100

 DROP TABLE tmpDedupe

 Make sense?

 Yours,
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine


  [earlier quoted text snipped]







Re: ARGH - Mysql won't stop eating RAM and finally it SEGV's

2004-07-20 Thread Richard Clarke
I have switched to the MySQL standard binary and it helped in no way
at all. If anything it made the situation worse: memory usage grows even
faster than before. MySQL uses an extra 1MB roughly every 5-10 seconds.

Richard

On Tue, 20 Jul 2004 14:44:39 +0100, Richard Clarke
[EMAIL PROTECTED] wrote:
 [earlier quoted text snipped]
 





Re: join subquerie rand problem

2004-07-20 Thread Arnaud
 i have lets say 1000 different fruits
 and 1000 different animals
 with many to man relations
 now i want to extract 100 differnt fruits held by 100 different
 animals without dupes of fruit or animal

That's a nice one! I'll give it a try:
The point is to get 100 random couples of (id_fruits, id_animals), 
with unique id_fruits and unique id_animals, right ?

SELECT id_fruits AS my_id_fruits, 
(SELECT id_animals 
FROM fruits_animals 
WHERE id_fruits = my_id_fruits 
ORDER BY RAND() 
LIMIT 1) AS my_id_animals
FROM fruits_animals
GROUP BY my_id_fruits
ORDER BY RAND()
LIMIT 100;

That gives you a unique many-to-many relations table; you just have to 
join it with the animals and fruits tables.

Regards,
Arnaud
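[Editor's note] A runnable sketch of this query, with SQLite from Python standing in for MySQL 4.1. SQLite needs an explicit table alias for the correlation where MySQL lets the subquery reference the outer SELECT's column alias directly. Note the GROUP BY guarantees each fruit appears at most once, but duplicate animals remain possible.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fruits_animals (id INTEGER, id_fruits INTEGER, id_animals INTEGER);
INSERT INTO fruits_animals VALUES (1,1,2), (2,1,3), (3,3,1), (4,2,4);
""")

# One output row per fruit; the correlated subquery picks one of that
# fruit's animals at random.
rows = con.execute("""
    SELECT fa.id_fruits,
           (SELECT fa2.id_animals
            FROM fruits_animals AS fa2
            WHERE fa2.id_fruits = fa.id_fruits
            ORDER BY random()
            LIMIT 1) AS my_id_animals
    FROM fruits_animals AS fa
    GROUP BY fa.id_fruits
    ORDER BY random()
    LIMIT 100
""").fetchall()
print(rows)
```

With the thread's sample data this always returns one row for each of fruits 1, 2, and 3, and for fruit 1 the animal is randomly 2 or 3 on each run.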




Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread Jan Kirchhoff
Egor Egorov wrote:
Money is not really an issue but of course we don't want to waste it for 
scsi-hardware if we can reach almost the same speed with hardware 
sata-raids.
   

'Almost' is the key word. Some SCSI disks run at 15k RPM, which will give
you a HUGE MySQL performance boost compared to 10k disks. 

AFAIR, there are no 15k RPM SATA disks yet. 
 

But shouldn't a SATA-based RAID10 with 8 disks do the job as well? Writes 
would be spread across 4 disks...
Has anybody experience with those external SCSI-to-SATA RAIDs?
A SCSI solution would cost twice as much, but would it really speed 
things up compared to a massive use of parallel (RAID0) SATA disks?
I know disk I/O is the bottleneck in our case; of course we want the 
fastest disk/RAID system we can possibly get for our money.
Is our thinking too simple, or shouldn't it be possible to reach the 
speed of fast SCSI disks by simply taking 2-3 fast SATA disks in a 
hardware RAID0?
Our goal is a RAID10, so reading should be even faster.

We'd like to stay with x86 because all our hardware is intel/amd and all 
our servers are running debian-linux. Can we expect better performance 
or problems using kernel 2.6.x?
   

You can expect better performance on kernel 2.6.x of course, especially on
multiple requests. 
 

Does anybody have experience with RAM usage and CPU architecture (please 
have a look at my earlier post)?

thanks
Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread Brent Baisley
The single biggest difference between SATA (or any IDE) and SCSI is 
command queuing. Command queuing allows the drive to intelligently 
reorder reads and writes to make things faster.
An ATA drive executes a bunch of commands in the order it gets them, 
which can be slow if it needs to write data on an inside track, then 
read from an outside track, and back and forth until both the read 
and write requests are finished.
SCSI will understand that there is a proximity benefit to the commands, 
so it will reorder the interlaced requests and execute one before the 
other. Essentially putting part of one request on hold while it does 
the other. That's an oversimplification of the algorithm of course. 
Especially since a server system will probably have many more than two 
interlaced requests.

Did you ever try to clean two rooms at once? Put the clothes away in 
one, then the other. Make the bed in one, then the other. Lots of 
useless travel time in there. You would probably clean one and then the 
other. Except if you are vacuuming, then you would probably vacuum both 
at once, interlacing two similar actions.

SATA is catching up though. Seagate has released SATA drives that have 
command queueing, but I don't know how they compare to SCSI.

On Jul 20, 2004, at 10:40 AM, Jan Kirchhoff wrote:
But shouldn't a sata-based RAID10 with 8 discs do the job as well? Writes 
would be spread on 4 discs...
Has anybody experience with those external SCSI-to-SATA RAIDs?
A SCSI-solution would cost twice as much, but would it really speed 
things up compared to a massive use of parallel (raid0) sata-discs?
I know disc i/o is the bottleneck in our case, of course we want the 
fastest disc/raid-system we can possibly get for our money.
Is our thinking too simple or shouldn't it be possible to reach the 
speed of fast scsi-discs by simply taking 2-3 fast sata-discs in a 
hardware raid0?
Our goal is a raid10, so reading should be even faster.

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: join subquerie rand problem

2004-07-20 Thread Arnaud
Oops, this was not correct, excuse me!
You can have duplicate id_animals with this query.
You can do it like this:

SELECT my_id_fruits, my_id_animals FROM
(SELECT id_fruits AS my_id_fruits, 
(SELECT id_animals 
FROM fruits_animals 
WHERE id_fruits = my_id_fruits 
ORDER BY RAND() 
LIMIT 1) AS my_id_animals
FROM fruits_animals
GROUP BY my_id_fruits) AS tmpQuery
GROUP BY tmpQuery.my_id_animals
ORDER BY whatever you want
LIMIT 100;


On 20 Jul 2004 at 16:36, Arnaud [EMAIL PROTECTED] wrote:

  i have lets say 1000 different fruits
  and 1000 different animals
  with many to man relations
  now i want to extract 100 differnt fruits held by 100 different
  animals without dupes of fruit or animal
 
 That's a nice one! I'll give it a try:
 The point is to get 100 random couples of (id_fruits, id_animals),
 with unique id_fruits and unique id_animals, right ?
 
 SELECT id_fruits AS my_id_fruits, 
  (SELECT id_animals 
  FROM fruits_animals 
  WHERE id_fruits = my_id_fruits 
  ORDER BY RAND() 
  LIMIT 1) AS my_id_animals
 FROM fruits_animals
 GROUP BY my_id_fruits
 ORDER BY RAND()
 LIMIT 100;
 
 You have your unique many-to-many relations' table, you just have to
 join this with the animals and fruits tables.
 
 Regards,
 Arnaud
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   
 http://lists.mysql.com/[EMAIL PROTECTED]
 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



CIDR ranges in MySQL permissions?

2004-07-20 Thread Andrew Braithwaite
Hi All,
 
Has anyone had any experience with using IP address ranges in MySQL
permissions?  It would be easy if you had a whole class C for example
because you would be able to do:
 
Grant all privileges on *.* to someuser@'192.87.12.%';

But if you only wanted to give permissions to a CIDR range (e.g. a /32,
192.87.12.1 to 192.87.12.32 etc.) in one go, and used the above method,
it would leave your databases open to connection from others.

Is there any way to do this in a single line (without having an entry
for each IP address) ?
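As a hedged sketch: MySQL's account host field does accept an IP/netmask form, but (at least in the 4.x era) only octet-aligned masks (/8, /16, /24, /32) are honored, so a non-aligned range such as a /27 still needs one grant per address. User and addresses below are illustrative:

```sql
-- Octet-aligned ranges can be granted in one line via IP/netmask:
GRANT ALL PRIVILEGES ON *.* TO someuser@'192.87.12.0/255.255.255.0';

-- A /27 is not octet-aligned, so each address needs its own entry:
GRANT ALL PRIVILEGES ON *.* TO someuser@'192.87.12.1';
GRANT ALL PRIVILEGES ON *.* TO someuser@'192.87.12.2';
-- ... and so on through the rest of the range.
```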

Hope you can help.

Regards,

Andrew


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: CIDR ranges in MySQL permissions?

2004-07-20 Thread Andrew Braithwaite
Sorry - a /32 is a single ip - I meant a /27 :)

A

-Original Message-
From: Andrew Braithwaite [mailto:[EMAIL PROTECTED] 
Sent: Tuesday 20 July 2004 16:16
To: [EMAIL PROTECTED]
Cc: Karl Skidmore
Subject: CIDR ranges in MySQL permissions?

Hi All,
 
Has anyone had any experience with using IP address ranges in MySQL
permissions?  It would be easy if you had a whole class C for example
because you would be able to do:
 
Grant all privileges on *.* to someuser@'192.87.12.%';

But if you only wanted to give permissions to a CIDR range (e.g. a /32,
192.87.12.1 to 192.87.12.32 etc.) in one go, and used the above method,
it would leave your databases open to connection from others.

Is there any way to do this in a single line (without having an entry
for each IP address) ?

Hope you can help.

Regards,

Andrew


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



newbie join issue

2004-07-20 Thread Edward Ritter
I've got a task that's gonna require me to compare one table to another 
and remove from the first table the rows whose email_address matches a 
row in the second table.

I'm running 4.0.20a-nt-log. The first table has 10 columns and about 50K 
records, and the second table has 46 columns and about 16K records.

I've attempted a number of selects that just sat and hung the computer. 
I know I must be doing something wrong. I figure I'll need to do a left 
join on it, but I've not had much experience with joins as such and I 
need a little assistance.

Can anyone help me work this out? If you need more info, let me know.
Ed
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Primary Keys

2004-07-20 Thread Rui Monteiro
Hello,

 

I was wondering why I can't, or how I can, put 2 primary keys on a table?

 

Here's an example in Oracle syntax:

 

CREATE TABLE FacturaMusica(

CodFactura number(4), CONSTRAINTS FK_FacturaMusica_CodFactura
FOREIGN KEY(CodFactura) REFERENCES Factura(CodFactura),

CodMusica number(4), CONSTRAINTS FK_FacturaMusica_CodMusica
FOREIGN KEY(CodMusica) REFERENCES Musica(CodMusica),

CONSTRAINT PK_FacturaMusica PRIMARY KEY(CodFactura,CodMusica)

);

 

This is very useful for breaking n-to-n relations.

 

Any tip?

 

Thanks

 

Rui



Re: newbie join issue

2004-07-20 Thread Stefan Kuhn
I would expect that the speed problems are due to missing indices. Did you do 
proper indexing? If unsure, post your table structures and query.
Stefan

Am Tuesday 20 July 2004 17:45 schrieb Edward Ritter:
 I've got a task that's gonna require me to compare one table to another
 and remove the rows from the first table that are found in the second
 table that match email_address.

 I'm running 4.0.20a-nt-log. The first table has 10 columns and about 50K
 records, and the second table has 46 columns and about 16K records.

 I've attempted a number of selects that just sat and hung the computer.
 I know I must be doing something wrong. I figure I'll need to do a left
 join on it, but I've not had much experience with joins as such and I
 need a little assistance.

 Can anyone help me work this out? If you need more info, let me know.

 Ed

-- 
Stefan Kuhn M. A.
Cologne University BioInformatics Center (http://www.cubic.uni-koeln.de)
Zรผlpicher Str. 47, 50674 Cologne
Tel: +49(0)221-470-7428   Fax: +49 (0) 221-470-7786
My public PGP key is available at http://pgp.mit.edu


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Primary Keys

2004-07-20 Thread SGreen
Your example has 1 Primary Key and 2 Foreign Keys. 

Please post a sample data structure and state (not in SQL) what situation 
you want to achieve.  If you need more constraints on the table to prevent 
creating duplicates you can create additional UNIQUE Keys but, by 
definition, any table should have only one Primary Key.

Yours,

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Rui Monteiro [EMAIL PROTECTED] wrote on 07/20/2004 11:54:00 AM:

 Hello,
 
 
 
 I was wondering why I can't, or how I can, put 2 primary keys on a table?
 
 
 
 Here's an example in Oracle syntax:
 
 
 
 CREATE TABLE FacturaMusica(
 
 CodFactura number(4), CONSTRAINTS 
FK_FacturaMusica_CodFactura
 FOREIGN KEY(CodFactura) REFERENCES Factura(CodFactura),
 
 CodMusica number(4), CONSTRAINTS FK_FacturaMusica_CodMusica
 FOREIGN KEY(CodMusica) REFERENCES Musica(CodMusica),
 
 CONSTRAINT PK_FacturaMusica PRIMARY 
KEY(CodFactura,CodMusica)
 
 );
 
 
 
 This is very useful for breaking n-to-n relations.
 
 
 
 Any tip?
 
 
 
 Thanks
 
 
 
 Rui
 


Re: newbie join issue

2004-07-20 Thread Edward Ritter
Stefan:
  I added an index column to each after I imported. Here's a listing of 
the two tables.

la_entire
++-+--+-+-++
| Field  | Type| Null | Key | Default | Extra  |
++-+--+-+-++
| id1| int(3)  |  | | 0   ||
| id2| varchar(6)  |  | | ||
| first_name | varchar(30) |  | | ||
| last_name  | varchar(30) |  | | ||
| street_address | varchar(50) |  | | ||
| city   | varchar(30) |  | | ||
| state  | char(2) |  | | ||
| zip| varchar(9)  |  | | ||
| email_address  | varchar(50) |  | | ||
| idx| int(7)  |  | PRI | NULL| auto_increment |
++-+--+-+-++
10 rows in set (0.00 sec)
++-+--+-+-++
| Field  | Type| Null | Key | Default | Extra  |
++-+--+-+-++
| id | int(5)  |  | | 0   ||
| county | int(5)  |  | | 0   ||
| precinct   | int(5)  |  | | 0   ||
| last_name  | varchar(30) |  | | ||
| first_name | varchar(30) |  | | ||
| src_address| varchar(30) |  | | ||
| src_city   | varchar(30) |  | | ||
| src_state  | varchar(20) |  | | ||
| src_zip| int(5)  |  | | 0   ||
| email_address  | varchar(30) |  | | ||
| new_city   | varchar(30) |  | | ||
| new_state  | varchar(20) |  | | ||
| new_zip| int(5)  |  | | 0   ||
| new_zip4   | int(4)  |  | | 0   ||
| new_address| varchar(30) |  | | ||
| dma_flag   | varchar(4)  |  | | ||
| deceased   | varchar(4)  |  | | ||
| phone  | int(12) |  | | 0   ||
| time_zone  | varchar(4)  |  | | ||
| phone_sol  | varchar(4)  |  | | ||
| cluster| varchar(4)  |  | | ||
| age| varchar(4)  |  | | ||
| income | varchar(4)  |  | | ||
| pres_child | varchar(4)  |  | | ||
| own_rent   | varchar(4)  |  | | ||
| length_of_res  | varchar(4)  |  | | ||
| buyer  | varchar(4)  |  | | ||
| responder  | varchar(4)  |  | | ||
| gender | varchar(4)  |  | | ||
| occupation | varchar(4)  |  | | ||
| education  | varchar(4)  |  | | ||
| donor_prospect | varchar(4)  |  | | ||
| scr1ast1   | varchar(4)  |  | | ||
| scr1bst1   | varchar(4)  |  | | ||
| scr2ast1   | varchar(4)  |  | | ||
| scr2bst1   | varchar(4)  |  | | ||
| decile1| varchar(4)  |  | | ||
| decile2| varchar(4)  |  | | ||
| decile3| varchar(4)  |  | | ||
| decile4| varchar(4)  |  | | ||
| scr1ast2   | varchar(4)  |  | | ||
| scr1bst2   | varchar(4)  |  | | ||
| decile5| varchar(4)  |  | | ||
| decile6| varchar(4)  |  | | ||
| dob| varchar(12) |  | | ||
| party  | varchar(4)  |  | | ||
| idx| int(7)  |  | PRI | NULL| auto_increment |
++-+--+-+-++
47 rows in set (0.00 sec)
My latest attempt at a query is this:
select la_entire.* from la_entire left join la_final on 

Re: newbie join issue

2004-07-20 Thread Garth Webb
What is the 'idx' for when you already have an 'id' column?  Also, you
need an index on the column that you are joining on;  having a single
indexed column on a table doesn't automatically improve all queries
against that table.  Put an index on the 'email_address' fields of both
tables.  You'll need:

ALTER TABLE la_entire
ADD INDEX idx_email_address (email_address);

ALTER TABLE la_final
ADD INDEX idx_email_address (email_address);

See:
http://dev.mysql.com/doc/mysql/en/ALTER_TABLE.html
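With those indexes in place, the original task (removing la_entire rows whose email_address also appears in la_final) can be sketched with MySQL 4.0's multi-table DELETE. This assumes exact, case-consistent matches on email_address:

```sql
-- Preview which rows would be removed:
SELECT la_entire.*
FROM la_entire, la_final
WHERE la_entire.email_address = la_final.email_address;

-- Multi-table DELETE (available since MySQL 4.0):
DELETE la_entire
FROM la_entire, la_final
WHERE la_entire.email_address = la_final.email_address;
```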

On Tue, 2004-07-20 at 09:22, Edward Ritter wrote:
 Stefan:
I added an index column to each after I imported. Here's a listing of 
 the two tables.
 
 la_entire
 ++-+--+-+-++
 | Field  | Type| Null | Key | Default | Extra  |
 ++-+--+-+-++
 | id1| int(3)  |  | | 0   ||
 | id2| varchar(6)  |  | | ||
 | first_name | varchar(30) |  | | ||
 | last_name  | varchar(30) |  | | ||
 | street_address | varchar(50) |  | | ||
 | city   | varchar(30) |  | | ||
 | state  | char(2) |  | | ||
 | zip| varchar(9)  |  | | ||
 | email_address  | varchar(50) |  | | ||
 | idx| int(7)  |  | PRI | NULL| auto_increment |
 ++-+--+-+-++
 10 rows in set (0.00 sec)
 
 ++-+--+-+-++
 | Field  | Type| Null | Key | Default | Extra  |
 ++-+--+-+-++
 | id | int(5)  |  | | 0   ||
 | county | int(5)  |  | | 0   ||
 | precinct   | int(5)  |  | | 0   ||
 | last_name  | varchar(30) |  | | ||
 | first_name | varchar(30) |  | | ||
 | src_address| varchar(30) |  | | ||
 | src_city   | varchar(30) |  | | ||
 | src_state  | varchar(20) |  | | ||
 | src_zip| int(5)  |  | | 0   ||
 | email_address  | varchar(30) |  | | ||
 | new_city   | varchar(30) |  | | ||
 | new_state  | varchar(20) |  | | ||
 | new_zip| int(5)  |  | | 0   ||
 | new_zip4   | int(4)  |  | | 0   ||
 | new_address| varchar(30) |  | | ||
 | dma_flag   | varchar(4)  |  | | ||
 | deceased   | varchar(4)  |  | | ||
 | phone  | int(12) |  | | 0   ||
 | time_zone  | varchar(4)  |  | | ||
 | phone_sol  | varchar(4)  |  | | ||
 | cluster| varchar(4)  |  | | ||
 | age| varchar(4)  |  | | ||
 | income | varchar(4)  |  | | ||
 | pres_child | varchar(4)  |  | | ||
 | own_rent   | varchar(4)  |  | | ||
 | length_of_res  | varchar(4)  |  | | ||
 | buyer  | varchar(4)  |  | | ||
 | responder  | varchar(4)  |  | | ||
 | gender | varchar(4)  |  | | ||
 | occupation | varchar(4)  |  | | ||
 | education  | varchar(4)  |  | | ||
 | donor_prospect | varchar(4)  |  | | ||
 | scr1ast1   | varchar(4)  |  | | ||
 | scr1bst1   | varchar(4)  |  | | ||
 | scr2ast1   | varchar(4)  |  | | ||
 | scr2bst1   | varchar(4)  |  | | ||
 | decile1| varchar(4)  |  | | ||
 | decile2| varchar(4)  |  | | ||
 | decile3| varchar(4)  |  | | ||
 | decile4| varchar(4)  |  | | ||
 | scr1ast2   | varchar(4)  |  | | ||
 

Re: newbie join issue

2004-07-20 Thread Edward Ritter
Thanks, I'll take a look at that. The id isn't unique, so that's why I 
added the idx column.

Does my query look okay beyond that? I'll add the additional indexes and 
try again.

Ed
Garth Webb said the following on 7/20/2004 1:03 PM:
What is the 'idx' for when you already have an 'id' column?  Also, you
need an index on the column that you are joining on;  having a single
indexed column on a table doesn't automatically improve all queries
against that table.  Put an index on the 'email_address' fields of both
tables.  You'll need:
ALTER TABLE la_entire
ADD INDEX idx_email_address (email_address);
ALTER TABLE la_final
ADD INDEX idx_email_address (email_address);
See:
http://dev.mysql.com/doc/mysql/en/ALTER_TABLE.html
On Tue, 2004-07-20 at 09:22, Edward Ritter wrote:
Stefan:
  I added an index column to each after I imported. Here's a listing of 
the two tables.

la_entire
++-+--+-+-++
| Field  | Type| Null | Key | Default | Extra  |
++-+--+-+-++
| id1| int(3)  |  | | 0   ||
| id2| varchar(6)  |  | | ||
| first_name | varchar(30) |  | | ||
| last_name  | varchar(30) |  | | ||
| street_address | varchar(50) |  | | ||
| city   | varchar(30) |  | | ||
| state  | char(2) |  | | ||
| zip| varchar(9)  |  | | ||
| email_address  | varchar(50) |  | | ||
| idx| int(7)  |  | PRI | NULL| auto_increment |
++-+--+-+-++
10 rows in set (0.00 sec)
++-+--+-+-++
| Field  | Type| Null | Key | Default | Extra  |
++-+--+-+-++
| id | int(5)  |  | | 0   ||
| county | int(5)  |  | | 0   ||
| precinct   | int(5)  |  | | 0   ||
| last_name  | varchar(30) |  | | ||
| first_name | varchar(30) |  | | ||
| src_address| varchar(30) |  | | ||
| src_city   | varchar(30) |  | | ||
| src_state  | varchar(20) |  | | ||
| src_zip| int(5)  |  | | 0   ||
| email_address  | varchar(30) |  | | ||
| new_city   | varchar(30) |  | | ||
| new_state  | varchar(20) |  | | ||
| new_zip| int(5)  |  | | 0   ||
| new_zip4   | int(4)  |  | | 0   ||
| new_address| varchar(30) |  | | ||
| dma_flag   | varchar(4)  |  | | ||
| deceased   | varchar(4)  |  | | ||
| phone  | int(12) |  | | 0   ||
| time_zone  | varchar(4)  |  | | ||
| phone_sol  | varchar(4)  |  | | ||
| cluster| varchar(4)  |  | | ||
| age| varchar(4)  |  | | ||
| income | varchar(4)  |  | | ||
| pres_child | varchar(4)  |  | | ||
| own_rent   | varchar(4)  |  | | ||
| length_of_res  | varchar(4)  |  | | ||
| buyer  | varchar(4)  |  | | ||
| responder  | varchar(4)  |  | | ||
| gender | varchar(4)  |  | | ||
| occupation | varchar(4)  |  | | ||
| education  | varchar(4)  |  | | ||
| donor_prospect | varchar(4)  |  | | ||
| scr1ast1   | varchar(4)  |  | | ||
| scr1bst1   | varchar(4)  |  | | ||
| scr2ast1   | varchar(4)  |  | | ||
| scr2bst1   | varchar(4)  |  | | ||
| decile1| varchar(4)  |  | | ||
| decile2| varchar(4)  |  | | ||
| decile3| varchar(4)  |  | | | 

Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread David Griffiths
We just put a new dual-Opteron server into our production environment. 
We ordered a Megaraid SCSI card and five 10k drives, and a 3Ware 
Escalade SATA card with six 7200 RPM drives (Maxtor) to see which ones 
were best.

Our network guy did a bunch of benchmarking on the drives and found that 
SCSI-RAID5 was a bit faster than SATA-RAID0+1.

The SATA was significantly cheaper (the 3Ware card was the same price as 
the Megaraid card, however). You might be able to tie a 10K SCSI rig if 
you went with the Western Digital Raptor drives.

We ended up putting the SATA drives in production - some bug in the SCSI 
driver kept crashing MySQL on index-creation, etc.

High Performance MySQL mentions that SCSI 15K drives are worth the extra 
money.

Fast hard drives are important, but so is lots of RAM (which is where 
the Opteron shines). In fact, all the benchmarks I've seen show that the 
Opteron/Athlon architecture beats Intel processors by a 30-odd percent 
margin if memory serves (note that for some reason, most benchmarks I've 
seen were on 3.23, which is outdated and not overly useful).

One of our websites serves up 2 million distinct pages per day; the 
original coders of the site did something dumb and opened a new connection 
to the database for most of those pages (probably about 1.8 million). 
Even with that additional load, our Opteron server has an average CPU 
load of about 10%.

David
Jan Kirchhoff wrote:
Egor Egorov wrote:
Money is not really an issue but of course we don't want to waste it 
for scsi-hardware if we can reach almost the same speed with 
hardware sata-raids.
  

'Almost' is a key word. Some SCSI disks work at 15k RPM, which will give
you a HUGE MySQL performance gain compared to 10k disks.
AFAIR, there are no 15k RPM SATA disks yet.  

But shouldn't a sata-based RAID10 with 8 discs do the job as well? Writes 
would be spread on 4 discs...
Has anybody experience with those external SCSI-to-SATA RAIDs?
A SCSI-solution would cost twice as much, but would it really speed 
things up compared to a massive use of parallel (raid0) sata-discs?
I know disc i/o is the bottleneck in our case, of course we want the 
fastest disc/raid-system we can possibly get for our money.
Is our thinking too simple or shouldn't it be possible to reach the 
speed of fast scsi-discs by simply taking 2-3 fast sata-discs in a 
hardware raid0?
Our goal is a raid10, so reading should be even faster.

We'd like to stay with x86 because all our hardware is intel/amd 
and all our servers are running debian-linux. Can we expect better 
performance or problems using kernel 2.6.x?
  

You can expect better performance on kernel 2.6.x of course, 
especially on
multiple requests.  

Has anybody experiences with RAM-usage and cpu-architecture (please 
have a look at my earlier post)?

thanks
Jan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread Marc Slemko
On Mon, 19 Jul 2004 18:13:36 +0200, Jan Kirchhoff [EMAIL PROTECTED] wrote:
 Hi,
 
 We are currently using a 4.0.16-replication-setup (debian-linux, kernel
 2.4.21, xfs) of two 2.4ghz Intel-Pentium4 systems with  3gig RAM each
 and SCSI-Hardware-Raid, connected via gigabit-ethernet. We are reaching
 the limit of those systems and are going to buy new hardware as well as
 upgrade to mysql 4.1.x. We will start testing our applications on 4.1.3
 within the next few weeks but our main problem is that we are not quite
 sure what hardware to buy...
 
 We are planning to buy something like a dual-xeon system with 10-16gb of
 RAM and hardware raid10 with 8 sata-disks and as much cache as possible.
 Will mysql be able to use the ram efficiently or are we hitting limits?
 AMD or Intel? 32bit or 64bit?

Whatever you do, get a 64 bit system.  Opteron recommended, if you
really prefer Intel and can get your hands on one of their 64-bit
Xeons that is acceptable, although it may take a little longer for
Linux to catch up.

Even if the software isn't there yet (it is, it may just be a bit of a
hassle to all get working), in the worst case you'll have to run it in
32-bit mode until you can figure that out.

You can't use more than 2 gig (close to 4 gig if you hack things up
right) of innodb cache on a 32-bit system. The rest of the memory will
be used by the OS (less efficiently than on a 64-bit system), but that
may or may not be as efficient as innodb managing it. That depends a
lot on your application's data access patterns.
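In practice that ceiling shows up in the buffer-pool setting; a my.cnf fragment, with values that are illustrative only:

```ini
# my.cnf fragment -- values are illustrative, tune for your workload
[mysqld]
# On a 32-bit build, keep this safely under the ~2 GB process limit;
# a 64-bit build can raise it to use most of the 10-16 GB discussed here.
innodb_buffer_pool_size = 1800M
```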

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: newbie join issue

2004-07-20 Thread Edward Ritter
Thanks, adding the indexes worked beautifully. I'll go knock my head on 
the desk now. Thanks for your time :)

Ed
Edward Ritter said the following on 7/20/2004 1:08 PM:
Thanks, I'll take a look at that. The id isn't unique, so that's why I 
added the idx column.

Does my query look okay beyond that? I'll add the additional indexes and 
try again.

Ed
Garth Webb said the following on 7/20/2004 1:03 PM:
What is the 'idx' for when you already have an 'id' column?  Also, you
need an index on the column that you are joining on;  having a single
indexed column on a table doesn't automatically improve all queries
against that table.  Put an index on the 'email_address' fields of both
tables.  You'll need:
ALTER TABLE la_entire
ADD INDEX idx_email_address (email_address);
ALTER TABLE la_final
ADD INDEX idx_email_address (email_address);
See:
http://dev.mysql.com/doc/mysql/en/ALTER_TABLE.html
On Tue, 2004-07-20 at 09:22, Edward Ritter wrote:
Stefan:
  I added an index column to each after I imported. Here's a listing 
of the two tables.

la_entire
++-+--+-+-++
| Field  | Type| Null | Key | Default | Extra  |
++-+--+-+-++
| id1| int(3)  |  | | 0   ||
| id2| varchar(6)  |  | | ||
| first_name | varchar(30) |  | | ||
| last_name  | varchar(30) |  | | ||
| street_address | varchar(50) |  | | ||
| city   | varchar(30) |  | | ||
| state  | char(2) |  | | ||
| zip| varchar(9)  |  | | ||
| email_address  | varchar(50) |  | | ||
| idx| int(7)  |  | PRI | NULL| auto_increment |
++-+--+-+-++
10 rows in set (0.00 sec)
++-+--+-+-++
| Field  | Type| Null | Key | Default | Extra  |
++-+--+-+-++
| id | int(5)  |  | | 0   ||
| county | int(5)  |  | | 0   ||
| precinct   | int(5)  |  | | 0   ||
| last_name  | varchar(30) |  | | ||
| first_name | varchar(30) |  | | ||
| src_address| varchar(30) |  | | ||
| src_city   | varchar(30) |  | | ||
| src_state  | varchar(20) |  | | ||
| src_zip| int(5)  |  | | 0   ||
| email_address  | varchar(30) |  | | ||
| new_city   | varchar(30) |  | | ||
| new_state  | varchar(20) |  | | ||
| new_zip| int(5)  |  | | 0   ||
| new_zip4   | int(4)  |  | | 0   ||
| new_address| varchar(30) |  | | ||
| dma_flag   | varchar(4)  |  | | ||
| deceased   | varchar(4)  |  | | ||
| phone  | int(12) |  | | 0   ||
| time_zone  | varchar(4)  |  | | ||
| phone_sol  | varchar(4)  |  | | ||
| cluster| varchar(4)  |  | | ||
| age| varchar(4)  |  | | ||
| income | varchar(4)  |  | | ||
| pres_child | varchar(4)  |  | | ||
| own_rent   | varchar(4)  |  | | ||
| length_of_res  | varchar(4)  |  | | ||
| buyer  | varchar(4)  |  | | ||
| responder  | varchar(4)  |  | | ||
| gender | varchar(4)  |  | | ||
| occupation | varchar(4)  |  | | ||
| education  | varchar(4)  |  | | ||
| donor_prospect | varchar(4)  |  | | ||
| scr1ast1   | varchar(4)  |  | | ||
| scr1bst1   | varchar(4)  |  | | ||
| scr2ast1   | varchar(4)  |  | | ||
| scr2bst1   | varchar(4)  |  | | ||
| decile1| varchar(4)  

RE: Primary Keys

2004-07-20 Thread SGreen
I understand your question. I am so sorry to be so slow today :-)

You want to know how to create a PRIMARY KEY that is composed of more than 
one column.

Most of the time when we declare a PRIMARY KEY on a table, we do  it by 
putting the keywords PRIMARY KEY at the end of the column definition, 
like this:

CREATE TABLE example1(
ID int not null auto_increment primary key,
field2 char(5) ,
... more fields ...
)

But if you need more than one column to define the PRIMARY KEY for a table 
you CANNOT say:

CREATE TABLE example2(
id_table1 int not null primary key,
id_table2 int not null primary key
)

because, in MySQL that is a syntax error. What you need is:

CREATE TABLE example3 (
ID int auto_increment,
id_table1 int not null,
id_table2 int not null,
PRIMARY KEY(id_table1, id_table2)
)

You may want to read: 
http://dev.mysql.com/doc/mysql/pt/CREATE_TABLE.html#IDX1582
for a better explanation.
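For completeness, the original Oracle example translated into MySQL syntax of that era might look like this; a sketch that assumes the Factura and Musica tables already exist as InnoDB with matching column types:

```sql
CREATE TABLE FacturaMusica (
    CodFactura INT NOT NULL,
    CodMusica  INT NOT NULL,
    -- one composite primary key over both columns:
    PRIMARY KEY (CodFactura, CodMusica),
    FOREIGN KEY (CodFactura) REFERENCES Factura (CodFactura),
    FOREIGN KEY (CodMusica)  REFERENCES Musica (CodMusica)
) TYPE=InnoDB;
```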

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Rui Monteiro [EMAIL PROTECTED] wrote on 07/20/2004 01:02:36 PM:

 Hello,
 
 The example I gave you has 2 foreign keys that are primary keys on that 
table.
 
 Here's an example:
 
 Costumer
 Id (PK)
 Name
 
 Shopping list
 ID (PK)
 ID_costumer
 ID_product
 
 
 The relationship between these 2 tables is "infinite" to "infinite".
 The way to resolve this is by creating a table in the middle like this:
 
 COS/SHOP
 ID_Cust (PK)
 ID_Shop (PK)
 
 Costumer - 1 : N - COS/SHOP
 Shopping list - 1 : N - COS/SHOP
 
 Thanks
 
 
 
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, 20 July 2004 17:06
 To: Rui Monteiro
 Cc: [EMAIL PROTECTED]
 Subject: Re: Primary Keys
 
 
 Your example has 1 Primary Key and 2 Foreign Keys. 
 
 Please post a sample data structure and state (not in SQL) what 
 situation you want to achieve.  If you need more constraints on the 
 table to prevent creating duplicates you can create additional 
 UNIQUE Keys but, by definition, any table should have only one Primary 
Key. 
 
 Yours, 
 
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine 
 
 Rui Monteiro [EMAIL PROTECTED] wrote on 07/20/2004 11:54:00 
AM:
 
  Hello,
  
  
  
  I was wondering why I can't, or how I can, put 2 primary keys on a table?
  
  
  
  Here's an example in Oracle syntax:
  
  
  
  CREATE TABLE FacturaMusica(
  
  CodFactura number(4), CONSTRAINTS 
FK_FacturaMusica_CodFactura
  FOREIGN KEY(CodFactura) REFERENCES Factura(CodFactura),
  
  CodMusica number(4), CONSTRAINTS 
FK_FacturaMusica_CodMusica
  FOREIGN KEY(CodMusica) REFERENCES Musica(CodMusica),
  
  CONSTRAINT PK_FacturaMusica PRIMARY 
KEY(CodFactura,CodMusica)
  
  );
  
  
  
  This is very useful for breaking n-to-n relations.
  
  
  
  Any tip?
  
  
  
  Thanks
  
  
  
  Rui
  

Replication Overhead and Benchmarks

2004-07-20 Thread Nathan
Hi,

Does anyone know of any written stats on how much CPU/disk-IO overhead
replication has for a single master and a single slave?
I am looking for any detailed stats on the performance issues associated
with replication.


thanks
-Nathan


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Nested query with GROUP BY ... HAVING COUNT(*) ... hangs 4.1.3 beta

2004-07-20 Thread Leo Siefert
Thanks for your response.

 [EMAIL PROTECTED] 7/19/2004 11:47:39 AM 
It looks like your IN statement is forcing your inner SELECT to execute
once PER ROW of your main table. It is asking the engine to make sure
that _each and every_ id value in main meets the condition in the inner
select. So, for each and every value in the table main, it has to
re-compute the inner query and scan the results for matches.

Not sure why this would happen.  The nested query is not correlated to
the outer query, so I would expect it to be executed only once.  I have
tried the same query with even larger file sizes on other data managers
and not had this problem. (In fact, I copied the query from an existing
FoxPro program.)

Also, in my production app, the actual queries being run are much more
complex, including multiple nested queries, and only with the having
clause is there ever a problem.

I would change it to a JOIN against an anonymous view and test again:

SELECT m.*
FROM main m
INNER JOIN (SELECT main_ID
FROM receipt
GROUP BY main_ID
HAVING COUNT(1) > 5) as r
ON m.id = r.main_ID

This query actually does run quickly.  Thanks - I will try to work the
syntax into my query generator.

- or to break it into two tables for some real speed -

CREATE TEMPORARY TABLE tmpR
SELECT main_ID
FROM receipt
GROUP BY main_ID
HAVING COUNT(1) > 5;

alter table tmpR add key(main_Id);

SELECT m.*
FROM main m
INNER JOIN tmpR r
on m.ID = r.main_ID;

DROP TABLE tmpR;

This is actually the second scenario I had tried, as noted in my
original post, and it does yield better results than the nested query,
but still takes an incredibly long time to run.

Either method should avoid the re-execution of the subselect for every
row 
in your primary table. Adding the index to the temporary table will
make 
the last select really fly.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Thanks for the info, and for the query syntax to work around the
problem. I still think this is a bug in processing the nested query,
and if it is running the subquery for each line in the master table, I
think that is incorrect.

 - Leo Siefert







Leo Siefert [EMAIL PROTECTED] wrote on 07/19/2004
11:22:39 
AM:

 OS: Win2k Server
 MySQL: 4.1.1 alpha / 4.1.3 beta
Table type: INNO DB
 
 In my production environment, running the query:
 
 select * from main where id in
 (select main_id from receipt group by main_id having COUNT(*) > 5)
 
 will hang the server - sometimes for over a day, thugh it seems it
will
 eventually complete working on it if given enough time. Currently
main
 contains ~200,000 records and receipt contains ~16,000. Main records
 with any receipts have an average of ~10 receipts, but most have
none. 
 
 Created a smaller test database:
 
 master
id   int(6)  primary  autoincrement
name  varchar(25)  (filled with random 10 char strings)
 
 detail
id   int(6)  primary  autoincrement
 master_id   int(6)  index  (filled with random ints <=
 max(master.id))
detail  varchar(25)  (filled with random 10 char strings)
 
 temp
id int(6) index
 
 Fill master with 1,000 records, detail with 10,000.
 Clone and fill master with 10,000 records, detail with 100,000.
 
 Query:
 
 select * from master where master.id in
 (select master_id from detail group by master_id having COUNT(*) > 2)
 
 (small) returns 76 rows in 13 seconds.
 (large) returns 496 rows in 566 seconds. (COUNT(*) > 15)
 
 Tried a two part query, sending the intermediate results to a
temporary
 table:
 
 create temporary table t select master_id from detail group by
 master_id having COUNT(*) > 2;
 select * from master where master.id in (select master_id from t);
 drop table t;
 
 (small) returns 76 rows in 2.8 seconds.
 (large) returns 496 rows in 17 seconds.
 
 Running the intermediate results into a permanent table:
 
 truncate table temp;
 insert into temp select master_id from detail group by master_id
having
COUNT(*) > 2;
 select * from master where master.id in (select id from temp);
 
 (small) returns 76 rows in 0.16 seconds.
 (large) returns 496 rows in 0.17 seconds.
 
 Have tried playing around with some of the system variables:
 query_cache_size, innodb_buffer_pool_size with no real effect.
 
 In our production environment (record size is much larger, similar
 number of records to the large test set), both the nested query and
the
 two-part query using a temporary table hang for indeterminate (> 6
hrs)
 amounts of time, leaving the use of a permanent table as the only
 option. Of course, the only real way to manage this is to create a
 dedicated scratch table for each user of the system, a somewhat
onerous
 workaround.
 
 Anyone have an idea on a solution to this? Is there something in
 setting up for INNO DB that I am missing, or should I file this as a
 bug?
 
 Thanks.
 
  - Leo
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql 
 To unsubscribe:   

Question about column name case sensitive

2004-07-20 Thread Ying Lu
Hello,
I have a question about column name case sensitive. Currently, we use 
MySQL 3.23.32 running on Linux.

I am trying to use JDBC to get email from table T1
T1(
   id    char(3),
   name  varchar(12),
   Email varchar(16)
)
Now if I say,
   try{
   ...
      String email = rs.getString("email");  // here I got an
      // Exception saying "Column name cannot be found"
   ...
   }catch(Exception e){
      System.err.println(e);
   }
If I change the exception line to
   rs.getString("Email");
everything is all right.
According to the MySQL documentation, column names should not be case
sensitive on any OS, right? So how come it works when I change the
column name to the same format as saved in table T1, but otherwise I
get an exception?

Thanks a lot!
Emi




Re: empty_blob() equivalent

2004-07-20 Thread Dan Nelson
In the last episode (Jul 20), Jeyabalan Murugesan Sankarasubramanian said:
 Actually I need to add a wavefile in byte[] format to the table with
 column name blob_col of type blob.  In Oracle, empty_blob() is
 inserted into the table. While retrieving, OracleResultSet supports
 getBLOB(), which returns oracle.sql.BLOB. From this I am able to
 insert the byte[] with the following code.
 
 oracle.sql.BLOB myblob = ((OracleResultSet)rs).getBLOB("blob_col");
 OutputStream os = myblob.getBinaryOutputStream();
 os.write(byteArray);
 
 This works in Oracle, which I am migrating to MySQL. I need an
 equivalent so that I can insert byteArray into column blob_col.

MySQL doesn't have any special blob functions.  They are treated as
very large string fields, so you should be able to do a plain UPDATE or
INSERT.  If you're using a language that supports bind variables or
placeholders, something similar to this should work:

query("UPDATE mytable SET blob_col=%s WHERE id=%d", byteArray, rowid)
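In plain SQL, binary data can also be written as a hex literal; a minimal sketch reusing the table and column names from the pseudocode above (the hex bytes are just the ASCII for "RIFF", the start of a WAV header):

```sql
-- Create the row first; MySQL has no empty_blob(), an empty string works:
INSERT INTO mytable (id, blob_col) VALUES (1, '');

-- Fill the blob later using a hex literal (0x... syntax):
UPDATE mytable SET blob_col = 0x52494646 WHERE id = 1;
```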

-- 
Dan Nelson
[EMAIL PROTECTED]




remote connection php code

2004-07-20 Thread Business A2Z
Hi All

Could someone give me a clue or a snippet of PHP code I can use to test
a remote connection to my MySQL DB for my website please?

Thank you
Andrew





Removing Entries From a MySQL Table

2004-07-20 Thread Michael Mason
I've been looking through the manual and searching the
web for the command and syntax used to do the above.

I basically just want to clear a table I use for logon
entries. Can anyone help with this please?


Michael Mason
Business Support Services
Arras People

Tel: 01706 342310
Mobile: 07793 782287
Fax: 01706 642754


Member of the Recruitment & Employment Confederation
(00052055)

The views expressed in this mail are entirely those of the
sender, and do not necessarily represent the views or position of Arras
Services Ltd. The information contained in this communication is confidential
and may be legally privileged. It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to receive
it. If you are not the intended recipient you are hereby notified that any
disclosure, copying, distribution or taking any action in relation to the
contents of this information is strictly prohibited and may be unlawful.
Neither the sender nor the represented institution is liable for the correct
and complete transmission of the contents of this e-mail, or for its timely
receipt.


can not start mysql daemon

2004-07-20 Thread pohlhaut
Description:
I can not start the mysqld daemon. I had it running for a long time, but
then had to reboot. Then I could no longer connect. 
How-To-Repeat:
followed the directions on http://wiki.amazon.com/?MySQLInstallation, but
to no avail. When I run the first command there: sudo adduser -d
/workplace2/mysql -c "MySQL account" mysql
I am informed that this user already exists. This makes sense, but doing the
daemon start: sudo -u mysql ./mysqld_safe --user=mysql 
--datadir=/workplace2/mysql/data 
fails in the correct directory because there is no executable called 
mysqld_safe at all.

I tried setting it up again under a new user, mysql2, but had no luck. I also
tried starting it
via the output that was generated when I did a mysql_install_db. It said that 
it could
not change directories:
[EMAIL 
PROTECTED]/opt/third-party/depot/Linux-2.4c2.2-i686/mysql-3.23.55/libexec/mysqld: 
Can't change dir to '/opt/disco/third-party/Linux-2.4c2.2-i686/mysql-3.23.55/var/' 
(Errcode: 2)
Fix:


Submitter-Id:  pohlhaut
Originator:Paul Ohlhaut
Organization:  Seller Central
 
MySQL support: [none | licence | email support | extended email support ]
Synopsis:  can not start mysql daemon
Severity:  critical
Priority:  
Category:  mysql
Class: 
Release:   mysql-3.23.55 (Source distribution)

Environment:
Linux Red Hat 7.2   
System: Linux pohlhaut.desktop.amazon.com 2.4.21-2.3a #1 Fri Dec 5 04:53:08 PST 2003 
i686 unknown
Architecture: i686

Some paths:  /opt/third-party/bin/perl /opt/third-party/bin/make /usr/bin/gmake 
/opt/third-party/bin/gcc
GCC: Reading specs from 
/opt/third-party/depot/Linux-2.4c2.2-i686/gcc-2.95.3/lib/gcc-lib/i686-pc-linux-gnu/2.95.3/specs
gcc version 2.95.3 20010315 (release)
Compilation info: CC='/opt/third-party/depot/Linux-2.4c2.2-i686/gcc-2.95.3/bin/gcc'  
CFLAGS=' -mcpu=pentiumpro -D_FILE_OFFSET_BITS=64'  
CXX='/opt/third-party/depot/Linux-2.4c2.2-i686/gcc-2.95.3/bin/g++'  CXXFLAGS=''  
LDFLAGS=''
LIBC: 
lrwxrwxrwx    1 root     root           13 Sep 30  2003 /lib/libc.so.6 -> libc-2.2.4.so
-rwxr-xr-x    1 root     root      1283964 Dec  8  2001 /lib/libc-2.2.4.so
-rw-r--r--    1 root     root     27314296 Dec  8  2001 /usr/lib/libc.a
-rw-r--r--    1 root     root          178 Dec  8  2001 /usr/lib/libc.so
Configure command: ./configure 
'--prefix=/opt/third-party/depot/Linux-2.4c2.2-i686/mysql-3.23.55' 
'CC=/opt/third-party/depot/Linux-2.4c2.2-i686/gcc-2.95.3/bin/gcc' 'CFLAGS= 
-mcpu=pentiumpro -D_FILE_OFFSET_BITS=64' 
'CXX=/opt/third-party/depot/Linux-2.4c2.2-i686/gcc-2.95.3/bin/g++'





Re: Removing Entries From a MySQL Table

2004-07-20 Thread Andreas Ahlenstorf
Hello,

Michael Mason schrieb am Dienstag, 20. Juli 2004 um 22:46:

 I basically just want to clear a table I use for logon entries. Can anyone
 help with this please.?

http://dev.mysql.com/doc/mysql/en/TRUNCATE.html
http://dev.mysql.com/doc/mysql/en/DELETE.html
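For example, assuming the table is named logon_entries (a placeholder):

```sql
-- TRUNCATE empties the table by dropping and recreating it (fast):
TRUNCATE TABLE logon_entries;

-- DELETE removes rows one by one and can take a WHERE clause:
DELETE FROM logon_entries;
```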

HTH,
A.





Re: Cluster on Solaris 9/x86 works

2004-07-20 Thread Justin Swanhart
You do have ByteOrder: Big in the .ini file for the
sparc database servers, right?



--- Alexander Haubold [EMAIL PROTECTED] wrote:
 Hi everyone,
 
 Just to follow up on my previous post regarding
 Cluster on Sparc/Solaris 9: 
 On an x86 Solaris 9 machine that was set up similar
 to the Sparc one, MySQL 
 Cluster (4.1.4) does not produce a Bus Error. Ndbd
 starts up just fine.
 
 I hope that the source will be corrected for the
 Sparc platform, or a note 
 is published on what needs to be done differently to
 compile for Sparc 
 versus x86.
 
 - Alex
 
 
 -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
 Alexander Haubold
 Columbia University - SEAS 2001, 2003
 
 362 Riverside Dr. #5B3
 New York, NY 10025
 
 Phone: +212-853-8239
 Email: [EMAIL PROTECTED]
 WWW: http://www.aquaphoenix.com
 
 
 
 -- 
 MySQL Cluster Mailing List
 For list archives: http://lists.mysql.com/cluster
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 





RE: Removing Entries From a MySQL Table

2004-07-20 Thread Sommerfield, Thomas P
Hello;

DELETE FROM table_name WHERE 1 ;

This command will remove all rows from the table.

For more information, see: 
http://dev.mysql.com/doc/mysql/en/DELETE.html

Make sure you have a backup if you think you may need the data in the
future. 

-Tom 

-Original Message-
From: Michael Mason [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 20, 2004 2:47 PM
To: 'MySQL Mailing List'
Subject: Removing Entries From a MySQL Table


I've been looking through the manual and searching the web for the
command and syntax used to do the above.
 
I basically just want to clear a table I use for logon entries. Can
anyone help with this please.?
 
 
Michael Mason
Business Support Services
Arras People
 
Tel: 01706 342310
Mobile: 07793 782287
Fax: 01706 642754


Member of the Recruitment & Employment Confederation (00052055)
 




Re: Removing Entries From a MySQL Table

2004-07-20 Thread Andy Ford
Or simply,

Delete from table_name;

Andy



-Original Message-
From: Sommerfield, Thomas P [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]; MySQL Mailing List [EMAIL PROTECTED]
Sent: Tue Jul 20 22:16:14 2004
Subject: RE: Removing Entries From a MySQL Table

Hello;

DELETE FROM table_name WHERE 1 ;

This command will remove all rows from the table.

For more information, see: 
http://dev.mysql.com/doc/mysql/en/DELETE.html

Make sure you have a backup if you think you may need the data in the
future. 

-Tom 

-Original Message-
From: Michael Mason [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 20, 2004 2:47 PM
To: 'MySQL Mailing List'
Subject: Removing Entries From a MySQL Table


I've been looking through the manual and searching the web for the
command and syntax used to do the above.
 
I basically just want to clear a table I use for logon entries. Can
anyone help with this please.?
 
 
Michael Mason
Business Support Services
Arras People
 
Tel: 01706 342310
Mobile: 07793 782287
Fax: 01706 642754


Member of the Recruitment & Employment Confederation (00052055)
 



This e-mail is private and may be confidential and is for the intended recipient only. 
 If misdirected, please notify us by telephone and confirm that it has been deleted 
from your system and any copies destroyed.  If you are not the intended recipient you 
are strictly prohibited from using, printing, copying, distributing or disseminating 
this e-mail or any information contained in it.  We use reasonable endeavours to virus 
scan all e-mails leaving the Company but no warranty is given that this e-mail and any 
attachments are virus free.  You should undertake your own virus checking.  The right 
to monitor e-mail communications through our network is reserved by us. 




Re: remote connection php code

2004-07-20 Thread jeffrey_n_Dyke


 Hi All

 Could someone give me a clue or a snippet of PHP code I can test a romte
 connection to my MySQL DB for my website please?

Just take the one you're currently using and replace 'localhost' with
'someotherserver.com'
$cnx = mysql_connect('someotherserver.com','uname','pword');
mysql_select_db('dbname',$cnx);
mysql_query('Select * from table',$cnx);
etc

As long as the GRANTs are set up to accept connections from an external
client and firewall ports are open (if needed), it will not differ much
from local testing.
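A matching GRANT on the server might look like this (host, database, account, and password are all placeholders):

```sql
GRANT SELECT, INSERT, UPDATE, DELETE ON dbname.*
    TO 'uname'@'client.example.com' IDENTIFIED BY 'pword';
```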

HTH
Jeff

 Thank you
 Andrew










Removing a specific set of duplicates

2004-07-20 Thread Scott Haneda
Mysql 4

I have a slightly tougher than usual condition in which I want to remove
duplicates.  I am not sure how to deal with this and am looking for your
suggestions.

The table that stores my data is called addresses, it contains among others,
a field called email_address.  Within this table emails are grouped on a
unique id, so for example, select email_address from addresses where
group_id = '5' AND user_id = '2' would show me all the addresses that I want
to work on.

The case is that users will always be adding more addresses to this group,
this is for a mailing list manager I am working on.  The trouble is that I
suspect users will upload a batch of addresses, then a few weeks later, they
will upload a new set, but they will contain the old set as well.  This
would pile up the duplicates in short order, and I don't want multiple
emails sent to the same person over and over again.

My first option is when they upload new addresses, to select and test for
the existence of that address.  If it exists, do not add it, otherwise I
will add it.  The size of some of these lists are large, in the 10's of
thousands.  I suspect this will add too much overhead to the import time.  I
can not make the column unique as there is good reason to have the email
address in the column more than once, since they are groups of email
addresses.

My thought is to allow the import of all the addresses, allow all the dupes,
then take out the dupes, I suspect this will be faster than a select for
every email address I want to import.  I think this involves selecting
distinct() into a temp table, deleting the addresses from the main table,
then selecting into the old table from the temp table and then destroying
the temp table.  If anyone can suggest a tricky way to do this with perhaps
a group by clause to simply remove the dupes in one go, I would love to hear
it.

In regards to the temp table, is it up to me to maintain a unique temp table
name to not collide in the event 2 users were to hit the page at the same
time?


And now, the other rub
Another field in the addresses table I used to track bounced emails, lets
call it 'bounces', which I increment by 1 on every bounce.  Here is the
other issue, I can not simply remove the dupes without first determining
which dupe to remove.  Basically, I want to remove all dupes where the
bounce count is 0 (default), but if there are more than 2 dupes, I want to
keep the one with the highest bounce count.

Any idea how I should be approaching this?
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]Novato, CA U.S.A.






Re: Replication Overhead and Benchmarks

2004-07-20 Thread Jeremy Zawodny
On Tue, Jul 20, 2004 at 10:57:02AM -0700, Nathan wrote:
 Hi,
 
 Does anyone know of any written stats on how much overhead for CPU/
 Disk IO replication has for a single master and a single slave?  I
 am looking for any detailed stats on the proformance issues
 associated with replication.

Overhead on the master?  I've not measured it, but it's trivial.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/




Re: Removing a specific set of duplicates

2004-07-20 Thread Wesley Furgiuele
Scott:
Rather than make the email address column unique, since you want to 
allow multiple email address instances, you can make a unique index 
based on email_address+group. Wouldn't that help, or is that still too 
simple for your situation?

Regarding temporary tables, from the MySQL manual:
From MySQL 3.23 on, you can use the TEMPORARY keyword when creating a 
table. A TEMPORARY table is visible only to the current connection, and 
is dropped automatically when the connection is closed. This means that 
two different connections can use the same temporary table name without 
conflicting with each other or with an existing non-TEMPORARY table of 
the same name. (The existing table is hidden until the temporary table 
is dropped.) From MySQL 4.0.2 on, you must have the CREATE TEMPORARY 
TABLES privilege to be able to create temporary tables.

I don't understand well enough how the group by function works, or 
select distinct. Would altering the table order to be ordered by bounce 
count DESC be enough to mean that when you do select distinct records, 
the record that comes first is the record that is selected? I don't 
know if it works reliably like that or if the selection is more random.
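One way to sketch the "keep the highest bounce count" part with a temporary table (column names are guessed from the thread, and any columns not listed here would be lost, so this is only an outline):

```sql
-- Collapse duplicates within one group, keeping the highest bounce count
CREATE TEMPORARY TABLE deduped
SELECT email_address, group_id, user_id, MAX(bounces) AS bounces
FROM addresses
WHERE user_id = 2 AND group_id = 5
GROUP BY email_address;

-- Swap the deduplicated rows back in
DELETE FROM addresses WHERE user_id = 2 AND group_id = 5;
INSERT INTO addresses (email_address, group_id, user_id, bounces)
SELECT email_address, group_id, user_id, bounces FROM deduped;

DROP TABLE deduped;
```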

Wes
On Jul 20, 2004, at 6:37 PM, Scott Haneda wrote:
Mysql 4
I have a slightly tougher than usual condition in which I want to 
remove
duplicates.  I am not sure how to deal with this and am looking for 
your
suggestions.

The table that stores my data is called addresses, it contains among 
others,
a field called email_address.  Within this table emails are grouped on 
a
unique id, so for example, select email_address from addresses where
group_id = '5' AND user_id = '2' would show me all the addresses that 
I want
to work on.

The case is that users will always be adding more addresses to this 
group,
this is for a mailing list manager I am working on.  The trouble is 
that I
suspect users will upload a batch of addresses, then a few weeks 
later, they
will upload a new set, but they will contain the old set as well.  This
would pile up the duplicates in short order, and I don't want multiple
emails sent to the same person over and over again.

My first option is when they upload new addresses, to select and test 
for
the existence of that address.  If it exists, do not add it, otherwise 
I
will add it.  The size of some of these lists are large, in the 10's of
thousands.  I suspect this will add too much overhead to the import 
time.  I
can not make the column unique as there is good reason to have the 
email
address in the column more than once, since they are groups of email
addresses.

My thought is to allow the import of all the addresses, allow all the 
dupes,
then take out the dupes, I suspect this will be faster than a select 
for
every email address I want to import.  I think this involves selecting
distinct() into a temp table, deleting the addresses from the main 
table,
then selecting into the old table from the temp table and then 
destroying
the temp table.  If anyone can suggest a tricky way to do this with 
perhaps
a group by clause to simpy remove the dupes in one go, I would love to 
hear
it.

In regards to the temp table, is it up to me to maintain a unique temp 
table
name to not collide in the event 2 users were to hit the page at the 
same
time?

And now, the other rub
Another field in the addresses table I used to track bounced emails, 
lets
call it 'bounces', which I increment by 1 on every bounce.  Here is the
other issue, I can not simply remove the dupes without first 
determining
which dupe to remove.  Basically, I want to remove all dupes where the
bounce count is 0 (default), but if there are more than 2 dupes, I 
want to
keep the one with the highest bounce count.

Any idea how I should be approaching this?
--
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]Novato, CA U.S.A.





Re: Removing a specific set of duplicates

2004-07-20 Thread Scott Haneda
on 7/20/04 4:10 PM, Wesley Furgiuele at [EMAIL PROTECTED] wrote:

 Rather than make the email address column unique, since you want to allow
 multiple email address instances, you can make a unique index based on
 email_address+group. Wouldn't that help, or is that still too simple for your
 situation? 

I think you nailed it, I was not aware of this feature, any links that tell
me more?
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]Novato, CA U.S.A.






Re: Removing a specific set of duplicates

2004-07-20 Thread Wesley Furgiuele
Scott:
Sorry, should have included it...
http://dev.mysql.com/doc/mysql/en/CREATE_INDEX.html
The basic syntax you're looking to use is
ALTER TABLE tablename ADD UNIQUE `index_name` ( email_address, group )
Wes
On Jul 20, 2004, at 7:45 PM, Scott Haneda wrote:
on 7/20/04 4:10 PM, Wesley Furgiuele at [EMAIL PROTECTED] wrote:
Rather than make the email address column unique, since you want to 
allow
multiple email address instances, you can make a unique index based on
email_address+group. Wouldn't that help, or is that still too simple 
for your
situation?
I think you nailed it, I was not aware of this feature, any links that 
tell
me more?
--
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]Novato, CA U.S.A.







cannot install mysql on linux using RPM's

2004-07-20 Thread Levi Campbell
Hi, I'm trying to install MySQL on Debian Linux on an old computer we had lying
around (a Pentium 2 or 3), and I'm trying to install the RPM files so I can install
the software. My problem is with the Perl Debian packages: I can't get them
configured. Can anyone help?

Need help with a select. Regex?

2004-07-20 Thread jabbott

What I am trying to do is select the hostname out of a referring url.  Such as:

http://www.google.com/search?hl=en&ie=UTF-8&q=%22Third+Park+Point+Beach+Water+Contact+Advisory+Posted%3B+May+Indicate+Long-Term+Problem%22&btnG=Google+Search

All I really want to get is:
http://www.google.com/

So I have:

Select DISTINCT LEFT(referer, LOCATE('/',referer)-1) referer , count(*) as refCount
FROM NNtracking
WHERE referer != '' 
GROUP BY referer
Limit 10

but that only selects me:
http:/

I have tried a couple of REGEXP ^/ variations but either I am on the wrong track or I 
can't get the syntax right.

Advice?
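One regex-free approach that seems to fit: SUBSTRING_INDEX() returns everything up to the Nth occurrence of a delimiter, so the first three '/'-separated pieces of the URL are the scheme plus host (a sketch only, reusing the table and column from the query above):

```sql
SELECT CONCAT(SUBSTRING_INDEX(referer, '/', 3), '/') AS referer_host,
       COUNT(*) AS refCount
FROM NNtracking
WHERE referer != ''
GROUP BY referer_host
LIMIT 10;
```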

--ja
-- 





Re: Removing a specific set of duplicates

2004-07-20 Thread Scott Haneda
on 7/20/04 4:10 PM, Wesley Furgiuele at [EMAIL PROTECTED] wrote:

 Rather than make the email address column unique, since you want to
 allow multiple email address instances, you can make a unique index
 based on email_address+group. Wouldn't that help, or is that still too
 simple for your situation?

I don't fully understand here, hope you can help.  Is making a unique index
on 2 columns something I would run perhaps on a schedule to deal with this,
or is this something I set once and it is just maintained?  If it is
something that is on schedule, or triggered say after a bulk import by the
user, when I send in that ALTER how do I also tell mysql to keep the one
dupe email address with the highest bounce count?

Or, perhaps this is something I set once, then I would use INSERT IGNORE and
the dupes would not be allowed and would simply gracefully fail?  Can this
work in a transaction environment, where I would
START TRANSACTION;
 repeat with aEmail in uploaded file
  INSERT IGNORE into addresses etc etc etc
 end repeat
COMMIT;

I will also have one more case that needs this treatment as well, say there
are 2 groups of emails, lets call them family and work.  I will be
allowing the user to merge those into one group, something like:
UPDATE addresses SET group='family' WHERE group='work' AND user_id ='123',
would I still be able to get the duplicates out in this scenario as well?
(note: group is not a string, I just used it as one in this example)
Maybe this would be a better case to use a temp table, select both the
family and work into a temp table, then somehow remove only those that
have a bounce count of zero, or in the case all dupes bounce count are zero,
simply remove the newest entry.

Ugh, this is making my head spin :-)

-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]Novato, CA U.S.A.






Re: phpMyAdmin does not show mysql-db, MacOS

2004-07-20 Thread Peter Paul Sint
At 0:48 h +0200 2004.07.20, Peter Paul Sint wrote:
After installing mysql4.1 on Mac OS 10.2.8   
mysql-standard-4.1.3-beta-apple-darwin6.8-powerpc
php
and phpMyAdmin 2.5.7-pl1
 (following the tutorial http://www.macservers.org/feature-2.html )

Using
http://localhost/phpMyAdmin/index.php
  (or with my absolute URL instead localhost)
I get:

Welcome to phpMyAdmin 2.5.7-pl1 - Login
Language: English (in popup)
The configuration file now needs a secret passphrase (blowfish_secret).

but no database or login field is shown.
 (I did not touch anything with blowfish)

I should have done so:
After rereading the comments  config.inc.php  more carefully I found that 
mysql4.1.3beta needs a blowfish passphrase if one uses at least one cookie auth-type.
My tutorial did not know about this (yet). The addition of blowfish seems to be new
in 4.1 or later (it's a beta).

After inserting a blowfish passphrase in config.inc.php it works :-)

Some uncertainties with installing php on 10.2:
The installer Entropy-PHP-4.3.4-2.dmg has not enabled:

AddModule mod_php4.c
LoadModule php4_modulelibexec/httpd/libphp4.so

actually I found that the installer from Marc Linyage
 http://www.entropy.ch/software/macosx/php/
created an additional config file
/usr/local/php/httpd.conf.php   //Additional PHP apache directives
  which contains just the AddModule and LoadModule commands shown above.

Thanks to Wesley Furgiuele who wrote off list.

-- 
Peter Sint
[EMAIL PROTECTED]




Re: Removing a specific set of duplicates

2004-07-20 Thread Wesley Furgiuele
First off, the unique index is something you define for the table once. 
Being unique, you won't be allowed to add in another record with the 
same values as an record that already exists in the table.

And yes, once you set it up, INSERT IGNORE would allow your query to 
simply skip the insertion of any records that already exist in the 
table. Something else to look at would be the INSERT ... ON DUPLICATE 
KEY UPDATE syntax, depending on your version of MySQL (>= 4.1) 
http://dev.mysql.com/doc/mysql/en/INSERT.html
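With the unique index on (email_address, group) in place, a 4.1-style statement might look like this (column names assumed from the thread; the self-assignment makes the duplicate case a no-op, much like INSERT IGNORE):

```sql
INSERT INTO addresses (email_address, group_id, user_id, bounces)
VALUES ('someone@example.com', 5, 2, 0)
ON DUPLICATE KEY UPDATE bounces = bounces;
```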

I'm not sure yet what to make of your last situation, where you are 
merging addresses into one group. About the bounce count, presumably 
that is not necessarily the same value for each instance of an email 
address across different groups? Is the bounce count the only field 
that would differ between the two duplicate records?

Wes
On Jul 20, 2004, at 10:36 PM, Scott Haneda wrote:
on 7/20/04 4:10 PM, Wesley Furgiuele at [EMAIL PROTECTED] wrote:
Rather than make the email address column unique, since you want to
allow multiple email address instances, you can make a unique index
based on email_address+group. Wouldn't that help, or is that still too
simple for your situation?
I don't fully understand here, hope you can help.  Is making a unique 
index
on 2 columns something I would run perhaps on a schedule to deal with 
this,
or is this something I set once and it is just maintained?  If it is
something that is on schedule, or triggered say after a bulk import by 
the
user, when I send in that ALTER how do I also tell mysql to keep the 
one
dupe email address with the highest bounce count?

Or, perhaps this is something I set once, then I would use INSERT 
IGNORE and
the dupes would not be allowed and would simply gracefully fail?  Can 
this
work In a transaction environment, where I would
START TRANSACTION;
 repeat with aEmail in uploaded file
  INSERT IGNORE into addresses etc etc etc
 end repeat
COMMIT;

I will also have one more case that needs this treatment as well, say 
there
are 2 groups of emails, lets call them family and work.  I will be
allowing the user to merge those into one group, something like:
UPDATE addresses SET group='family' WHERE group='work' AND user_id 
='123',
would I still be able to get the duplicates out in this scenario as 
well?
(note: group is not a string, I just used it as one in this example)
Maybe this would be a better case to use a temp table, select both the
family and work into a temp table, then somehow remove only those 
that
have a bounce count of zero, or in the case all dupes bounce count are 
zero,
simply remove the newest entry.

Ugh, this is making my head spin :-)
--
-
Scott Haneda                 Tel: 415.898.2602
http://www.newgeo.com        Fax: 313.557.5052
[EMAIL PROTECTED]            Novato, CA U.S.A.
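The merge-then-dedupe scenario above could be sketched roughly as follows (hypothetical `addresses` schema with an `id` primary key; the multi-table DELETE form needs MySQL 4.0+, and the UPDATE assumes no unique index blocks the merge):

```sql
-- Fold the "work" group (group_id = 2) into "family" (group_id = 1):
UPDATE addresses SET group_id = 1 WHERE group_id = 2 AND user_id = 123;

-- Then drop duplicate emails within the merged group, keeping the row
-- with the highest bounce_count (lowest id wins ties):
DELETE a1 FROM addresses a1, addresses a2
WHERE a1.user_id = a2.user_id
  AND a1.group_id = a2.group_id
  AND a1.email    = a2.email
  AND (a1.bounce_count < a2.bounce_count
       OR (a1.bounce_count = a2.bounce_count AND a1.id > a2.id));
```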


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Removing a specific set of duplicates

2004-07-20 Thread Scott Haneda
on 7/20/04 9:44 PM, Wesley Furgiuele at [EMAIL PROTECTED] wrote:

 First off, the unique index is something you define for the table once.
 Being unique, you won't be allowed to add in another record with the
 same values as a record that already exists in the table.

I thought so, thanks.

 And yes, once you set it up, INSERT IGNORE would allow your query to
 simply skip the insertion of any records that already exist in the
 table. Something else to look at would be the INSERT ... ON DUPLICATE
 KEY UPDATE syntax, depending on your version of MySQL (>= 4.1)
 http://dev.mysql.com/doc/mysql/en/INSERT.html

Super, so the INSERT IGNORE is gonna work.  Curious why you pointed me to
the ON DUPLICATE KEY link.  Since I want to just gracefully exit from the
insert, I assume you were just pointing this out as an FYI?  This feature is
more or less if I wanted to make some update to a row when the duplicate was
hit?

 I'm not yet sure what to make of your last situation, where you are
 merging addresses into one group. About the bounce count, presumably
 that is not necessarily the same value for each instance of an email
 address across different groups? Is the bounce count the only field
 that would differ between the two duplicate records?

To be honest, I am not entirely sure yet; this will require me to ponder
some more about how this is going to work.

Thanks for all your help so far, this is a great solution to an otherwise
(to me) complicated issue :-)

-- 
-
Scott Haneda                 Tel: 415.898.2602
http://www.newgeo.com        Fax: 313.557.5052
[EMAIL PROTECTED]            Novato, CA U.S.A.






Re: Need help with a select. Regex?

2004-07-20 Thread Wesley Furgiuele
To get "http://www.google.com/" out of the URL, you can do this:
LEFT( referer, LENGTH( SUBSTRING_INDEX( referer, '/', 3 ) ) + 1 )
If you don't care about the trailing slash, you can use just the  
SUBSTRING_INDEX() portion:
SUBSTRING_INDEX( referer, '/', 3 )

Using the LENGTH() function just helps guarantee that you'll get the  
trailing slash if you want it. The MySQL manual doesn't specify what  
happens if the count value you feed SUBSTRING_INDEX() exceeds the count  
of the delimiter, but it looks kind of like it just returns back the  
whole string.
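Applied to the query in the quoted question (assuming the same `NNtracking` table and `referer` column), that would look like:

```sql
-- Everything before the third '/' is the scheme plus hostname:
SELECT SUBSTRING_INDEX(referer, '/', 3) AS refHost, COUNT(*) AS refCount
FROM NNtracking
WHERE referer != ''
GROUP BY refHost
ORDER BY refCount DESC
LIMIT 10;
-- e.g. 'http://www.google.com/search?q=...' groups under
-- 'http://www.google.com'
```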

Wes

On Jul 20, 2004, at 10:11 PM, [EMAIL PROTECTED] wrote:
What I am trying to do is select the hostname out of a referring URL.  
Such as:

http://www.google.com/search?hl=en&ie=UTF-8&q=%22Third+Park+Point+Beach+Water+Contact+Advisory+Posted%3B+May+Indicate+Long-Term+Problem%22&btnG=Google+Search

All I really want to get is:
http://www.google.com/
So I have:
Select DISTINCT LEFT(referer, LOCATE('/',referer)-1) referer ,  
count(*) as refCount
FROM NNtracking
WHERE referer != ''
GROUP BY referer
Limit 10

but that only selects me:
http:/
I have tried a couple of REGEXP ^/ variations but either I am on the  
wrong track or I can't get the syntax right.

Advice?
--ja





Re: Removing a specific set of duplicates

2004-07-20 Thread Wesley Furgiuele
Scott:
Yeah, exactly. The ON DUPLICATE KEY is useful if your table has 
something like a last modified or last accessed column that is 
relevant for what you're doing. It is just an easier way to accomplish 
INSERT IGNORE plus then doing an UPDATE on all the rows that got 
ignored, in case you need to document that there was an attempt to 
insert the data.
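As a sketch (the `last_seen` column and table names are hypothetical; requires MySQL >= 4.1):

```sql
-- Record that a duplicate insert was attempted, instead of
-- silently ignoring it:
INSERT INTO addresses (group_id, email, last_seen)
VALUES (1, 'someone@example.com', NOW())
ON DUPLICATE KEY UPDATE last_seen = NOW();
```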

Wes
On Jul 21, 2004, at 12:58 AM, Scott Haneda wrote:
on 7/20/04 9:44 PM, Wesley Furgiuele at [EMAIL PROTECTED] wrote:
First off, the unique index is something you define for the table 
once.
Being unique, you won't be allowed to add in another record with the
 same values as a record that already exists in the table.
I thought so, thanks.
And yes, once you set it up, INSERT IGNORE would allow your query to
simply skip the insertion of any records that already exist in the
table. Something else to look at would be the INSERT ... ON DUPLICATE
 KEY UPDATE syntax, depending on your version of MySQL (>= 4.1)
http://dev.mysql.com/doc/mysql/en/INSERT.html
Super, so the INSERT IGNORE is gonna work.  Curious why you pointed me 
to
the ON DUPLICATE KEY link.  Since I want to just gracefully exit from 
the
insert, I assume you were just pointing this out as an FYI?  This 
feature is
more or less if I wanted to make some update to a row when the 
duplicate was
hit?

I'm not yet sure what to make of your last situation, where you 
are
merging addresses into one group. About the bounce count, presumably
that is not necessarily the same value for each instance of an email
address across different groups? Is the bounce count the only field
that would differ between the two duplicate records?
To be honest, I am not entirely sure yet; this will require me to ponder
some more about how this is going to work.

Thanks for all your help so far, this is a great solution to an otherwise
(to me) complicated issue :-)

--
-
Scott Haneda                 Tel: 415.898.2602
http://www.newgeo.com        Fax: 313.557.5052
[EMAIL PROTECTED]            Novato, CA U.S.A.






Re: Removing a specific set of duplicates

2004-07-20 Thread Scott Haneda
on 7/20/04 10:06 PM, Wesley Furgiuele at [EMAIL PROTECTED] wrote:

 Scott:
 
 Yeah, exactly. The ON DUPLICATE KEY is useful if your table has
 something like a last modified or last accessed column that is
 relevant for what you're doing. It is just an easier way to accomplish
 INSERT IGNORE plus then doing an UPDATE on all the rows that got
 ignored, in case you need to document that there was an attempt to
 insert the data.

Thanks, this is indeed nice to know. If ever there was a good reason to
update MySQL, this is it; I can see this coming in handy a whole lot in
the future.
-- 
-
Scott Haneda                 Tel: 415.898.2602
http://www.newgeo.com        Fax: 313.557.5052
[EMAIL PROTECTED]            Novato, CA U.S.A.






Re: Need help with a select. Regex?

2004-07-20 Thread Wesley Furgiuele
Just a follow-up oops...
I misread the manual page when verifying the SUBSTRING_INDEX() syntax.  
It states that it returns everything before _count_ instances of the  
delimiter, so naturally if you feed it a value that exceeds the actual  
instances of the delimiter, you get back the whole string. Sorry for  
any confusion.

Wes
On Jul 21, 2004, at 1:01 AM, Wesley Furgiuele wrote:
To get "http://www.google.com/" out of the URL, you can do this:
LEFT( referer, LENGTH( SUBSTRING_INDEX( referer, '/', 3 ) ) + 1 )
If you don't care about the trailing slash, you can use just the  
SUBSTRING_INDEX() portion:
SUBSTRING_INDEX( referer, '/', 3 )

Using the LENGTH() function just helps guarantee that you'll get the  
trailing slash if you want it. The MySQL manual doesn't specify what  
happens if the count value you feed SUBSTRING_INDEX() exceeds the count  
of the delimiter, but it looks kind of like it just returns back the  
whole string.

Wes

On Jul 20, 2004, at 10:11 PM, [EMAIL PROTECTED] wrote:
What I am trying to do is select the hostname out of a referring URL.  
Such as:

http://www.google.com/search?hl=en&ie=UTF-8&q=%22Third+Park+Point+Beach+Water+Contact+Advisory+Posted%3B+May+Indicate+Long-Term+Problem%22&btnG=Google+Search

All I really want to get is:
http://www.google.com/
So I have:
Select DISTINCT LEFT(referer, LOCATE('/',referer)-1) referer ,  
count(*) as refCount
FROM NNtracking
WHERE referer != ''
GROUP BY referer
Limit 10

but that only selects me:
http:/
I have tried a couple of REGEXP ^/ variations but either I am on the  
wrong track or I can't get the syntax right.

Advice?
--ja






Re: Removing a specific set of duplicates

2004-07-20 Thread Michael Stassen
I'd suggest turning those around:
  ALTER TABLE tablename ADD UNIQUE `index_name` (group, email_address)
Why?  For purposes of keeping the combination of group and email unique, the 
order doesn't matter, but the leftmost part of the index can be used just as 
if it were a single column index.  From what you've described, I'm expecting 
you'll frequently run queries like

  SELECT email_address FROM yourtable WHERE group = 'somevalue';
An index on (group, email_address) would be used to determine which rows to 
select in this case, but an index on (email_address, group) would not.  In 
fact, this particular query could be resolved solely by looking at the 
(group, email_address) index.

In other words, you would not need a separate index on group this way, as 
the unique index on (group, email_address) would both guarantee uniqueness 
and speed up queries based on group or group-email_address combinations.
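A rough illustration of the leftmost-prefix point (column names are hypothetical; EXPLAIN shows which index, if any, a query would use):

```sql
ALTER TABLE addresses ADD UNIQUE uniq_group_email (group_id, email);

-- Both of these can use the (group_id, email) index:
SELECT email FROM addresses WHERE group_id = 1;
SELECT email FROM addresses WHERE group_id = 1 AND email = 'a@example.com';

-- With the index declared as (email, group_id) instead, the first
-- query above could not use it and would scan the table:
EXPLAIN SELECT email FROM addresses WHERE group_id = 1;
```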

Michael
Wesley Furgiuele wrote:
Scott:
Sorry, should have included it...
http://dev.mysql.com/doc/mysql/en/CREATE_INDEX.html
The basic syntax you're looking to use is
ALTER TABLE tablename ADD UNIQUE `index_name` ( email_address, group )
Wes
On Jul 20, 2004, at 7:45 PM, Scott Haneda wrote:
on 7/20/04 4:10 PM, Wesley Furgiuele at [EMAIL PROTECTED] wrote:
Rather than make the email address column unique, since you want to 
allow
multiple email address instances, you can make a unique index based on
email_address+group. Wouldn't that help, or is that still too simple 
for your
situation?

I think you nailed it, I was not aware of this feature, any links that 
tell
me more?
--
-
Scott Haneda                 Tel: 415.898.2602
