PHP/MySQL Tunneling

2004-07-09 Thread Karam Chand
Hello,

Recently a lot of MySQL clients have tried to overcome the
host-based privilege system of MySQL by using a PHP tunneling
method.

In this method they call a PHP file on the server, and the PHP
file executes a query and sends the data back in XML format. 

I am using the C API and I was just wondering if
somebody is working on such a tunnel, i.e. a PHP file
and its corresponding C/C++ code that will fill up
MYSQL_RES structures correctly so that I can use them
with the C API without any problem. Otherwise, I guess I
will have to write one myself.

Regards,
Karam




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Between Operator

2004-07-09 Thread mos
At 05:51 PM 7/9/2004, you wrote:
Thanks everyone for helping out. I took Michael's advice and made a 
new table called ranking with two columns. It definitely cleared some 
things up, but I am still having issues using the BETWEEN operator.  I 
just need to pull up everything BETWEEN 10 and 18 and it keeps adding 
additional rows. Suggestions?  What am I doing wrong?

Here is my query:
SELECT area, style, route, stars, date_climbed, ranking.* FROM routes, 
ranking WHERE ranking.id = ranking.rating BETWEEN ranking.id < '10' AND 
ranking.id = '18' AND routes.rating = ranking.rating AND area = 'Eldorado 
Canyon' AND style = 'Traditonal' GROUP BY route ORDER BY id DESC
Craig,
Did you read the MySQL manual? 
http://dev.mysql.com/doc/mysql/en/Comparison_Operators.html has examples 
that will show you how the Between operator works.

SELECT area, style, route, stars, date_climbed, ranking.* FROM routes, 
ranking WHERE ranking.id = ranking.rating BETWEEN ranking.id < '10' AND 
ranking.id = '18' AND routes.rating = ranking.rating AND area = 'Eldorado 
Canyon' AND style = 'Traditonal' GROUP BY route ORDER BY id DESC

Your Between clause is being evaluated incorrectly because the two operands 
end up being compared against gibberish. Here's what I think it is doing; I added 
parentheses to show what MySQL is interpreting it as:

SELECT area, style, route, stars, date_climbed, ranking.* FROM routes, 
ranking WHERE ranking.id = (ranking.rating BETWEEN (ranking.id < '10') AND 
(ranking.id = '18')) AND routes.rating = ranking.rating AND area = 
'Eldorado Canyon' AND style = 'Traditonal' GROUP BY route ORDER BY id DESC

Are you sure you need this:  ranking.id = ranking.rating  ???
Basically I think all you need is: WHERE ranking.rating BETWEEN '10' AND '18'

So it would look like this:
SELECT area, style, route, stars, date_climbed, ranking.* FROM routes, ranking
WHERE  ranking.rating BETWEEN  '10' AND '18' AND
routes.rating = ranking.rating AND area = 'Eldorado Canyon' AND style = 
'Traditonal'
GROUP BY route ORDER BY id DESC;

You can of course put the SQL statement on more than one line so you can 
read it better. I'm assuming Ranking.Rating is a Char or VarChar; otherwise, 
remove the quotes around '10' and '18'.

Mike


On Jul 9, 2004, at 1:17 PM, Pete Harlan wrote:
On Fri, Jul 09, 2004 at 09:39:02AM -0500, Craig Hoffman wrote:
Style:  Traditional
Area:  Yosemite
Rating: From: 5.5 To: 5.10c
...
"SELECT * FROM routes, users WHERE area='$area' AND style='$style'
BETWEEN rating='[$rating1]' AND rating='[$rating2]' GROUP BY route
ORDER BY rating ASC ";
 For some reason which I am not seeing, this query is not doing what it
should be doing.  Does anyone have any suggestions?
For starters your between syntax isn't correct (but is parsable in
ways you didn't want).  You probably want:
select  *
from    routes, users
where   area = '$area' and
        style = '$style' and
        rating between '$rating1' and '$rating2'
group by route
order by rating
As others have pointed out, your ratings aren't something MySQL will
know how to order.  That's a separate problem (and more difficult to
solve), but the between syntax is also one.
--Pete
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: CocoaMySQL access to db

2004-07-09 Thread Bill Allaire
Peter,
My apologies for calling you Paul instead of Peter in my previous 
response.

On Jul 9, 2004, at 10:30 PM, Peter Paul Sint wrote:
Bill, thank you for the prompt help.
This works.
I just have to find out how to get the Startup Item (or some 
replacement) to open MySQL with --old_passwords (just now it is 4:30 
in the morning, too late to proceed - my wife wants to go to the 
countryside early)
--
Peter Sint
[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: CocoaMySQL access to db

2004-07-09 Thread Bill Allaire
Paul,
You can place --old-passwords (without the leading dashes) in the 
my.cnf file under the option group [mysqld] instead of passing it on 
the command line.

The my.cnf file probably isn't on your system by default, at least it 
wasn't on mine until I created it.  This file is generally placed in 
/etc or in the data directory.  Your regular user won't have permission 
to write in either directory.  You can use sudo to gain privilege to 
write in either directory (e.g., sudo vi /etc/my.cnf).  sudo will ask 
for a password and this is the same password you've most likely used 
before when doing updates  or installing some applications on OS X.  
Lastly, restart MySQL and you should be all set.
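
For example, a minimal /etc/my.cnf along those lines might be as simple as:

[mysqld]
old-passwords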

On Jul 9, 2004, at 10:30 PM, Peter Paul Sint wrote:
Bill, thank you for the prompt help.
This works.
I just have to find out how to get the Startup Item (or some 
replacement) to open MySQL with --old_passwords (just now it is 4:30 
in the morning, too late to proceed - my wife wants to go to the 
countryside early)
--
Peter Sint
[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: CocoaMySQL access to db

2004-07-09 Thread Peter Paul Sint
At 18:16 h -0400 2004.07.09, Bill Allaire wrote:
>Your problem may have to do with the difference in how 4.1.x+  does password hashing 
>and that method is incompatible with older clients.
>
>You might find some help with this document:
>http://dev.mysql.com/doc/mysql/en/Old_client.html
>
>Specifically check out the information regarding resetting the password to an old 
>style.

Bill, thank you for the prompt help.
This works.
I just have to find out how to get the Startup Item (or some replacement) to open 
MySQL with --old_passwords (just now it is 4:30 in the morning, too late to proceed - 
my wife wants to go to the countryside early)
-- 
Peter Sint
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



USING() and more than one JOIN

2004-07-09 Thread Daevid Vincent
I'm curious if USING() works with more than one join. I can't seem to get it
to work.

http://dev.mysql.com/doc/mysql/en/JOIN.html


The USING (column_list) clause names a list of columns that must exist in
both tables. 
The following two clauses are semantically identical: 

a LEFT JOIN b USING (c1,c2,c3)
a LEFT JOIN b ON a.c1 = b.c1 AND a.c2 = b.c2 AND a.c3 = b.c3

How would you do something like

a LEFT JOIN b ON a.c1 = b.c1
   LEFT JOIN c ON c.c2 = a.c2

I thought it would be as simple as

a LEFT JOIN b USING (c1) 
   LEFT JOIN c USING (c2)

But it fails.
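
One untested fallback (I have not verified this against 4.0) would be to mix
USING and ON so that only the join that misbehaves is spelled out explicitly:

a LEFT JOIN b USING (c1)
   LEFT JOIN c ON a.c2 = c.c2

Each join carries its own condition, so mixing the two forms should be allowed.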


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Weeding out duplicates

2004-07-09 Thread Jonathan Duncan
For the information of anyone who may need it in the future: I used
Jeffrey's idea for determining duplicates.  Then I created a temporary
table and used insert...select to put the ids of the duplicates into the
temporary table.  Then it was a simple "delete from table where
temp.id=table.id".

Thanks for the help.

Jonathan Duncan
 
 
>>>"Jonathan Duncan" <[EMAIL PROTECTED]> 07/09 10:25 am >>> 
Lachlan, 
 
I want to identify the entries in the table where the email addresses 
are the same as another entry.  Whatever else is in the record does not 
matter to me. 
 
However, a second requirement for the query is that it show me the last 
duplicate instead of the first.  This way I keep the first entries and 
remove subsequent ones. 
 
Thanks, 
Jonathan Duncan 
 
 
>>>"Lachlan Mulcahy" <[EMAIL PROTECTED]> 07/08 10:59 pm >>> 
 
Jonathan, 
 
I'm not exactly sure what you want to do... 
 
Do you want to identify the entries in the table where the email addresses 
are the same as another entry but the name and address details differ... 
 
or do you want to find entries where the name and address information is 
the same but email addresses differ? 
 
Lachlan 
 
-Original Message- 
From: Jonathan Duncan [mailto:[EMAIL PROTECTED] 
Sent: Friday, 9 July 2004 7:54 AM 
To: [EMAIL PROTECTED] 
Subject: Weeding out duplicates 
 
 
I am trying to get rid of duplicate user info entries in a database.  I 
am assuming that the email address is unique but other information, like 
first name, last name, address, etc. are not.  The "email" field is not a 
unique field in the database.  I was trying something like the 
following, but not getting what I wanted. 
 
select distinct u1.id,u1.firstname,u1.lastname,u1.email from Users u1, 
Users u2 where u1.email=u2.email; 
 
How can I go about identifying the duplicate entries of email addresses?

 
 
Thank you, 
Jonathan Duncan 
 
-- 
MySQL General Mailing List 
For list archives: http://lists.mysql.com/mysql 
To unsubscribe:   
http://lists.mysql.com/[EMAIL PROTECTED] 
 
 
 
 
-- 
MySQL General Mailing List 
For list archives: http://lists.mysql.com/mysql 
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

 
 
 
-- 
MySQL General Mailing List 
For list archives: http://lists.mysql.com/mysql 
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How can I count() on multiple tables in a single query?

2004-07-09 Thread Daevid Vincent
Using mysql v4.0.x on linux.

Given three tables...

CREATE TABLE Departments (
  DeptID int(10) unsigned NOT NULL auto_increment,
  DeptName char(30) default NULL,
  PRIMARY KEY  (DeptID)
)

CREATE TABLE UserDept (
  CoreID int(10) unsigned NOT NULL default '0',
  DeptID int(10) unsigned NOT NULL default '0',
  DeptAdmin char(1) default NULL,
  DeptEmail char(1) default NULL,
  DeptContact char(1) default NULL,
  KEY DeptID (DeptID),
  KEY CoreID (CoreID)
)

CREATE TABLE IP_Dept (
  IP_Addr int(10) unsigned NOT NULL default '0',
  DeptID int(10) unsigned NOT NULL default '0',
  UNIQUE KEY DeptIP (IP_Addr,DeptID)
)

What I want is a listing of all the department names, and a tally of how
many users in each, and another column with the tally of how many IPs in
each

I've tried various combinations of this, changing the COUNT() and GROUP BY
values:

SELECT 
Departments.DeptID, DeptName, COUNT(UserDept.CoreID) AS users,
COUNT(IP_Addr) as devices
FROM  Departments 
 LEFT JOIN IP_Dept on Departments.DeptID = IP_Dept.DeptID
  LEFT JOIN UserDept ON Departments.DeptID = UserDept.DeptID 
GROUP BY 
UserDept.CoreID, IP_Dept.IP_Addr 
ORDER BY 
DeptName DESC;

But nothing is working right. Mostly what happens is that 'users' and
'devices' end up with the same value. Is this possible? I can do it for the
first COUNT(), and then I could do a second query, but I'm trying to do this
in a single query if possible.
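
One thing I have not tried for real yet is counting distinct keys so the
cross product from the two LEFT JOINs does not inflate the tallies - would
something like this work? (untested sketch)

SELECT 
Departments.DeptID, DeptName,
COUNT(DISTINCT UserDept.CoreID) AS users,
COUNT(DISTINCT IP_Dept.IP_Addr) AS devices
FROM  Departments 
 LEFT JOIN UserDept ON Departments.DeptID = UserDept.DeptID 
 LEFT JOIN IP_Dept ON Departments.DeptID = IP_Dept.DeptID
GROUP BY 
Departments.DeptID, DeptName
ORDER BY 
DeptName DESC;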


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread jbfink
Swany,

I do indeed have a host.frm file, and the timestamp is from 2000.
Unfortunately, I've had to start up 4.1.0 again and leave it up as folks
here have to work on the db.  Since they won't be working tomorrow, I'll
try to remove the host.frm (also perhaps the host.ISD and host.ISM?) file
tomorrow and let you know if it works out.  Somehow I think it will; your
suggestion feels correct to me.  Thanks much.

jf



On Fri, 9 Jul 2004, Justin Swanhart wrote:

> Do you have a hosts.MYD, or a hosts.frm file?
>
> If you do, and there is no .MYI file, perhaps the
> older version is just ignoring the table and not
> making it available while the newer version errors
> out.
>
> If those files exist, try removing them from the data
> directory (move them somewhere else) then starting the
> new version.
>
> Hope that helps,
>
> swany
>
>
>
> --- John Fink <[EMAIL PROTECTED]> wrote:
> >
> > Hey folks,
> >
> > My mysql-fu is minimal to the point of nonexistent,
> > so please forgive any
> > vagaries that come across:
> >
> > I've recently compiled 4.1.3 to replace 4.1.0 on a
> > machine here where I
> > work.  The compile and install went fine (as far as
> > I can tell, anyway),
> > but when I try to start mysqld via the init script
> > it dies almost
> > immediately and I get the following lines in my .err
> > file:
> >
> > 040709 13:41:04  mysqld started
> > 040709 13:41:04  InnoDB: Started; log sequence
> > number 0 43912
> > 040709 13:41:04  Fatal error: Can't open privilege
> > tables: Can't find file: 'host.MYI' (errno: 2)
> > 040709 13:41:04  mysqld ended
> >
> > Indeed, I have no host.MYI -- I thought perhaps it
> > might be a directory
> > permissions error or something, but it's not.  4.1.0
> > trundled happily
> > along without a host.MYI file in my db's mysql
> > directory.  Furthermore,
> > none of our other machines with MySQL (running
> > various 3.23 and 4.0) have
> > them either.
> >
> > Thanks for any help you can provide,
> >
> > jf.
> >
> >
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:
> >
> http://lists.mysql.com/[EMAIL PROTECTED]
> >
> >
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread Justin Swanhart
Do you have a hosts.MYD, or a hosts.frm file?

If you do, and there is no .MYI file, perhaps the
older version is just ignoring the table and not
making it available while the newer version errors
out.

If those files exist, try removing them from the data
directory (move them somewhere else) then starting the
new version.  

Hope that helps,

swany



--- John Fink <[EMAIL PROTECTED]> wrote:
> 
> Hey folks,
> 
> My mysql-fu is minimal to the point of nonexistent,
> so please forgive any
> vagaries that come across:
> 
> I've recently compiled 4.1.3 to replace 4.1.0 on a
> machine here where I
> work.  The compile and install went fine (as far as
> I can tell, anyway),
> but when I try to start mysqld via the init script
> it dies almost
> immediately and I get the following lines in my .err
> file:
> 
> 040709 13:41:04  mysqld started
> 040709 13:41:04  InnoDB: Started; log sequence
> number 0 43912
> 040709 13:41:04  Fatal error: Can't open privilege
> tables: Can't find file: 'host.MYI' (errno: 2)
> 040709 13:41:04  mysqld ended
> 
> Indeed, I have no host.MYI -- I thought perhaps it
> might be a directory
> permissions error or something, but it's not.  4.1.0
> trundled happily
> along without a host.MYI file in my db's mysql
> directory.  Furthermore,
> none of our other machines with MySQL (running
> various 3.23 and 4.0) have
> them either.
> 
> Thanks for any help you can provide,
> 
> jf.
> 
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:   
>
http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: SELECT DISTINCT + ORDER BY confusion

2004-07-09 Thread Justin Swanhart
If you are using 4.1 you could try:

SELECT DISTINCT id, title
FROM 
(select p.id, p.title 
from product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and
s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
) AS t
limit 10

otherwise:

select p.id, p.title
  from product p
  join e_prod ep on ep.product = p.id
  join story s on s.id = ep.story and s.status = 9
   and s.type = 14
 where p.platform_id = 5 
   and p.genre_id = 23282
group by p.id, p.title
order by p.title
limit 10
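
If the most-recent-first ordering from the original query matters, a variant
(untested sketch) is to order the grouped result by the newest story date:

select p.id, p.title, max(s.post_date) as last_post
  from product p
  join e_prod ep on ep.product = p.id
  join story s on s.id = ep.story and s.status = 9
   and s.type = 14
 where p.platform_id = 5 
   and p.genre_id = 23282
group by p.id, p.title
order by last_post desc
limit 10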


--- Victor Pendleton <[EMAIL PROTECTED]> wrote:
> Have you tried using a group by clause? Group by
> title
> 
> -Original Message-
> From: news
> To: [EMAIL PROTECTED]
> Sent: 7/9/04 3:08 PM
> Subject: SELECT DISTINCT + ORDER BY confusion
> 
> I've got a product & story setup where there can be
> multiple stories of 
> a given type for any product.  I want to find the
> names of the products 
> with the most-recently-posted stories of a certain
> type.  This query 
> works well:
> 
> SELECT p.id,p.title
> FROM product p
> join e_prod ep on ep.product=p.id
> join story s on s.id = ep.story and s.status = 9 and
> s.type = 14
> where p.platform_id = 5 and p.genre_id = 23282
> order by s.post_date desc
> limit 10
> 
>
> +++
> | id | title  |
> +++
> | 917958 | Port Royale 2  |
> | 917958 | Port Royale 2  |
> | 917958 | Port Royale 2  |
> | 919355 | Warhammer 40,000: Dawn of War  |
> | 918989 | The Lord of the Rings, The Battle for Middle-earth |
> | 914811 | The Sims 2 |
> | 919973 | RollerCoaster Tycoon 3 |
> | 915040 | Soldiers: Heroes of World War II   |
> | 915040 | Soldiers: Heroes of World War II   |
> | 915040 | Soldiers: Heroes of World War II   |
> +++
> 
> 
> however since there are multiple stories of the
> correct type for some of
> 
> those products, i would like to dedupe the results
> and just get a unique
> 
> list of products.  however, if i use SELECT DISTINCT
> it applies that 
> BEFORE it does the sort, so i don't get only the
> most recent products. 
> what i actually get seems to be pretty random.
> 
> SELECT DISTINCT p.id,p.title
> FROM product p
> join e_prod ep on ep.product=p.id
> join story s on s.id = ep.story and s.status = 9 and
> s.type = 14
> where p.platform_id = 5 and p.genre_id = 23282
> order by s.post_date desc
> limit 10
> 
> ++---+
> | id | title |
> ++---+
> | 917958 | Port Royale 2 |
> | 920457 | Cuban Missile Crisis  |
> | 915000 | Axis & Allies |
> | 919602 | Blitzkrieg: Burning Horizon   |
> | 914594 | SuperPower 2  |
> | 914911 | Kohan II: Kings of War|
> | 915017 | Sid Meier's Pirates!  |
> | 918842 | Warlords Battlecry III|
> | 919973 | RollerCoaster Tycoon 3|
> | 920314 | Immortal Cities: Children of the Nile |
> ++---+
> 
> that's pretty messed up.  really what i'd like is:
> 
> 
>
> +++
> | id | title  |
> +++
> | 917958 | Port Royale 2  |
> | 919355 | Warhammer 40,000: Dawn of War  |
> | 918989 | The Lord of the Rings, The Battle for Middle-earth |
> | 914811 | The Sims 2 |
> | 919973 | RollerCoaster Tycoon 3 |
> | 915040 | Soldiers: Heroes of World War II   |
> | 914468 | Perimeter  |
> | 915000 | Axis & Allies  |
> | 914811 | The Sims 2 |
> | 918989 | The Lord of the Rings, The Battle for Middle-earth |
> +++
> (i built this by hand just to demonstrate the
> desired outcome.)
> 
> 
> is there any way to do a post-order distinct?
> 
> -jsd-
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
>
http://lists.mysql.com/[EMAIL PROTECTED]
> 

Re: Between Operator

2004-07-09 Thread Craig Hoffman
Thanks everyone for helping out. I took Michael's advice and made a 
new table called ranking with two columns. It definitely cleared some 
things up, but I am still having issues using the BETWEEN operator.  I 
just need to pull up everything BETWEEN 10 and 18 and it keeps adding 
additional rows. Suggestions?  What am I doing wrong?

Here is my query:
SELECT area, style, route, stars, date_climbed, ranking.* FROM routes, 
ranking WHERE ranking.id = ranking.rating BETWEEN ranking.id < '10' AND 
ranking.id = '18' AND routes.rating = ranking.rating AND area = 
'Eldorado Canyon' AND style = 'Traditonal' GROUP BY route ORDER BY id 
DESC

Thanks,
Craig


On Jul 9, 2004, at 1:17 PM, Pete Harlan wrote:
On Fri, Jul 09, 2004 at 09:39:02AM -0500, Craig Hoffman wrote:
Style:  Traditional
Area:  Yosemite
Rating: From: 5.5 To: 5.10c
...
"SELECT * FROM routes, users WHERE area='$area' AND style='$style'
BETWEEN rating='[$rating1]' AND rating='[$rating2]' GROUP BY route
ORDER BY rating ASC ";
 For some reason which I am not seeing, this query is not doing what 
it
should be doing.  Does anyone have any suggestions?
For starters your between syntax isn't correct (but is parsable in
ways you didn't want).  You probably want:
select  *
from    routes, users
where   area = '$area' and
        style = '$style' and
        rating between '$rating1' and '$rating2'
group by route
order by rating
As others have pointed out, your ratings aren't something MySQL will
know how to order.  That's a separate problem (and more difficult to
solve), but the between syntax is also one.
--Pete
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


need to move 4.x database to 3.x database

2004-07-09 Thread Joseph Norris
Group,

I have a project where I need to move a 4.x database to a 3.x database.  Are
there any issues with doing the following:

mysqldump 4.x

mysqladmin create 3.x database

mysql 3.x database < 4x.dump
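
Or, equivalently, as a single pipeline (placeholder host/database names,
credentials omitted, untested):

mysqladmin -h threexhost create somedb
mysqldump --opt somedb | mysql -h threexhost somedb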

Thanks.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Loading data into "TEXT" column;

2004-07-09 Thread Jens Gerster
--- mos <[EMAIL PROTECTED]> wrote: > At 12:23 PM 

Thanks a lot, Mike.
It works.

I spent many hours with this problem.
I also tried the trick with a temporary table,
but used the "INSERT" command, which didn't help.


All the best to you, Harald


> 
> Harald,
>  1) load the data into a temporary table
> whose structure matches 
> that of the data in your text file
> 
> CREATE TABLE `tableb` (
>   `Rcd_Id` int(10) NOT NULL auto_increment,
>   `cust_name` char(10) default NULL,
>   PRIMARY KEY (`Rcd_Id`)
> )
> 
>  2) Load the data into the temp tableb using
> "Load Data  Infile ..."
> 
>  3) Once the data is in a database table,
> you can update an 
> existing table wrt to another table doing something
> like:
> 
> update tablea, tableb  set tablea.cust_name =
> tableb.cust_name where 
> tablea.rcd_id=tableb.rcd_id
> 
> Mike 
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:   
>
http://lists.mysql.com/[EMAIL PROTECTED]
> 
>  







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: CocoaMySQL access to db

2004-07-09 Thread Bill Allaire
Your problem may have to do with the difference in how 4.1.x+  does 
password hashing and that method is incompatible with older clients.

You might find some help with this document:
http://dev.mysql.com/doc/mysql/en/Old_client.html
Specifically check out the information regarding resetting the password 
to an old style.
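
In short, the reset for one account looks like this (account and password
are placeholders):

SET PASSWORD FOR 'someuser'@'localhost' = OLD_PASSWORD('newpwd');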


On Jul 9, 2004, at 2:29 PM, Peter Paul Sint wrote:
I tried to install binary mysql 4.0.20 on Mac OS X 10.2.8 Jaguar
The installer works but if I try to use the result I get messages 
including:

defaults undefined reference to _stpcpy expected to be defined in 
/usr/lib/libSystem.B.dylib

As Marc Liyanage
http://www.entropy.ch/software/macosx/mysql/
writes in his FAQs
Why do I get messages like
undefined reference to _BC expected to be defined in  
/usr/lib/libSystem.B.dylib
when I try to use mysql?

You installed the wrong MySQL package (the one I made for Mac OS X 
10.1) on a   machine running Mac OS X 10.2 (or you never updated 
MySQL after updating to Mac OS X 10.2).
 it may be that 4.0.20 has similar problems (not for 10.2.8?)
Thus I removed 4.0.20 and tried 4.1.3beta (earlier I hesitated to 
install a beta)
This worked.

MySQL server is run via a mysql user who was created by NetInfo 
manager according to
http://developer.apple.com/internet/opensource/osdb.html

I am able to access the MySQL server on the terminal/command-line with 
full root privileges (Actually also as a second user with the same 
privileges).

If I try to connect with CocoaMySQL it does not allow me to enter as 
root after I added a password to root (both on localhost and my 
numeric internet address).

Unable to connect to host localhost.
Be sure that the address is correct and that you have the necessary 
privileges.
As long as root had no password I was able to enter with all 
privileges.
Neither localhost nor my actual Internet number works.

I may still enter as mysql (no or empty password) with minimal 
privileges. Even accessing localhost without username works thus.

Probably I am missing some basic setting? What is the correct way to 
set up a user/pw to be acceptable to CocoaMySQL? Do I have to 
change sockets, ports...? I do not assume this is a beta problem.

Suggestions welcome
Peter
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Solaris Performance Issue

2004-07-09 Thread Jeremy Zawodny
On Wed, Jul 07, 2004 at 06:23:45PM +0800, Linda wrote:
> Hi,
> 
> My old mysql is 3.23.56 on RedHat9(Intel). After moving mySQL to
> Solaris 9 (Sun F280R/2GB Memory) and upgrading mySQL to 4.0.20, I
> got a lot of complaints about the performance for select and
> update. Have anyone can tell me if there is anything I should tune
> for Solaris or MySQL to improve the performance.

Without knowing what sort of bottlenecks you're seeing, it's really
hard to say.  What's the resource limit?  CPU?  Disk?  RAM?

BTW, I've found Sun boxes of that vintage (I have MySQL on a 280R
also) to be quite a bit slower than much cheaper Intel hardware.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Loading data into "TEXT" column;

2004-07-09 Thread mos
At 12:23 PM 7/9/2004, you wrote:
Hi,
I'd like to load "TEXT"-data of variable length
into an existing table with "CHAR"-columns.
The input file consists of about 4200 lines
with alphanumeric characters and newline only.
Instead of filling up the "TEXT"-column
next to the existing columns, Mysql
appends new rows, filling up the "TEXT" column
beginning behind the last row of the former table.
I tried many options of "LOAD DATA INFILE" and
checked my input file for problematic characters.
Didn't help. I'm totally stuck and stopped
working on the project.
Could you tell me how to import my
"TEXT" data correctly ?
Thanks a lot for your help.
Harald
Harald,
1) load the data into a temporary table whose structure matches 
that of the data in your text file

CREATE TABLE `tableb` (
  `Rcd_Id` int(10) NOT NULL auto_increment,
  `cust_name` char(10) default NULL,
  PRIMARY KEY (`Rcd_Id`)
)
2) Load the data into the temp tableb using "Load Data  Infile ..."
3) Once the data is in a database table, you can update an 
existing table wrt to another table doing something like:

update tablea, tableb  set tablea.cust_name = tableb.cust_name where 
tablea.rcd_id=tableb.rcd_id
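
For step 2, the Load Data statement might look something like this (the file
name and line terminator are only examples):

LOAD DATA INFILE '/tmp/cust_names.txt'
INTO TABLE tableb
LINES TERMINATED BY '\n'
(cust_name);

Rcd_Id is left out of the column list so auto_increment can fill it in.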

Mike 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread jbfink
Yup.  It's all there, everything's fine on directory structure.  I can
start 4.1.0 back up without changing anything else and it starts up okay.
Very perplexing.

jf

On Fri, 9 Jul 2004, Victor Pendleton wrote:

> When you ls to /database/var do you see the mysql/ directory?
>
> -Original Message-
> From: [EMAIL PROTECTED]
> To: Victor Pendleton
> Cc: '[EMAIL PROTECTED] '; ''[EMAIL PROTECTED] ' '
> Sent: 7/9/04 4:15 PM
> Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9
>
> Okay, I changed the datadir to /database/var, same error.  The symlink
> is
> definitely valid: when doing cd and ls and such I always use the
> symlink.
>
> jf
>
> On Fri, 9 Jul 2004, Victor Pendleton wrote:
>
> > Is the symlink still valid? Can you point the data directory variable
> to
> > this location and see if the MySQL server starts up?
> >
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > To: Victor Pendleton
> > Cc: 'John Fink '; '[EMAIL PROTECTED] '
> > Sent: 7/9/04 4:08 PM
> > Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on
> Solaris 9
> >
> >
> >
> > On Fri, 9 Jul 2004, Victor Pendleton wrote:
> >
> > > What is the location of your data/mysql directory?
> > >
> >
> > It's actually in /database/var.  There's a symlink in /opt/mysql that
> > points it over.  Could a symlink be the problem?
> >
> > jf
> >
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Loading data into "TEXT" column;

2004-07-09 Thread Jens Gerster
--- Victor Pendleton <[EMAIL PROTECTED]> wrote:
> What are the delimiters for this file? 

I tried different variants:

No delimiters.
Delimiter ' (quote).
Delimiter " (doublequote).
All lines terminated by \n.

The file holds only a single column.
The column makes up 4200 rows.
Each row is of different length (100 - 3000 chars).

It looks like:

ASEDVFGHGHFDSFDGFG\n
FHFGHFGTZUZUNZTWERTJUKJZUKZUKZTZHTR\n
JHKHKKFHGJJVJGJGJFDHFGJHFJHJH\n


System: Debian/Sarge, MySQL 4.0.18



> 
> -Original Message-
> From: Jens Gerster
> To: [EMAIL PROTECTED]
> Sent: 7/9/04 12:23 PM
> Subject: Loading data into "TEXT" column; 
> 
> Hi,
> 
> I'd like to load "TEXT"-data of variable length
> into an existing table with "CHAR"-columns.
> 
> The input file consists of about 4200 lines
> with alphanumeric characters and newline only.
> 
> Instead of filling up the "TEXT"-column
> next to the existing columns, Mysql
> appends new rows, filling up the "TEXT" column
> beginning behind the last row of the former table.
> 
> I tried many options of "LOAD DATA INFILE" and
> checked my input file for problematic characters.
> Didn't help. I'm totally stuck and stopped
> working on the project.
> 
> Could you tell me how to import my 
> "TEXT" data correctly ?
> 
> Thanks a lot for your help.
> 
> Harald
> 
> 
> 
>   
> 
>   
>   
>
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
>
http://lists.mysql.com/[EMAIL PROTECTED]
>  







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread jbfink
Yup.  It's definitely there and definitely being accessed when I start up
mysql -- for fun, I tried removing it (temporarily, natch) and starting
mysql and got different errors.

jf

On Fri, 9 Jul 2004, Victor Pendleton wrote:

> When you ls to /database/var do you see the mysql/ directory?
>
> -Original Message-
> From: [EMAIL PROTECTED]
> To: Victor Pendleton
> Cc: '[EMAIL PROTECTED] '; ''[EMAIL PROTECTED] ' '
> Sent: 7/9/04 4:15 PM
> Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9
>
> Okay, I changed the datadir to /database/var, same error.  The symlink
> is
> definitely valid: when doing cd and ls and such I always use the
> symlink.
>
> jf
>
> On Fri, 9 Jul 2004, Victor Pendleton wrote:
>
> > Is the symlink still valid? Can you point the data directory variable
> to
> > this location and see if the MySQL server starts up?
> >
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > To: Victor Pendleton
> > Cc: 'John Fink '; '[EMAIL PROTECTED] '
> > Sent: 7/9/04 4:08 PM
> > Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on
> Solaris 9
> >
> >
> >
> > On Fri, 9 Jul 2004, Victor Pendleton wrote:
> >
> > > What is the location of your data/mysql directory?
> > >
> >
> > It's actually in /database/var.  There's a symlink in /opt/mysql that
> > points it over.  Could a symlink be the problem?
> >
> > jf
> >
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread Victor Pendleton
When you ls to /database/var do you see the mysql/ directory?

-Original Message-
From: [EMAIL PROTECTED]
To: Victor Pendleton
Cc: '[EMAIL PROTECTED] '; ''[EMAIL PROTECTED] ' '
Sent: 7/9/04 4:15 PM
Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

Okay, I changed the datadir to /database/var, same error.  The symlink
is
definitely valid: when doing cd and ls and such I always use the
symlink.

jf

On Fri, 9 Jul 2004, Victor Pendleton wrote:

> Is the symlink still valid? Can you point the data directory variable
to
> this location and see if the MySQL server starts up?
>
> -Original Message-
> From: [EMAIL PROTECTED]
> To: Victor Pendleton
> Cc: 'John Fink '; '[EMAIL PROTECTED] '
> Sent: 7/9/04 4:08 PM
> Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on
Solaris 9
>
>
>
> On Fri, 9 Jul 2004, Victor Pendleton wrote:
>
> > What is the location of your data/mysql directory?
> >
>
> It's actually in /database/var.  There's a symlink in /opt/mysql that
> points it over.  Could a symlink be the problem?
>
> jf
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread jbfink
Okay, I changed the datadir to /database/var, same error.  The symlink is
definitely valid: when doing cd and ls and such I always use the symlink.

jf

On Fri, 9 Jul 2004, Victor Pendleton wrote:

> Is the symlink still valid? Can you point the data directory variable to
> this location and see if the MySQL server starts up?
>
> -Original Message-
> From: [EMAIL PROTECTED]
> To: Victor Pendleton
> Cc: 'John Fink '; '[EMAIL PROTECTED] '
> Sent: 7/9/04 4:08 PM
> Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9
>
>
>
> On Fri, 9 Jul 2004, Victor Pendleton wrote:
>
> > What is the location of your data/mysql directory?
> >
>
> It's actually in /database/var.  There's a symlink in /opt/mysql that
> points it over.  Could a symlink be the problem?
>
> jf
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Secure Database Systems

2004-07-09 Thread David Dick
CPAN is your friend.  for example;
http://search.cpan.org/modlist/Security
uru
-Dave
Sarah Tanembaum wrote:
So, we can virtually use any database to do the job. It is really the
function of the program to encrypt (save) and decrypt (read) the sensitive
data.
Does anyone know of such a program that can handle such a function?
Thanks
"David Dick" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
afaik the term "translucent database" applies to a regular database that
  has encrypted data in it.  The main differences is in whether the
encryption is one way only (ie. using a md5 hash of a name instead of
the actual name) or reversible (using 3des to encrypt and decrypt the
name).  a good example of the former is /etc/passwd or /etc/shadow.
Sarah Tanembaum wrote:
Hi David, the link you provided is quite interesting. Does such a
database (translucent database) actually exist? Or is it just a concept?
Thanks
--
MySQL Perl Mailing List
For list archives: http://lists.mysql.com/perl
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread Victor Pendleton
Is the symlink still valid? Can you point the data directory variable to
this location and see if the MySQL server starts up? 

-Original Message-
From: [EMAIL PROTECTED]
To: Victor Pendleton
Cc: 'John Fink '; '[EMAIL PROTECTED] '
Sent: 7/9/04 4:08 PM
Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9



On Fri, 9 Jul 2004, Victor Pendleton wrote:

> What is the location of your data/mysql directory?
>

It's actually in /database/var.  There's a symlink in /opt/mysql that
points it over.  Could a symlink be the problem?

jf

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SELECT DISTINCT + ORDER BY confusion

2004-07-09 Thread Jon Drukman
Victor Pendleton wrote:
Have you tried using a group by clause? Group by title
same problem - the group by happens before the order by and you get 
essentially random results.


-Original Message-
From: news
To: [EMAIL PROTECTED]
Sent: 7/9/04 3:08 PM
Subject: SELECT DISTINCT + ORDER BY confusion
I've got a product & story setup where there can be multiple stories of 
a given type for any product.  I want to find the names of the products 
with the most-recently-posted stories of a certain type.  This query 
works well:

SELECT p.id,p.title
FROM product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
limit 10
+++
| id | title  |
+++
| 917958 | Port Royale 2  |
| 917958 | Port Royale 2  |
| 917958 | Port Royale 2  |
| 919355 | Warhammer 40,000: Dawn of War  |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
| 914811 | The Sims 2 |
| 919973 | RollerCoaster Tycoon 3 |
| 915040 | Soldiers: Heroes of World War II   |
| 915040 | Soldiers: Heroes of World War II   |
| 915040 | Soldiers: Heroes of World War II   |
+++
however since there are multiple stories of the correct type for some of
those products, i would like to dedupe the results and just get a unique
list of products.  however, if i use SELECT DISTINCT it applies that 
BEFORE it does the sort, so i don't get only the most recent products. 
what i actually get seems to be pretty random.

SELECT DISTINCT p.id,p.title
FROM product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
limit 10
++---+
| id | title |
++---+
| 917958 | Port Royale 2 |
| 920457 | Cuban Missile Crisis  |
| 915000 | Axis & Allies |
| 919602 | Blitzkrieg: Burning Horizon   |
| 914594 | SuperPower 2  |
| 914911 | Kohan II: Kings of War|
| 915017 | Sid Meier's Pirates!  |
| 918842 | Warlords Battlecry III|
| 919973 | RollerCoaster Tycoon 3|
| 920314 | Immortal Cities: Children of the Nile |
++---+
that's pretty messed up.  really what i'd like is:
+++
| id | title  |
+++
| 917958 | Port Royale 2  |
| 919355 | Warhammer 40,000: Dawn of War  |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
| 914811 | The Sims 2 |
| 919973 | RollerCoaster Tycoon 3 |
| 915040 | Soldiers: Heroes of World War II   |
| 914468 | Perimeter  |
| 915000 | Axis & Allies  |
| 914811 | The Sims 2 |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
+++
(i built this by hand just to demonstrate the desired outcome.)
is there any way to do a post-order distinct?
-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread jbfink


On Fri, 9 Jul 2004, Victor Pendleton wrote:

> What is the location of your data/mysql directory?
>

It's actually in /database/var.  There's a symlink in /opt/mysql that
points it over.  Could a symlink be the problem?

jf

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread Victor Pendleton
What is the location of your data/mysql directory?

-Original Message-
From: John Fink
To: [EMAIL PROTECTED]
Sent: 7/9/04 3:49 PM
Subject: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9


Hey folks,

My mysql-fu is minimal to the point of nonexistent, so please forgive
any
vagaries that come across:

I've recently compiled 4.1.3 to replace 4.1.0 on a machine here where I
work.  The compile and install went fine (as far as I can tell, anyway),
but when I try to start mysqld via the init script it dies almost
immediately and I get the following lines in my .err file:

040709 13:41:04  mysqld started
040709 13:41:04  InnoDB: Started; log sequence number 0 43912
040709 13:41:04  Fatal error: Can't open privilege tables: Can't find
file: 'host.MYI' (errno: 2)
040709 13:41:04  mysqld ended

Indeed, I have no host.MYI -- I thought perhaps it might be a directory
permissions error or something, but it's not.  4.1.0 trundled happily
along without a host.MYI file in my db's mysql directory.  Furthermore,
none of our other machines with MySQL (running various 3.23 and 4.0)
have
them either.

Thanks for any help you can provide,

jf.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Loading data into "TEXT" column;

2004-07-09 Thread Victor Pendleton
What are the delimiters for this file? 

-Original Message-
From: Jens Gerster
To: [EMAIL PROTECTED]
Sent: 7/9/04 12:23 PM
Subject: Loading data into "TEXT" column; 

Hi,

I'd like to load "TEXT"-data of variable length
into an existing table with "CHAR"-columns.

The input file consists of about 4200 lines
with alphanumeric characters and newline only.

Instead of filling up the "TEXT"-column
next to the existing columns, Mysql
appends new rows, filling up the "TEXT" column
beginning behind the last row of the former table.

I tried many options of "LOAD DATA INFILE" and
checked my input file for problematic characters.
Didn't help. I'm totally stuck and stopped
working on the project.

Could you tell me how to import my 
"TEXT" data correctly ?

Thanks a lot for your help.

Harald








-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread John Fink

Hey folks,

My mysql-fu is minimal to the point of nonexistent, so please forgive any
vagaries that come across:

I've recently compiled 4.1.3 to replace 4.1.0 on a machine here where I
work.  The compile and install went fine (as far as I can tell, anyway),
but when I try to start mysqld via the init script it dies almost
immediately and I get the following lines in my .err file:

040709 13:41:04  mysqld started
040709 13:41:04  InnoDB: Started; log sequence number 0 43912
040709 13:41:04  Fatal error: Can't open privilege tables: Can't find file: 'host.MYI' 
(errno: 2)
040709 13:41:04  mysqld ended

Indeed, I have no host.MYI -- I thought perhaps it might be a directory
permissions error or something, but it's not.  4.1.0 trundled happily
along without a host.MYI file in my db's mysql directory.  Furthermore,
none of our other machines with MySQL (running various 3.23 and 4.0) have
them either.

Thanks for any help you can provide,

jf.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: SELECT DISTINCT + ORDER BY confusion

2004-07-09 Thread Victor Pendleton
Have you tried using a group by clause? Group by title

-Original Message-
From: news
To: [EMAIL PROTECTED]
Sent: 7/9/04 3:08 PM
Subject: SELECT DISTINCT + ORDER BY confusion

I've got a product & story setup where there can be multiple stories of 
a given type for any product.  I want to find the names of the products 
with the most-recently-posted stories of a certain type.  This query 
works well:

SELECT p.id,p.title
FROM product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
limit 10

+++
| id | title  |
+++
| 917958 | Port Royale 2  |
| 917958 | Port Royale 2  |
| 917958 | Port Royale 2  |
| 919355 | Warhammer 40,000: Dawn of War  |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
| 914811 | The Sims 2 |
| 919973 | RollerCoaster Tycoon 3 |
| 915040 | Soldiers: Heroes of World War II   |
| 915040 | Soldiers: Heroes of World War II   |
| 915040 | Soldiers: Heroes of World War II   |
+++


however since there are multiple stories of the correct type for some of
those products, i would like to dedupe the results and just get a unique
list of products.  however, if i use SELECT DISTINCT it applies that 
BEFORE it does the sort, so i don't get only the most recent products. 
what i actually get seems to be pretty random.

SELECT DISTINCT p.id,p.title
FROM product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
limit 10

++---+
| id | title |
++---+
| 917958 | Port Royale 2 |
| 920457 | Cuban Missile Crisis  |
| 915000 | Axis & Allies |
| 919602 | Blitzkrieg: Burning Horizon   |
| 914594 | SuperPower 2  |
| 914911 | Kohan II: Kings of War|
| 915017 | Sid Meier's Pirates!  |
| 918842 | Warlords Battlecry III|
| 919973 | RollerCoaster Tycoon 3|
| 920314 | Immortal Cities: Children of the Nile |
++---+

that's pretty messed up.  really what i'd like is:


+++
| id | title  |
+++
| 917958 | Port Royale 2  |
| 919355 | Warhammer 40,000: Dawn of War  |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
| 914811 | The Sims 2 |
| 919973 | RollerCoaster Tycoon 3 |
| 915040 | Soldiers: Heroes of World War II   |
| 914468 | Perimeter  |
| 915000 | Axis & Allies  |
| 914811 | The Sims 2 |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
+++
(i built this by hand just to demonstrate the desired outcome.)


is there any way to do a post-order distinct?

-jsd-


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to Speed this Query Up?

2004-07-09 Thread Doug V
Thank you for your detailed response.
You might get better performance just from using the explicit INNER JOINS
but I make no assumptions.
I tried INNER JOINS and did not see any difference in speed.
You may also get better performance if you had
composite indexes (not just several individual field indexes) on the tables
that contain all of the columns used in each JOIN clause.  For example you
could have an index on user_intros with (user_id, lang_id) and the engine
won't have to read the table to check for either condition as they would
both be integer values that exist in an index.
I am already using composite indexes for every table with a lang_id field, 
like user_intros.

ALSO NOTE: there is no condition in the ON clause of loc_countries_lang
that relates that table to any other.  This means that for all values in
the veg table you will need to match one row from the loc_countries_lang
table that has lang_id=0. If there are more than one languages that match
that key, you will get multiple sets of matches.
Well, I was thinking, since this table will never really change and there 
are only a couple of hundred entries, I should just store this as an array and 
get the country name directly from the array. However, when I removed the 
loc_countries_lang table from the query, it was still slow, i.e. 3-5 seconds 
on the production server.

Would changing the LEFT JOIN to an INNER JOIN improve things? It would be 
possible to change the logic such that the veg_titles table is used in an 
INNER JOIN instead, but when I tried that it was still very slow (3-5 
seconds). In this instance the EXPLAIN returned the following:

t ALL veg_lang_id NULL NULL NULL 76001 Using where; Using temporary; Using 
filesort

Every other table in the EXPLAIN returned one row with type eq_ref
Please, let me know if I helped or not, OK?
Yes, thank you for your response, however the response time is still too 
slow. Now I'm thinking that maybe my underlying database structure is not 
correct. Am I joining too many tables? Is there anything else I can do 
before I try increasing the sort_buffer? Thanks.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Newbie: Replication Question

2004-07-09 Thread João Paulo Vasconcellos
Very good, gmail does not handle mailing lists properly..
Sorry for sending this off-list to you, Alec.


On Fri, 9 Jul 2004 17:09:45 -0300, João Paulo Vasconcellos
<[EMAIL PROTECTED]> wrote:
> 
> 
> On Fri, 9 Jul 2004 10:44:42 +0100, [EMAIL PROTECTED]
> <[EMAIL PROTECTED]> wrote:
> > "L. Yeung" <[EMAIL PROTECTED]> wrote on 09/07/2004 08:38:38:
> >
> > > Hi! I wanted to set-up a master-slave replication on
> > > my win2k box. When my master server fails, my slave
> > > server will automatically becomes my new "master
> > > server". And when my master server is back online, any
> > > changes on my slave server is replicated back to my
> > > master server.
> > >
> > > Normal: A -> B
> > > ^
> > > L inserts/updates from clients
> > > Failure:XB
> > > New Master: XB <- inserts/updates from clients
> > > Master is back: A <- B x- inserts/updates are blocked
> > > manually.
> > > Normal: A -> B
> > > ^
> > > L inserts/updates from clients
> > >
> > > Any idea? Thanks.
> >
> > You cannot do this automatically: you need a supervisor program.
> >
> > The way you need to do it is to have both machines come up with their
> > slave thread *not* running. The supervisor then determines which is master
> > and which slave, and starts the appropriate slave thread running. If it
> > determines that the original master has failed, it stops replication on the
> > slave and directs updates to it: the slave has now become master. When the
> > original master reappears, it determines that updates have been made to
> > the original slave later than those to the original master, it instructs
> > the original master to reload its database from the slave. Master and slave
> > have now exchanged roles.
> >
> > The determination of which is the most-recently updated is done by a
> > single row, single column table which is incremented whenever the slave
> > takes over from the master. If, at startup, two active machines are found
> > with different values in this entry, the higher value becomes master and
> > the lower must be re-synchronized. If the values are the same, the slave
> > status can be inspected to see which is slaving to which.
> >
> > We have implemented such a system in our own middleware. We have a target
> > changeover time of 10 seconds, which we are meeting easily. It only works
> > for MyISAM tables, since LOAD DATA FROM MASTER is only available for
> > these.
> >
> > Note to MySQL development team: this request comes up often enough that I
> > hope the idea of embedding this supervisor in the MySQL daemon is at least
> > on the wish list.
> >
> > Alec
> >
> 
> I personally think that, if my master has gone away, there must be a
> reason and I do not want the former master to take over when (and if)
> it comes back. So, in my production environment, I set this up with
> heartbeat (linux-ha.org) and a perl script that makes my slave work
> as master when needed. Of course, my slave is nothing but a hot backup
> spare server.
> 
> 
> --
> João Paulo Vasconcellos
> ICQ: 123-953-864
> 


-- 
João Paulo Vasconcellos
ICQ: 123-953-864

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



SELECT DISTINCT + ORDER BY confusion

2004-07-09 Thread Jon Drukman
I've got a product & story setup where there can be multiple stories of 
a given type for any product.  I want to find the names of the products 
with the most-recently-posted stories of a certain type.  This query 
works well:

SELECT p.id,p.title
FROM product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
limit 10
+++
| id | title  |
+++
| 917958 | Port Royale 2  |
| 917958 | Port Royale 2  |
| 917958 | Port Royale 2  |
| 919355 | Warhammer 40,000: Dawn of War  |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
| 914811 | The Sims 2 |
| 919973 | RollerCoaster Tycoon 3 |
| 915040 | Soldiers: Heroes of World War II   |
| 915040 | Soldiers: Heroes of World War II   |
| 915040 | Soldiers: Heroes of World War II   |
+++
however since there are multiple stories of the correct type for some of 
those products, i would like to dedupe the results and just get a unique 
list of products.  however, if i use SELECT DISTINCT it applies that 
BEFORE it does the sort, so i don't get only the most recent products. 
what i actually get seems to be pretty random.

SELECT DISTINCT p.id,p.title
FROM product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
limit 10
++---+
| id | title |
++---+
| 917958 | Port Royale 2 |
| 920457 | Cuban Missile Crisis  |
| 915000 | Axis & Allies |
| 919602 | Blitzkrieg: Burning Horizon   |
| 914594 | SuperPower 2  |
| 914911 | Kohan II: Kings of War|
| 915017 | Sid Meier's Pirates!  |
| 918842 | Warlords Battlecry III|
| 919973 | RollerCoaster Tycoon 3|
| 920314 | Immortal Cities: Children of the Nile |
++---+
that's pretty messed up.  really what i'd like is:
+--------+----------------------------------------------------+
| id     | title                                              |
+--------+----------------------------------------------------+
| 917958 | Port Royale 2                                      |
| 919355 | Warhammer 40,000: Dawn of War                      |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
| 914811 | The Sims 2                                         |
| 919973 | RollerCoaster Tycoon 3                             |
| 915040 | Soldiers: Heroes of World War II                   |
| 914468 | Perimeter                                          |
| 915000 | Axis & Allies                                      |
| 914811 | The Sims 2                                         |
| 918989 | The Lord of the Rings, The Battle for Middle-earth |
+--------+----------------------------------------------------+
(i built this by hand just to demonstrate the desired outcome.)
is there any way to do a post-order distinct?
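one idea (just a sketch; it assumes the newest post_date per product is what
should drive the ordering) would be to collapse the stories per product first
and sort on the max date:

SELECT p.id, p.title, MAX(s.post_date) AS latest_post
FROM product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
GROUP BY p.id, p.title
ORDER BY latest_post desc
limit 10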
-jsd-
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Help with a Date Query Please!

2004-07-09 Thread shaun thornburgh
Hi,
I have a table called Bookings which holds start times and end times for 
appointments, these are held in Booking_Start_Date and Booking_End_Date. I 
have a page on my site that runs a query to produce a grid to show 
availability per day for the next ten days for each user of the system. 
Users work 8.5 hours a day and the query shows how many hours the user has 
available on that day:

SELECT 8.5 - (SUM(((DATE_FORMAT(B.Booking_End_Date, '%k') * 60 ) +
DATE_FORMAT(B.Booking_End_Date, '%i')) -
((DATE_FORMAT(B.Booking_Start_Date, '%k') * 60 ) +
DATE_FORMAT(B.Booking_Start_Date, '%i'))) / 60) AS Available_Hours
FROM Bookings B WHERE B.User_ID = '610'
AND NOT ( '2004-07-08' < DATE_FORMAT(Booking_Start_Date, "%Y-%m-%d")
OR '2004-07-08' > DATE_FORMAT(Booking_End_Date, "%Y-%m-%d") )
The problem here is I have to do this query to produce a result for each 
cell and then for each user so 10 users = 100 queries to load the page!

Is there a way to produce the result so that I only need one query per user 
so it groups the result by day for the next ten days?
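
Roughly, the per-day result I am after would come from something like this
(only a sketch, and it assumes every booking starts and ends on the same day):

SELECT DATE_FORMAT(B.Booking_Start_Date, '%Y-%m-%d') AS Booking_Day,
8.5 - SUM((UNIX_TIMESTAMP(B.Booking_End_Date) -
UNIX_TIMESTAMP(B.Booking_Start_Date)) / 3600) AS Available_Hours
FROM Bookings B
WHERE B.User_ID = '610'
AND B.Booking_Start_Date >= '2004-07-08'
AND B.Booking_Start_Date < DATE_ADD('2004-07-08', INTERVAL 10 DAY)
GROUP BY Booking_Day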

Thanks for your help
_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Linux 2GB Memory Limit

2004-07-09 Thread Marc Slemko
On Fri, 9 Jul 2004 15:46:37 +0100 , Marvin Wright
<[EMAIL PROTECTED]> wrote:
> Hi,
> 
> Current Platform
>  RH version is 7.3
>  IBM Blade Server - 2 x Intel(R) Xeon(TM) CPU 3.20GHz
>  32 GB SCSI
>  4 GB Ram
> 
> This is the platform we are moving to in a week or so
>  RH Enterprise AS 2.1 or 3.0
>  4 x Intel(R) Xeon(TM) MP CPU 2.70GHz
>  128 GB SCSI Raid
>  16 GB Ram
> 
> So with the new platform I'll be able to have a much bigger InnoDB buffer

Note it will still be limited to something that is definitely no
bigger than 4 gigs, and may be smaller... I haven't had any luck with
>~2 gig innodb buffer sizes even on systems with 3 or 3.5 gigs of
address space available per process, but I never looked into that too
deeply so it may work fine with the right setup.
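
For what it's worth, the setting in question is the buffer pool size in
my.cnf; the value below is only an example, and on 32-bit x86 it is capped
by the per-process address space discussed above:

[mysqld]
innodb_buffer_pool_size = 2000M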

This is probably a bit late, but I would have definitely recommended
running 64-bit opterons in your configuration since then you could
have a larger innodb buffer.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



CocoaMySQL access to db

2004-07-09 Thread Peter Paul Sint
I tried to install the binary MySQL 4.0.20 on Mac OS X 10.2.8 Jaguar

The installer works but if I try to use the result I get messages including:

defaults undefined reference to _stpcpy expected to be defined in 
/usr/lib/libSystem.B.dylib

As Marc Liyanage
http://www.entropy.ch/software/macosx/mysql/

writes in his FAQs
> Why do I get messages like
> undefined reference to _BC expected to be defined in  /usr/lib/libSystem.B.dylib
> when I try to use mysql?
>
>You installed the wrong MySQL package (the one I made for Mac OS X 10.1) on a   
>machine running Mac OS X 10.2 (or you never updated MySQL after updating to Mac OS X 
>10.2).

 it may be that 4.0.20 has similar problems (not for 10.2.8?)

Thus I removed 4.0.20 and tried 4.1.3beta (earlier I hesitated to install a beta)
This worked.

MySQL server is run via a mysql user who was created by NetInfo manager according to
http://developer.apple.com/internet/opensource/osdb.html


I am able to access the MySQL server on the terminal/command-line with full root 
privileges (Actually also as a second user with the same privileges).

If I try to connect with CocoaMySQL it does not allow me to enter as root after I 
added a password to root (both on localhost and my numeric internet address).

>Unable to connect to host localhost.
>Be sure that the address is correct and that you have the necessary privileges.

As long as root had no password I was able to enter with all privileges.
Neither localhost nor my actual Internet number works.

I can still log in as the mysql user (no or empty password) with minimal privileges. Even 
accessing localhost without a username works that way.

Probably I am missing some basic setting? What is the correct way to set up a user/password 
that CocoaMySQL will accept? Do I have to change sockets, ports...? I do not 
assume this is a beta problem.
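
The kind of account setup I have in mind is roughly this (a sketch only; the
user name and password are placeholders):

GRANT ALL PRIVILEGES ON *.* TO 'cocoa'@'localhost' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
-- with 4.1.x, an older client library may also need the pre-4.1 password format:
-- SET PASSWORD FOR 'cocoa'@'localhost' = OLD_PASSWORD('secret');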

Suggestions welcome

Peter

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Between Operator

2004-07-09 Thread Pete Harlan
On Fri, Jul 09, 2004 at 09:39:02AM -0500, Craig Hoffman wrote:
> Style:  Traditional
> Area:  Yosemite
> Rating: From: 5.5 To: 5.10c
...
> "SELECT * FROM routes, users WHERE area='$area' AND style='$style'  
> BETWEEN rating='[$rating1]' AND rating='[$rating2]' GROUP BY route 
> ORDER BY rating ASC ";
> 
>  For some reason which I am not seeing, this query is not doing what it 
> should be doing.  Does anyone have any suggestions?

For starters your between syntax isn't correct (but is parsable in
ways you didn't want).  You probably want:

select  *
fromroutes, users
where   area = '$area'  and
style = '$style'and
rating between '$rating1' and '$rating2'
group by route
order by rating

As others have pointed out, your ratings aren't something MySQL will
know how to order.  That's a separate problem (and more difficult to
solve), but the between syntax is also one.

--Pete

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: anyone help with this query? Returning tooooo many results

2004-07-09 Thread Aaron Wolski
Well well...

That worked too!

Damn... this is starting to make life easier :)

Thanks again. Very much appreciated!!!

Aaron

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: July 9, 2004 2:00 PM
> To: Aaron Wolski
> Cc: [EMAIL PROTECTED]
> Subject: RE: anyone help with this query? Returning to many
results
> 
> 
> Aaron,
> 
> That would be an INNER JOIN situation:
> 
> SELECT a.ID, a.First, a.Last, a.Email
> FROM producta_customers a
> INNER JOIN productb_customers b
>ON a.email=b.email
> 
> Yours,
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
> 
> 
> 
> 
> 
>   "Aaron Wolski"
>   <[EMAIL PROTECTED]To:
> <[EMAIL PROTECTED]>
>   z.com>   cc:
> <[EMAIL PROTECTED]>
>Fax to:
>   07/09/2004 01:10 Subject:  RE: anyone
help
> with this query? Returning to many
>   PMresults
> 
> 
> 
> 
> 
> 
> Hi all,
> 
> First... I just want to thank everyone for their help and explanations
> of how I was going wrong, and the measures to correct my logic!
> 
> Great, great advice.
> 
> Shawn's solution worked absolutely wonderfully for my needs.
> 
> My next question is how do I reverse the query so that I can get all of
> those customers who DO have an email address that matches in each table?
> 
> Thanks again guys. Very much appreciated!
> 
> Aaron
> 
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> > Sent: July 9, 2004 12:17 PM
> > To: Aaron Wolski
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: anyone help with this query? Returning to many
> results
> >
> >
> > You have written a cross-product join. This is what happened but
with
> a
> > much smaller example:
> >
> > Assume you have two tables: Colors and Sizes
> >
> > CREATE TABLE Colors (
> >   id int auto_increment primary key
> >   , name varchar(10)
> > );
> >
> > CREATE TABLE Sizes (
> >   id int auto_increment primary key
> >   , abbr varchar(6)
> > );
> >
> > And you populate them with the following data:
> >
> > INSERT Colors (name) VALUES ('Red'),('Blue'),('Yellow'),('Violet');
> > INSERT Sizes (abbr) VALUES ('XS'),('M'),('L'),('XL'),('XXL');
> >
> >
> > This query:
> > SELECT colors.name, sizes.abbr FROM Colors, Sizes;
> >
> > Returns:
> > ++--+
> > | name   | abbr |
> > ++--+
> > | Red| XS   |
> > | Blue   | XS   |
> > | Yellow | XS   |
> > | Violet | XS   |
> > | Red| M|
> > | Blue   | M|
> > | Yellow | M|
> > | Violet | M|
> > | Red| L|
> > | Blue   | L|
> > | Yellow | L|
> > | Violet | L|
> > | Red| XL   |
> > | Blue   | XL   |
> > | Yellow | XL   |
> > | Violet | XL   |
> > | Red| XXL  |
> > | Blue   | XXL  |
> > | Yellow | XXL  |
> > | Violet | XXL  |
> > ++--+
> > 20 rows in set (0.04 sec)
> >
> > Notice that every possible combination between color and size is
> listed?
> > When you wrote your query, you also asked the query engine to create
> every
> > possible combination between each customer in the first table and
> every
> > customer in the second table. That resulted in 486,240 matches. Then
> the
> > engine applied your WHERE condition to all of those matches and
> ELIMINATED
> > of all of the rows where the email addresses were the SAME between
the
> two
> > tables so you wound up with *only* 486,057 combinations of customers
> > between the two tables where their addresses were different.
> >
> > I think what you wanted to find was all of the rows in one table
that
> > didn't match any rows in the other table. You can do it with this
> > statement:
> >
> > SELECT a.ID, a.First, a.Last, a.Email
> > FROM producta_customers a
> > LEFT JOIN productb_customers b
> >   ON a.email=b.email
> > WHERE b.id is null
> >
> > This will give you all of the records in producta_customers that DO
> NOT
> > have a matching email address in the productb_customers table.
> >
> > Yours,
> > Shawn Green
> > Database Administrator
> > Unimin Corporation - Spruce Pine
> >
> >
> >
> >   "Aaron Wolski"
> >   <[EMAIL PROTECTED]To:
> > <[EMAIL PROTECTED]>
> >   z.com>   cc:
> >Fax to:
> >   07/09/2004 11:33 Subject:  anyone help
> with
> > this query? Returning to many results
> >   AM
> >
> >
> >
> >
> >
> >
> > Hi all,
> >
> > Having a problem with a query that's returning 486,057 results when
it
> > most definitely should NOT be doing that.
> >
> > I have two tables:
> >
> > 1 for a list of customers that purchase product A, another for
> customers
> > who purchased product B.
> >
> > Columns are:
> >
> > Id
> > First
> > Last
> > Email
> >
> > I am trying to compare

RE: anyone help with this query? Returning tooooo many results

2004-07-09 Thread SGreen

Aaron,

That would be an INNER JOIN situation:

SELECT a.ID, a.First, a.Last, a.Email
FROM producta_customers a
INNER JOIN productb_customers b
   ON a.email=b.email

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




   

  "Aaron Wolski"   

  <[EMAIL PROTECTED]To:   <[EMAIL PROTECTED]>  
 
  z.com>   cc:   <[EMAIL PROTECTED]>   

   Fax to: 

  07/09/2004 01:10 Subject:  RE: anyone help with this 
query? Returning to many
  PMresults

   

   





Hi all,

First... I just want to thank everyone for their help and explanations
of how I was going wrong, and the measures to correct my logic!

Great, great advice.

Shawn's solution worked absolutely wonderfully for my needs.

My next question is how do I reverse the query so that I can get all of
those customers who DO have an email address that matches in each table?

Thanks again guys. Very much appreciated!

Aaron

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: July 9, 2004 12:17 PM
> To: Aaron Wolski
> Cc: [EMAIL PROTECTED]
> Subject: Re: anyone help with this query? Returning to many
results
>
>
> You have written a cross-product join. This is what happened but with
a
> much smaller example:
>
> Assume you have two tables: Colors and Sizes
>
> CREATE TABLE Colors (
>   id int auto_increment primary key
>   , name varchar(10)
> );
>
> CREATE TABLE Sizes (
>   id int auto_increment primary key
>   , abbr varchar(6)
> );
>
> And you populate them with the following data:
>
> INSERT Colors (name) VALUES ('Red'),('Blue'),('Yellow'),('Violet');
> INSERT Sizes (abbr) VALUES ('XS'),('M'),('L'),('XL'),('XXL');
>
>
> This query:
> SELECT colors.name, sizes.abbr FROM Colors, Sizes;
>
> Returns:
> ++--+
> | name   | abbr |
> ++--+
> | Red| XS   |
> | Blue   | XS   |
> | Yellow | XS   |
> | Violet | XS   |
> | Red| M|
> | Blue   | M|
> | Yellow | M|
> | Violet | M|
> | Red| L|
> | Blue   | L|
> | Yellow | L|
> | Violet | L|
> | Red| XL   |
> | Blue   | XL   |
> | Yellow | XL   |
> | Violet | XL   |
> | Red| XXL  |
> | Blue   | XXL  |
> | Yellow | XXL  |
> | Violet | XXL  |
> ++--+
> 20 rows in set (0.04 sec)
>
> Notice that every possible combination between color and size is
listed?
> When you wrote your query, you also asked the query engine to create
every
> possible combination between each customer in the first table and
every
> customer in the second table. That resulted in 486,240 matches. Then
the
> engine applied your WHERE condition to all of those matches and
ELIMINATED
> of all of the rows where the email addresses were the SAME between the
two
> tables so you wound up with *only* 486,057 combinations of customers
> between the two tables where their addresses were different.
>
> I think what you wanted to find was all of the rows in one table that
> didn't match any rows in the other table. You can do it with this
> statement:
>
> SELECT a.ID, a.First, a.Last, a.Email
> FROM producta_customers a
> LEFT JOIN productb_customers b
>   ON a.email=b.email
> WHERE b.id is null
>
> This will give you all of the records in producta_customers that DO
NOT
> have a matching email address in the productb_customers table.
>
> Yours,
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
>
>
>
>   "Aaron Wolski"
>   <[EMAIL PROTECTED]To:
> <[EMAIL PROTECTED]>
>   z.com>   cc:
>Fax to:
>   07/09/2004 11:33 Subject:  anyone help
with
> this query? Returning to many results
>   AM
>
>
>
>
>
>
> Hi all,
>
> Having a problem with a query that's returning 486,057 results when it
> most definitely should NOT be doing that.
>
> I have two tables:
>
> 1 for a list of customers that purchase product A, another for
customers
> who purchased product B.
>
> Columns are:
>
> Id
> First
> Last
> Email
>
> I am trying to compare table 1 to table 2 to get a result set 

Loading data into "TEXT" column;

2004-07-09 Thread Jens Gerster
Hi,

I'd like to load "TEXT"-data of variable length
into an existing table with "CHAR"-columns.

The input file consists of about 4200 lines
with alphanumeric characters and newline only.

Instead of filling in the "TEXT" column
next to the existing columns, MySQL
appends new rows, filling the "TEXT" column
starting after the last row of the existing table.

I tried many options of "LOAD DATA INFILE" and
checked my input file for problematic characters.
Didn't help. I'm totally stuck and stopped
working on the project.

Could you tell me how to import my 
"TEXT" data correctly ?

Thanks a lot for your help.

Harald







___
Sent from Yahoo! Mail - Now with 100MB of free storage - Sign up here: 
http://mail.yahoo.de

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: anyone help with this query? Returning tooooo many results

2004-07-09 Thread Aaron Wolski
Hi all,

First... I just want to thank everyone for their help and explanations
of how I was going wrong, and the measures to correct my logic!

Great, great advice.

Shawn's solution worked absolutely wonderfully for my needs.

My next question is how do I reverse the query so that I can get all of
those customers who DO have an email address that matches in each table?

Thanks again guys. Very much appreciated!

Aaron

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: July 9, 2004 12:17 PM
> To: Aaron Wolski
> Cc: [EMAIL PROTECTED]
> Subject: Re: anyone help with this query? Returning to many
results
> 
> 
> You have written a cross-product join. This is what happened but with
a
> much smaller example:
> 
> Assume you have two tables: Colors and Sizes
> 
> CREATE TABLE Colors (
>   id int auto_increment primary key
>   , name varchar(10)
> );
> 
> CREATE TABLE Sizes (
>   id int auto_increment primary key
>   , abbr varchar(6)
> );
> 
> And you populate them with the following data:
> 
> INSERT Colors (name) VALUES ('Red'),('Blue'),('Yellow'),('Violet');
> INSERT Sizes (abbr) VALUES ('XS'),('M'),('L'),('XL'),('XXL');
> 
> 
> This query:
> SELECT colors.name, sizes.abbr FROM Colors, Sizes;
> 
> Returns:
> ++--+
> | name   | abbr |
> ++--+
> | Red| XS   |
> | Blue   | XS   |
> | Yellow | XS   |
> | Violet | XS   |
> | Red| M|
> | Blue   | M|
> | Yellow | M|
> | Violet | M|
> | Red| L|
> | Blue   | L|
> | Yellow | L|
> | Violet | L|
> | Red| XL   |
> | Blue   | XL   |
> | Yellow | XL   |
> | Violet | XL   |
> | Red| XXL  |
> | Blue   | XXL  |
> | Yellow | XXL  |
> | Violet | XXL  |
> ++--+
> 20 rows in set (0.04 sec)
> 
> Notice that every possible combination between color and size is
listed?
> When you wrote your query, you also asked the query engine to create
every
> possible combination between each customer in the first table and
every
> customer in the second table. That resulted in 486,240 matches. Then
the
> engine applied your WHERE condition to all of those matches and
ELIMINATED
> of all of the rows where the email addresses were the SAME between the
two
> tables so you wound up with *only* 486,057 combinations of customers
> between the two tables where their addresses were different.
> 
> I think what you wanted to find was all of the rows in one table that
> didn't match any rows in the other table. You can do it with this
> statement:
> 
> SELECT a.ID, a.First, a.Last, a.Email
> FROM producta_customers a
> LEFT JOIN productb_customers b
>   ON a.email=b.email
> WHERE b.id is null
> 
> This will give you all of the records in producta_customers that DO
NOT
> have a matching email address in the productb_customers table.
> 
> Yours,
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
> 
> 
> 
>   "Aaron Wolski"
>   <[EMAIL PROTECTED]To:
> <[EMAIL PROTECTED]>
>   z.com>   cc:
>Fax to:
>   07/09/2004 11:33 Subject:  anyone help
with
> this query? Returning to many results
>   AM
> 
> 
> 
> 
> 
> 
> Hi all,
> 
> Having a problem with a query that's returning 486,057 results when it
> most definitely should NOT be doing that.
> 
> I have two tables:
> 
> 1 for a list of customers that purchase product A, another for
customers
> who purchased product B.
> 
> Columns are:
> 
> Id
> First
> Last
> Email
> 
> I am trying to compare table 1 to table 2 to get a result set that
gives
> me the contact info (table columns) for those whose email addresses in
> table 1 DON'T EQUAL those in table two.
> 
> In table one I have 2026 records
> In table two I have 240 records
> 
> The query is this:
> 
> SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
> t1.email != t2.email
> 
> When I do this query, I get 486,057 results returned.
> 
> Where am I going wrong? Any ideas?
> 
> Thanks so much for the help!
> 
> Aaron
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 
> 
> 
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ERROR 2013: Lost connection to MySQL server during query

2004-07-09 Thread Batara Kesuma
> I am running Debian sarge with MySQL 4.0.20. My problem is I can
> connect from localhost, but when I try to connect from other host this
> error comes up:
> ERROR 2013: Lost connection to MySQL server during query

Sorry I found the answer. I have ALL:ALL in hosts.deny 
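
For the archives: the counterpart entry goes in /etc/hosts.allow, something
like the line below (the network prefix is only an example; this applies
because this mysqld honours hosts.allow/hosts.deny, which is why the ALL:ALL
entry blocked it in the first place):

# /etc/hosts.allow
mysqld: 219.106.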


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ERROR 2013: Lost connection to MySQL server during query

2004-07-09 Thread Batara Kesuma
Hi,

I am running Debian sarge with MySQL 4.0.20. My problem is I can connect
from localhost, but when I try to connect from another host this error
comes up:
ERROR 2013: Lost connection to MySQL server during query

I tried from many clients, including MySQL 4.0 and MySQL 3.23, but they
all got the same error. 

I also noticed that usually when I connct to server that I have no
access, the error looks like:
ERROR 1130: Host '219.106.XX.XX' is not allowed to connect to this MySQL
server

But when I tried to connect to this server, the error was:
ERROR 2013: Lost connection to MySQL server during query

I have tried google, without any result. Please help.

Thanks in advance,
Batara


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Weeding out duplicates

2004-07-09 Thread Jonathan Duncan
Lachlan,

I want to identify the entries in the table where the email addresses
are the same as another entry.  Whatever else is in the record does not
matter to me.

However, a second requirement for the query is that it show me the last
duplicate instead of the first.  This way I keep the first entries and
remove subsequent ones.
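
In other words, something along these lines is what I am after (only a
sketch; it assumes the lowest id is the one to keep):

SELECT dup.id, dup.firstname, dup.lastname, dup.email
FROM Users AS keeper
JOIN Users AS dup ON dup.email = keeper.email AND dup.id > keeper.id
GROUP BY dup.id;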

Thanks,
Jonathan Duncan
 
 
>>>"Lachlan Mulcahy" <[EMAIL PROTECTED]> 07/08 10:59 pm >>> 
 
Jonathan, 
 
I'm not exactly sure what you want to do.. 
 
Do you want to identify the entries in the table where the email
addresses 
are the same as another entry but the name and address details differ...

or.. do you want to find entries where the name and address information
is 
the same but email addresses differ? 
 
Lachlan 
 
-Original Message- 
From: Jonathan Duncan [mailto:[EMAIL PROTECTED] 
Sent: Friday, 9 July 2004 7:54 AM 
To: [EMAIL PROTECTED] 
Subject: Weeding out duplicates 
 
 
I am trying to get rid of duplicate user info entries in a database.  I 
am assuming that the email address is unique but other information, like 
first name, last name, address, etc are not.  The "email" field is not a 
unique field in the database.  I was trying something like the 
following, but not getting what I wanted. 
 
select distinct u1.id,u1.firstname,u1.lastname,u1.email from Users u1, 
Users u2 where u1.email=u2.email; 
 
How can I go about identifying the duplicate entries of email addresses?

 
Thank you, 
Jonathan Duncan 
 
-- 
MySQL General Mailing List 
For list archives: http://lists.mysql.com/mysql 
To unsubscribe:   
http://lists.mysql.com/[EMAIL PROTECTED] 
 
 
 
 
-- 
MySQL General Mailing List 
For list archives: http://lists.mysql.com/mysql 
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: anyone help with this query? Returning tooooo many results

2004-07-09 Thread SGreen

You have written a cross-product join. This is what happened but with a
much smaller example:

Assume you have two tables: Colors and Sizes

CREATE TABLE Colors (
  id int auto_increment primary key
  , name varchar(10)
);

CREATE TABLE Sizes (
  id int auto_increment primary key
  , abbr varchar(6)
);

And you populate them with the following data:

INSERT Colors (name) VALUES ('Red'),('Blue'),('Yellow'),('Violet');
INSERT Sizes (abbr) VALUES ('XS'),('M'),('L'),('XL'),('XXL');


This query:
SELECT colors.name, sizes.abbr FROM Colors, Sizes;

Returns:
+--------+------+
| name   | abbr |
+--------+------+
| Red    | XS   |
| Blue   | XS   |
| Yellow | XS   |
| Violet | XS   |
| Red    | M    |
| Blue   | M    |
| Yellow | M    |
| Violet | M    |
| Red    | L    |
| Blue   | L    |
| Yellow | L    |
| Violet | L    |
| Red    | XL   |
| Blue   | XL   |
| Yellow | XL   |
| Violet | XL   |
| Red    | XXL  |
| Blue   | XXL  |
| Yellow | XXL  |
| Violet | XXL  |
+--------+------+
20 rows in set (0.04 sec)

Notice that every possible combination between color and size is listed?
When you wrote your query, you also asked the query engine to create every
possible combination between each customer in the first table and every
customer in the second table. That resulted in 486,240 matches. Then the
engine applied your WHERE condition to all of those matches and ELIMINATED
of all of the rows where the email addresses were the SAME between the two
tables so you wound up with *only* 486,057 combinations of customers
between the two tables where their addresses were different.

I think what you wanted to find was all of the rows in one table that
didn't match any rows in the other table. You can do it with this
statement:

SELECT a.ID, a.First, a.Last, a.Email
FROM producta_customers a
LEFT JOIN productb_customers b
  ON a.email=b.email
WHERE b.id is null

This will give you all of the records in producta_customers that DO NOT
have a matching email address in the productb_customers table.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


   

  "Aaron Wolski"   

  <[EMAIL PROTECTED]To:   <[EMAIL PROTECTED]>  
 
  z.com>   cc: 

   Fax to: 

  07/09/2004 11:33 Subject:  anyone help with this query? 
Returning to many results
  AM   

   

   





Hi all,

Having a problem with a query that's returning 486,057 results when it
most definitely should NOT be doing that.

I have two tables:

1 for a list of customers that purchase product A, another for customers
who purchased product B.

Columns are:

Id
First
Last
Email

I am trying to compare table 1 to table 2 to get a result set that gives
me the contact info (table columns) for those whose email addresses in
table 1 DON'T EQUAL those in table two.

In table one I have 2026 records
In table two I have 240 records

The query is this:

SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
t1.email != t2.email

When I do this query, I get 486,057 results returned.

Where am I going wrong? Any ideas?

Thanks so much for the help!

Aaron



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Script to purge

2004-07-09 Thread Mark Steele
Hi Luis,

I've made a small script to do this, it's in the 'backing up mysql
databases' on http://www.control-alt-del.org/code

Basically, you probably want to save the binary logs by archiving
them (in case of a database crash). On a binary log I get about
60-80% compression ratios, so it's worth just archiving them.

Here's the script (see the web site for explanations)

#!/usr/bin/perl
## Binary log backup utility
## Author: Mark Steele <[EMAIL PROTECTED]>
## Bug: This script will fail when you hit binlog #999 and roll over to 1
##  I'm too lazy to fix this just now, you've been warned.
use DBI;
use strict;

my $DBUSER = 'user';
my $DBPASSWORD = 'password';

## where the binlogs are eg: /usr/local/mysql/data
my $PATH_TO_DATA = '/path/to/mysql/data/files'; 

my $HOSTNAME = `hostname -s`;

## Which server to copy binary logs to
my $REMOTE_SERVER = 'backupserver.yourdomain.com'; 

## Path on remote machine where you want the backups to go
my $REMOTE_PATH = '/tmp'; 

my $dbh = DBI->connect("DBI:mysql:database=mysql;host=127.0.0.1",
 $DBUSER,$DBPASSWORD) || die;

## Figure out the current binary log file
my $sth = $dbh->prepare("SHOW MASTER STATUS");
$sth->execute();

my $lastfile = $sth->fetchrow_hashref()->{File};

## Get log list from server
$sth = $dbh->prepare("SHOW MASTER LOGS");
$sth->execute();

my @files;
while (my $ref = $sth->fetchrow_hashref()) {
last if ($ref->{Log_name} eq $lastfile);
push(@files,$ref->{Log_name});
}

## Figure out first and last binlog numbers
$lastfile =~ /\.(\d+)$/;
my $lastno = $1 - 1;
$files[0] =~ /\.(\d+)$/;
my $firstno = $1;

## Make a list of the files to backup
my $a = join(" ",@files);
chdir($PATH_TO_DATA);

## Backup the binary logs, and remove them once they are backed up
`tar cvfj $PATH_TO_DATA/$HOSTNAME-binlogs-$firstno-$lastno.tar.bz2 $a`;
$dbh->do("PURGE MASTER LOGS TO '$lastfile'");
$dbh->disconnect;

## Copy to the remote machine, comment this out if you don't want to 
## backup your binary logs to a remote machine
`/usr/bin/scp -i /root/.ssh/backup-key $PATH_TO_DATA/$HOSTNAME-binlogs-$firstno-$lastno.tar.bz2 $REMOTE_SERVER:$REMOTE_PATH`;

## Remove the backup (comment this out if you aren't 
## backing up to a remote server)
unlink("$PATH_TO_DATA/$HOSTNAME-binlogs-$firstno-$lastno.tar.gz"); 
  

Cheers,

Mark Steele
Implementation Director
CDT Inc.

-Original Message-
From: Luis Mediero [mailto:[EMAIL PROTECTED] 
Sent: July 7, 2004 1:43 PM
To: [EMAIL PROTECTED]
Subject: Script to purge

Hi,

I would like write a script to purge every nigth the master log
with a cron
process. I need do it every nigth after load a lot of data into the
master.
I know if i do 'show master status' i can see the file_name of the last
log
file and then do -purge master logs to 'file_name'-. Is possible put the
file name into a variable and then do - purge master logs to '$variable'
-,
into a script?. Someone have a example?.

I can't use 'PURGE MASTER LOGS BEFORE ' because I use the
4.0.20
version. :-(


TIA

Luis




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: anyone help with this query? Returning tooooo many results

2004-07-09 Thread Philippe Poelvoorde
From the documentation
mysql> SELECT table1.* FROM table1
->LEFT JOIN table2 ON table1.id=table2.id
->WHERE table2.id IS NULL;
will normally give you the right answer.
and you should get : 2026  x 240 - 486,057 = 183 results
Aaron Wolski wrote:
Hi all,
Having a problem with a query that's returning 486,057 results when it
most definitely should NOT be doing that.
I have two tables:
1 for a list of customers that purchase product A, another for customers
who purchased product B.
Columns are:
Id
First
Last
Email
I am trying to compare table 1 to table 2 to get a result set that gives
me the contact info (table columns) for those whose email addresses in
table 1 DON'T EQUAL those in table two.
In table one I have 2026 records
In table two I have 240 records
The query is this:
SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
t1.email != t2.email
When I do this query, I get 486,057 results returned.
Where am I going wrong? Any ideas?
Thanks so much for the help!
Aaron


--
Philippe Poelvoorde
COS Trading Ltd.
+44.(0)20.7376.2401
[EMAIL PROTECTED]
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Between Operator

2004-07-09 Thread Michael Johnson
On Fri, 09 Jul 2004 11:38:05 -0400, Keith Ivey <[EMAIL PROTECTED]>  
wrote:

Craig Hoffman wrote:
This should pull up all the rock climbs that are in Yosemite, that are  
traditional style and are between the rating 5.5 to 5.10c.  Here is my  
query:

"SELECT * FROM routes, users WHERE area='$area' AND style='$style'   
BETWEEN rating='[$rating1]' AND rating='[$rating2]' GROUP BY route  
ORDER BY rating ASC ";

Not sure what those square brackets are doing there, but your main  
problem is that MySQL has no way of knowing what order you think those  
rating strings should be in.  The string '5.5' is greater than '5.10c',  
and the number 5.5 is greater than 5.10.  I think you're going to need  
to change the way you represent the ratings -- maybe something like  
'5.05' and '5.10c' would work (depending on what other possibilities  
are, and how the letters are supposed to affect sorting). Then  
manipulate the strings to produce what you're used to when it comes time  
to display them (that, or have two columns: one for display and one for  
sorting).

You could also set up a linked table that holds the sorting and rating  
string information. You could go with either a two column or three column  
table. The only difference being that the three column table would have an  
auto increment primary key so the sorting column can be changed without  
affecting the linking. A simple join connects the sort and rating columns.
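
For example, a three column version might look like this (a sketch; the
table name and sample values are only illustrative):

CREATE TABLE rating_sort (
  id INT AUTO_INCREMENT PRIMARY KEY
  , sort_order INT NOT NULL
  , rating VARCHAR(10) NOT NULL
);

INSERT INTO rating_sort (sort_order, rating)
  VALUES (1,'5.5'), (2,'5.6'), (3,'5.10a'), (4,'5.10c');

SELECT r.*
FROM routes r
JOIN rating_sort rs ON rs.rating = r.rating
WHERE rs.sort_order BETWEEN 1 AND 4
ORDER BY rs.sort_order;

The application looks up the two sort_order values for the endpoints the
user picked and plugs them into the BETWEEN.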

Michael
--
Michael Johnson < [EMAIL PROTECTED] >
Internet Application Programmer, Pitsco, Inc.
+++ Opinions are my own, not my employer's +++
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: anyone help with this query? Returning tooooo many results

2004-07-09 Thread Alec . Cawley
"Aaron Wolski" <[EMAIL PROTECTED]> wrote on 09/07/2004 16:33:27:

> Hi all,
> 
> Having a problem with a query that's returning 486,057 results when it
> most definitely should NOT be doing that.
> 
> I have two tables:
> 
> 1 for a list of customers that purchase product A, another for customers
> who purchased product B.
> 
> Columns are:
> 
> Id
> First
> Last
> Email
> 
> I am trying to compare table 1 to table 2 to get a result set that gives
> me the contact info (table columns) for those whose email addresses in
> table 1 DON'T EQUAL those in table two.
> 
> In table one I have 2026 records
> In table two I have 240 records
> 
> The query is this:
> 
> SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
> t1.email != t2.email
> 
> When I do this query, I get 486,057 results returned.
> 
> Where am I going wrong? Any ideas?

You have not clearly understood the effect of a simple join. This creates 
(logically speaking) a table containing every possible combination fro t1 
and t2. Your WHERE statement is then stripping out those few rows where 
the email addresses match.

I am not quite sure what you are trying to do. I think you are trying to 
get the details of those customers who have bought A but not B. This is 
the province of the LEFT JOIN. Try

SELECT t1.* FROM t1 LEFT JOIN t2 ON t1.email = t2.email WHERE t2.email IS 
NULL 

This logically creates a table with a record for every customer who has 
bought both A and B PLUS a record for every customer who bough A but not 
B, with the fields for the latter being null. The WHERE statement picks 
out only the latter group of records, which are what you want. Don't worry 
about the purely virtual huge table created in the middle - MySQL can 
optimise it out. Trust In The (MySQL) Force.

Alec

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: anyone help with this query? Returning tooooo many results

2004-07-09 Thread Victor Pendleton
It sounds like a cartesian join. Have you run an explain plan on this query?
What are you joining the two tables on?

-Original Message-
From: Aaron Wolski
To: [EMAIL PROTECTED]
Sent: 7/9/04 10:33 AM
Subject: anyone help with this query? Returning to many results

Hi all,

Having a problem with a query that's returning 486,057 results when it
most definitely should NOT be doing that.

I have two tables:

1 for a list of customers that purchase product A, another for customers
who purchased product B.

Columns are:

Id
First
Last
Email

I am trying to compare table 1 to table 2 to get a result set that gives
me the contact info (table columns) for those whose email addresses in
table 1 DON'T EQUAL those in table two.

In table one I have 2026 records
In table two I have 240 records

The query is this:

SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
t1.email != t2.email

When I do this query, I get 486,057 results returned.

Where am I going wrong? Any ideas?

Thanks so much for the help!

Aaron



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: anyone help with this query? Returning tooooo many results

2004-07-09 Thread Martijn Tonies
Hi Aaron,

> Having a problem with a query that's returning 486,057 results when it
> most definitely should NOT be doing that.
>
> I have two tables:
>
> 1 for a list of customers that purchase product A, another for customers
> who purchased product B.
>
> Columns are:
>
> Id
> First
> Last
> Email
>
> I am trying to compare table 1 to table 2 to get a result set that gives
> me the contact info (table columns) for those whose email addresses in
> table 1 DON'T EQUAL those in table two.
>
> In table one I have 2026 records
> In table two I have 240 records
>
> The query is this:
>
> SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
> t1.email != t2.email
>
> When I do this query. I get 486,057 results returne.
>
> Where am I going wrong? Any ideas?

What do you want to do?

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Data corruption on deletes

2004-07-09 Thread Jim Nachlin
gerald_clark wrote:
Hardware?
Celeron 1.3Ghz, IDE drive, 512Mb RAM
OS and version?
GNU/Linux, 2.4.20-8 kernel
MySql version?
4.0.17
Size of data file?
Size of index file?
postsearch.frm 8.7K
postsearch.MYD 3.5G
postsearch.MYI 1.0G
postsearch.TMD 3.5G
Filesystem type?
ext3
  Sorry 'bout that!
Also, in the time since posting the question below, I tried the
operation again (after repairing things), but I did an ALTER TABLE ___
DISABLE KEYS before and an ENABLE KEYS after.  This did not work, as
although the DB seemed to be working afterwards,
mysql> describe  postsearch;
ERROR 1016: Can't open file: 'postsearch.MYI'. (errno: 144)
So now I am repairing again.
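(The repair itself is just the standard routine, e.g. something like the
following, run with the server stopped or the table flushed -- the data
directory path is only an example:
shell> myisamchk --recover /var/lib/mysql/mydb/postsearch.MYI
or, from within the server, REPAIR TABLE postsearch;)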
  Thanks for your interest,
   Jim


I have a table with several keys.  When I try to delete anything from 
this table, I get data corruption and have to repair it with 
myisamchk.  Selects, updates work fine.

Here's the create table statement:
CREATE TABLE `postsearch` (
  `postId` int(11) NOT NULL default '0',
  `weblogId` int(11) NOT NULL default '0',
  `url` varchar(200) NOT NULL default '',
  `plink` varchar(200) NOT NULL default '',
  `image` varchar(100) default NULL,
  `language` varchar(100) default NULL,
  `title` varchar(100) default NULL,
  `weblogTitle` varchar(100) default NULL,
  `dateCreated` datetime NOT NULL default '-00-00 00:00:00',
  `post` text,
  `excerpt` text,
  `parserVersion` varchar(255) default NULL,
  PRIMARY KEY  (`postId`),
  KEY `weblog_key` (`weblogId`,`dateCreated`),
  KEY `url_key` (`url`),
  KEY `plink_key` (`plink`),
  FULLTEXT KEY `excerpt` (`excerpt`)
) TYPE=MyISAM
I think I have to somehow disable the keys, but am not sure quite how.
Here's what happens when I try to delete:
mysql> select postId from postsearch where dateCreated < NOW() - 
INTERVAL 14 DAY limit 1;
++
| postId |
++
|  65031 |
++
1 row in set (0.10 sec)

mysql> delete from postsearch where postId=65031 limit 1;
ERROR 1034: Incorrect key file for table: 'postsearch'. Try to repair it

Anybody have any idea?
   Jim


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL Open Source Status

2004-07-09 Thread Michael Johnson
On Fri, 9 Jul 2004 09:24:37 -0500, Darryl Hoar <[EMAIL PROTECTED]>  
wrote:

Someone on a technical forum I participate in stated that
MySQL was converting to be a commercial application.
I always knew that MySQL had a commercial arm, but always
continued to have the Open Source arm as well.
For project planning purposes, will there continue to be an
open source version of MySQL for personal use ?
We're dealing with this right now. Basically, MySQL is distributed under  
the GPL and always will be. However, their interpretation of what a work  
based on the program is (which requires the new work to be GPLed) is that  
anything that connects to the server is based on it. I believe this comes  
from the fact that you need to use the client library to do so. As a  
result, they offer a commercial license for proprietary products.

The commercial license is quite inexpensive compared to most competitors  
($249 USD w/o InnoDB, $495 USD w/InnoDB) and is permanent. You get  
unlimited upgrades and can transfer the license as well. IIRC, the price  
is per server, not per processor. And there are no limitations on the  
number of connections (unlike most competitors).

All in all not a bad deal if you don't want to GPL your code.
Michael
--
Michael Johnson < [EMAIL PROTECTED] >
Internet Application Programmer, Pitsco, Inc.
+++ Opinions are my own, not my employer's +++
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: How to Speed this Query Up?

2004-07-09 Thread SGreen

I prefer to use the _explicit_ form of INNER JOIN rather than the
_implicit_ form of the comma-separated list of tables.  I feel, with no
proof either way, that by specifying which conditions belong to which JOINs
I gain more detailed control over the query process.

Here is your same query (reformatted *only* so that I made sure I didn't
leave anything out) using explicit INNER JOIN statements:

SELECT v.veg_name
  , v.veg_id
  , u.user_id
  , u.user_name
  , IFNULL( t.title_name, 'Untitled' ) AS title_name
  , ctrl.country_name
  , ctr.nice_country_name
  , te.equip_name
  , CONCAT( ui.first_name, ' ',ui.last_name ) AS full_name
FROM veg AS v
INNER JOIN loc_countries_lang AS ctrl
  ON ctrl.lang_id =0
INNER JOIN loc_countries AS ctr
  ON ctr.country_id = ctrl.country_id
INNER JOIN loc_states AS s
  ON s.state_id = v.state_id
  AND s.lang_id =0
INNER JOIN loc_districts AS d
  ON d.district_id = s.district_id
  AND d.country_id = ctr.country_id
  AND d.lang_id =0
INNER JOIN users AS u
  ON u.user_id = v.user_id
  AND u.acct_status = 'Enabled'
INNER JOIN user_intros AS ui
  ON ui.user_id = u.user_id
  AND ui.lang_id =0
LEFT JOIN veg_titles AS t
  ON t.veg_id = v.veg_id
  AND t.lang_id =0
LEFT JOIN tech_equip AS te
  ON te.equip_id = v.equip_id
WHERE  v.latest_version = 'Y'
  AND v.cur_status = 'Active'
ORDER BY v.date_submitted DESC
LIMIT 0 , 10

You might get better performance just from using the explicit INNER JOINS
but I make no assumptions. You may also get better performance if you had
composite indexes (not just several individual field indexes) on the tables
that contain all of the columns used in each JOIN clause.  For example you
could have an index on user_intros with (user_id, lang_id) and the engine
won't have to read the table to check for either condition as they would
both be integer values that exist in an index.
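
For instance (the index names are arbitrary):

ALTER TABLE user_intros ADD INDEX idx_user_lang (user_id, lang_id);
ALTER TABLE veg_titles ADD INDEX idx_veg_lang (veg_id, lang_id);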

ALSO NOTE: there is no condition in the ON clause of loc_countries_lang
that relates that table to any other.  This means that for all values in
the veg table you will need to match one row from the loc_countries_lang
table that has lang_id=0. If there are more than one languages that match
that key, you will get multiple sets of matches.

Assume for a second that all of the other JOINs have been deleted from the
query. If veg has 10 rows and there are 200 rows in loc_countries_lang
where lang_id=0 then your WHERE clause will have to be evaluated against
2000 rows. That may be what you want to do but it's going to take a
while no matter what.

I have an aside to the developers 
Have you considered expanding the query engine to also use the nonnumeric
values stored in indexes to avoid additional data table reads?? (According
to the docs, the engine would only retrieve a value from the index
(avoiding another table read) if the data is numeric.)
-- end aside

Please, let me know if I helped or not, OK?
Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

  "Doug V" 

  <[EMAIL PROTECTED]>To:   [EMAIL PROTECTED]   
  
   cc: 

  07/09/2004 01:51 Fax to: 

  AM   Subject:  How to Speed this Query Up?   

   

   





A query which is constantly being run takes about 3 seconds when not
cached,
and I was wondering if there were any way to speed this up. There are
several tables being joined and sorted by latest date with a LIMIT of 10.
All fields being joined by are indexed. So I'm not sure what else I can do.

The query and EXPLAIN are listed below. Based on the information below, is
there anything I can do to speed this up? There are about 100K rows in the
main veg table.

SELECT v.veg_name, v.veg_id, u.user_id, u.user_name, IFNULL( t.title_name,
'Untitled' ) AS title_name, ctrl.country_name,
ctr.nice_country_name, te.equip_name, CONCAT( ui.first_name, ' ',
ui.last_name ) AS full_name
FROM veg AS v, loc_countries AS ctr, loc_countries_lang AS ctrl,
loc_districts AS d, loc_states AS s, users AS u, user_intros AS ui
LEFT JOIN veg_titles AS t ON v.veg_id = t.veg_id AND t.lang_id =0
LEFT JOIN tech_equip AS te ON v.equip_id = te.equip_id
WHERE d.lang_id =0 AND ctrl.lang_id

Re: Between Operator

2004-07-09 Thread Keith Ivey
Craig Hoffman wrote:
This should pull up all the rock climbs that are in Yosemite, that are 
traditional style and are between the rating 5.5 to 5.10c.  Here is my 
query:

"SELECT * FROM routes, users WHERE area='$area' AND style='$style'  
BETWEEN rating='[$rating1]' AND rating='[$rating2]' GROUP BY route 
ORDER BY rating ASC ";

Not sure what those square brackets are doing there, but your main 
problem is that MySQL has no way of knowing what order you think those 
rating strings should be in.  The string '5.5' is greater than '5.10c', 
and the number 5.5 is greater than 5.10.  I think you're going to need 
to change the way you represent the ratings -- maybe something like 
'5.05' and '5.10c' would work (depending on what other possibilities 
are, and how the letters are supposed to affect sorting). Then 
manipulate the strings to produce what you're used to when it comes time 
to display them (that, or have two columns: one for display and one for 
sorting).

--
Keith Ivey <[EMAIL PROTECTED]>
Washington, DC
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


anyone help with this query? Returning tooooo many results

2004-07-09 Thread Aaron Wolski
Hi all,

Having a problem with a query that's returning 486,057 results when it
most definitely should NOT be doing that.

I have two tables:

1 for a list of customers that purchase product A, another for customers
who purchased product B.

Columns are:

Id
First
Last
Email

I am trying to compare table 1 to table 2 to get a result set that gives
me the contact info (table columns) for those whose email addresses in
table 1 DON'T EQUAL those in table two.

In table one I have 2026 records
In table two I have 240 records

The query is this:

SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
t1.email != t2.email

When I do this query, I get 486,057 results returned.

Where am I going wrong? Any ideas?

Thanks so much for the help!

Aaron



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Control Center : dump

2004-07-09 Thread Philippe Poelvoorde
[EMAIL PROTECTED] wrote:
Just run the query:
SELECT * FROM table_name_goes_here;
To see all of the columns and all of the data in a particular table. I
don't use the control center but I have heard that if you do not change a
setting, it limits you by default to viewing only the first 1000 records of
any query. How to set that value was answered recently in this same list so
you should be able to find it in the archives rather easily.
SELECT ... just shows the data, and does not generate :
INSERT INTO my_table (...)
that a 'mysqldump my_base my_table' would do.
It doesn't matter, I don't usually use the CC, but that can be useful.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

   
From: Philippe Poelvoorde <[EMAIL PROTECTED]>  To: [EMAIL PROTECTED]  Sent: 07/09/2004 06:34 AM  Subject: Control Center : dump
   
   


Hi,
mysqlcc has a nice option which is: show create, but is there a menu
where one can obtain something like: show insert (or a dump of the table)?
--
Philippe Poelvoorde
COS Trading Ltd.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
Philippe Poelvoorde
COS Trading Ltd.
+44.(0)20.7376.2401
[EMAIL PROTECTED]
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Secure Database Systems

2004-07-09 Thread mos
At 06:29 PM 7/8/2004, you wrote:
Hi David, the link you provided is quite interesting. Does such a
database (a translucent database) actually exist? Or is it just a concept?
Thanks
Sarah,
These databases do exist. Transparent (translucent) encryption has 
been around for a while (at least on Windows machines). Take a look at 
Advantage database from Extended Systems http://advantagedatabase.com. They 
have a Windows and Linux version. They use AES to encrypt the data, index, 
blobs (including memos). You just enter your password when you want access 
to the table, and only your program can see the data. It does the 
decryption on a record by record basis in memory only, so no one else who 
has access to your machine when you are viewing your data can see any 
unencrypted data. There is no need to re-encrypt the data you've changed 
because it is done when the record is saved. It's all transparent.

I've found adding encryption does not slow down record access. 
Their LAN version (Advantage Local Server:ALS) is free to use and 
distribute. Their C/S version (Advantage Database Server:ADS) costs 
$.  Unfortunately their license agreement does not allow ALS to be used as 
a webserver, but you can legally connect from one ALS application to 
another ALS application that resides on a webserver (thin client). See 
their license agreement for more info. I've had ALS up to 1 million rows 
and it is still lightning fast.

By default ALS allows for 5 connected users at a time, but this 
can be bumped up to around 20 if you request it. For more users you should 
really use their ADS product. Applications are typically written in Delphi 
or C++Builder for Windows but they have free ODBC driver and also one that 
works with PHP and another for Linux using Borland's Kylix). They also have 
a database architect program that has simple query access to the tables 
without writing any code. So you can define your tables, enter the data, 
and query it all from the data architect program.

It would be nice if MySQL had the same capabilities, because if 
someone steals your computer with your MySQL database on it, you're 
screwed. If you have your MySQL database on a virtual server, you also 
have a security problem. With Advantage they still won't be able to break 
into it because each encrypted table can have their own password. Using hex 
editors is of no use and like I said, even indexes and blobs are encrypted. 
Passwords are not transmitted in the clear.

Mike

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Linux 2GB Memory Limit

2004-07-09 Thread Kevin Jackson

Again, this isn't an issue in any kernel from the last 3+ years (2.4.0-test7
and later), which use LFS, though the filesystem implementations are more
recent (but still a couple of years old).
Ext3 supports this if you are looking at the Enterprise Linux kernels, and
ReiserFS also supports it, as do the other journaling filesystems.

http://www.suse.de/~aj/linux_lfs.html

for more info.

Again, all modern distros support this feature - but they support it because
the Linux kernel supports it, which you can make work with your existing
distro version.

Kev

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: 09 July 2004 15:38
To: Kevin Jackson
Subject: RE: Linux 2GB Memory Limit


He is talking about file size.

--ja

On Fri, 9 Jul 2004, Kevin Jackson wrote:

> What version are you using?
> What platform are you on?
> How old is your hardware?
> 
> The 2Gb limit has long been addressed.
> 
> RH9, Fedora, RHES all support more than 2Gb Ram (assuming Ram) out of the
> box... but its dependent on the kernel.
> 
> Newer 2.4 uses a 3G/1G split to address the 4Gb it could handle. (3Gb
user,
> 1Gb kernel).
> 2.6 uses 4G/4G split allowing a lot more to be used (64Gb with the hugemem
> kernels).  This has been backported to the RHES 2.4.21 kernels.
> 
> If you are running some stuff on larger amounts of memory in a production
> environment I'd start to look at the "Enterprise" distributions such as
RHEL
> (or their F/OSS rebuild equivalents like TAO Linux and White box Linux).
> 
> Kev
> 
> -Original Message-
> From: Marvin Wright [mailto:[EMAIL PROTECTED] 
> Sent: 09 July 2004 14:07
> To: [EMAIL PROTECTED]
> Subject: Linux 2GB Memory Limit
> 
> Hi,
> 
> Is there any work around for this yet where a process can not allocate
more
> than 2GB.
> Can I upgrade my Redhat OS to any particular version ?
> 
> Many Thanks.
> 
> Marvin Wright
> Flights Developer
> Lastminute.com
> [EMAIL PROTECTED]
> +44 (0) 207 802 4543
> 
> 
> 
> This e-mail has been scanned for all viruses by Star Internet. The
> service is powered by MessageLabs. For more information on a proactive
> anti-virus service working around the clock, around the globe, visit:
> http://www.star.net.uk
> 
> 
> 

-- 


**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
the system manager.
This footnote also confirms that this email message has been swept by
MIMEsweeper for the presence of computer viruses.
www.mimesweeper.com
**


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Linux 2GB Memory Limit

2004-07-09 Thread Marvin Wright
Hi,

Current Platform
 RH version is 7.3 
 IBM Blade Server - 2 x Intel(R) Xeon(TM) CPU 3.20GHz
 32 GB SCSI
 4 GB Ram

This is the platform we are moving to in a week or so
 RH Enterprise AS 2.1 or 3.0
 4 x Intel(R) Xeon(TM) MP CPU 2.70GHz
 128 GB SCSI Raid
 16 GB Ram

So with the new platform I'll be able to have a much bigger InnoDB buffer
pool.

Thats good news.

Cheers.

Marvin.


-Original Message-
From: Kevin Jackson [mailto:[EMAIL PROTECTED]
Sent: 09 July 2004 15:28
To: [EMAIL PROTECTED]
Subject: RE: Linux 2GB Memory Limit


What version are you using?
What platform are you on?
How old is your hardware?

The 2Gb limit has long been addressed.

RH9, Fedora, RHES all support more than 2Gb Ram (assuming Ram) out of the
box... but its dependent on the kernel.

Newer 2.4 uses a 3G/1G split to address the 4Gb it could handle. (3Gb user,
1Gb kernel).
2.6 uses 4G/4G split allowing a lot more to be used (64Gb with the hugemem
kernels).  This has been backported to the RHES 2.4.21 kernels.

If you are running some stuff on larger amounts of memory in a production
environment I'd start to look at the "Enterprise" distributions such as RHEL
(or their F/OSS rebuild equivalents like TAO Linux and White box Linux).

Kev

-Original Message-
From: Marvin Wright [mailto:[EMAIL PROTECTED] 
Sent: 09 July 2004 14:07
To: [EMAIL PROTECTED]
Subject: Linux 2GB Memory Limit

Hi,

Is there any work around for this yet where a process can not allocate more
than 2GB.
Can I upgrade my Redhat OS to any particular version ?

Many Thanks.

Marvin Wright
Flights Developer
Lastminute.com
[EMAIL PROTECTED]
+44 (0) 207 802 4543



This e-mail has been scanned for all viruses by Star Internet. The
service is powered by MessageLabs. For more information on a proactive
anti-virus service working around the clock, around the globe, visit:
http://www.star.net.uk


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
the system manager.
This footnote also confirms that this email message has been swept by
MIMEsweeper for the presence of computer viruses.
www.mimesweeper.com
**


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



This e-mail has been scanned for all viruses by Star Internet. The
service is powered by MessageLabs. For more information on a proactive
anti-virus service working around the clock, around the globe, visit:
http://www.star.net.uk





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: problem importing .csv (excel format) into mysql

2004-07-09 Thread Chip Wiegand
"Joshua J. Kugler" <[EMAIL PROTECTED]> wrote on 07/08/2004 04:24:41 PM:

> On Thursday 08 July 2004 02:35 pm, Chip Wiegand said something like:
> > I was sent an excel file from a remote office, and need to put the 
data
> > into a mysql database to be displayed on our web site. I removed a few
> > lines of fluff from the excel file and saved it as .csv (using .csv
> > (ms-dos)). When I try to import the file it gives me a duplicate entry 
for
> > key 1 error. I have looked through the file and the duplicate item it 
is
> > pointing to does not exist in the file. Here is the error:
> >
> > mysql> load data infile '/usr/home/autopilots/whs4.csv'
> > -> into table refurbs
> > -> fields terminated by ','
> > -> optionally enclosed by '"'
> > -> lines terminated by '\n';
> > ERROR 1062: Duplicate entry '2147483647' for key 1
> 
> That sounds like you have a number that is too large for the field, and 
MySQL 
> is rolling it over.  What is the type of your key?  Int? Big int?  And 
what 
> is in that field in the Excel file?  What kind of numbers?

Thanks for the tip. I'm a bit out of practice, I forgot to specify the 
column names - so field two was getting imported into the field 1 id 
column.
Regards,
Chip
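
For illustration, the corrected statement would take an explicit column list, along these
lines (the column names after the table are hypothetical; the auto-increment id column is
simply left out so MySQL assigns it):

mysql> load data infile '/usr/home/autopilots/whs4.csv'
    -> into table refurbs
    -> fields terminated by ','
    -> optionally enclosed by '"'
    -> lines terminated by '\n'
    -> (model, description, price);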


> j- k-
> 
> -- 
> Joshua J. Kugler -- Fairbanks, Alaska -- ICQ#:13706295
> Every knee shall bow, and every tongue confess, in heaven, on earth,and 
under 
> the earth, that Jesus Christ is LORD -- Count on it!
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 


Between Operator

2004-07-09 Thread Craig Hoffman
Hey Everyone,
I have a query where one selects a "style", then an "area", and finally a
"rating".  When someone selects a rating they select a range of ratings.
For example:

Style:  Traditional
Area:  Yosemite
Rating: From: 5.5 To: 5.10c
This should pull up all the rock climbs that are in Yosemite, that are 
traditional style and are between the rating 5.5 to 5.10c.  Here is my 
query:

"SELECT * FROM routes, users WHERE area='$area' AND style='$style'  
BETWEEN rating='[$rating1]' AND rating='[$rating2]' GROUP BY route 
ORDER BY rating ASC ";

 For some reason which I am not seeing, this query is not doing what it 
should be doing.  Does anyone have any suggestions?

Thanks, Craig
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Control Center : dump

2004-07-09 Thread SGreen

Just run the query:

SELECT * FROM table_name_goes_here;

To see all of the columns and all of the data in a particular table. I
don't use the control center but I have heard that if you do not change a
setting, it limits you by default to viewing only the first 1000 records of
any query. How to set that value was answered recently in this same list so
you should be able to find it in the archives rather easily.
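
If it is the INSERT statements you are after, the command-line mysqldump tool produces
them; for example (database and table names are placeholders):

shell> mysqldump --no-create-info db_name table_name > table_data.sql
shell> mysqldump db_name table_name > table_full.sql   # includes the CREATE TABLE too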

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

  From:    Philippe Poelvoorde <[EMAIL PROTECTED]ing.co.uk>
  To:      [EMAIL PROTECTED]
  Date:    07/09/2004 06:34 AM
  Subject: Control Center : dump





Hi,

mysqlcc have a nice option which is : show create, but is there a menu
where one can obtain something like : show insert (or a dump of the table)
?

--
Philippe Poelvoorde
COS Trading Ltd.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Martijn Tonies
Hi Michael,

> >> > If you need more performance, throw more hardware at it -
> >> > a larger cache (settings -> memory), faster disks and a faster CPU.
> >>
> >> After adding a column for "one level up", adding indexes, optimizing
the
> >> query it took only a few hundreds of seconds.
> >
> > Of course, indices should be added to get acceptable performance.
> > That's what they are here for.
> >
> > Nevertheless, your database design should be based on logic
> > and all data should be stored normalized. If you're de-normalizing
> > your design to get better performance, then there's something
> > wrong with the database engine (whatever engine that may be).
>
> Unfortunately, there is not a perfect database engine. Sometimes you have
> to break normalization to get acceptable performance, especially when you
> can't throw more hardware at the problem. I have no doubt that some day
> every problem that must be de-normalized now for acceptable performance
> can be renormalized at some future time. But you can't know when that
> future time will be exactly and must accept a compromise in the meantime.

What you're saying is true. The difference is that I would start by saying:
don't bother about joins causing trouble unless they do.

That's different than starting to ask "do joins cause trouble".

I think we agree on that.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Linux 2GB Memory Limit

2004-07-09 Thread Kevin Jackson
What version are you using?
What platform are you on?
How old is your hardware?

The 2Gb limit has long been addressed.

RH9, Fedora, RHES all support more than 2Gb Ram (assuming Ram) out of the
box... but it's dependent on the kernel.

Newer 2.4 uses a 3G/1G split to address the 4Gb it could handle. (3Gb user,
1Gb kernel).
2.6 uses 4G/4G split allowing a lot more to be used (64Gb with the hugemem
kernels).  This has been backported to the RHES 2.4.21 kernels.
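
A quick way to confirm which kernel you are running and how much memory it sees
(plain Linux commands, nothing MySQL-specific):

uname -r
grep MemTotal /proc/meminfo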

If you are running some stuff on larger amounts of memory in a production
environment I'd start to look at the "Enterprise" distributions such as RHEL
(or their F/OSS rebuild equivalents like TAO Linux and White box Linux).

Kev

-Original Message-
From: Marvin Wright [mailto:[EMAIL PROTECTED] 
Sent: 09 July 2004 14:07
To: [EMAIL PROTECTED]
Subject: Linux 2GB Memory Limit

Hi,

Is there any work around for this yet where a process can not allocate more
than 2GB.
Can I upgrade my Redhat OS to any particular version ?

Many Thanks.

Marvin Wright
Flights Developer
Lastminute.com
[EMAIL PROTECTED]
+44 (0) 207 802 4543



This e-mail has been scanned for all viruses by Star Internet. The
service is powered by MessageLabs. For more information on a proactive
anti-virus service working around the clock, around the globe, visit:
http://www.star.net.uk


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
the system manager.
This footnote also confirms that this email message has been swept by
MIMEsweeper for the presence of computer viruses.
www.mimesweeper.com
**


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL Open Source Status

2004-07-09 Thread Darryl Hoar
Greetings,
Someone on a technical forum I participate in stated that
MySQL was converting to a commercial application.
I always knew that MySQL had a commercial arm, but that it always
continued to have the Open Source arm as well.

For project planning purposes, will there continue to be an
open source version of MySQL for personal use ?

thanks,
Darryl

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: problem importing .csv (excel format) into mysql

2004-07-09 Thread SGreen

It appears as though your key column has exceeded its numerical limit. Do
you have that column typed large enough to contain the data you are putting
into it? Your database will accept the first out-of-range value and re-size
it to the MAX of that column's datatype (int?). When the second
out-of-range number rolls in, that's when the error is thrown.

Check your input data to make sure it doesn't exceed the range of your
destination columns. You may need to use a larger datatype for that column.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

  From:    Chip Wiegand <[EMAIL PROTECTED]rad.com>
  To:      [EMAIL PROTECTED]
  Date:    07/08/2004 06:35 PM
  Subject: problem importing .csv (excel format) into mysql





I was sent an excel file from a remote office, and need to put the data
into a mysql database to be displayed on our web site. I removed a few
lines of fluff from the excel file and saved it as .csv (using .csv
(ms-dos)). When I try to import the file it gives me a duplicate entry for
key 1 error. I have looked through the file and the duplicate item it is
pointing to does not exist in the file. Here is the error:

mysql> load data infile '/usr/home/autopilots/whs4.csv'
-> into table refurbs
-> fields terminated by ','
-> optionally enclosed by '"'
-> lines terminated by '\n';
ERROR 1062: Duplicate entry '2147483647' for key 1

(When I try to import through phpMyAdmin-2.5.6 it doesn't work either; it keeps
telling me to load a file, which I already did by pressing the Browse button.)

Any ideas what could cause it to fail on a non-existent key? Is there a
trick to loading Excel .csv files into mysql?

Thanks
--
Chip Wiegand
Computer Services
Simrad, Inc
www.simradusa.com
www.simrad.com
[EMAIL PROTECTED]

"When I'm working on a problem, I never think about beauty. I think only
how to solve the problem. But when I have finished, if the solution is not
beautiful, I know it is wrong." - R. Buckminster Fuller





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Space is filling up

2004-07-09 Thread Harald Fuchs
In article <[EMAIL PROTECTED]>,
gerald_clark <[EMAIL PROTECTED]> writes:

> Asif Iqbal wrote:

>> Jack Coxen wrote:
>> 
>>> If you database contains time-based data you could age out old records.  I
>>> only need to keep data for 6 months so I run a nightly script to delete any
>>> records more than 6 months old.  And before anyone asks...yes, I also run
>>> another script to ANALYZE/OPTIMIZE my tables.
>>> 
>>> 
>> 
>> My databases (mysql,rt3,test) are taking only about 2k.
>> 
> No. The ibdata* files ARE your database.
> You need more disk space, and you need it now.

Not necessarily.  ibdata files can grow if they are declared with
"autoextend", but they don't shrink, even if you're deleting all
rows.
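
The usual way to reclaim the space is a dump-and-reload cycle, sketched very roughly
below; test it on a copy first, since paths and startup commands depend on your
installation:

mysqldump --all-databases > all.sql     # dump everything
mysqladmin shutdown                     # stop the server
mv ibdata* ib_logfile* /some/backup/    # move the shared tablespace and log files aside
mysqld_safe &                           # restart however you normally do; InnoDB recreates the files
mysql < all.sql                         # reload the data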


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Jigal van Hemert
From: "Jochem van Dieten" <[EMAIL PROTECTED]>
> > After adding a column for "one level up", adding indexes, optimizing the
> > query it took only a few hundreds of seconds.
>
> Maybe I misunderstand the problem, but I get the impression you have
> the category computers>internet>providers>adsl and you want to count
> everything in computers>internet>providers. Isn't that just a BETWEEN
> 'computers>internet>providers' AND 'computers>internet>providers>z'
> which a B+tree is supposed to handle just fine?

If I remember it well we had to do something like:
computers>internet>providers
adsl (20)
cable (31)
dial-up (107)
hosting (12)

So it was not simply a computers>internet>providers>adsl% , but
computers>internet>providers>adsl without
computers>internet>providers>adsl>%

> > I really don't know how much hardware you would like to use to get these
> > results?
>
> For DMOZ data I don't want hardware, I want GiST indexes:
> http://www.sai.msu.su/~megera/postgres/gist/ltree/

OpenLDAP could also work well I guess ;-)

Anyhow, the point I was trying to make was that it isn't just a question of
adding more hardware to get performance. More often than not a few
modifications will make a query pretty fast.
Yes, the query optimizer could be improved, but for a fast and reasonably
light DBMS, MySQL isn't bad at all!

Regards, Jigal.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RV: Data Spatial

2004-07-09 Thread Wilder Castelo
I was working with ArcView, using the extension "OpenSVGMapServer 1.01"
(http://arcscripts.esri.com/scripts.asp?eLang=&eProd=&perPage=10&eQuery=mysql),
which allows exporting a Shapefile (vector file) to MySQL. It generated the
following Peru.sql for me (which I would gladly send to anyone who offers to
help). Using the statements
mySQL> create basedata peru
and
c:\mysql\bin\mysql -u root < peru.sql
I managed to load it in. The problem is that I still cannot visualize it.



Wilder Castelo Rojas
Systems Analyst III
Instituto Nacional de Estadistica
Lima - Perú




Re: How to Speed this Query Up?

2004-07-09 Thread Brent Baisley
You may try increasing the sort buffer size variable since it appears 
MySQL is resorting to sorting in a file.
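
For example (option name as in recent 4.x servers; the value is only a guess and should
fit your memory budget):

[mysqld]
sort_buffer_size = 4M

On 4.0.3 and later it can also be raised for a single connection with
SET SESSION sort_buffer_size = 4194304;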

On Jul 9, 2004, at 1:51 AM, Doug V wrote:
A query which is constantly being run takes about 3 seconds when not 
cached, and I was wondering if there were any way to speed this up. 
There are several tables being joined and sorted by latest date with a 
LIMIT of 10. All fields being joined by are indexed. So I'm not sure 
what else I can do. The query and EXPLAIN are listed below. Based on 
the information below, is there anything I can do to speed this up? 
There are about 100K rows in the main veg table.

SELECT v.veg_name, v.veg_id, u.user_id, u.user_name, IFNULL( 
t.title_name, 'Untitled' ) AS title_name, ctrl.country_name,
ctr.nice_country_name, te.equip_name, CONCAT( ui.first_name, ' ', 
ui.last_name ) AS full_name
FROM veg AS v, loc_countries AS ctr, loc_countries_lang AS ctrl, 
loc_districts AS d, loc_states AS s, users AS u, user_intros AS ui
LEFT JOIN veg_titles AS t ON v.veg_id = t.veg_id AND t.lang_id =0
LEFT JOIN tech_equip AS te ON v.equip_id = te.equip_id
WHERE d.lang_id =0 AND ctrl.lang_id =0 AND s.lang_id =0 AND 
ctr.country_id = ctrl.country_id AND d.district_id = s.district_id
AND ctr.country_id = d.country_id AND v.latest_version = 'Y' AND 
u.acct_status = 'Enabled' AND s.state_id = v.state_id
AND u.user_id = v.user_id AND v.cur_status = 'Active' AND u.user_id = 
ui.user_id AND ui.lang_id =0
ORDER BY v.date_submitted DESC
LIMIT 0 , 10

s ALL PRIMARY,district_id NULL NULL NULL 2457 Using where; Using 
temporary; Using filesort
d eq_ref PRIMARY,country_id PRIMARY 3 s.district_id,const 1
ctr eq_ref PRIMARY PRIMARY 1 d.country_id 1
ctrl eq_ref PRIMARY,country_id PRIMARY 2 ctr.country_id,const 1
v ref state_id,user_id state_id 2 s.state_id 32 Using where
u eq_ref PRIMARY PRIMARY 2 p.user_id 1 Using where
ui eq_ref uniq_user_lang_id uniq_user_lang_id 3 u.user_id,const 1
t eq_ref veg_lang_id veg_lang_id 4 v.veg_id,const 1
te eq_ref PRIMARY PRIMARY 2 p.equip_id 1

_
Is your PC infected? Get a FREE online computer virus scan from 
McAfee® Security. 
http://clinic.mcafee.com/clinic/ibuy/campaign.asp?cid=3963

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Cannot start 4.1.3 on Debian Linux

2004-07-09 Thread Sergey S. Kostyliov
Hello Michael,

On Thursday 08 July 2004 23:58, Michael Johnson wrote:
> I just tried installing 4.1.3 on my development machine today. To my  
> dismay, I couldn't get it to start properly. I was upgrading from 4.1.2,  
> which I installed identically to the procedure below.
> 
> On to the actual problem. When I started MySQL the first time I used  
> '/etc/init.d/mysql start'. mysqld_safe was started and just sat there,  
> eating CPU cycles. I couldn't connect at all. I checked the error log and  
> found the log information that follows. I also tried manually starting the  
> server. The results were the same but I discovered that mysqld_safe was  
> continually trying to start processes and failing, apparently with the  
> error shown below.
> 
> I could not find anything, after a fairly extensive search, that even  
> hinted to a problem like this. I downloaded the debug version of the  
> server, but installing it is a bit more complex than I'd like to do at the  
> moment, especially if someone has an easy fix.
> 
> To install the server, I downloaded the RPMs, converted them to DEBs for  
> debian with the alien package, and used dpkg to install them. This update  
> procedure has worked since 4.1.0. I remember at some point I did have to  
> run a conversion program for permissions, but I don't believe that was  
> necesary with this update. I am running a current (as of 2004-06-08  
> 12:00-0500) installation of Debian testing/sarge.
> 
> Does anyone have any ideas or pointers?
I think this is a bug#4407 http://bugs.mysql.com/bug.php?id=4407
Could you please check that the patch from the links fixes this problem for you?

cut>
> New value of fp=(nil) failed sanity check, terminating stack trace!
> Please read http://www.mysql.com/doc/en/Using_stack_trace.html and follow  
> instructions on how to resolve the stack trace. Resolved
> stack trace is much more helpful in diagnosing the problem, so please do
> resolve it
> Trying to get some variables.
> Some pointers may be invalid and cause the dump to abort...
> thd->query at (nil)  is invalid pointer
> thd->thread_id=0
> The manual page at http://www.mysql.com/doc/en/Crashing.html contains
> information that should help you find out what is causing the crash.
> 
> 
> Resolved backtrace:
> 0x808bd47 handle_segfault + 423
> 0x82c79d8 pthread_sighandler + 184
> 0x82c8f47 __pthread_unlock + 103
> 0x82ca2ca pthread_rwlock_rdlock + 234
> 0x826eef4 mi_rnext + 148
> 0x80fae2a index_next__9ha_myisamPc + 42
> 0x80e6789 __11GRANT_TABLEP8st_tableT1 + 1641
> 0x80e80fe grant_init__FP3THD + 526
> 0x808d389 main + 697
> 0x82d74f4 __libc_start_main + 212
> 0x8048101 _start + 33
> 
> -- 
> Michael Johnson < [EMAIL PROTECTED] >
> Internet Application Programmer, Pitsco, Inc.
> +++ Opinions expressed are my own, not my employer's +++
> 

-- 
Sergey S. Kostyliov <[EMAIL PROTECTED]>
Jabber ID: [EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Jochem van Dieten
On Fri, 9 Jul 2004 14:01:46 +0100, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> 
> Nearly always, but not absolutely always. I have a table with columns
> primary start
> primary finish
> secondary start
> secondary finish
> 
> Since it is defined that the distance between start and finish is the same
> for both primary and secondary, this is denormalised, since secondary
> finish is always given by secondary start + (primary finish - primary
> start). However, I want to use all four fields as indexes, including
> secondary finish. Does any database allow indexes on complex functions of
> columns rather than on columns themselves?

Sure. In PostgreSQL I would do:
CREATE INDEX the_idx ON table (secondary start + primary finish -
primary start);
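
If I remember the syntax right, a general expression (as opposed to a single function
call) needs an extra pair of parentheses around it, so with schematic column names that
would be roughly:

CREATE INDEX the_idx ON the_table ((secondary_start + primary_finish - primary_start));

MySQL has no expression indexes, so the usual workaround there is a precomputed column
that is kept in sync and indexed normally.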


> Or is this just a defect in MySQL?

The absence of an optimization is the absence of an optimization, not a defect.

Jochem

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Jochem van Dieten
On Fri, 9 Jul 2004 14:45:41 +0200, Jigal van Hemert <[EMAIL PROTECTED]> wrote:
> From: "Martijn Tonies" <[EMAIL PROTECTED]>
>> Design for understanding, logic and maintenance, not performance.
>>
>> If you need more performance, throw more hardware at it -
>> a larger cache (settings -> memory), faster disks and a faster CPU.
> 
> Sorry, but I can't agree with you. Years ago I had to put the DMOZ
> (http://www.dmoz.org/) database (2 million records, 100,000 or so categories
> at the time) in a MySQL database. Next we had to calculate the number of
> sites in a certain category. The 'path' to the category was known, but a
> regexp was needed to select the path of one level up. The query took > 30
> seconds.
> After adding a column for "one level up", adding indexes, optimizing the
> query it took only a few hundreds of seconds.

Maybe I misunderstand the problem, but I get the impression you have
the category computers>internet>providers>adsl and you want to count
everything in computers>internet>providers. Isn't that just a BETWEEN
'computers>internet>providers' AND 'computers>internet>providers>z' 
which a B+tree is supposed to handle just fine?
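
In SQL terms that range scan would look roughly like this (table and column names are
made up for illustration):

SELECT COUNT(*)
FROM sites
WHERE path BETWEEN 'computers>internet>providers'
           AND 'computers>internet>providers>z';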


> I really don't know how much hardware you would like to use to get these
> results?

For DMOZ data I don't want hardware, I want GiST indexes:
http://www.sai.msu.su/~megera/postgres/gist/ltree/

Jochem

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Michael Johnson
On Fri, 9 Jul 2004 14:55:40 +0200, Martijn Tonies <[EMAIL PROTECTED]>  
wrote:


> If you need more performance, throw more hardware at it -
> a larger cache (settings -> memory), faster disks and a faster CPU.
After adding a column for "one level up", adding indexes, optimizing the
query it took only a few hundreds of seconds.
Of course, indices should be added to get acceptable performance.
That's what they are here for.
Nevertheless, your database design should be based on logic
and all data should be stored normalized. If you're de-normalizing
your design to get better performance, then there's something
wrong with the database engine (whatever engine that may be).
Unfortunately, there is not a perfect database engine. Sometimes you have  
to break normalization to get acceptable performance, especially when you  
can't throw more hardware at the problem. I have no doubt that some day  
every problem that must be de-normalized now for acceptable performance  
can be renormalized at some future time. But you can't know when that  
future time will be exactly and must accept a compromise in the meantime.

Michael
--
Michael Johnson < [EMAIL PROTECTED] >
Internet Application Programmer, Pitsco, Inc.
+++ Opinions are my own, not my employer's +++
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Data corruption on deletes

2004-07-09 Thread gerald_clark
Hardware?
OS and version?
MySql version?
Size of data file?
Size of index file?
Filesystem type?
Jim Nachlin wrote:
I have a table with several keys.  When I try to delete anything from 
this table, I get data corruption and have to repair it with 
myisamchk.  Selects, updates work fine.

Here's the create table statement:
CREATE TABLE `postsearch` (
  `postId` int(11) NOT NULL default '0',
  `weblogId` int(11) NOT NULL default '0',
  `url` varchar(200) NOT NULL default '',
  `plink` varchar(200) NOT NULL default '',
  `image` varchar(100) default NULL,
  `language` varchar(100) default NULL,
  `title` varchar(100) default NULL,
  `weblogTitle` varchar(100) default NULL,
  `dateCreated` datetime NOT NULL default '-00-00 00:00:00',
  `post` text,
  `excerpt` text,
  `parserVersion` varchar(255) default NULL,
  PRIMARY KEY  (`postId`),
  KEY `weblog_key` (`weblogId`,`dateCreated`),
  KEY `url_key` (`url`),
  KEY `plink_key` (`plink`),
  FULLTEXT KEY `excerpt` (`excerpt`)
) TYPE=MyISAM
I think I have to somehow disable the keys, but am not sure quite how.
Here's what happens when I try to delete:
mysql> select postId from postsearch where dateCreated < NOW() - 
INTERVAL 14 DAY limit 1;
+--------+
| postId |
+--------+
|  65031 |
+--------+
1 row in set (0.10 sec)

mysql> delete from postsearch where postId=65031 limit 1;
ERROR 1034: Incorrect key file for table: 'postsearch'. Try to repair it

Anybody have any idea?
   Jim
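
On the "disable the keys" part: MyISAM lets you switch the non-unique indexes off and
rebuild them, and run a repair, from SQL, along these lines (whether that cures this
particular corruption is another question):

ALTER TABLE postsearch DISABLE KEYS;   -- stop maintaining non-unique indexes
ALTER TABLE postsearch ENABLE KEYS;    -- rebuild them in one pass
REPAIR TABLE postsearch;               -- server-side counterpart of myisamchk --recover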

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Secure Database Systems

2004-07-09 Thread Mitch Pirtle
Rory McKinley wrote:
Hi Sarah
This is more of  a PHP question than a MySQL question as to my mind 
while it is all possible, the bulk of the work would need to be done 
on the PHP side. Assuming that you don't have the time to write all 
the necessary code from scratch, you might want to look for a 
content-management system (CMS) written in PHP and using MySQL that 
has some (ideally, all) the functionality that you require. This can 
serve as a good foundation and can be tweaked to your relevant 
requriements. A good place to start looking for this would be 
Sourceforge or google.

Take a look at http://opensourcecms.com - there you can actually demo 
the CMS before you download it.  You can definitely work on an existing 
CMS and patch it to work with encrypted data storage.  I recommend 
Mambo, as most of your SQL modifications are localized in one handy file 
(DISCLAIMER:  I am a Mambo core developer).  Maybe there are others that 
also take this approach.

I also agree with Shawn, that your best bet is to use a single server, 
ideally hosted by a hosting company that provides MySQL/PHP. This 
removes the need for replication to multiple machines, and you can 
just keep a regular backup copy on a local machine - also reduces some 
of your security concerns.

This has been discussed over at the NYPHP lists, and one of the ideas 
that I came up with was to use the user's password as the key to 
encrypt/decrypt all of their data in the database.  In the event that 
Joe could actually view Sally's data, he wouldn't have her key to 
decrypt it so it would have little use.  Ditto for h4x0rd00d, as he 
would have to brute force the key for every row in the database...
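
A minimal sketch of that idea using MySQL's built-in functions (available from 4.0.2;
the table, column and key handling here are purely illustrative, and the ciphertext
column should be a BLOB):

-- write a row encrypted with the user's cleartext password as the key
INSERT INTO notes (user_id, body)
VALUES (42, AES_ENCRYPT('my private note', 'users cleartext password'));

-- read it back; without the right key you only get NULL or garbage
SELECT AES_DECRYPT(body, 'users cleartext password') FROM notes WHERE user_id = 42;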

My favourite part of this design is that the 'key' is actually stored as 
an MD5 hash in the database as the user's password; and the only time 
the server sees it in the clear is when the user logs in.  This can be 
added to the session (stored on the server, not in a cookie) so that it 
is resistant to tampering.

I'm considering this topic for a future article, if you are interested 
please contact me off-list ;)

-- Mitch
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Space is filling up

2004-07-09 Thread gerald_clark

Asif Iqbal wrote:
Jack Coxen wrote:
 

If you database contains time-based data you could age out old records.  I
only need to keep data for 6 months so I run a nightly script to delete any
records more than 6 months old.  And before anyone asks...yes, I also run
another script to ANALYZE/OPTIMIZE my tables.
   

My databases (mysql,rt3,test) are taking only about 2k.
No. The ibdata* files ARE your database.
You need more disk space, and you need it now.
I am worried about
the ib*log and ibdata* files. Mostly the ibdata files which are taking 5
gig. I am not sure if I can clean those up or rotate or something else
to reclaim the disk space.
(root)@webrt:/usr/local/mysql/data# /usr/local/bin/ls -lh
total 5.2G
-rw-rw----   1 mysql    mysql     25K Jul  4  2003 ib_arch_log_00
-rw-rw----   1 mysql    mysql    2.5K Aug 27  2003 ib_arch_log_02
-rw-rw----   1 mysql    mysql    2.5K Aug 28  2003 ib_arch_log_04
-rw-rw----   1 mysql    mysql    100M Jul  8 20:38 ib_logfile0
-rw-rw----   1 mysql    mysql    100M Jun 29 20:23 ib_logfile1
-rw-rw----   1 mysql    mysql    1.9G Jul  8 20:37 ibdata1
-rw-rw----   1 mysql    mysql    3.1G Jul  8 20:33 ibdata2
-rw-r--r--   1 mysql    mysql    1.6K Sep 15  2003 my.cnf
drwxr-x---   2 mysql    mysql     512 Jul  4  2003 mysql
drwx------   2 mysql    mysql     512 Jul 30  2003 newdb
drwx------   2 mysql    mysql    1.5K Jun 30 23:10 rt3
drwxr-x---   2 mysql    mysql     512 Jul 28  2003 test
-rw-rw----   1 mysql    other     67K Nov 21  2003 webrt.err
-rw-rw----   1 mysql    mysql       3 Nov 21  2003 webrt.pid
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Cost of joins?

2004-07-09 Thread Martijn Tonies
Alec,

> > If you're de-normalizing
> > your design to get better performance, then there's something
> > wrong with the database engine (whatever engine that may be).
>
> Nearly always, but not absolutely always. I have a table with columns
> primary start
> primary finish
> secondary start
> secondary finish
>
> Since it is defined that the distance between start and finish is the same
> for both primary and secondary, this is denormalised, since secondary
> finish is always given by secondary start + (primary finish - primary
> start). However, I want to use all four fields as indexes, including
> secondary finish. Does any database allow indexes on complex functions of
> columns rather than on columns themselves? Or is this just a defect in
> MySQL?

Indices, by themselves, are a-logical. They're physical. They
exist because of performance problems. A JOIN is logical and
doesn't have anything to do with indices.

The question is not:
should I avoid joins to gain performance?

The question is:
should I bother the vendor to increase performance despite
my logical joins?

The answer is: Yes, you should.

If the vendor's answer is: add a (function) index, and if that helps,
then it's fine. If it doesn't, then the vendor should think of another
trick so that we designers can use their product (database
engine) the way it's supposed to be used.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Linux 2GB Memory Limit

2004-07-09 Thread Marvin Wright
Hi,

Is there any work around for this yet where a process can not allocate more
than 2GB.
Can I upgrade my Redhat OS to any particular version ?

Many Thanks.

Marvin Wright
Flights Developer
Lastminute.com
[EMAIL PROTECTED]
+44 (0) 207 802 4543



This e-mail has been scanned for all viruses by Star Internet. The
service is powered by MessageLabs. For more information on a proactive
anti-virus service working around the clock, around the globe, visit:
http://www.star.net.uk


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Alec . Cawley
"Martijn Tonies" <[EMAIL PROTECTED]> wrote on 09/07/2004 13:55:40:

> If you're de-normalizing
> your design to get better performance, then there's something
> wrong with the database engine (whatever engine that may be).

Nearly always, but not absolutely always. I have a table with columns 
primary start
primary finish
secondary start
secondary finish

Since it is defined that the distance between start and finish is the same 
for both primary and secondary, this is denormalised, since secondary 
finish is always given by secondary start + (primary finish - primary 
start). However, I want to use all four fields as indexes, including 
secondary finish. Does any database allow indexes on complex functions of 
columns rather than on columns themselves? Or is this just a defect in 
MySQL?

Alec



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Martijn Tonies

> > Design for understanding, logic and maintenance, not performance.
> >
> > If you need more performance, throw more hardware at it -
> > a larger cache (settings -> memory), faster disks and a faster CPU.
>
> Sorry, but I can't agree with you. Years ago I had to put the DMOZ
> (http://www.dmoz.org/) database (2 million records, 100,000 or so
categories
> at the time) in a MySQL database. Next we had to calculate the number of
> sites in a certain category. The 'path' to the category was known, but a
> regexp was needed to select the path of one level up. The query took > 30
> seconds.
> After adding a column for "one level up", adding indexes, optimizing the
> query it took only a few hundreds of seconds.

Of course, indices should be added to get acceptable performance.
That's what they are here for.

Nevertheless, your database design should be based on logic
and all data should be stored normalized. If you're de-normalizing
your design to get better performance, then there's something
wrong with the database engine (whatever engine that may be).

A design should be logical. This doesn't mean that there's just
one particular design to cover one particular problem; taking
a different view while designing could lead to a different end-result
design (although according to the normalization rules, it shouldn't).

> I really don't know how much hardware you would like to use to get these
> results?

Lots of ;-)

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Jigal van Hemert
From: "Martijn Tonies" <[EMAIL PROTECTED]>
> Design for understanding, logic and maintenance, not performance.
>
> If you need more performance, throw more hardware at it -
> a larger cache (settings -> memory), faster disks and a faster CPU.

Sorry, but I can't agree with you. Years ago I had to put the DMOZ
(http://www.dmoz.org/) database (2 million records, 100,000 or so categories
at the time) in a MySQL database. Next we had to calculate the number of
sites in a certain category. The 'path' to the category was known, but a
regexp was needed to select the path of one level up. The query took > 30
seconds.
After adding a column for "one level up", adding indexes, optimizing the
query it took only a few hundreds of seconds.

I really don't know how much hardware you would like to use to get these
results?

Regards, Jigal.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Scripts - ERROR

2004-07-09 Thread Victor Pendleton
Have you tried running this from the mysql monitor?
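
SOURCE is a command of the mysql command-line client, not an SQL statement the server
understands, which is most likely why phpMyAdmin reports a 1064 syntax error on it. A
sketch of running the same script through the client instead:

shell> mysql -u root -p < NTT_dbcreator.2004-06-07.v1-001.andre.sql

or, from inside the monitor:

mysql> source NTT_dbcreator.2004-06-07.v1-001.andre.sql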

-Original Message-
From: Andre MATOS
To: Victor Pendleton
Cc: '[EMAIL PROTECTED] '
Sent: 7/8/04 5:33 PM
Subject: RE: Scripts - ERROR

Hi,

I tried but didn't work. Here is my script:

#

# @Name : NTT_dbcreator.2004-06-07.v1-001.andre.sql
# @DESCRIPTION  : NTT Database Creator
# @CREATED  : Jun 07, 2004 by Andre Matos - [EMAIL PROTECTED]
# @VERSION  : 1.01
#


SET FOREIGN_KEY_CHECKS=0;

DROP DATABASE IF EXISTS `database1`;
CREATE DATABASE `database1`;
commit;
SOURCE ntt_dbcreator.2004-06-07.v1-001.andre.sql;
commit;

DROP DATABASE IF EXISTS `database2`;
CREATE DATABASE `database2`;
commit;
SOURCE ntt_dbcreator.2004-06-07.v1-001.andre.sql;
commit;


DROP DATABASE IF EXISTS `database3`;
CREATE DATABASE `database3`;
commit;
SOURCE ntt_dbcreator.2004-06-07.v1-001.andre.sql;
commit;


DROP DATABASE IF EXISTS `database4`;
CREATE DATABASE `database4`;
commit;
SOURCE ntt_dbcreator.2004-06-07.v1-001.andre.sql;
commit;


DROP DATABASE IF EXISTS `database5`;
CREATE DATABASE `database5`;
commit;
SOURCE ntt_dbcreator.2004-06-07.v1-001.andre.sql;
commit;


DROP DATABASE IF EXISTS `database6`;
CREATE DATABASE `database6`;
commit;
SOURCE ntt_dbcreator.2004-06-07.v1-001.andre.sql;
commit;


SET FOREIGN_KEY_CHECKS=0;


# end of the script


NB: "ntt_dbcreator.2004-06-07.v1-001.andre.sql" is another script that
is 
responsable for creating the tables, indexes, relations, etc...

This is the error when I tried to run the main script using phpMyAdmim:

 Database mysql  running on localhost

Error

SQL-query : 

SOURCE ntt_dbcreator.2004 -06 - 07.v1 - 001.andre.sql

MySQL said:

#1064 - You have an error in your SQL syntax.  Check the manual that 
corresponds to your MySQL server version for the right syntax to use
near 
'SOURCE ntt_dbcreator.2004-06-07.v1-001.andre.sql' at line 1


Thanks.

Andre



On Thu, 8 Jul 2004, Victor Pendleton wrote:

> Have you tried writing the script using the source command?
> drop database;
> create database;
> source populate_db;
> 
> 
> -Original Message-
> From: Andre MATOS
> To: [EMAIL PROTECTED]
> Sent: 7/8/04 12:20 PM
> Subject: Scripts
> 
> Hi List,
> 
> Let's suppose that I have a script to create one database (tables, 
> indexes, etc).
> 
> How can I call another script from this script? For example:
> 
> DROP DATABASE test;
> CREATE DATABASE test;
> 
> (here call another script)
> 
> back to the previous script
> 
> 
> 
> Thanks.
> 
> Andre
> 
> 

-- 
Andre Matos
[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Martijn Tonies


> "Martijn Tonies"  >
> > Design for understanding, logic and maintenance, not performance.
> >
> > If you need more performance, throw more hardware at it -
> > a larger cache (settings -> memory), faster disks and a faster CPU.
>
> Or more indexes (which may require more hardware).

Oh yes, indices can help :-)

Indices ONLY exist for performance reasons, not for anything
else.

That being said, wouldn't it be great to have a self-tuning
database engine that would create indices after running your
database application for a few days :-)

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Alec . Cawley
"Martijn Tonies" <[EMAIL PROTECTED]> wrote on 09/07/2004 13:28:23:

> 
> Design for understanding, logic and maintenance, not performance.
> 
> If you need more performance, throw more hardware at it -
> a larger cache (settings -> memory), faster disks and a faster CPU.

Or more indexes (which may require more hardware). 

Alec


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Alec . Cawley
Margaret MacDonald <[EMAIL PROTECTED]> wrote on 09/07/2004 12:07:54:

> Is there a generally-accepted rule of thumb for estimating the
> performance cost of joins?   I can't find one even in Date, but
> intuitively it seems as though there must be one by now.

I don't think there is a general answer to this any more than there is to 
the general question of optimising selects. It all depends upon your 
indexes, and whether the optimiser notices them. I have had an order of 
magnitude speedup on a join by introducing an appropriate index. This says 
that there is no underlying "cost of a join", only the cost of a particular 
join. 

There are numerous tips available for optimising MySQL selects, which 
should followed carefully. However, I have found that, when you get to the 
bottom line, the only way to handle it is to dry-run it on paper. Say "If 
I had to do this search on paper tables, what indexes would I want?". If 
you then create those indexes, MySQL is pretty good at spotting the 
appropriate ones and using them.

One strategy that seems useful is that if you are doing a join of A and B, 
and A is primarily a selector table with the bulk data in B, which is what 
your application sounds like, you build an index which contains all the 
fields in A you want for a query. Thus if you want to do
Select A1, A2, B.* FROM A JOIN B ON A1 = B1 WHERE ...
Select A1, A3, C.* FROM A JOIN C ON A1 = C1 WHERE ...
you build indexes on both (A1, A2) and (A1, A3). This allows MySQL to 
extract all the necessary information from the indexes without reading the 
underlying records.
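
Spelled out with the schematic table names from above, that is simply:

CREATE INDEX idx_a1_a2 ON A (A1, A2);
CREATE INDEX idx_a1_a3 ON A (A1, A3);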

Alec


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Martijn Tonies
Margaret,


> Is there a generally-accepted rule of thumb for estimating the
> performance cost of joins?   I can't find one even in Date, but
> intuitively it seems as though there must be one by now.

Don't bother...

> I'm thinking of something like 'if not doing anything costs 0, and
> reading 1 table costs 100, then joining a second table brings the cost
> to  150, a third to 225, etc' (or 110 or  500 or whatever the
> numbers/functions really are).
>
> I'm working on a large information system --probably tens of millions
> of records-- and am guessing that reads should outnumber writes by a
> factor of 10 at least.   High performance from the human user's point
> of view is an absolute requirement for success.
>
> Rather than store category and other  'flavoring' strings repetitively
> in the information records themselves, I've stored indexes into other
> tables.  But that means every lookup joins 3-7 tables.  How much am I
> paying for that?
>
> Adding the flavoring strings themselves to each record would increase
> the overall storage requirement by some large fraction less than 50%.
> Since disk space is already relatively cheap, that shouldn't be a
> problem.  But some of the flavoring is guaranteed to change at an
> unpredictable rate,  which would mean repeatedly traversing every
> information record to update it.
>
> I'm new to databases, but I'm sure this is a 'been there done that'
> problem for the more experienced people on the list!

Design for understanding, logic and maintenance, not performance.

If you need more performance, throw more hardware at it -
a larger cache (settings -> memory), faster disks and a faster CPU.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cost of joins?

2004-07-09 Thread Jigal van Hemert
From: "Margaret MacDonald" <[EMAIL PROTECTED]>

> Is there a generally-accepted rule of thumb for estimating the
> performance cost of joins?   I can't find one even in Date, but
> intuitively it seems as though there must be one by now.

It's hard to estimate the cost of a join as such. The performance is
influenced by the combination of database lay-out and the query.
If you know the queries you will perform you can optimize the database and
the query itself.

The MySQL documentation contains a lot of articles with directions for
optimizations.

> I'm working on a large information system --probably tens of millions
> of records-- and am guessing that reads should outnumber writes by a
> factor of 10 at least.   High performance from the human user's point
> of view is an absolute requirement for success.
>
> Rather than store category and other  'flavoring' strings repetitively
> in the information records themselves, I've stored indexes into other
> tables.  But that means every lookup joins 3-7 tables.  How much am I
> paying for that?

If the number of 'flavouring' strings is relatively low, you can also decide
to do the translation from categoryID to category name in the program you
write around your queries.

> Adding the flavoring strings themselves to each record would increase
> the overall storage requirement by some large fraction less than 50%.
> Since disk space is already relatively cheap, that shouldn't be a
> problem.  But some of the flavoring is guaranteed to change at an
> unpredictable rate,  which would mean repeatedly traversing every
> information record to update it.

Disk space is not your only concern here. Indices are smaller for integer
data (ID numbers) and for integer data MySQL often does not have to access
the database itself if it uses the index for that column (which saves you
disk reads).
Also, smaller data files are easier to cache by the OS.

The best thing will be to see to it that you keep your data as small as
possible.

Analyse the queries using EXPLAIN.

A smaller record set in each step will produce faster queries (see estimated
number of records).

See if you can use indexes for each column in your query. You can make
indexes on two or more columns!

If you don't have any "using filesort" or "using temporary" in the explain
chances are the query is blazingly fast.

Try to avoid columns with very few distinct values (e.g. sex:
male/female); if you use these in your query and MySQL guesses that they will
match more than approx. 30% of the records, it will perform a full table
scan instead of using the index. If you use such a low-cardinality column in
a combined index, MySQL can use the index again.

If necessary use USE INDEX or FORCE INDEX to make sure the right index is
used.

Optimize where clauses yourself by using "column operator constant" (e.g.
`birthdate` < NOW() - INTERVAL 16 YEAR). The constant part may be an
expression as long as the result is a constant value. This is a lot faster
than calculating the expression for each record in the table (e.g.
`birthdate` + INTERVAL 16 YEAR < NOW()). Use precalculated values in a
seperate column if necessary to prevent MySQL from having to perform
calculations on every record for a WHERE condition.

The speed of your query also depends on the size of the resulting record set
and the operations you will perform on that. Sorting 10 relatively small
records is done in memory and will be very fast. Sorting 100,000 records
without an index will most likely require a temporary table and will take
'forever'.

Read all you can find in the MySQL documentation and perhaps take a look at
Jeremy Zawodny's "High Performance MySQL".

Regards, Jigal.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Cost of joins?

2004-07-09 Thread Bartis, Robert M (Bob)
I have a question that may be similar to the one which Margaret asked recently 
concerning the "Cost of Joins". I have a DB with numerous tables and have inserted 
keys to relate one table to another. The method minimizes the data I store, but 
results in me joining multiple tables, sometimes 10 at a time to retrieve information 
needed to satisfy a given search request.

A simple version of my DB would be:

Table one
oneKey
a
c
b

Table two
twoKey
oneKey-Foreign key
e
f
g

Table three
threeKey
twoKey-Foreign key
x
y
z

If I want to collect data concerning x, y, z and its relation to 'a' I need to join 
tables one, two and three. It seems to me this is the most efficient storage of 
information. It also, assuming the resulting queries return a large number of records, 
is the most efficient for end users when moving from record to record. Conversely, it 
also seems like it will be the most inefficient while waiting for the query results to 
be calculated?


I've noticed another solution proposed by some is to carry forward Foreign Keys. For 
instance:
Table one
oneKey
a
c
b

Table two
twoKey
oneKey-Foreign key
e
f
g

Table three
threeKey
  oneKey-Foreign key
twoKey-Foreign key
x
y
z 

In this case collecting the same information (x, y, z and its relation to 'a') I need 
only join tables one and three or just three and do look-ups into table one. 
Obviously, the issue scales if you add 10 tables into the equation. 

This method appears less efficient from a data storage perspective and complicates the 
application. I need to store multiple Foreign keys each time a record is added to a 
given table. The time to return query results would appear to be very short as each 
query would only return a single record, but the record to record movement would 
result in a new query each time.

What advantages or disadvantages are there to one method vs. another?

Bob

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Secure Database Systems

2004-07-09 Thread Rory McKinley
Sarah Tanembaum wrote:

We have 10 computers (5 brothers, 4 sisters, and myself) plus 1 server which I
maintain. The data should be synchronized/replicated between those
computers.
Well, so far it is easy, isn't it?
Here's my question:
a) How can I make sure that it is secure so only authorized persons can
modify/add/delete the information? Besides transaction logs, is there any
other method to trace any transaction (a kind of paper trail)?
Assuming there are 3 step process to one enter the info e.g:
- One who enter the info (me)
- One who verify the info(the owner of info)
- One who verify and then commit the change!
How can I implement such a process in MySQL and/or PHP or any  other web
language?
b) How can I make sure that no one can tap the info while we are
entering
the data in the computer? (our family are scattered within US and
Canada)
c) Is it possible to securely synchronize/replicate between our
computers
using VPN? Does MySQL have this functionality by default?
d) Other secure method that I have not yet mentioned.
Anyone has good ideas on how to implement such a systems?
Thanks

 

Hi Sarah
This is more of  a PHP question than a MySQL question as to my mind 
while it is all possible, the bulk of the work would need to be done on 
the PHP side. Assuming that you don't have the time to write all the 
necessary code from scratch, you might want to look for a 
content-management system (CMS) written in PHP and using MySQL that has 
some (ideally, all) the functionality that you require. This can serve 
as a good foundation and can be tweaked to your relevant requriements. A 
good place to start looking for this would be Sourceforge or google.

I also agree with Shawn, that your best bet is to use a single server, 
ideally hosted by a hosting company that provides MySQL/PHP. This 
removes the need for replication to multiple machines, and you can just 
keep a regular backup copy on a local machine - also reduces some of 
your security concerns.

Regards
Rory McKinley
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Cost of joins?

2004-07-09 Thread Margaret MacDonald
Is there a generally-accepted rule of thumb for estimating the
performance cost of joins?   I can't find one even in Date, but
intuitively it seems as though there must be one by now.

I'm thinking of something like 'if not doing anything costs 0, and
reading 1 table costs 100, then joining a second table brings the cost
to  150, a third to 225, etc' (or 110 or  500 or whatever the
numbers/functions really are).   

I'm working on a large information system --probably tens of millions
of records-- and am guessing that reads should outnumber writes by a
factor of 10 at least.   High performance from the human user's point
of view is an absolute requirement for success.

Rather than store category and other  'flavoring' strings repetitively
in the information records themselves, I've stored indexes into other
tables.  But that means every lookup joins 3-7 tables.  How much am I
paying for that?   

Adding the flavoring strings themselves to each record would increase
the overall storage requirement by some large fraction less than 50%.
Since disk space is already relatively cheap, that shouldn't be a
problem.  But some of the flavoring is guaranteed to change at an
unpredictable rate,  which would mean repeatedly traversing every
information record to update it.

I'm new to databases, but I'm sure this is a 'been there done that'
problem for the more experienced people on the list!

Thanks for any insights!
Margaret

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Control Center : dump

2004-07-09 Thread Philippe Poelvoorde
Hi,
mysqlcc have a nice option which is : show create, but is there a menu 
where one can obtain something like : show insert (or a dump of the table) ?

--
Philippe Poelvoorde
COS Trading Ltd.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Newbie: Replication Question

2004-07-09 Thread Alec . Cawley
"L. Yeung" <[EMAIL PROTECTED]> wrote on 09/07/2004 08:38:38:

> Hi! I wanted to set-up a master-slave replication on
> my win2k box. When my master server fails, my slave
> server will automatically becomes my new "master
> server". And when my master server is back online, any
> changes on my slave server is replicated back to my
> master server.
> 
> Normal: A -> B
> ^
> L inserts/updates from clients
> Failure:XB
> New Master: XB <- inserts/updates from clients
> Master is back: A <- B x- inserts/updates are blocked
> manually.
> Normal: A -> B
> ^
> L inserts/updates from clients
> 
> Any idea? Thanks.

You cannot do this automatically: you need a supervisor program.

The way you need to do it is to have both machines come up with their 
slave thread *not* running. The supervisor then determines which is master 
and which slave, and starts the appropriate slave thread running. If it 
determines that the original master has failed, it stops replication on the 
slave and directs updates to it: the slave has now become master. When the 
original master reappears, it determines that updates have been made to 
the original slave later than those to the original master, and it instructs 
the original master to reload its database from the slave. Master and slave 
have now exchanged roles. 

The determination of which is the most-recently updated is done by a 
single row, single column table which is incremented whenever the slave 
takes over from the master. If, at startup, two active machines are found 
with differeent values in this entry, the higher value becomes master and 
the lower must be re-synchronized. If the values are the same, the slave 
status can be inspected to see which is slaving to which. 
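
A sketch of such a marker table (the names are invented; the supervisor itself is custom
middleware, not something MySQL ships):

CREATE TABLE failover_epoch (epoch INT NOT NULL) TYPE=MyISAM;
INSERT INTO failover_epoch VALUES (0);
-- executed by the supervisor whenever the slave is promoted:
UPDATE failover_epoch SET epoch = epoch + 1;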

We have implemented such a system in our own middleware. We have a target 
changeover time of 10 seconds, which we are meeting easily. It only works 
for MyISAM tables, since LOAD DATA FROM MASTER is only available for 
these.

Note to MySQL development team: this request comes up often enough that I 
hope the idea of embedding this supervisor in the MySQL daemon is at least 
on the wish list.

Alec


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Please clarify

2004-07-09 Thread Chandrakanth
Hi
 
I have installed MySQL 4.1 for experimental purposes. I took the source and
compiled it on Red Hat Linux AS 3.0.
 
I am getting these errors when I run the mysql_install_db script.
 
What is the problem? Please reply as soon as you can.
 
 
[EMAIL PROTECTED] bin]# mysql_install_db --user=mysql
Installing all prepared tables
Fill help tables
ERROR: 1153  Got a packet bigger than 'max_allowed_packet' bytes
040709 19:45:40  Aborting
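
The usual cure for this particular error is to raise max_allowed_packet for the server
before re-running the install script; a minimal my.cnf sketch (the value is only an
example):

[mysqld]
max_allowed_packet = 16M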

 
Regards,
 

K. Chandrakanth

Hewlett-Packard

Global delivery India centre

Heritage Technologies, 

Porting and Migration practice

30C, Cunningham Road

Bangalore - 560052



IMPORTANT: Some tables must be REBUILT in upgrade to >= 4.1.2

2004-07-09 Thread Heikki Tuuri
Hi!

I am posting this separate note about this change in 4.1.3, because it is
unusual that a data conversion is needed in a MySQL server upgrade.

The default charset of MySQL was latin1 in 3.23 and in 4.0, and it is
latin1_swedish_ci from 4.1.2 on.

InnoDB users who have used a non-default charset in 3.23 or 4.0 or <= 4.1.2,
and ALL users of  4.1.0 and 4.1.1, MAY NEED TO REBUILD their tables when
upgrading to 4.1.3 or later.

http://dev.mysql.com/doc/mysql/en/InnoDB_news-4.1.3.html
"
Important: Starting from MySQL 4.1.3, InnoDB uses the same character set
comparison functions as MySQL for non-latin1_swedish_ci character strings
that are not BINARY. This changes the sorting order of space and characters
< ASCII(32) in those character sets. For latin1_swedish_ci character strings
and BINARY strings, InnoDB uses its own pad-spaces-at-end comparison method,
which stays unchanged. If you have an InnoDB table created with MySQL 4.1.2
or earlier, with an index on a non-latin1 character set (in the case of
4.1.0 and 4.1.1 with any character set) CHAR/VARCHAR/or TEXT column that is
not BINARY but may contain characters < ASCII(32), then you should do ALTER
TABLE or OPTIMIZE table on it to regenerate the index, after upgrading to
MySQL 4.1.3 or later.
"

Some examples that help you in determining if a table rebuild may be needed:

* If you have only used the MySQL default charset (latin1) in 3.23 or 4.0,
then you do not need to rebuild any InnoDB tables.

* If you have only stored normal, printable characters (whose code is >= 32)
in your CHAR or VARCHAR columns, then you do not need to rebuild any InnoDB
tables.

* If you have used the latin1_german charset and stored values such as 'abc'
and 'abc\t' (a string containing a character below ASCII(32)) in an indexed
CHAR column, then you NEED to rebuild that InnoDB table.

* If you have created InnoDB tables with any charset in 4.1.0 or 4.1.1, you
may need to rebuild them.

* You can use CHECK TABLE to determine if a table rebuild is needed.
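
A rough sketch of the check-and-rebuild cycle (the table name is just an
example):

CHECK TABLE my_innodb_table;
-- If CHECK TABLE reports a problem, regenerate the indexes with either
OPTIMIZE TABLE my_innodb_table;
-- or an ALTER TABLE that rebuilds the table in place:
ALTER TABLE my_innodb_table TYPE=InnoDB;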

---

For MyISAM users, even users of the default latin1 charset may need to
rebuild their tables:

http://dev.mysql.com/doc/mysql/en/News-4.1.2.html
"
Warning: Incompatible change! String comparison now works according to the
SQL standard. Because we have that 'a' = 'a ' then from it must follow that
'a' > 'a\t'. (The latter was not the case before MySQL 4.1.2.) To implement
it, we had to change how storage engines compare strings internally. As a
side effect, if you have a table where a CHAR or VARCHAR column in some row
has a value with the last character less than ASCII(32), you will have to
repair this table. CHECK TABLES will tell you if this problem exists. (Bug
#3152)
"

Best regards,

Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



replication / directory sharing

2004-07-09 Thread Bastian Hoyer
Hi,

I have 3 servers (A, B and C) and every server has one db (DBA, DBB, DBC).
On the third server I would like to query all three databases (only selects
from DBA and DBB).
I thought about setting up 2 slave servers and 1 master server on Server C,
but then I would need to open 3 connections to query all databases.
Last night I got the idea to let the slaves run under another user and link
the DBA and DBB directories read-only into the Server C data dir, so I could
query the DBC server with all databases.
I guess the query cache won't work then, because Server C doesn't see updates
or deletes in the DBA and DBB tables. But another question: do I need locking
when I only do read-only access?

--
Bastian Hoyer


This message was sent using IMP, the Internet Messaging Program.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Secure Database Systems

2004-07-09 Thread David Dick
AFAIK the term "translucent database" applies to a regular database that
has encrypted data in it. The main difference is whether the encryption is
one-way only (i.e. using an MD5 hash of a name instead of the actual name)
or reversible (using 3DES to encrypt and decrypt the name). A good example
of the former is /etc/passwd or /etc/shadow.
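
A rough MySQL illustration of the two approaches (the table and column names
are made up, and AES_ENCRYPT() is used here only as a stand-in for 3DES):

-- One-way: store only a hash, so the original name cannot be recovered.
CREATE TABLE members_oneway (
    name_hash CHAR(32) NOT NULL    -- e.g. MD5('Alice'), never 'Alice' itself
);
INSERT INTO members_oneway (name_hash) VALUES (MD5('Alice'));

-- Reversible: the value can be decrypted again by anyone holding the key.
CREATE TABLE members_reversible (
    name_enc BLOB NOT NULL
);
INSERT INTO members_reversible (name_enc)
VALUES (AES_ENCRYPT('Alice', 'secret key'));
SELECT AES_DECRYPT(name_enc, 'secret key') FROM members_reversible;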

Sarah Tanembaum wrote:
Hi David, the link you provided is quite interesting. Does such a
database (a translucent database) actually exist? Or is it just a concept?
Thanks
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL crash on adding foreign key constraint

2004-07-09 Thread Heikki Tuuri
Daniel,

I tested this with very small test tables, and it did not crash.

You should run CHECK TABLE on both tables. Maybe they are corrupt.
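
For example, using the table names from your post:

CHECK TABLE Prospects, Leads;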

Can you make a repeatable test case that you can email or upload to ftp:
support.mysql.com:/pub/mysql/secret ?

Best regards,

Heikki

..

[EMAIL PROTECTED]:~/mysql-standard-4.0.18-pc-linux-i686/bin> ./mysql test
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.0.18-standard-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create table
-> `Prospects` (
->   `LeadNo` mediumint(8) unsigned NOT NULL auto_increment,
->   `MySQLStamp` timestamp(14) NOT NULL,
->   `Client` varchar(70) NOT NULL default '',
->   `Address` varchar(50) NOT NULL default '',
->   `Suburb` varchar(25) NOT NULL default '',
->   `State` char(3) NOT NULL default '',
->   `PostCode` smallint(6) NOT NULL default '0',
->   `BusinessTypeID` smallint(3) unsigned NOT NULL default '0',
->   `TelecomSpend` mediumint(8) unsigned NOT NULL default '0',
->   `ElectricitySpend` mediumint(8) unsigned NOT NULL default '0',
->   `GasSpend` mediumint(8) unsigned NOT NULL default '0',
->   `Turnover` bigint(20) unsigned default '0',
->   `Region` enum('L','N','I') default 'N',
->   `Locations` smallint(6) default '0',
->   `Employees` mediumint(8) unsigned default '0',
->   `ACN` varchar(11) default '',
->   `ParentID` mediumint(8) unsigned NOT NULL default '0',
->   `Autonomous` smallint(3) NOT NULL default '0',
->   `BusinessDescription` varchar(100) default '',
->   `FuelSpend` mediumint(8) unsigned NOT NULL default '0',
->   `WebPage` varchar(50) default '',
->   `SignedBy` mediumint(8) unsigned default '0',
->   `FileNo` mediumint(9) default NULL,
->   `SubNo` mediumint(9) default NULL,
->   `EnteredBy` mediumint(9) NOT NULL default '0',
->   `DateEntered` date default '-00-00',
->   `TooSmall` datetime default NULL,
->   `DontCall` datetime default NULL,
->   `Status` smallint(5) unsigned NOT NULL default '0',
->   `Archived` date default NULL,
->   `EnergySupplier` smallint(5) unsigned default '0',
->   `TelecomSupplier` smallint(5) unsigned default '0',
->   PRIMARY KEY  (`LeadNo`),
->   KEY `IDX_BusinessTypeID` (`BusinessTypeID`),
->   KEY `IDX_ParentID` (`ParentID`),
->   KEY `IDX_DontCall` (`DontCall`),
->   KEY `IDX_TooSmall` (`TooSmall`),
->   KEY `IDX_PostCode` (`PostCode`),
->   KEY `IDX_State` (`State`),
->   KEY `IDX_FileNo` (`FileNo`)
-> ) TYPE=InnoDB;
Query OK, 0 rows affected (0.06 sec)

mysql>
mysql>
mysql> create table
-> `Leads` (
->   `DanPK` mediumint(8) unsigned NOT NULL auto_increment,
->   `TimeStamp` timestamp(14) NOT NULL,
->   `LeadNo` mediumint(9) unsigned NOT NULL default '0',
->   `IssueDate` date default NULL,
->   `IssuedTo` tinyint(3) unsigned NOT NULL default '0',
->   `CompleteDate` date default NULL,
->   `Status` tinyint(3) unsigned NOT NULL default '1',
->   `Telemarketer` mediumint(9) NOT NULL default '0',
->   PRIMARY KEY  (`DanPK`),
->   KEY `IDX_LeadNo` (`LeadNo`)
-> ) TYPE=InnoDB;
Query OK, 0 rows affected (0.00 sec)

mysql>
mysql> insert into Prospects(LeadNo) values (1);
Query OK, 1 row affected (0.01 sec)

mysql> insert into Prospects(LeadNo) values (2);
Query OK, 1 row affected (0.00 sec)

mysql> insert into Prospects(LeadNo) values (3);
Query OK, 1 row affected (0.00 sec)

mysql> insert into Prospects(LeadNo) values (4);
Query OK, 1 row affected (0.00 sec)

mysql>
mysql>
mysql> insert into Leads(LeadNo) values (1);
Query OK, 1 row affected (0.00 sec)

mysql> insert into Leads(LeadNo) values (2);
Query OK, 1 row affected (0.00 sec)

mysql> insert into Leads(LeadNo) values (3);
Query OK, 1 row affected (0.00 sec)

mysql> insert into Leads(LeadNo) values (4);
Query OK, 1 row affected (0.00 sec)

mysql> insert into Leads(LeadNo) values (2);
Query OK, 1 row affected (0.00 sec)

mysql>
mysql> alter table Leads
-> add foreign key fk_LeadNo ( LeadNo ) references Prospects ( LeadNo );
Query OK, 5 rows affected (0.01 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql>

- Original Message - 
From: "Daniel Kasak" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Friday, July 09, 2004 7:08 AM
Subject: MySQL crash on adding foreign key constraint


> Hi all.
>
> I just tried adding a foreign key constraint, and crashed MySQL (
4.0.18 ).
> I tried it again and crashed it again :(
>
> The SQL I'm using is:
>
> ---
>
> alter table Leads
> add foreign key fk_LeadNo ( LeadNo ) references Prospects ( LeadNo );
>
> ---
>
> The tables involved:
>
> `Prospects` (
>   `LeadNo` mediumint(8) u

Newbie: Replication Question

2004-07-09 Thread L. Yeung
Hi! I wanted to set up a master-slave replication on
my win2k box. When my master server fails, my slave
server will automatically become my new "master
server". And when my master server is back online, any
changes on my slave server are replicated back to my
master server.

Normal:          A -> B
                 ^
                 L inserts/updates from clients
Failure:         X    B
New Master:      X    B <- inserts/updates from clients
Master is back:  A <- B  x- inserts/updates are blocked manually
Normal:          A -> B
                 ^
                 L inserts/updates from clients

Any idea? Thanks.

l.yeung




__
Do you Yahoo!?
New and Improved Yahoo! Mail - 100MB free storage!
http://promotions.yahoo.com/new_mail 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


