Re: Databases not seen in MySQLCC

2004-06-18 Thread Hans-Peter Grimm
Marvin Cummings wrote:
Have a strange problem where only one of my databases shows up in the
MySQLCC console, yet if I look in the MySQL Admin console all databases
are seen. Anyone experience this before? 
If you right-click the server entry in MySQLCC and select Edit, there 
is a Databases tab where you can select the databases to be shown. 
Check whether the "Show all Databases" checkbox is enabled.

Hans-Peter

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]


Re: load data into 2 tables and set id

2004-06-18 Thread J S


Mos forgot to populate the url_id column in your user table. I would use
his same process but re-arrange it like this:
1) create table BIG_TABLE
2) load data infile
3) create table URL_TABLE (
  url_id bigint not null auto_increment,
  url varchar(25) not null primary key,
  category 
  )
4) INSERT IGNORE URL_TABLE (url, category)
  SELECT url,category
  FROM BIG_TABLE
5) create table USER_TABLE (
  user_id varchar?(...) primary key,
  date_time datetime,
  url_id bigint,
  size int
  )
6) insert USER_TABLE
  SELECT bt.userID, bt.datetime, u.url_id, bt.size
  FROM BIG_TABLE bt
  INNER JOIN URL_TABLE u
  ON u.url = bt.url
doing it this way lets the SQL engine handle the conversion of URLs to
their IDs in the USER_TABLE...see? No scripting required at all!
Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Shawn,
Many thanks for your reply. It sounds like this might be quicker than my 
perl script, which parses a 1GB log file and fills the database in 40 minutes 
at the moment (the actual parsing only takes 1.5 minutes).
There's one snag which I forgot about and that's in the url_table I have 
another column called hits which is the number of hits for each url.  I'm 
terrible at SQL and wondered if you might be able to suggest a way of doing 
this with the method above?

Thanks,
JS.
_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger



ODBC error

2004-06-18 Thread jschung
From a Windows2000 machine, I use ODBC Data Source Administrator to
connect to mysql server on RedHat ES 3. I got the following message.

[MySQL][ODBC 3.51 Driver] Host 'IP_address' is not allowed to connect to
this MySQL server

Any idea?

Thanks,
Joseph






Re: ODBC error

2004-06-18 Thread Cemal Dalar
You should check the privileges on the MySQL server. Look at the Host field in
the mysql.user table; that is probably the problem. Change the value to the
Windows machine's IP address, and don't forget to run FLUSH PRIVILEGES.

Ref: http://dev.mysql.com/doc/mysql/en/User_Account_Management.html
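As a sketch of the fix described above (the database name, user, host IP, and
password below are placeholders, not values from this thread):

```sql
-- Permit the Windows machine's IP to connect; adjust the names,
-- IP address, and privilege list to your own setup.
GRANT ALL PRIVILEGES ON mydb.* TO 'joseph'@'192.0.2.10'
    IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```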


Best Regards,
Cemal Dalar a.k.a Jimmy
System Administrator & Web Developer
http://www.gittigidiyor.com  http://www.dalar.net

- Original Message - 
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, June 18, 2004 12:51 PM
Subject: ODBC error


 From a Windows2000 machine, I use ODBC Data Source Administrator to
 connect to mysql server on RedHat ES 3. I got the following message.

 [MySQL][ODBC 3.51 Driver] Host 'IP_address' is not allowed to connect to
 this MySQL server

 Any idea?

 Thanks,
 Joseph







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



strange query result

2004-06-18 Thread Alex
Hi, ppl!
I have a MySQL-server on FreeBSD-5.2.1 computer and a client on another 
computer under windows xp.

I have the following simple query  SELECT internal_number,page_no FROM 
pages WHERE docs=307 ORDER BY page_no

that returns 587 rows containing only two integers: a simple query with a 
low output volume.
The problem is that if I use a local client on the FreeBSD computer I get 
the result immediately.
But if I try to do the same via a TCP connection it takes a few minutes.
After I used EXPLAIN on the query it turned out that when I use the TCP 
connection, "Using filesort" occurs.

How to cure that?
Alex



Losing some records

2004-06-18 Thread Sergey . Chistyakov


RE: Losing some records

2004-06-18 Thread Willem Roos

Losing some message body too :-?

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]
 Sent: 18 June 2004 13:11
 To: [EMAIL PROTECTED]
 Subject: Losing some records




Disclaimer
http://www.shoprite.co.za/disclaimer.html




Group Query

2004-06-18 Thread shaun thornburgh
Hi,
The following query shows all bookings that a user has made and groups the 
number of bookings by the user's location code.

mysql> SELECT COUNT(B.Booking_ID), User_Location FROM Bookings B, Users U 
WHERE U.User_ID = B.User_ID GROUP BY(U.User_Location);

At the moment, if no bookings are made by a particular user then their 
location isn't included in the result. Is it possible to modify this query so 
that if no bookings are made for a particular location then the location is 
still included in the result, i.e.

+---------------------+---------------+
| COUNT(B.Booking_ID) | User_Location |
+---------------------+---------------+
|                   1 | 01            |
|                   8 | 02            |
|                   9 | 03            |
|                   1 | 04            |
|                   1 | 05            |
|                   2 | 06            |
|                   1 | 07            |
...
Many thanks for your help
_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger



Re: Group Query

2004-06-18 Thread Johan Hook
Hi,
you can use:
SELECT COUNT(B.Booking_ID), User_Location
FROM Users U
LEFT JOIN Bookings B ON U.User_ID = B.User_ID
GROUP BY(U.User_Location);
/Johan
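One detail worth noting about this approach (a sketch using the same table
names as above): COUNT(B.Booking_ID) skips the NULLs that the LEFT JOIN
produces for locations without bookings, so those locations show 0, whereas
COUNT(*) would count the NULL row and report 1 instead.

```sql
-- Locations with no bookings still appear, with a count of 0,
-- because COUNT(column) ignores NULL values:
SELECT U.User_Location, COUNT(B.Booking_ID) AS Bookings
FROM Users U
LEFT JOIN Bookings B ON U.User_ID = B.User_ID
GROUP BY U.User_Location;
```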
shaun thornburgh wrote:
Hi,
The following query shows all bookings that a user 
has made and groups the number of bookings by the user's location code.

mysql> SELECT COUNT(B.Booking_ID), User_Location FROM Bookings B, Users 
U WHERE U.User_ID = B.User_ID GROUP BY(U.User_Location);

At the moment, if no bookings are made by a particular user then their 
location isn't included in the result. Is it possible to modify this 
query so that if no bookings are made for a particular location then the 
location is still included in the result, i.e.

+---------------------+---------------+
| COUNT(B.Booking_ID) | User_Location |
+---------------------+---------------+
|                   1 | 01            |
|                   8 | 02            |
|                   9 | 03            |
|                   1 | 04            |
|                   1 | 05            |
|                   2 | 06            |
|                   1 | 07            |
...
Many thanks for your help
_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger



--
Johan Höök, Pythagoras Engineering Group
- MailTo:[EMAIL PROTECTED]
- http://www.pythagoras.se
Pythagoras AB, Stormbyv. 2-4, SE-163 55 SPÅNGA, Sweden
Phone: +46 8 760 00 10 Fax: +46 8 761 22 77



RE: Group Query

2004-06-18 Thread Paul McNeil
You can use a LEFT OUTER JOIN. A left outer join includes all rows that
match as well as those that don't. The resulting NULL entries for the
count will evaluate to 0.

SELECT COUNT(B.Booking_ID), U.User_Location FROM
Users U
LEFT OUTER JOIN
Bookings B
ON
U.User_ID = B.User_ID GROUP BY(U.User_Location);


God Bless

Paul C. McNeil
Developer in Java, MS-SQL, MySQL, and web technologies.

GOD BLESS AMERICA!
To God Be The Glory!

-Original Message-
From: shaun thornburgh [mailto:[EMAIL PROTECTED]
Sent: Friday, June 18, 2004 7:54 AM
To: [EMAIL PROTECTED]
Subject: Group Query


Hi,

The following query shows all bookings that a user has
made and groups the number of bookings by the user's location code.

mysql> SELECT COUNT(B.Booking_ID), User_Location FROM Bookings B, Users U
WHERE U.User_ID = B.User_ID GROUP BY(U.User_Location);

At the moment, if no bookings are made by a particular user then their
location isn't included in the result. Is it possible to modify this query so
that if no bookings are made for a particular location then the location is
still included in the result, i.e.

+---------------------+---------------+
| COUNT(B.Booking_ID) | User_Location |
+---------------------+---------------+
|                   1 | 01            |
|                   8 | 02            |
|                   9 | 03            |
|                   1 | 04            |
|                   1 | 05            |
|                   2 | 06            |
|                   1 | 07            |
...

Many thanks for your help

_
It's fast, it's easy and it's free. Get MSN Messenger today!
http://www.msn.co.uk/messenger








tricky timetable based query

2004-06-18 Thread Sam Russo
I receive a delimited file whose fields are:
day,slot,subject,room
An example of this file is:
2,1,Mat,R1
3,1,Sci,R6
1,2,Sci,R6
3,2,Mat,R3
1,3,Eng,R2
2,3,Eng,R5
1,4,Mat,R7
3,4,Eng,R9

I need a mysql query that will generate a timetable which looks like:
     Day1    Day2    Day3
--------------------------
1            Mat R1  Sci R6
2    Sci R6          Mat R3
3    Eng R2  Eng R5
4    Mat R7          Eng R9

Any help would be appreciated
sam
**
This message is intended for the addressee named and may contain
privileged information or confidential information or both. If you
are not the intended recipient please delete it and notify the sender.
**


RE: strange query result

2004-06-18 Thread Victor Pendleton
Your explain plan should not change based on the client you are using; the
explain plan is the path for the query, not the return of data to the client.
Are you using sockets when connecting locally on the FreeBSD machine?
Sockets are normally faster than using the TCP/IP port.
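Separately from the transport question, a composite index covering both the
WHERE column and the ORDER BY column usually lets MySQL skip the filesort
entirely. A sketch using the column names from the original query (untested
against this schema; the index name is made up):

```sql
-- An index on (docs, page_no) can satisfy WHERE docs=307 and
-- deliver the rows already ordered by page_no:
ALTER TABLE pages ADD INDEX docs_page (docs, page_no);
EXPLAIN SELECT internal_number, page_no
FROM pages WHERE docs = 307 ORDER BY page_no;
```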

-Original Message-
From: Alex
To: [EMAIL PROTECTED]
Sent: 6/18/04 5:07 AM
Subject: strange query result

Hi, ppl!

I have a MySQL-server on FreeBSD-5.2.1 computer and a client on another 
computer under windows xp.

I have the following simple query  SELECT internal_number,page_no FROM 
pages WHERE docs=307 ORDER BY page_no

that returns 587 rows containing only two integers: a simple query with a
low output volume.

The problem is that if I use a local client on the FreeBSD computer I get
the result immediately.
But if I try to do the same via a TCP connection it takes a few minutes.
After I used EXPLAIN on the query it turned out that when I use the TCP
connection, "Using filesort" occurs.

How to cure that?

Alex






Re: load data into 2 tables and set id

2004-06-18 Thread SGreen

No problem!!

Please post the structures of your big_table and your url_table
(whatever you called them)  and I will help you to rewrite step 4 to count
how many times a URL appears in the big_table.

Is this bulk import a process you need to do repeatedly? If so, we need to
worry about updating the count column on the next batch import and not just
re-creating it from the new data. That will change the query significantly.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   
 
From: J S <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED]
Date: 06/13/2004 12:29 PM
Subject: Re: load data into 2 tables and set id








Mos forgot to populate the url_id column in your user table. I would use
his same process but re-arrange it like this:

1) create table BIG_TABLE
2) load data infile
3) create table URL_TABLE (
   url_id bigint not null auto_increment,
   url varchar(25) not null primary key,
   category 
   )
4) INSERT IGNORE URL_TABLE (url, category)
   SELECT url,category
   FROM BIG_TABLE
5) create table USER_TABLE (
   user_id varchar?(...) primary key,
   date_time datetime,
   url_id bigint,
   size int
   )

6) insert USER_TABLE
   SELECT bt.userID, bt.datetime, u.url_id, bt.size
   FROM BIG_TABLE bt
   INNER JOIN URL_TABLE u
   ON u.url = bt.url

doing it this way lets the SQL engine handle the conversion of URLs to
their IDs in the USER_TABLE...see? No scripting required at all!

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



Shawn,

Many thanks for your reply. It sounds like this might be quicker than my
perl script, which parses a 1GB log file and fills the database in 40 minutes
at the moment (the actual parsing only takes 1.5 minutes).
There's one snag which I forgot about and that's in the url_table I have
another column called hits which is the number of hits for each url.  I'm
terrible at SQL and wondered if you might be able to suggest a way of doing

this with the method above?

Thanks,

JS.

_
It's fast, it's easy and it's free. Get MSN Messenger today!
http://www.msn.co.uk/messenger










Re: load data into 2 tables and set id

2004-06-18 Thread J S
Shawn,
Thanks for helping on this. I really appreciate it.
No problem!!
Please post the structures of your big_table and your url_table
(whatever you called them)  and I will help you to rewrite step 4 to count
how many times a URL appears in the big_table.
mysql> desc internet_usage;
+--------+-------------+------+-----+---------+-------+
| Field  | Type        | Null | Key | Default | Extra |
+--------+-------------+------+-----+---------+-------+
| uid    | varchar(10) | YES  | MUL | NULL    |       |
| time   | datetime    | YES  |     | NULL    |       |
| ip     | varchar(20) | YES  |     | NULL    |       |
| action | varchar(20) | YES  |     | NULL    |       |
| urlid  | int(11)     | YES  |     | NULL    |       |
| size   | int(11)     | YES  |     | NULL    |       |
+--------+-------------+------+-----+---------+-------+
6 rows in set (0.03 sec)
mysql> desc url_table;
+----------+--------------+------+-----+---------+----------------+
| Field    | Type         | Null | Key | Default | Extra          |
+----------+--------------+------+-----+---------+----------------+
| urlid    | mediumint(9) |      | PRI | NULL    | auto_increment |
| url      | text         | YES  | MUL | NULL    |                |
| hits     | mediumint(9) | YES  |     | NULL    |                |
| category | varchar(50)  | YES  |     | NULL    |                |
+----------+--------------+------+-----+---------+----------------+
4 rows in set (0.01 sec)
mysql>
Is this bulk import a process you need to do repeatedly? If so, we need to
worry about updating the count column on the next batch import and not just
re-creating it from the new data. That will change the query significantly.
Yes, it will be a repeated process. Actually I have a backlog of 6 months' 
data to load!

Here's an example of what the data looks like:
mysql> select * from internet_usage limit 5;
+---------+---------------------+----------------+--------------+-------+------+
| uid     | time                | ip             | action       | urlid | size |
+---------+---------------------+----------------+--------------+-------+------+
| n58396  | 2004-06-07 21:12:16 | 21.38.25.204   | TCP_TUNNELED |  5999 | 5297 |
| u344584 | 2004-06-07 21:07:12 | 21.33.136.74   | TCP_HIT      |  4494 |  438 |
| -       | 2004-06-07 21:07:02 | 21.38.92.76    | TCP_NC_MISS  |  2134 |  771 |
| u524797 | 2004-06-07 21:03:27 | 21.32.25.41    | TCP_NC_MISS  |   260 |  582 |
| -       | 2004-06-07 21:09:13 | 21.201.130.240 | TCP_HIT      |  3112 |  542 |
+---------+---------------------+----------------+--------------+-------+------+

mysql> select * from url_table limit 5;
+-------+-----------------------------------------------------------------------+------+---------------+
| urlid | url                                                                   | hits | category      |
+-------+-----------------------------------------------------------------------+------+---------------+
|     1 | http://www.bbc.co.uk/horoscopes/chinesehoroscopes/images/hp-snake.gif | NULL | Entertainment |
|     2 | http://www.call18866.co.uk/images/logo.jpg                            | NULL | none          |
|     3 | http://m2.doubleclick.net/866421/0409_santoku_250.gif                 | NULL | none          |
|     4 | http://lysto1-dc02.ww.ad.ba.com/                                      | NULL | Travel        |
|     5 | http://www.aboutscotland.com/edin/newstreetph/sitview.jpg             | NULL | Travel        |
+-------+-----------------------------------------------------------------------+------+---------------+
5 rows in set (0.00 sec)

One other problem I'm having here is making the rows in internet_usage 
unique. At the moment I have lots of duplicates, and I was trying to create 
a temporary table but unfortunately got an error 27 (I think this refers to 
a 2GB limit).

mysql> CREATE TEMPORARY TABLE bad_temp AS SELECT DISTINCT * FROM 
internet_usage;
ERROR 1030: Got error 27 from table handler

Is there another way of doing this?
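(One commonly used workaround, sketched with a hypothetical table name: error
27 is "File too large", and a MAX_ROWS hint when creating the copy raises the
default MyISAM file-size limit that a plain TEMPORARY table runs into.)

```sql
-- Build a deduplicated copy as a regular table instead of a
-- TEMPORARY one, sized above the default MyISAM limit, then swap:
CREATE TABLE internet_usage_dedup MAX_ROWS=1000000000
    SELECT DISTINCT * FROM internet_usage;
RENAME TABLE internet_usage TO internet_usage_old,
             internet_usage_dedup TO internet_usage;
```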
Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Mos forgot to populate the url_id column in your user table. I would use
his same process but re-arrange it like this:

1) create table BIG_TABLE
2) load data infile
3) create table URL_TABLE (
   url_id bigint not null auto_increment,
   url varchar(25) not null primary key,
   category 
   )
4) INSERT IGNORE URL_TABLE (url, category)
   SELECT url,category
   FROM BIG_TABLE
5) create table USER_TABLE (
   user_id varchar?(...) primary key,
   date_time datetime,
   url_id bigint,
   size int
   )

6) insert USER_TABLE
   SELECT bt.userID, bt.datetime, u.url_id, bt.size
   FROM BIG_TABLE bt
   INNER JOIN URL_TABLE u
   ON u.url = bt.url

doing it this way lets the SQL engine handle the conversion of URLs to
their IDs in 

MySQL Web Clustering...

2004-06-18 Thread Roy Nasser
Hi All,

We have recently acquired some new machines for our ASP service, and I
am investigating different options and setups to optimize everything.

We currently have one large DB server, with RAID5, etc., running mysql
and a few smaller servers for web applications and e-mail.  These
smaller servers aren't all identical in their software, and they run
different services.

We have currently reached a certain limit in the DB as well as in some
of our applications on the webservers, hence the need for something
expandable.

I have read a little about MySQL Cluster, as well as some other solutions
such as openMosix, Mosix and LVS.  I was wondering what your experience is,
and what you would recommend for the DB and for the
webservers.  I still want to keep them separate; even if I end up
having 2 different clusters, I prefer to keep the DB away from the
application for security reasons.

Thanks!
Roy


Re: tricky timetable based query

2004-06-18 Thread SGreen

What you are trying to do is an example of what is called "pivoting" a
table, or creating a "pivot table".

SELECT slot
  , max(if(day=1, concat(subject,' ',room), '')) as day1
  , max(if(day=2, concat(subject,' ',room), '')) as day2
  , max(if(day=3, concat(subject,' ',room), '')) as day3
  , max(if(day=4, concat(subject,' ',room), '')) as day4
  , max(if(day=5, concat(subject,' ',room), '')) as day5
from schedule
group by slot

The max() decides between an entry and a blank ( the entry will win if one
exists) while the group by lines everything up on the same row. Friendly
caution: If more than one entry exists for the same day and time, you will
only see the one that is alphabetically greater.
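The same pivot can be written with standard CASE expressions instead of
MySQL's IF(), which some people find more readable (a sketch against the same
hypothetical schedule table as above):

```sql
-- Equivalent pivot using CASE WHEN instead of IF():
SELECT slot
  , MAX(CASE WHEN day = 1 THEN CONCAT(subject, ' ', room) ELSE '' END) AS day1
  , MAX(CASE WHEN day = 2 THEN CONCAT(subject, ' ', room) ELSE '' END) AS day2
  , MAX(CASE WHEN day = 3 THEN CONCAT(subject, ' ', room) ELSE '' END) AS day3
FROM schedule
GROUP BY slot;
```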

If you wanted to see how many classes are scheduled by day for each slot
(to check for conflicts), you could modify the query:

SELECT slot
  , sum(if(day=1,1,0)) as day1
  , sum(if(day=2,1,0)) as day2
  , sum(if(day=3,1,0)) as day3
  , sum(if(day=4,1,0)) as day4
  , sum(if(day=5,1,0)) as day5
from schedule
group by slot

There is definitely a pattern. The columns you want as row headers are
listed both in the SELECT  **and** in the GROUP BY clauses. The values you
want as columns are selectively chosen ( by the IF() functions) to return
something or nothing so that one of the aggregate functions (MIN, MAX, SUM,
AVG, etc) can condense those rows and columns into single values.

The decisions for what gets counted in any column can be as complex as you
like. If you don't like the IF() function you can use CASE statements or
anything else to help you provide the correct set of values for the
aggregate function you are using on that column. You can mix and match
aggregate functions to be what you need. Let's say you join the appropriate
tables together to form a data set that looks like :

day, slot, subject, student, grade

and you wanted to build a chart showing the statistics of  grades vs. days
and subject. You could use this type of query to work that out:

SELECT day, subject
  , AVG(grade) as average
  , MIN(grade) as lowest
  , MAX(grade) as highest
from necessary tables
group by day, subject

Now while I didn't actually need to choose values for my columns, you can
see that if I wanted to pivot that table by days (each column
representing the statistics for just one day) I would have to modify the
query to read:

SELECT subject
  , AVG(IF(day=1, grade,null)) as D1_average
  , MIN(IF(day=1, grade, null)) as D1_lowest
  , MAX(IF(day=1,grade,null)) as D1_highest
  , AVG(IF(day=2, grade,null)) as D2_average
  , MIN(IF(day=2, grade, null)) as D2_lowest
  , MAX(IF(day=2,grade,null)) as D2_highest
  ,  (repeat for rest of the days)
from necessary tables
group by subject


Notice how I had to change the IF function to return NULL values? That kept
the AVG() from counting all of the other grades for the same subject but
from different days.  If I had used a constant, like zero, then those
zeroes would have counted against us in the aggregate calculation. (The
same goes for the MIN and MAX functions)

I hope this helps!
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   
 
From: Sam Russo <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: 06/18/2004 08:34 AM
Subject: tricky timetable based query




I receive a delimited file whose fields are:
day,slot,subject,room
An example of this file is:
2,1,Mat,R1
3,1,Sci,R6
1,2,Sci,R6
3,2,Mat,R3
1,3,Eng,R2
2,3,Eng,R5
1,4,Mat,R7
3,4,Eng,R9

I need a mysql query that will generate a timetable which looks like:
     Day1    Day2    Day3
--------------------------
1            Mat R1  Sci R6
2    Sci R6          Mat R3
3    Eng R2  Eng R5
4    Mat R7          Eng R9

Any help would be appreciated
sam

RE: Replication - promoting slave to master

2004-06-18 Thread stanner
I would need log-bin and log-slave-updates enabled on the slave, correct?

  So to automate the process, it would be better to start both servers 
without the Master-host info in the conf file, letting Heartbeat issue the 
commands on startup to convert one box to a slave. During a fail-over, 
heartbeat would take down the master server and issue the stop slave 
command on the slave. Would deleting the master.info be needed at this 
point, or would the stop slave be enough to accept writes? 

"In theory, a slave will look at the server-id and will not process 
statements where the server-id matches its own server-id"

So is another option to tell the slave to change master to itself?


Thanks,

Scott Tanner
System Administrator
AMI International






From: Victor Pendleton <[EMAIL PROTECTED]>
Date: 06/17/2004 01:41 PM
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: RE: Replication - promoting slave to master


Initially you should have set the slave up with log-bin in its my.cnf file
so that it is writing to its own binary logs. In the event that the master
goes down, you should issue a change master on any other slaves so that 
they
point to the new master. Redirect all writes to the new master. On the new
master you will want to remove references to the old master. A slave stop
and the removing of the master.info file will work. But you should still
edit the my.cnf file to prevent the machine from trying to reattach at the
next startup. In theory, a slave will look at the server-id and will not
process statements where the server-id matches its own server-id.
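The promotion itself boils down to a few statements on the surviving slave (a
sketch only; whether deleting master.info is also needed depends on the server
version and on how the my.cnf is arranged):

```sql
-- On the slave being promoted to master:
STOP SLAVE;
RESET SLAVE;    -- forget the old master's replication coordinates
-- Then comment out master-host / master-user / master-password in
-- my.cnf (or remove master.info) so it stays a master after restart.
```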

-Original Message-
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 6/17/04 12:25 PM
Subject: Replication - promoting slave to master

Hello,
 I've been lurking about the lists for a few months now, hoping to gain 
some knowledge on MySQL's replication features for HA setup. I have a 
fairly good understanding of the setup process, but I'm a little
confused 
on one points. We are running a 2 node - master-slave setup, and we are 
planning on using Heartbeat (from the Linux-HA project) to handle 
failover. If the Master fails, the slave will take over permanently, and

the master will be rebuilt as a slave.

  What is the procedure for promoting a slave server to master? If the 
slave server has the Master-Host, User, and Password set in the conf
file, 
will the Stop Slave command be enough to promote it, or will I need to 
remove the master.info file as well?


Thank you,

Scott Tanner
System Administrator
AMI International






RE: Replication - promoting slave to master

2004-06-18 Thread Victor Pendleton
If the server has log-bin enabled it will log writes. If you have a
daisy-chained master, one that serves as a master to other slaves, you will
need to have log-slave-updates enabled. If this server is just another
machine pulling from the master it is not necessary to log slave updates but
it is necessary to have log-bin enabled in the event that you will be
promoting this server to master one day.


-Original Message-
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 6/18/04 9:09 AM
Subject: RE: Replication - promoting slave to master

I would need log-bin and log-slave-updates enabled on the slave,
correct?

  So to automate the process, it would be better to start both servers 
without the Master-host info in the conf file, letting Heartbeat issue
the commands on startup to convert one box to a slave. During a fail-over, 
heartbeat would take down the master server and issue the stop slave 
command on the slave. Would deleting the master.info be needed at this 
point, or would the stop slave be enough to accept writes? 

"In theory, a slave will look at the server-id and will not process 
statements where the server-id matches its own server-id"

So is another option to tell the slave to change master to itself?


Thanks,

Scott Tanner
System Administrator
AMI International






From: Victor Pendleton <[EMAIL PROTECTED]>
Date: 06/17/2004 01:41 PM
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: RE: Replication - promoting slave to master


Initially you should have set the slave up with log-bin in its my.cnf
file
so that it is writing to its own binary logs. In the event that the
master
goes down, you should issue a change master on any other slaves so that 
they
point to the new master. Redirect all writes to the new master. On the
new
master you will want to remove references to the old master. A slave
stop
and the removing of the master.info file will work. But you should still
edit the my.cnf file to prevent the machine from trying to reattach at
the
next startup. In theory, a slave will look at the server-id and will not
process statements where the server-id matches its own server-id.

-Original Message-
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 6/17/04 12:25 PM
Subject: Replication - promoting slave to master

Hello,
 I've been lurking about the lists for a few months now, hoping to gain 
some knowledge on MySQL's replication features for HA setup. I have a 
fairly good understanding of the setup process, but I'm a little
confused 
on one points. We are running a 2 node - master-slave setup, and we are 
planning on using Heartbeat (from the Linux-HA project) to handle 
failover. If the Master fails, the slave will take over permanently, and

the master will be rebuilt as a slave.

  What is the procedure for promoting a slave server to master? If the 
slave server has the Master-Host, User, and Password set in the conf
file, 
will the Stop Slave command be enough to promote it, or will I need to 
remove the master.info file as well?


Thank you,

Scott Tanner
System Administrator
AMI International








skip through records (dbf-mysql)

2004-06-18 Thread Tom Horstmann
Hi all,

I am trying to simulate dBase-like record skips through a compound
(dbf-) order. Has anybody done that before? Please give me a hint.

thx in advance,

TomH

-- 
PROSOFT EDV-Lösungen GmbH & Co. KG    phone: +49 941 / 78 88 7 - 121
Ladehofstraße 28, D-93049 Regensburg  cellphone: +49 174 / 41 94 97 0
Geschäftsführer: Axel-Wilhelm Wegmann  [EMAIL PROTECTED]
AG Regensburg HRA 6608 USt.183/68311  http://www.proSoft-Edv.de
-- 





Re: MySQL Web Clustering...

2004-06-18 Thread Cemal Dalar
Go for a simple MySQL master-slave configuration. I'm running one
master and three slaves for performance reasons and haven't had any problems
at all for a year. Of course, in this case you should change some code on your
webservers to direct SELECT queries to the slave machines to gain performance
on the master.
I will also keep in touch with MySQL Cluster development and will probably go
for it in the near future.


Best Regards,
Cemal Dalar a.k.a Jimmy
System Administrator & Web Developer
http://www.gittigidiyor.com  http://www.dalar.net

- Original Message - 
From: Roy Nasser [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, June 18, 2004 4:58 PM
Subject: MySQL & Web Clustering...


Hi All,

We have recently acquired some new machines for our ASP service, and I
am investigating different options and setups to optimize everything.

We currently have one large DB server, with RAID5, etc, running mysql
and a few smaller servers for web applications, and e-mail.  These
smaller servers arent all identical in their software, and they run
different services.

We currently have reached a certain limit in the DB as well as in some
of our applications on the webservers, hence the need for something
expandable.

I have read slightly about MySQLCluster, as well as some other solutions
such as openMosix, Mosix and LVS.  I was wondering if you guys have
experience, and what you would recommend for the DB and for the
webservers.  I still want to maintain them separate, even if I end up
having 2 different clusters, I prefer to keep the DB away from the
application for security reasons.

Thanks!
Roy





Re: tricky timetable based query

2004-06-18 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Sam Russo [EMAIL PROTECTED] writes:

 I receive a delimited file whose fields are:
 day,slot,subject,room
 An example of this file is:
 2,1,Mat,R1
 3,1,Sci,R6
 1,2,Sci,R6
 3,2,Mat,R3
 1,3,Eng,R2
 2,3,Eng,R5
 1,4,Mat,R7
 3,4,Eng,R9

 I need a mysql query that will generate a timetable which looks like:
      Day1    Day2    Day3
 --------------------------
 1            Mat R1  Sci R6
 2    Sci R6          Mat R3
 3    Eng R2  Eng R5
 4    Mat R7          Eng R9

If there are only three days, you could use the following:

CREATE TEMPORARY TABLE slots (slot TINYINT UNSIGNED NOT NULL PRIMARY KEY) AS
SELECT DISTINCT slot FROM tbl;

SELECT s.slot,
   coalesce(concat(t1.subject, ' ', t1.room), '') AS Day1,
   coalesce(concat(t2.subject, ' ', t2.room), '') AS Day2,
   coalesce(concat(t3.subject, ' ', t3.room), '') AS Day3
FROM slots s
LEFT JOIN tbl t1 ON t1.slot = s.slot AND t1.day = 1
LEFT JOIN tbl t2 ON t2.slot = s.slot AND t2.day = 2
LEFT JOIN tbl t3 ON t3.slot = s.slot AND t3.day = 3;





Re: load data into 2 tables and set id

2004-06-18 Thread SGreen

JS,

I need one more piece of information to help make that query work for you:
the structure of the table that you use (or would use) to bulk-import
those logs.

If you are running out of room, you may consider further normalizing your
data (which saves space but creates more data-maintenance steps).  I, too,
have had to deal with millions of rows of internet usage logs, so I
understand your pain.

You can store your IP addresses in an INT and get them back in dotted
notation with the MySQL functions INET_ATON() and INET_NTOA().  That will
save you an average of 10 bytes PER ROW (it adds up when you are into the
millions of rows).
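
As a sketch (untested, using the internet_usage table posted later in this
thread), the conversion might look like:

```sql
-- Store the dotted address as an unsigned 32-bit integer instead of varchar(20)
ALTER TABLE internet_usage ADD ip_num INT UNSIGNED;
UPDATE internet_usage SET ip_num = INET_ATON(ip);
ALTER TABLE internet_usage DROP ip;

-- Convert back to dotted notation only when displaying:
SELECT INET_NTOA(ip_num) FROM internet_usage LIMIT 10;
```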

Create a Proxy_Action table (id tinyint, action varchar(20) ) and populate
it with all of the actions your proxy/firewall can log.  Then replace the
column action varchar(20) with ProxyAction_id tinyint.  (I assume there
are less than 256 action messages available from your proxy?) That's 1
byte vs. 8 at the low end of your sample data (counting the null at the end
of the string).  First add the new column to the table, populate it with
the ID values from your new Action table, then drop the old column.
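
A rough sketch of those steps (table and column names as suggested above;
untested):

```sql
-- Lookup table of the distinct proxy actions
CREATE TABLE Proxy_Action (
  id TINYINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  action VARCHAR(20) NOT NULL
);
INSERT INTO Proxy_Action (action)
  SELECT DISTINCT action FROM internet_usage;

-- Add the new column, fill it from the lookup table, then drop the old one
ALTER TABLE internet_usage ADD ProxyAction_id TINYINT UNSIGNED;
UPDATE internet_usage iu, Proxy_Action pa
  SET iu.ProxyAction_id = pa.id
  WHERE iu.action = pa.action;
ALTER TABLE internet_usage DROP action;
```

(The multi-table UPDATE needs MySQL 4.0 or later.)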

URLs consist (in a basic sense) of the server portion (to the left of the
first single /) and the path portion (right of the first / and left of a
? or #) and either a fragment (after the #) or a query string (after the ?).
I would at least split out the server portion into its own table. For each
page request (assume 1 page and 9 pictures, all from the same server) that
would be 10 rows of log data that all contain the same chunk of similar
information. Reducing that heavily repeated portion of your data to an ID
number will greatly help reduce the size of your database.
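
One possible way to split out the server portion (untested sketch):

```sql
-- Lookup table for the server part of each URL
CREATE TABLE url_server (
  server_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  server VARCHAR(255) NOT NULL,
  UNIQUE (server)
);

-- SUBSTRING_INDEX keeps everything up to the third '/', i.e.
-- 'http://www.example.com/path/page?q=1' -> 'http://www.example.com'
INSERT IGNORE INTO url_server (server)
  SELECT DISTINCT SUBSTRING_INDEX(url, '/', 3) FROM url_table;
```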

About the non-uniqueness of your internet_usage table.  Even if the same
user visits the same URL multiple times (is that what you mean by repeated
records?) the times should all be slightly different.  If they are not
different, it is still possible that the same person requested the same
page twice or more during the same second (the auto-login feature of MSIE
comes to mind as one culprit). OR you could have multiple users all on the
same userID hitting the same page from different machines... I guess I
would have to see more data to understand your problem better.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   
 
J S [EMAIL PROTECTED] wrote on 06/18/2004 09:40 AM:
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED]
Subject: Re: load data into 2 tables and set id




Shawn,

Thanks for helping on this. I really appreciate it.

No problem!!

Please post the structures of your big_table and your url_table
(whatever you called them)  and I will help you to rewrite step 4 to count
how many times a URL appears in the big_table.

mysql> desc internet_usage;
+--------+-------------+------+-----+---------+-------+
| Field  | Type        | Null | Key | Default | Extra |
+--------+-------------+------+-----+---------+-------+
| uid    | varchar(10) | YES  | MUL | NULL    |       |
| time   | datetime    | YES  |     | NULL    |       |
| ip     | varchar(20) | YES  |     | NULL    |       |
| action | varchar(20) | YES  |     | NULL    |       |
| urlid  | int(11)     | YES  |     | NULL    |       |
| size   | int(11)     | YES  |     | NULL    |       |
+--------+-------------+------+-----+---------+-------+
6 rows in set (0.03 sec)

mysql> desc url_table;
+----------+--------------+------+-----+---------+----------------+
| Field    | Type         | Null | Key | Default | Extra          |
+----------+--------------+------+-----+---------+----------------+
| urlid    | mediumint(9) |      | PRI | NULL    | auto_increment |
| url      | text         | YES  | MUL | NULL    |                |
| hits     | mediumint(9) | YES  |     | NULL    |                |
| category | varchar(50)  | YES  |     | NULL    |                |
+----------+--------------+------+-----+---------+----------------+
4 rows in set (0.01 sec)

mysql>


Is this bulk import a process you 

Re: load data into 2 tables and set id

2004-06-18 Thread J S
Shawn,
Thanks for the email below - I will go through it over later tonight in more 
detail (going home time now!) although I can already see good points there.

JS,
I need one more piece of information to help make that query work for you,
I need the structure of the table that you use (or would use) to bulk
import those logs into.
I'm completely flexible over this as I'm still at the testing stage. You 
might be able to suggest the most efficient structure? Isn't the problem 
getting the urlid before I insert into internet_usage so that I can find the 
associated url.

[snip]

Is this bulk import a process you need to do repeatedly? if so we need to
worry about updating the count column on the next batch import and not
just
re-creating it from the new data. That will change the query
significantly.
Yes it will be a repeated process. Actually I have a backlog of 6 months of
data to load!
Here's an example of what the data looks like:
mysql> select * from 

problems with 4.0.20 (icc)

2004-06-18 Thread Michael Spindler
Hi,

yesterday I installed the 4.0.20 binaries, compiled with Intel C++ Compiler from 
mysql.com.

I tried to transfer one of our (small) databases from the production-system (4.0.20 
gcc binaries installed) to the test-system and came across the following problem:
I did a mysqldump --opt ... | mysql  The command finished without error, but no 
data has been transferred. I checked several scenarios and got the following results:
- Without extended inserts, the transfer is ok
- Just doing the extended insert fails, but reducing the size of the extended insert 
worked. I think this issue has already been reported 
(http://bugs.mysql.com/bug.php?id=4031)
- When using mysqldump --opt ... with a really small table (just 2 records), it 
didn't transfer the data, but when I removed the ALTER TABLE tablename DISABLE 
KEYS; statement at the beginning of the dump-file (and the corresponding one at the 
end) it worked.

Any ideas?


Michael



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Replication - promoting slave to master

2004-06-18 Thread Alec . Cawley
[EMAIL PROTECTED] wrote on 18/06/2004 15:09:58:

 I would need log-bin and log-slave-updates enabled on the slave, 
correct?
 
   So to automate the process it would be better to start both servers 
 without the Master-host info in the conf file, letting Heartbeat issue 
the 
 commands on startup to convert one box to slave. During a fail-over, 
 heartbeat would take down the master server and issue the stop slave 
 command on the slave.Would deleting the master.info be needed at this 
 point, or would the stop slave be enough to accept writes? 


That is what we do. We have an HA system of single master and one or more 
slaves in a line. All systems start up with slave threads NOT running. At 
startup, the heartbeat does not know which machine was previously master 
and which slave(s) in what order. We have a special 1 row, 1 column table 
known, for reasons lost in the past, as the Fishbowl value. This is 
incremented every time the configuration changes. We therefore read the 
fishbowl value from all machines. Only machines with the largest value of 
the fishbowl are candidates for becoming master. We then read the slave 
status of all the machines which have this value. Reading the name of the 
machine to which it is slaved, we should be able to organise them into one 
or more chains of consistent master and slave. The longest such chain 
becomes the live chain, and its head becomes the master. Slave threads 
are then started on all the slaves in this chain.

We now have a running system. All writes are directed to the master, reads 
can be directed to any slave (because we have designed around transactions 
and locking problems).

All machines with less than the maximum fishbowl value, or which form part 
of shorter chains, or simply do not have the databases on them (status 
when new or repaired machine is added) have the databases dropped and are 
regarded as idle. One by one, the idle machines are slaved to the last 
slave in the chain and loaded up with LOAD DATA FROM MASTER, then slave 
threads started.

The heartbeat continuously monitors all machines. If the master fails, it 
is simply necessary to stop the slave thread on the first slave (now 
master), increment the fishbowl value on it (which will ripple down to all 
remaining slaves) and direct all writes to it. If the ex-master reappears, 
it will have an out-of-date fishbowl value and will therefore have its 
database dumped and reloaded.

We do not bother to delete master.info. It will point to a database with 
an out-of-date fishbowl, which can be ignored.
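
For illustration, the fishbowl mechanism could be as small as this (a
sketch; the actual table and column names are not from Alec's post):

```sql
-- One row, one column; every configuration change bumps the value,
-- so the machines with the largest value are the freshest candidates.
CREATE TABLE fishbowl (value INT UNSIGNED NOT NULL);
INSERT INTO fishbowl VALUES (1);

-- On failover, after stopping the slave thread on the new master:
UPDATE fishbowl SET value = value + 1;
```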

 In theory, a slave will look at the server-id and will not process 
 statements where the server-id matches its own server-id
 
 So is another option to tell the slave to change master to itself?

This sounds a bit dodgy to me. Strictly speaking, if the master is truly 
dead, it is not necessary even to stop the slave thread: there is no 
reason not to write to the database while slaving IF no conflicting 
updates are being replicated from the master. Practically, it is 
definitely advisable to do so in case the master is rebooting.

Alec


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Web Clustering...

2004-06-18 Thread Peter J Milanese
I currently run LVS (pre-distribution) on my farm, which gets about 100M 
hits/month.

Good points about LVS are that it is completely rock solid, and runs on 
minimal hardware.

I have never run MySQL behind it, as I think that would be a bit flaky for 
a live site. Probably
worth checking out though. I would think that instead of LVS Load 
Balancing, server failover
might be a more viable path for MySQL.

P
 




Roy Nasser [EMAIL PROTECTED]
06/18/2004 01:58 PM
 
To: [EMAIL PROTECTED]
cc: 
Subject: MySQL & Web Clustering...


Hi All,

We have recently acquired some new machines for our ASP service, and I
am investigating different options and setups to optimize everything.

We currently have one large DB server, with RAID5, etc, running mysql
and a few smaller servers for web applications, and e-mail.  These
smaller servers aren't all identical in their software, and they run
different services.

We currently have reached a certain limit in the DB as well as in some
of our applications on the webservers, hence the need for something
expandable.

I have read slightly about MySQLCluster, as well as some other solutions
such as openMosix, Mosix and LVS.  I was wondering if you guys have
experience, and what you would recommend for the DB and for the
webservers.  I still want to maintain them separate, even if I end up
having 2 different clusters, I prefer to keep the DB away from the
application for security reasons.

Thanks!
Roy



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ERROR 1045: Access denied for user: 'foo@host' (Using password: YES)

2004-06-18 Thread Marc Tardif
I'm using mysql-3.23.58-1.9 installed from RPM on Red Hat Linux release 
9 (Shrike). First, I create a database and user to connect to this 
database using the following commands:

  mysql> CREATE DATABASE foo;
  mysql> GRANT ALL PRIVILEGES ON foo.* TO foo@'%'
      -> IDENTIFIED BY 'password' WITH GRANT OPTION;
  mysql> GRANT ALL PRIVILEGES ON foo.* TO [EMAIL PROTECTED]
      -> IDENTIFIED BY 'password' WITH GRANT OPTION;
  mysql> FLUSH PRIVILEGES;
Then, I try to connect to the database using the host parameter but I 
get the following error message:

  # mysql --user=foo --password=password -h host foo
  ERROR 1045: Access denied for user: '[EMAIL PROTECTED]' (Using password: YES)
I've only managed to find a single thread about this issue which was a 
bug in the installation of mysql on RedHat 7. Many months have gone by 
since then so I wonder if this is still the same issue.
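
One classic cause on stock installs of that era (offered as a guess, not a
confirmed diagnosis of this case): the anonymous-user rows created by
mysql_install_db sort ahead of foo@'%' when the connecting host matches
them, producing exactly this error. You can check as root:

```sql
-- An anonymous row (empty User) matching the client host can shadow foo@'%'
SELECT Host, User FROM mysql.user;

-- If that turns out to be the culprit:
DELETE FROM mysql.user WHERE User = '';
FLUSH PRIVILEGES;
```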

--
Marc Tardif
Sitepak
(514) 866-8883
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: tricky timetable based query

2004-06-18 Thread Michael Stassen
Harald Fuchs wrote:
Sam Russo [EMAIL PROTECTED] writes:
I receive a delimited file whose fields are:
day,slot,subject,room
An example of this file is:
2,1,Mat,R1
3,1,Sci,R6
1,2,Sci,R6
3,2,Mat,R3
1,3,Eng,R2
2,3,Eng,R5
1,4,Mat,R7
3,4,Eng,R9

I need a mysql query that will generate a timetable which looks like:
        Day1    Day2    Day3
---------------------------
1               Mat R1  Sci R6
2       Sci R6          Mat R3
3       Eng R2  Eng R5
4       Mat R7          Eng R9
If there are only three days, you could use the following:
CREATE TEMPORARY TABLE slots (slot TINYINT UNSIGNED NOT NULL PRIMARY KEY) AS
SELECT DISTINCT slot FROM tbl;
SELECT s.slot,
   coalesce(concat(t1.subject, ' ', t1.room), '') AS Day1,
   coalesce(concat(t2.subject, ' ', t2.room), '') AS Day2,
   coalesce(concat(t3.subject, ' ', t3.room), '') AS Day3
FROM slots s
LEFT JOIN tbl t1 ON t1.slot = s.slot AND t1.day = 1
LEFT JOIN tbl t2 ON t2.slot = s.slot AND t2.day = 2
LEFT JOIN tbl t3 ON t3.slot = s.slot AND t3.day = 3;
We could do without the temporary table by simply moving the DISTINCT to the 
SELECT, though I expect that's less efficient.  We should also probably 
explicitly ORDER BY slot.  In this case, an index on slot (as you did in the 
temporary table) would probably help.

  SELECT DISTINCT s.slot,
 coalesce(concat(t1.subject, ' ', t1.room), '') AS Day1,
 coalesce(concat(t2.subject, ' ', t2.room), '') AS Day2,
 coalesce(concat(t3.subject, ' ', t3.room), '') AS Day3
  FROM slots s
  LEFT JOIN tbl t1 ON t1.slot = s.slot AND t1.day = 1
  LEFT JOIN tbl t2 ON t2.slot = s.slot AND t2.day = 2
  LEFT JOIN tbl t3 ON t3.slot = s.slot AND t3.day = 3
  ORDER BY s.slot;
We could also accomplish both the DISTINCT and the ORDER BY using GROUP BY 
instead:

  SELECT s.slot,
 coalesce(concat(t1.subject, ' ', t1.room), '') AS Day1,
 coalesce(concat(t2.subject, ' ', t2.room), '') AS Day2,
 coalesce(concat(t3.subject, ' ', t3.room), '') AS Day3
  FROM slots s
  LEFT JOIN tbl t1 ON t1.slot = s.slot AND t1.day = 1
  LEFT JOIN tbl t2 ON t2.slot = s.slot AND t2.day = 2
  LEFT JOIN tbl t3 ON t3.slot = s.slot AND t3.day = 3
  GROUP BY s.slot;
Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL Web Clustering...

2004-06-18 Thread Mike Miller
Unless you have a specific need for it, you could save yourself a lot of 
trouble by putting select tables or databases or even clients on each 
server.  This also means you don't incur the added overhead of keeping the 
database in sync, creating actions if a master goes down, etc.  Then just 
tell client1 to use database5 as their hostname for example.  Replication 
requires that updates go to the master, which requires client-side code, or 
an intermediate daemon which analyzes the statement and forwards the 
request.

-Mike
From: Peter J Milanese [EMAIL PROTECTED]
To: Roy Nasser [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: MySQL & Web Clustering...
Date: Fri, 18 Jun 2004 11:25:45 -0400
I currently run LVS (pre-distribution) on my farm, which gets about 100M
hits/month.
Good points about LVS are that it is completely rock solid, and runs on
minimal hardware.
I have never run MySQL behind it, as I think that would be a bit flaky for
a live site. Probably
worth checking out though. I would think that instead of LVS Load
Balancing, server failover
might be a more viable path for MySQL.
P


Roy Nasser [EMAIL PROTECTED]
06/18/2004 01:58 PM
To: [EMAIL PROTECTED]
cc:
Subject: MySQL & Web Clustering...
Hi All,
We have recently acquired some new machines for our ASP service, and I
am investigating different options and setups to optimize everything.
We currently have one large DB server, with RAID5, etc, running mysql
and a few smaller servers for web applications, and e-mail.  These
smaller servers arent all identical in their software, and they run
different services.
We currently have reached a certain limit in the DB as well as in some
of our applications on the webservers, hence the need for something
expandable.
I have read slightly about MySQLCluster, as well as some other solutions
such as openMosix, Mosix and LVS.  I was wondering if you guys have
experience, and what you would recommend for the DB and for the
webservers.  I still want to maintain them separate, even if I end up
having 2 different clusters, I prefer to keep the DB away from the
application for security reasons.
Thanks!
Roy


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


MySQL SQL Case Statement

2004-06-18 Thread Rafi Sheikh
Hi list.  Can someone give me a working example of a SQL CASE statement(in
SELECT)?  I have examples for stored procedures, but none for use within the
DML side of SQL.

I am trying to, in my SELECT (used in PHP), create a variable/col, for example:
 status='high' where the value in another col is within a certain range.

I am sure that I do not have the order of the syntax or complete code.  I am
using MySQL 4.0.17, and a newbie.

Any simple example will help.

Please note I am not requesting for use within a procedure or trigger, etc,
just a simple DML side SELECT.

TIA

RS


This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity to
which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified that
any dissemination, distribution or copying of this e-mail is prohibited. If
you have received this e-mail in error, please notify the sender by replying
to this message and delete this e-mail immediately.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Web Clustering...

2004-06-18 Thread Jeff Smelser
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Friday 18 June 2004 11:05 am, Mike Miller wrote:
 Unless you have a specific need for it, you could save yourself a lot of
 trouble by putting select tables or databases or even clients on each
 server.  This also means you don't incur the added overhead of keeping the
 database in sync, creating actions if a master goes down, etc.  Then just
 tell client1 to use database5 as their hostname for example.  Replication
 requires that updates go to the master, which requires client-side code, or
 an intermediate daemon which analyzes the statement and forwards the
 request.

Daemons? Client side code? Can you explain this please?

mysql handles all the replication. Depending on your setup, the client won't know 
the difference. It really all depends on how your replication is gonna be 
set up.
- -- 
SUSHIDO--The way of the Tuna.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFA0xuZld4MRA3gEwYRAv1EAKCot3j1j16j892FtrTEea8Brlk0NgCcCKou
9K0QzPH4uFz+TYynwdNpxbY=
=a/OR
-END PGP SIGNATURE-

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Ask again about comparing cast string to date

2004-06-18 Thread Michael Stassen
Cao, Wenhong wrote:
I am trying to select the records where the field activationtimestamp is
within a certain range. Unfortunately the field activationtimestamp is
defined as character(14) in the table. 
Perhaps you should change the type of activationtimestamp:
  ALTER TABLE SubscriptionVersion
  CHANGE activationtimestamp activationtimestamp DATETIME;
mysql> select * from SoutheastDB.SubscriptionVersion where
date(activationtimestamp) > DATE_ADD('2004061807', INTERVAL 5 HOUR)
and activationtimestamp <= '20040619065959' limit 1;
ERROR 1064: You have an error in your SQL syntax near
'(activationtimestamp) > DATE_ADD('2004061807', INTERVAL 5 HOUR) and
activati' at line 1
The DATE() function was added in version 4.1.1.  From the error message, I 
expect you're using an earlier version.  In any case, DATE() doesn't do what 
you want.  It extracts the date portion of the input, throwing away the time 
part.  Not very useful when you want to compare to a DATETIME.

It seems to me it would be just as easy to type '2004061812' as 
DATE_ADD('2004061807', INTERVAL 5 HOUR).

  SELECT * from SoutheastDB.SubscriptionVersion
  WHERE activationtimestamp > '2004061812'
  AND activationtimestamp <= '20040619065959'
  LIMIT 1;
If you need the DATE_ADD, then you need to reformat the result to match your 
char(14) activationtimestamp column:

  SELECT * FROM SoutheastDB.SubscriptionVersion
  WHERE activationtimestamp >
  DATE_FORMAT(DATE_ADD('2004061807',INTERVAL 5 HOUR), '%Y%m%d%H%i%s')
  AND activationtimestamp <= '20040619065959'
  LIMIT 1;
On the other hand, if you change activationtimestamp to a DATETIME, you 
could simply

  SELECT * FROM SoutheastDB.SubscriptionVersion
  WHERE activationtimestamp > DATE_ADD('2004061807',INTERVAL 5 HOUR)
  AND activationtimestamp <= '20040619065959'
  LIMIT 1;
Thanks,
Wen
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: MySQL SQL Case Statement

2004-06-18 Thread Paul McNeil
I have used the CASE statement for ordering many times.  It's very useful...

SELECT
myDATA,
CASE
WHEN data2 = SomeValue
THEN 0
WHEN data2 = SomeOtherValue
THEN 1
ELSE
2
END
AS mySort
FROM MyTable WHERE myConstraints.


God Bless

Paul C. McNeil
Developer in Java, MS-SQL, MySQL, and web technologies.














GOD BLESS AMERICA!
To God Be The Glory!

-Original Message-
From: Rafi Sheikh [mailto:[EMAIL PROTECTED]
Sent: Friday, June 18, 2004 12:21 PM
To: '[EMAIL PROTECTED]'
Subject: MySQL SQL Case Statement


Hi list.  Can someone give me a working example of a SQL CASE statement(in
SELECT)?  I have examples for stored procedures, but none for use within the
DML side of SQL.

I am trying to in my SELECT (used in PHP) create a variable/col for example:
 status='high' where value in a other col is withinin certain range.

I am sure that I do not have the order of the syntax or complete code.  I am
using MySQL 4.0.17, and a newbie.

Any simple example will help.

Please note I am not requesting for use within a procedure or trigger, etc,
just a simple DML side SELECT.

TIA

RS






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Web Clustering...

2004-06-18 Thread Mike Miller
By client-side, I meant sending inserts/updates to the 
master server.  This would be done in the program/script upon an insert 
query.

A daemon could forward insert/update requests to a master and all others 
round-robin and simply pass packets.  This is if you have to make it 
seamless.  Would require some work to make one though.

-Mike

From: Jeff Smelser [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: MySQL & Web Clustering...
Date: Fri, 18 Jun 2004 11:43:05 -0500
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Friday 18 June 2004 11:05 am, Mike Miller wrote:
 Unless you have a specific need for it, you could save yourself a lot of
 trouble by putting select tables or databases or even clients on each
 server.  This also means you don't incur the added overhead of keeping 
the
 database in sync, creating actions if a master goes down, etc.  Then 
just
 tell client1 to use database5 as their hostname for example.  
Replication
 requires that updates go to the master, which requires client-side code, 
or
 an intermediate daemon which analyzes the statement and forwards the
 request.

Daemons? Client side code? Can you explain this please?
mysql handles all the replication. Depending on your setup, client won't 
know
the difference. Its really all depends on how your replication is gonna be
set up.
- --
		SUSHIDO--The way of the Tuna.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL SQL Case Statement

2004-06-18 Thread Michael Stassen
The CASE function is documented in the manual 
http://dev.mysql.com/doc/mysql/en/Control_flow_functions.html.

  SELECT CASE
WHEN other_col < 100 THEN 'low'
WHEN other_col BETWEEN 100 AND 1000 THEN 'medium'
WHEN other_col > 1000 THEN 'high'
END AS col
  FROM your_table;
Michael
Rafi Sheikh wrote:
Hi list.  Can someone give me a working example of a SQL CASE statement(in
SELECT)?  I have examples for stored procedures, but none for use within the
DML side of SQL.
I am trying to in my SELECT (used in PHP) create a variable/col for example:
 status='high' where value in a other col is withinin certain range.
I am sure that I do not have the order of the syntax or complete code.  I am
using MySQL 4.0.17, and a newbie.
Any simple example will help.
Please note I am not requesting for use within a procedure or trigger, etc,
just a simple DML side SELECT.
TIA
RS

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Clustered Index - Where is data inserted ?

2004-06-18 Thread Paul Chu
Hi,

Can someone explain how the data is stored in a table using a clustered
index.

Does this mean that the data is inserted in the .myd file in sorted
index order?

If so, how is space made to insert the new records?
Does this make inserting records slow because data is being inserted in
physical sort order in the table?

Thanks, Paul



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Full Text Index on Large Tables

2004-06-18 Thread Paul Chu
Hi,

If I have a table with 100 - 200 million rows and I want to search
for records with specific characteristics.

Ex.
Skills varchar(300)
Skill id's: 10  15
Accounting, finance, etc.

Is it advisable to create a field with skill ids and then use the
Skills column in a full text index?
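
For reference, a full-text setup on that column would look roughly like
this (the table name is invented for the example; FULLTEXT requires a
MyISAM table):

```sql
-- Build the full-text index on the Skills column
ALTER TABLE resumes ADD FULLTEXT (Skills);

-- Then search with MATCH ... AGAINST
SELECT * FROM resumes
WHERE MATCH (Skills) AGAINST ('accounting finance');
```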

Thanks for your help, Paul




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Fastest way to load a master table removing duplicates

2004-06-18 Thread Paul Chu
Hi all,

 

I want to load a Master table (millions of rows) from other sources with a
unique index on e.g. ssn (social security number).

If inserted records have a duplicate SSN I don't want to insert those,
but put them in a duplicate table or flag them.

The primary key will be an auto-increment field. There will be other
indexes, such as zipcode.

What is the fastest way to load these rows and remove duplicates?
Assume I load the rows to be inserted into another table.

1.  Check if the ssn already exists before inserting the row?

2.  Insert the row and ignore duplicates using

    insert into master ( ... )
      select ... from loaddata

I have lots of files with data that can be saved to load tables and
then inserted into the master table.
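
Option 2 might be sketched like this (column names invented for the
example; the unique index on ssn is what makes INSERT IGNORE skip the
duplicates):

```sql
-- Record the duplicates first, so they can be inspected later
INSERT INTO duplicates (ssn)
  SELECT l.ssn FROM loaddata l, master m WHERE l.ssn = m.ssn;

-- Then bulk-insert; rows whose ssn already exists are silently skipped
INSERT IGNORE INTO master (ssn, zipcode)
  SELECT ssn, zipcode FROM loaddata;
```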

 

 

Regards, Paul

 

 



Re: Dying query ....

2004-06-18 Thread Heikki Tuuri
Ivan,

- Original Message - 
From: Ivan Latysh [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, June 17, 2004 7:44 PM
Subject: Dying query 


 Hello!

 I am running:
 Server version  4.1.2-alpha-max-log
 On linux RedHat.

 When I execute a simple select, it dies, with no error message
 reported; the query simply dies together with the server.
 After the query has died the entire server hangs. I can't even shut it down.
 The query shows as *** DEAD *** in mysql processlist.

 Here is a query:
 select contest_id, question_id, answer_id, answer,
 count(distinct(answer_id)) as count from user_answer
 group by question_id, answer_id, answer
 order by question_id, answer_id

 The table is:
 CREATE TABLE `user_answer` (
   `user_id` int(11) NOT NULL default '0',
   `contest_id` int(11) NOT NULL default '0',
   `question_id` int(11) NOT NULL default '0',
   `answer_id` int(11) NOT NULL default '0',
   `answer` char(255) NOT NULL default '',
  `date` datetime NOT NULL default '0000-00-00 00:00:00',
   KEY `user_id` (`user_id`),
   KEY `contest_id` (`contest_id`),
   KEY `question_id` (`question_id`),
   KEY `answer_id` (`answer_id`),
   KEY `date` (`date`),
   CONSTRAINT `0_3488` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`),
   CONSTRAINT `0_3489` FOREIGN KEY (`contest_id`) REFERENCES `contest`
 (`id`),
   CONSTRAINT `0_3490` FOREIGN KEY (`question_id`) REFERENCES `question`
 (`id`),
   CONSTRAINT `0_3491` FOREIGN KEY (`answer_id`) REFERENCES `answer` (`id`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 A real table has 52000 records.

 I tested it on a 4.0.1 version - the result is same.

 Any idea what is wrong with it?

please run CHECK TABLE on that table. Does it report corruption or does
mysqld print anything to the .err log?
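For reference, the check being suggested is:

```sql
-- verifies the table and reports any corruption found:
CHECK TABLE user_answer;
```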

I tested your query with a small data set on 4.1.3, and it ran fine.

 --
 With best regards,
 Ivan Latysh.

Best regards,

Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/


[EMAIL PROTECTED]:~/mysql-4.1/client> ./mysql test
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.3-beta-debug-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create table user(id int not null primary key) type=innodb;
Query OK, 0 rows affected, 1 warning (0.16 sec)

mysql> create table contest(id int not null primary key) type=innodb;
Query OK, 0 rows affected, 1 warning (0.03 sec)

mysql> create table question(id int not null primary key) type=innodb;
Query OK, 0 rows affected, 1 warning (0.02 sec)

mysql> create table answer(id int not null primary key) type=innodb;
Query OK, 0 rows affected, 1 warning (0.02 sec)

mysql> insert into user values (10);
Query OK, 1 row affected (0.02 sec)

mysql> insert into contest values (10);
Query OK, 1 row affected (0.00 sec)

mysql> insert into question values (10);
Query OK, 1 row affected (0.00 sec)

mysql> insert into answer values (10);
Query OK, 1 row affected (0.00 sec)

mysql>
mysql>
mysql>
mysql>
mysql> CREATE TABLE `user_answer` (
    ->   `user_id` int(11) NOT NULL default '0',
    ->   `contest_id` int(11) NOT NULL default '0',
    ->   `question_id` int(11) NOT NULL default '0',
    ->   `answer_id` int(11) NOT NULL default '0',
    ->   `answer` char(255) NOT NULL default '',
    ->   `date` datetime NOT NULL default '0000-00-00 00:00:00',
    ->   KEY `user_id` (`user_id`),
    ->   KEY `contest_id` (`contest_id`),
    ->   KEY `question_id` (`question_id`),
    ->   KEY `answer_id` (`answer_id`),
    ->   KEY `date` (`date`),
    ->   CONSTRAINT `0_3488` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`),
    ->   CONSTRAINT `0_3489` FOREIGN KEY (`contest_id`) REFERENCES `contest` (`id`),
    ->   CONSTRAINT `0_3490` FOREIGN KEY (`question_id`) REFERENCES `question` (`id`),
    ->   CONSTRAINT `0_3491` FOREIGN KEY (`answer_id`) REFERENCES `answer` (`id`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.11 sec)

mysql> insert into user_answer values (1, 1, 1, 1, 'kukkuuu', '2004-06-18');
ERROR 1216 (23000): Cannot add or update a child row: a foreign key constraint fails
mysql> insert into user_answer values (10, 10, 10, 10, 'kukkuuu', '2004-06-18');
Query OK, 1 row affected (0.00 sec)

mysql> insert into user_answer values (10, 10, 10, 10, 'kukkuuu', '2004-06-18');
Query OK, 1 row affected (0.00 sec)

mysql> insert into user_answer values (10, 10, 10, 10, 'kukkuuu', '2004-06-18');
Query OK, 1 row affected (0.00 sec)

mysql> insert into user_answer values (10, 10, 10, 10, 'kukkuuu', '2004-06-18');
Query OK, 1 row affected (0.00 sec)

mysql> insert into user_answer values (10, 10, 10, 10, 'kukkuuu', '2004-06-18');
Query OK, 1 row affected (0.00 sec)

mysql> insert into user_answer values (10, 10, 10, 10, 

Fresh 4.1.2 install on Redhat 9

2004-06-18 Thread jabbott


Hello List,
I have done some googling around but can't find an answer to this one.  Brand new box, 
installed with RedHat 9 and trying to run 4.1.2.  This is what I get.

--ja

[EMAIL PROTECTED] mysql]# scripts/mysql_install_db --user mysql --log
Installing all prepared tables
040618 11:06:51  Warning: Asked for 196608 thread stack, but got 126976
./bin/mysqld: File './mySnort.log' not found (Errcode: 13)
040618 11:06:51  Could not use mySnort.log for logging (error 13). Turning
logging off for the whole duration of the MySQL server process. To turn it
on again: fix the cause, shutdown the MySQL server and restart it.
./bin/mysqld: File './mySnort-bin.1' not found (Errcode: 13)
040618 11:06:51  Could not use mySnort-bin for logging (error 13). Turning
logging off for the whole duration of the MySQL server process. To turn it
on again: fix the cause, shutdown the MySQL server and restart it.
ERROR: 1049  Unknown database 'mysql'
040618 11:06:51  Aborting

040618 11:06:51  ./bin/mysqld: Shutdown Complete

Installation of system tables failed!

Examine the logs in ./data for more information.
You can also try to start the mysqld daemon with:
./bin/mysqld --skip-grant 
You can use the command line tool
./bin/mysql to connect to the mysql
database and look at the grant tables:

shell> ./bin/mysql -u root mysql
mysql> show tables

Try 'mysqld --help' if you have problems with paths. Using --log
gives you a log in ./data that may be helpful.

The latest information about MySQL is available on the web at
http://www.mysql.com
Please consult the MySQL manual section: 'Problems running
mysql_install_db',
and the manual section that describes problems on your OS.
Another information source is the MySQL email archive.
Please check all of the above before mailing us!
And if you do mail us, you MUST use the ./bin/mysqlbug script!
[EMAIL PROTECTED] mysql]# 






RE: DROP TEMPORARY TABLE and implicit commits

2004-06-18 Thread Michael McTernan
Hi,

I guess since it is documented, it is a new feature - I agree with the
principle of not backporting it.

Many thanks for the reply - can't wait for 4.1 to mature :-)

Thanks,

Mike

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
 Sent: 15 June 2004 13:00
 To: Mysql
 Subject: Re: DROP TEMPORARY TABLE and implicit commits


 Michael,

 - Original Message -
 From: Michael McTernan [EMAIL PROTECTED]
 To: Mysql [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Monday, June 14, 2004 9:40 PM
 Subject: DROP TEMPORARY TABLE and implicit commits


  Hi there,
 
  I'm using MySQL 4.0.18 and finding that DROP TEMPORARY TABLE is performing
  an implicit commit, as is documented.
 
  The problem is that I have a complex query in a Java function, and I need
  to call it a number of times to get the overall result, which I also want
  to do as a single transaction so as not to leave data inconsistent at any
  point.
 
  The implicit commit is causing me a problem here; the only solution I can
  think of is to use 4.1.0 (which is alpha) or to do some horrible table
  name mangling for the temp table and just accumulate a lot of data for
  the duration of the transaction.
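The implicit-commit pitfall described above can be sketched like this (table names hypothetical):

```sql
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

CREATE TEMPORARY TABLE scratch (v INT);
-- ... intermediate work in scratch ...
DROP TEMPORARY TABLE scratch;  -- in 4.0 this implicitly commits

ROLLBACK;  -- too late: the UPDATE above is already committed
```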
 
  Does anyone know if it is planned to back-port the fix in 4.1.0 onto
  4.0.21 or later?

 I am sorry, 4.0 is frozen from new features. The backport will probably
 not happen.

  From 4.1.0 changelog:
 
   - DROP TEMPORARY TABLE now drops only temporary tables and doesn't end
  transactions.
 
  Thanks,
 
  Mike

 Best regards,

 Heikki
 Innobase Oy
 InnoDB - transactions, row level locking, and foreign keys for MySQL
 InnoDB Hot Backup - a hot backup tool for InnoDB which also backs
 up MyISAM
 tables
 http://www.innodb.com/order.php

 Order MySQL support from http://www.mysql.com/support/index.html




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









RE: Mysqld stalls

2004-06-18 Thread Michael McTernan
Dear Mark,

You should be tweaking your mailer such that your mails originate from
[EMAIL PROTECTED], and not my own email address.

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 27 May 2004 10:00
 To: [EMAIL PROTECTED]
 Subject: Mysqld stalls


 I've been trying to fine tune my mysqld settings in my.cnf on a very busy
 server. It seems to be doing fine, as in the server loading. BUT every so
 often I see the number of processes spike and then it seems mysqld is
 unresponsive through httpd and I end up having to restart mysqld to get it
 going again.

 What settings should I be tweaking ?

 Mark Susol




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









RE: MySQL4 and phpBB

2004-06-18 Thread Michael McTernan
Dear Mark,

 Is anyone experiencing problems with MySQL 4.0.17-max (or other
 versions) and a lot of I/O-type errors on large databases, especially
 running phpBB?

This can be fixed by sending emails from your own email address
([EMAIL PROTECTED]), and not mine.

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 19 April 2004 10:22
 To: [EMAIL PROTECTED]
 Subject: MySQL4 and phpBB


 I have recently setup a new server running EnsimPro 3.7 (Fedora) and
 upgraded the MySQL to 4.0.17-max per their tech notes.

 Now, I'm having trouble with tables related to my busiest site
 becoming corrupt. The site in question uses phpBB. We've checked the hard
 drive out and don't think we have an issue with it. We did map out any bad
 blocks on install. The corruption is only with this one site.

  My sys admin believes this may be some I/O problem at the kernel level
  with this version of MySQL. The phpBB has been very stable for years,
  even on a RaQ4 running RH6.2 and MySQL 4.0.15, albeit that server could
  barely handle the load. But since moving to this new install I'm
  struggling with problems.

 Is anyone experiencing problems with MySQL 4.0.17-max (other
 versions) and a
 lot of I/O type errors on large databases or especially running phpBB?

 Yes I'm asking on Ensim  phpBB support sites for help as well,
 but this may
 also be a compile/module issue with MySQl and the server.


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









Professional certification

2004-06-18 Thread Brian Mansell
I took the certification exam this morning and passed. When should I
expect to receive the certificate (and other items) in the mail?

--bmansell




RE: PHP + MySQL Not Playing Nice Any More

2004-06-18 Thread Michael McTernan
Dear Mark,

Please set your from address to [EMAIL PROTECTED], and don't
use my email address.

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 16 April 2004 10:21
 To: Joseph A. Nagy, Jr.; MySQL General
 Subject: Re: PHP + MySQL Not Playing Nice Any More


 On 4/15/04 11:46 PM, Joseph A. Nagy, Jr.
 [EMAIL PROTECTED] wrote:

  So I'm using PHP and MySQL to serve up a game and all is going
 well  until
  today. The first problem came when for some reason the game was sending
  apache as the username to access the db (which is not what I have in the
   dbconnect file) and output some errors. I checked the page it was
   complaining about and all was good, so I used phpMyAdmin to log in and
   admin my db; but now even phpMyAdmin won't let me in, and I know I
   haven't touched the config file since I first set it up.
 
  Does MySQL do this very often or is it a PHP error and not a
 MySQL one and
  if this is a PHP error where do I look to fix it?

 This seems more like a change was made in your hosting environment,
 unrelated to php or mysql. I've seen this happen when sites were
 moved into
 safe moded environments.


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









RE: Dumping mass inserts?

2004-06-18 Thread Michael McTernan
Dear Mark,

 And when I go to move them to the new db, I do use the mysqldump command
 since I am dumping to the new db?

You need to correctly setup your mailer such that the from address is
[EMAIL PROTECTED], and not my email address.

Thanks,

Mike


 -Original Message-
 From: Michael McTernan
 Sent: 09 April 2004 18:30
 To: James E Hicks III; [EMAIL PROTECTED]
 Subject: Re: Dumping mass inserts?


  Ah..I see now. I was still using mysqldump..instead of mysql. Yes I'm
  trying to import the data from backup, into an empty table
 since the data
  was corrupt.
 
  Looks like you are on the right track now! :)
 
  James

 Ok now I want to dump tables with names starting with phpads so I can
 move them to another db. How is the mysqldump command run then?

 And when I go to move them to the new db, I do use the mysqldump command
 since I am dumping to the new db?

 Mark


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









RE: Corruption and my.cnf

2004-06-18 Thread Michael McTernan
Dear Mark,

I think your emailer has also experienced corruption since your from address
is actually mine.  Please set it to your own email address, such as
[EMAIL PROTECTED]

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 09 April 2004 12:18
 To: [EMAIL PROTECTED]
 Subject: Re: Corruption and my.cnf


  I've experienced more corruption lately on my main site since I moved to
  my own server running 4.0.17-max. The site is very busy (60GB a month)
  and the tables are large. I didn't have this level of problems on the
  rental server.
 
  What are the variables to look into regarding why my tables are getting
  corrupt? Is this a my.cnf issue? Is this a memory issue? Is this a hard
  drive issue? Is this a too many connections issue?

 Looking at my dmesg output..

 end_request: I/O error, dev 03:42 (hdb), sector 52228450
 hdb: read_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
 hdb: read_intr: error=0x40 { UncorrectableError },
 LBAsect=56276830, high=3,
 low=5945182, sector=52228450

 Is it possible this is related to my MySQL table corruption issues? There
 are more of these in the file.


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









RE: Using PHP to copy tables

2004-06-18 Thread Michael McTernan
Dear Mark,

You also seem to have used my email address before.  Please correctly
configure your mailer such that the from address is correctly reported as
[EMAIL PROTECTED], and not mine.

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 08 April 2004 18:36
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: Using PHP to copy tables


   Is there an easy way to create an HTML page that will copy selected
   tables to backup copies on the same server? I want to create an
   administration page for my client to be able to back up their database
   whenever they see fit. But I can't give them direct access to the MySQL
   server and don't want them backing up to their hard drive. I prefer to
   simply copy the tables to backup versions on the server so that if
   problems arise, I can log into the server and simply copy the backups
   to the originals.

 Here is what I have used before... but this might be for you to run as a
 cron task and be transparent to them.

 http://www.silisoftware.com/scripts/index.php?scriptname=backupDB


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









RE: Too Many Connections

2004-06-18 Thread Michael McTernan
Dear Mark,

The best way to fix this is by correctly setting your from address in your
mailer to [EMAIL PROTECTED]

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 08 April 2004 10:33
 To: [EMAIL PROTECTED]
 Subject: Too Many Connections


  What is the best way to diagnose the root cause of this error? Which
  scripts are doing the connecting, and how do I total them up?

 Warning: mysql_connect(): User ultimated has already more than
 'max_user_connections' active connections

  I have a very active phpBB, but I'm on a new server and it's not pulling
  a server load over 0.5. I ran some data before (a crontab PHP script
  gathered the info for me every 5 minutes for several weeks) and the
  problem happened before, related to server loading, not necessarily how
  many users I had on that site posting. That was an older Cobalt RaQ4. I
  seemed to be having a lot of search bots accessing the site then.

 [mysqld]
 set-variable = max_connections=512
 set-variable = max_user_connections=200
 set-variable = key_buffer=64M
 set-variable = table_cache=256
 set-variable = sort_buffer=4M
 set-variable = wait_timeout=300

  I've only had this problem this week; it's run fine for 3 weeks. I do
  have a corrupted MYI file according to myisamchk.


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









Best practice question

2004-06-18 Thread Randy Paries
Hello All,

I have a design question

I have a table that represents a Newsletter page.

This newsletter can be single or 2 columns

Here is my question.

Currently the table is defined as:
Int  articleID
Int  pagenum
Text title
Text body
Text title2
Text body2

So here is my question. 
1)there will never be more than two columns
2)the majority of articles will only be one column

Does having two empty text columns in the majority of rows outweigh the
penalty of a join with a Column table?

Do empty text columns take up much room?
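For comparison, the join-based alternative being weighed here might look like this (a sketch with assumed names, not a recommendation either way):

```sql
CREATE TABLE article (
  articleID INT NOT NULL PRIMARY KEY,
  pagenum   INT NOT NULL
);

CREATE TABLE article_column (
  articleID INT NOT NULL,
  colnum    TINYINT NOT NULL,   -- 1 or 2
  title     TEXT,
  body      TEXT,
  PRIMARY KEY (articleID, colnum)
);

-- a one-column article stores a single row in article_column,
-- and reading a page becomes a join:
SELECT a.pagenum, c.colnum, c.title, c.body
FROM article a
JOIN article_column c ON c.articleID = a.articleID
WHERE a.articleID = 42;
```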

Thanks for any help

randy 


smime.p7s
Description: S/MIME cryptographic signature


Re: load data into 2 tables and set id

2004-06-18 Thread SGreen

 welcome to a basic overview of bulk importing and normalizing as you go 

[ author's note: if you are seeing this thread for the first time and
certain items seem to be
introduced out of context, please review all previous posts in this thread.
There has been
a lot of information already exchanged on this topic that I clipped
out of this response. Thanks! -- SG]

In an earlier post you said
The data values I have for each record are:

user_id   date_time   size   url   category

for example:

u752359   2004-04-02 12:33:04   3403   http://www.mysql.com/index.html
business


To me that implies that you have a standard text log where each field is
separated by a space and rows are delimited by a CRLF pair. To get that log
into MySQL you will need to do something _like_ (I say like as you will
most likely need to tweak it to accommodate your actual data)

CREATE TABLE bulk_table (
  user varchar(10)
  , eventdate date
  , eventtime time
  , size int
  , url text
  , category varchar(50)
);

LOAD DATA INFILE '<a proxy log file>' INTO TABLE bulk_table
FIELDS TERMINATED BY ' ' LINES TERMINATED BY '\r\n';

Now we can start merging the logs into your data tables.

To quote RFC 2396:
   The URI syntax does not require that the scheme-specific-part have
   any general structure or set of semantics which is common among all
   URI.  However, a subset of URI do share a common syntax for
   representing hierarchical relationships within the namespace.  This
   generic URI syntax consists of a sequence of four main components:

  <scheme>://<authority><path>?<query>

   each of which, except scheme, may be absent from a particular URI.

That translates into 4 logical pieces we can split a URL into:
1) the scheme -- HTTP, FTP, GOPHER, etc...
2) the authority -- that is the server (www.yahoo.com) and any port numbers
or login information
3) the path -- /somefolder/somefile.whatever
4) the query -- everything after the ?

Not part of the generic URI is the bit known as a fragment (identified by
the #); it is ALWAYS at the end of the _entire URL_ (including the query)
when it's used. I have always lumped fragments and queries into the same field.

So you could create tables for each of those parts and get VERY normalized
or you can partly normalize like this:

ALTER TABLE bulk_table add server_split int not null default 0
  , add path_split int not null default 0
  , add server varchar(255)
  , add path varchar(255)

UPDATE bulk_table
set server_split = LOCATE('/', URL , 8)-1;

UPDATE bulk_table
SET path_split = IF(LOCATE('?', URL, server_split) > 0, LOCATE('?', URL,
server_split), LOCATE('#', URL, server_split)-1);

UPDATE bulk_table
SET server = LEFT(URL, server_split);
# (a similar UPDATE filling the `path` column from URL is needed here,
# since later statements join on bt.path)

#those 4 new columns helped us to parse out the 3 major parts of the url
#I added them to the table so that we would not have to keep recalculating
those values later on
# if it turns out that adding the columns takes a LONG time, we can create
this table with those columns
# already created and just not import to them (change the LOAD DATA INFILE
statement slightly)

CREATE TABLE IF NOT EXISTS url_servers (
ID int not null auto_increment,
server varchar(255) primary key,
Key (ID)
)

CREATE TABLE IF NOT EXISTS url_paths (
ID int not null auto_increment,
path varchar(255) primary key,
Key (ID)
)

INSERT IGNORE INTO url_servers(server)
SELECT DISTINCT server
FROM bulk_table

INSERT IGNORE INTO url_paths (path)
SELECT DISTINCT path
FROM bulk_table

# at this point we have all of our new Servers and our Paths uniquely
numbered
# but we are going to need a slightly different URL table to track visits.

CREATE TABLE url_visit (
urlid mediumint not null auto_increment primary key,
url_server_ID int not null default 0,
url_path_ID int not null default 0,
querystring text default null,
category varchar(50) default null,
KEY(url_server_ID, url_path_ID)
)

## that last key is to speed up our joins to our _servers and _paths
tables...

#  we finally have enough information to insert to the visit table

INSERT url_visit (url_server_ID, url_path_ID, querystring, category)
SELECT us.ID, up.ID, IF(bt.path_split > 0, SUBSTRING(bt.url, bt.path_split), NULL),
bt.category
FROM bulk_table bt
INNER JOIN url_servers us
  ON us.server = bt.server
INNER JOIN url_paths up
  on up.path=bt.path

## (see where the new pre-computed columns come in handy?) :-D

# and now we have enough information to load the internet_usage table.
Though there isn't enough data in your sample
# source data to fill in all of the columns

INSERT internet_usage ( uid,`time`,urlid, size)
SELECT bt.user, ADDTIME(bt.eventdate, bt.eventtime), uv.urlid, bt.size
FROM bulk_table bt
INNER JOIN url_servers us
  ON us.server = bt.server
INNER JOIN url_paths up
  ON up.path=bt.path
INNER JOIN url_visit uv
  ON uv.url_server_ID = us.ID
  AND uv.url_path_ID = up.id
  AND uv.querystring = IF(bt.path_split > 0, SUBSTRING(bt.url, bt.path_split), NULL)


It may not be perfect but it's how 

RE: GROUP BY across UNION

2004-06-18 Thread Michael McTernan
Hi John,

Depending on the size of your datasets, you could merge the data into a
TEMPORARY table and then compute from there?  If the temp table is small
enough it will fit in RAM as a heap table, and will probably be more
efficient than fetching all the results and computing them in code.  Of
course, if the dataset is large enough, the temporary table will hit the
disc, and then it will be very inefficient.
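In MySQL 4.0 the suggestion might look like this, reusing the table names from the original question (TYPE=HEAP keeps the temporary table in memory):

```sql
CREATE TEMPORARY TABLE combined TYPE=HEAP
  SELECT id_field, avg FROM table_a;

INSERT INTO combined
  SELECT id_field, avg FROM table_b;

SELECT id_field, AVG(avg), STD(avg)
FROM combined
GROUP BY id_field;

DROP TABLE combined;
```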

Thanks,

Mike

 -Original Message-
 From: John McCaskey [mailto:[EMAIL PROTECTED]
 Sent: 24 February 2004 17:08
 To: [EMAIL PROTECTED]
 Subject: RE: GROUP BY across UNION


 Nope, this would yield a 'Column: 'avg' in field list is ambiguous'
 error.  I'm still trying to work out a better way of doing this.  I also
 need to get standard deviations now, and the method I mentioned in my
 original post doesn't even work for that.  So now I'm left with actually
 getting all the values and computing them in code.  Very sub optimal.

 John A. McCaskey


 -Original Message-
 From: Ligaya Turmelle [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, February 18, 2004 6:30 PM
 To: [EMAIL PROTECTED]
 Subject: Re: GROUP BY across UNION


 I am pretty new myself but can't you do it with a join? Like SELECT
 AVG(avg) FROM table_a, table_b GROUP BY id_field.

 Respectfully,
 Ligaya Turmelle

 John McCaskey [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 First of all I am using mysql 4.0.18 and I am not free to upgrade to 4.1
 or 5.0.



 I have two tables:



 table_a, and table_b these two tables have the same structure:

  CREATE TABLE table_a (

   id_field mediumint(8) unsigned NOT NULL,

   avg float default NULL

 )



 What I want to do is get the combined avg across table_a and table_b for
 each id value.  So I tried doing

 (SELECT AVG(avg) FROM table_a)

 UNION

 (SELECT AVG(avg) FROM table_b)

 GROUP BY id_field;



 This however doesn't work.  I can see why this is thinking about how a
 union should work mathematically but I'm left with no apparent way to
 directly get the avg across two tables.  I know that I can take the avg
 from each along with the row count and then do a weighted average using
 those values, but I'd prefer not to have to do that.  Any suggestions or
 am I stuck doing individual group bys on each table uninoning the
 results and then doing weighted averages on the matching pairs?



 Here is what I'm talking about doing with the weighted averages incase
 it is unclear:

 (SELECT AVG(avg) AS avg_a, COUNT(id_field) AS count_a FROM table_a GROUP
 BY id_field)

 UNION

 (SELECT AVG(avg) AS avg_b, COUNT(id_field) AS count_b FROM table_b GROUP
 BY id_field);



 Then I would match up the results and compute total_avg = (avg_a*count_a
 + avg_b*count_b)/(count_a+count_b).  This is not nearly as clean as I
 would like.







 John A. McCaskey






 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]










RE: GROUP BY across UNION

2004-06-18 Thread John McCaskey
Hi Mike, 

This is a good suggestion.  We ended up changing the requirements to not
require the functionality I was trying to develop at the time.  However,
I did just change a temporary table I'm using for a similar process to
HEAP and saw a very nice performance improvement.  Should have thought
about that before.

Thanks,

John A. McCaskey


-Original Message-
From: Michael McTernan [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 18, 2004 11:16 AM
To: John McCaskey
Cc: [EMAIL PROTECTED]
Subject: RE: GROUP BY across UNION


Hi John,

Depending on the size of your datasets, you could merge the data into a
TEMPORARY table and then compute from there?  If the temp table is small
enough it will fit in RAM as a heap table, and will probably be more
efficient than fetching all the results and computing them in code.  Of
course, if the dataset is large enough, the temporary table will hit the
disc, and then it will be very inefficient.

Thanks,

Mike


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: GROUP BY across UNION

2004-06-18 Thread SGreen

I agree with Mike.

In order to use the aggregate functions across both tables' worth of data
you will have to combine them into one large table. That can be a real
table or a temporary one but in either case you are stuck combining them
BEFORE the calculations or MySQL won't be able to crunch the numbers
correctly.

One way not mentioned yet could be to use a derived table (a temp table
built into the query):

SELECT id_field, AVG(avg), STD(avg), MIN(avg)
FROM ((SELECT id_field, avg FROM table_a) UNION ALL
      (SELECT id_field, avg FROM table_b)) AS combined
GROUP BY id_field

If MySQL had FULL OUTER JOIN, you could use that too but that feature is
not released yet.
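The derived-table pattern can be sanity-checked in miniature. This sketch uses Python's sqlite3 purely as a stand-in for MySQL; the sample rows are invented, and UNION ALL is used so duplicate rows still count toward the average:

```python
import sqlite3

# Illustration only: sqlite3 stands in for MySQL, and the sample rows
# are invented.  Table/column names follow the thread.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
for t in ("table_a", "table_b"):
    cur.execute(f'CREATE TABLE {t} (id_field INTEGER NOT NULL, "avg" REAL)')
cur.executemany("INSERT INTO table_a VALUES (?, ?)",
                [(1, 10.0), (1, 20.0), (2, 5.0)])
cur.executemany("INSERT INTO table_b VALUES (?, ?)",
                [(1, 30.0), (2, 15.0)])

# UNION ALL keeps duplicate rows (plain UNION would drop them and skew
# the average); the derived table gets an alias ("combined").
rows = cur.execute(
    'SELECT id_field, AVG("avg") FROM '
    '  (SELECT id_field, "avg" FROM table_a'
    '   UNION ALL'
    '   SELECT id_field, "avg" FROM table_b) AS combined '
    'GROUP BY id_field ORDER BY id_field'
).fetchall()
print(rows)  # [(1, 20.0), (2, 10.0)]
```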

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   
  
From: Michael McTernan <Michael.McTernan@ttpcom.com>
To: John McCaskey [EMAIL PROTECTED]
cc: [EMAIL PROTECTED]
Date: 06/18/2004 02:15 PM
Subject: RE: GROUP BY across UNION




Hi John,

Depending on the size of your datasets, you could merge the data into a
TEMPORARY table and then compute from there?  If the temp table is small
enough it will fit in RAM as a heap table, and will probably be more
efficient than fetching all the results and computing them in code.  Of
course, if the dataset is large enough, the temporary table will hit the
disk, and then it will be very inefficient.

Thanks,

Mike

 -Original Message-
 From: John McCaskey [mailto:[EMAIL PROTECTED]
 Sent: 24 February 2004 17:08
 To: [EMAIL PROTECTED]
 Subject: RE: GROUP BY across UNION


 Nope, this would yield a 'Column: 'avg' in field list is ambiguous'
 error.  I'm still trying to work out a better way of doing this.  I also
 need to get standard deviations now, and the method I mentioned in my
 original post doesn't even work for that.  So now I'm left with actually
 getting all the values and computing them in code.  Very sub-optimal.

 John A. McCaskey


 -Original Message-
 From: Ligaya Turmelle [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, February 18, 2004 6:30 PM
 To: [EMAIL PROTECTED]
 Subject: Re: GROUP BY across UNION


 I am pretty new myself but can't you do it with a join? Like SELECT
 AVG(avg) FROM table_a, table_b GROUP BY id_field.

 Respectfully,
 Ligaya Turmelle

 John McCaskey [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 First of all I am using mysql 4.0.18 and I am not free to upgrade to 4.1
 or 5.0.



 I have two tables:



 table_a, and table_b these two tables have the same structure:

 CREATE table_a (

   id_field mediumint(8) unsigned NOT NULL,

   avg float default NULL

 )



 What I want to do is get the combined avg across table_a and table_b for
 each id value.  So I tried doing

 (SELECT AVG(avg) FROM table_a)

 UNION

 (SELECT AVG(avg) FROM table_b)

 GROUP BY id_field;



 This however doesn't work.  I can see why, thinking about how a union
 should work mathematically, but I'm left with no apparent way to
 directly get the avg across two tables.  I know that I can take the avg
 from each along with the row count and then do a weighted average using
 those values, but I'd prefer not to have to do that.  Any suggestions, or
 am I stuck doing individual group bys on each table, unioning the
 results, and then doing weighted averages on the matching pairs?



 Here is what I'm talking about doing with the weighted averages in case
 it is unclear:

 (SELECT AVG(avg) AS avg_a, COUNT(id_field) AS count_a FROM table_a GROUP
 BY id_field)

 UNION

 (SELECT AVG(avg) AS avg_b, COUNT(id_field) AS count_b FROM table_b GROUP
 BY id_field);



 Then I would match up the results and compute total_avg = (avg_a*count_a
 + avg_b*count_b)/(count_a+count_b).  This is not nearly as clean as I
 would like.







 John A. McCaskey






 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








DBD::mysql problem

2004-06-18 Thread Kairam, Raj
I am unable to install DBD::mysql in my environment shown below
RedHat Linux 9
perl v5.8.0
DBI-1.42
mysqld Ver 4.0.18-standard

After unzipping and untarring the DBD-mysql-2.9003.tar.gz ( obtained from CPAN ), in 
the DBD-mysql-2.9003 directory
I tried the following
mkdir /tmp/mysql-static
cp /usr/lib/mysql/*.a  /tmp/mysql-static
perl Makefile.PL --libs=-L/tmp/mysql-static -lmysqlclient
make

It fails.
When I looked at the Makefile created by the 'perl Makefile.PL ..' command, I found
there are lines that cause make to fail.
Mostly they are constants that are not properly quoted.

Is there a proper tar.gz file, usable in my (RH Linux 9) environment stated
above, that someone has been able to install successfully?

Any advice or help will be very much appreciated.

Thanks
Raj Kairam


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Tuning mysql performance

2004-06-18 Thread Paolo Audiberti
Hello,

I need some help tuning mysql. I'm running 3.23.58-Max-log on a Red Hat
Linux Enterprise server with 1 gig of memory and 4 cpus. The database is
used by a web application which runs on a separate machine. The performance
is not so good. Can anybody tell me if my configurations are incorrect, and
what I may need to change? My /etc/my.cnf file is as follows:

user= mysql
port= 3306
skip-locking

set-variable= key_buffer=384M
set-variable= max_allowed_packet=1M
set-variable= table_cache=512
set-variable= sort_buffer=2M
set-variable= record_buffer=2M
set-variable= thread_cache=8
# Try number of CPU's*2 for thread_concurrency
set-variable= thread_concurrency=8
set-variable= myisam_sort_buffer_size=64M
##set-variable  = net_buffer_length=8K
set-variable = max_connections = 200
log-bin
server-id   = 1


# Uncomment the following if you are using Innobase tables
##innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
innodb_data_file_path = ibdata1:800M
innodb_data_home_dir = /var/mysql/data/
innodb_log_group_home_dir = /var/log/mysql/
innodb_log_arch_dir = /var/log/mysql/
set-variable = innodb_mirrored_log_groups=1
set-variable = innodb_log_files_in_group=3
##set-variable = innodb_log_file_size=100M
set-variable = innodb_log_file_size=5M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1
innodb_log_archive=0
set-variable = innodb_buffer_pool_size=384M
set-variable = innodb_additional_mem_pool_size=20M
set-variable = innodb_file_io_threads=8
set-variable = innodb_lock_wait_timeout=50


[mysqldump]
quick
set-variable= max_allowed_packet=16M

[mysql]
no-auto-rehash

[isamchk]
set-variable= key_buffer=256M
set-variable= sort_buffer=256M
set-variable= read_buffer=2M
set-variable= write_buffer=2M

[myisamchk]
set-variable= key_buffer=256M
set-variable= sort_buffer=256M
set-variable= read_buffer=2M
set-variable= write_buffer=2M

[mysqlhotcopy]
interactive-timeout

Running SHOW STATUS displays the following:

| Connections  | 723666 |
| Created_tmp_disk_tables  | 6230   |
| Created_tmp_tables   | 10179  |
| Created_tmp_files| 0  |
| Delayed_insert_threads   | 0  |
| Delayed_writes   | 0  |
| Delayed_errors   | 0  |
| Flush_commands   | 1  |
| Handler_delete   | 138|
| Handler_read_first   | 197703 |
| Handler_read_key | 203042308  |
| Handler_read_next| 377071037  |
| Handler_read_prev| 0  |
| Handler_read_rnd | 1377204|
| Handler_read_rnd_next| 913180300  |
| Handler_update   | 193164 |
| Handler_write| 14314266   |
| Key_blocks_used  | 193053 |
| Key_read_requests| 26041126   |
| Key_reads| 178671 |
| Key_write_requests   | 1463800|
| Key_writes   | 1432420|
| Max_used_connections | 15 |
| Not_flushed_key_blocks   | 0  |
| Not_flushed_delayed_rows | 0  |
| Open_tables  | 174|
| Open_files   | 255|
| Open_streams | 0  |
| Table_locks_immediate| 2137916|
| Table_locks_waited   | 437|
| Threads_cached   | 7  |
| Threads_created  | 32 |
| Threads_connected| 1  |
| Threads_running  | 1  |
| Uptime   | 759513 |
+--++

thanks,
Paolo


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Fresh 4.1.2 install on Redhat 9

2004-06-18 Thread gerald_clark
It looks like mysql does not own the data directory.
[EMAIL PROTECTED] wrote:
Hello List,
I have done some googling around but can't find an answer to this one.  Brand new box, 
installed with RedHat 9 and trying to run 4.1.2.  This is what I get.
--ja
[EMAIL PROTECTED] mysql]# scripts/mysql_install_db --user mysql --log
Installing all prepared tables
040618 11:06:51  Warning: Asked for 196608 thread stack, but got 126976
./bin/mysqld: File './mySnort.log' not found (Errcode: 13)
040618 11:06:51  Could not use mySnort.log for logging (error 13). Turning
logging off for the whole duration of the MySQL server process. To turn it
on again: fix the cause, shutdown the MySQL server and restart it.
./bin/mysqld: File './mySnort-bin.1' not found (Errcode: 13)
040618 11:06:51  Could not use mySnort-bin for logging (error 13). Turning
logging off for the whole duration of the MySQL server process. To turn it
on again: fix the cause, shutdown the MySQL server and restart it.
ERROR: 1049  Unknown database 'mysql'
040618 11:06:51  Aborting
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Recommendation on good MySQL books

2004-06-18 Thread Lou Olsten
I really like the Certification Study Guide we ordered just last week.
It's great info that I wish I'd had when I started.  I have no plans to take
the test, but I love the way the info is presented, and the questions at the
end help ensure I got it.

Lou
- Original Message - 
From: Bartis, Robert M (Bob) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, June 17, 2004 1:44 PM
Subject: Recommendation on good MySQL books


 I'm looking for suggestions on books that would help me to improve my
understanding of MySQL operations, admin operations, replication etc. I'm
new to MySQL and am about to embark on supporting a database for my team to
use in recording test results. Any suggestions and recommendations, or ones
to stay away from?

 Thanks in advance
 Bob

 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



EXISTS/NOT EXISTS

2004-06-18 Thread Anton Ivanov
Hi,  I'm trying to figure out how to apply these from the manual, but to
no avail.  Is it possible that my version (4.0.18) does not implement
these?
I have two tables: products and products_by_product_areas.  Both have a
field product.  I try

SELECT product from products WHERE NOT EXISTS (SELECT DISTINCT * from
products_by_product_areas WHERE products_by_product_areas.product =
products.product);

Both of these queries run fine on their own.  It looks to me that I'm
simply adapting from the manual, but all I get is

ERROR 1064: You have an error in your SQL syntax.  Check the manual that
corresponds to your MySQL server version for the right syntax to use
near 'EXISTS (SELECT DISTINCT * from products_by_product_areas WHERE

OK, so I grab the example verbatim from the manual:
SELECT DISTINCT store_type FROM Stores
  WHERE EXISTS (SELECT * FROM Cities_Stores
WHERE Cities_Stores.store_type = Stores.store_type);

And run it.  Same error.  Never mind that I don't have these tables: the
query does not compile.  What is going on?



Re: MySQL Installation Problem

2004-06-18 Thread Kofirowster
First, you'll get more help if you post to the list (Mysql General mailing
list [EMAIL PROTECTED])

All --console does is direct error messages to the console, i.e., the
DOS window you have open.
If you leave it off, the error messages go to a file.
What you describe below
 without --console as para I'm able to start the
 server.
is _not_ the server starting correctly. It's _not_ starting, and the error
message is going to a file when you run it without --console.
If it had started, you would not have gotten back to the prompt; it would
just never come back until the server was stopped elsewhere.


- Original Message -
From: Chukkala Bhaskar [EMAIL PROTECTED]
To: Kofirowster [EMAIL PROTECTED]
Sent: Thursday, June 17, 2004 10:28 PM
Subject: Re: MySQL Installation Problem


 Hi guys,
 I'm new to MySQL.

 I tried the following on my test PC with Win98.
 It gives me an error if I use --console as a parameter.
 Any clue?

 Without --console as a parameter I'm able to start the
 server.

 Regards

 D:\MySQL\bin> mysqld --console
 040618 13:11:07  InnoDB: Operating system error number
 32 in a file operation.
 InnoDB: See http://www.innodb.com/ibman.php for
 installation help.
 InnoDB: See section 13.2 at
 http://www.innodb.com/ibman.php
 InnoDB: about operating system error numbers.
 InnoDB: File name .\ibdata1
 InnoDB: File operation call: 'open'.
 InnoDB: Cannot continue operation.

 D:\MySQL\bin> mysqld

 D:\MySQL\bin>



 --- Kofirowster [EMAIL PROTECTED] wrote:
  I have a windows installation of myql. The links
  Michael gave give correct
  info.
  Here's a synopsis:
  To test a mysql install on windows open the
  mysql/bin dir in a dos window.
  At the prompt type
  mysqld --console
 
  you should get a message back that indicates the
  server is running, such as
  mysqld: ready for connections
 
  open a second dos window at the same location and at
  the prompt type
  mysql
 
  you should get a message back similar to this where
  the x's are replaced by
  the version you are running:
  Welcome to the MySQL monitor.  Commands end with ;
  or \g.
  Your MySQL connection id is 1 to server version:
  x.xx.xx-debug
 
  Type 'help' for help.
 
  mysql
 
  HTH,
  Johnny
  - Original Message -
  From: Michael Stassen
  [EMAIL PROTECTED]
  To: Singh, Bijay [EMAIL PROTECTED]
  Cc: Mysql General mailing list
  [EMAIL PROTECTED]
  Sent: Thursday, June 17, 2004 11:01 AM
  Subject: Re: MySQL Installation Problem
 
 
  
   I'm including the list on this.  In general, you
  will get better and
  faster
   responses if you keep threads on the list.
  
   I've never run the mysql server under Windows, but
  I assume the manual is
   accurate
 
 http://dev.mysql.com/doc/mysql/en/Windows_server_first_start.html
   and
  http://dev.mysql.com/doc/mysql/en/NT_start.html.
  
   You must configure phpmyadmin so that when it
  attempts to connect to
  mysql,
   it uses a username-password combination that mysql
  will accept.  If you
  have
   a mysql user with no password, you can use that,
  but I wouldn't recommend
   allowing any unauthenticated connections to mysql.
   See the manual for
  more
  
 
 http://dev.mysql.com/doc/mysql/en/Default_privileges.html.
  
   Michael
  
   Singh, Bijay wrote:
  
I am using localhost with no password. how do
  check whether MySQL is
   running or not. If not how do I start the MySQL on
  Windows ?
   
   
-Original Message-
From: Michael Stassen
  [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 17, 2004 1:29 PM
To: [EMAIL PROTECTED]
Cc: Singh, Bijay; [EMAIL PROTECTED]
Subject: Re: MySQL Installation Problem
   
   
First, make sure mysql server is running.
   
Second, make sure you have properly configured
  phpmyadmin in file
config.inc.php.  In particular, you have to set
  how phpmyadmin will
  connect
to mysql (user and pass).
   
Michael
   
Daniel Clark wrote:
   
   
   After the phpmyadmin login I got the same error.
   
   In the phpmyadmin config files I had to set the
  local IP address that
   phpmyadmin was on.
   
   
Strange.  Didn't it default to localhost?
   
   
   Dear Michael,
   
   I want to use PHP, Apache, MySQL and phpMyAdmin
   for evaluation but I'm not able to get this right.
   
   
   I am trying to install MySQL with phpMyAdmin and
   I get the error below when viewing

   from :
   
   http://localhost/phpMyAdmin/
   
   
   #2003 - Can't connect to MySQL server on
  'localhost' (10061)
   
   OS - WinXP Pro
   Apache
   phpMyAdmin 2.5.7
   
   I tried reinstalling but was not able to get it working. I
   checked Apache and PHP; they
   work fine, except MySQL.
   
   Any help will be appreciated. Feel free to
  email or contact me on phone
  #
   below.
   
   
   
   
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
 
 http://lists.mysql.com/[EMAIL PROTECTED]
  
  
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
 

Re: EXISTS/NOT EXISTS

2004-06-18 Thread Andrew Pattison
You are using a subquery. Subqueries are only supported in version 4.1 and
later. You will either need to rewrite your query so that it doesn't use a
subquery, or upgrade.

Cheers

Andrew.

- Original Message - 
From: Anton Ivanov [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, June 18, 2004 10:16 PM
Subject: EXISTS/NOT EXISTS


Hi,  I'm trying to figure out how to apply these from the manual, but to
no avail.  Is it possible that my version (4.0.18) does not implement
these?
I have two tables: products and products_by_product_areas.  Both have a
field product.  I try

SELECT product from products WHERE NOT EXISTS (SELECT DISTINCT * from
products_by_product_areas WHERE products_by_product_areas.product =
products.product);

Both of these queries run fine on their own.  It looks to me that I'm
simply adapting from the manual, but all I get is

ERROR 1064: You have an error in your SQL syntax.  Check the manual that
corresponds to your MySQL server version for the right syntax to use
near 'EXISTS (SELECT DISTINCT * from products_by_product_areas WHERE

OK, so I grab the example verbatim from the manual:
SELECT DISTINCT store_type FROM Stores
  WHERE EXISTS (SELECT * FROM Cities_Stores
WHERE Cities_Stores.store_type = Stores.store_type);

And run it.  Same error.  Never mind that I don't have these tables: the
query does not compile.  What is going on?




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: EXISTS/NOT EXISTS

2004-06-18 Thread Michael Stassen
Subqueries require MySQL 4.1.
The manual offers some suggestions on rewriting subqueries as JOINs 
http://dev.mysql.com/doc/mysql/en/Rewriting_subqueries.html.

In your case, you want something like:
  SELECT product FROM products p
  LEFT JOIN products_by_product_areas a ON p.product = a.product
  WHERE a.product IS NULL;
Michael
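The LEFT JOIN / IS NULL rewrite behaves like NOT EXISTS; a quick check (Python's sqlite3 standing in for MySQL 4.0 here, with invented rows; table names follow the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE products (product TEXT)")
cur.execute("CREATE TABLE products_by_product_areas (product TEXT)")
cur.executemany("INSERT INTO products VALUES (?)",
                [("a",), ("b",), ("c",)])
cur.executemany("INSERT INTO products_by_product_areas VALUES (?)",
                [("a",), ("c",)])

# Anti-join: keep products with no matching row in the areas table.
rows = cur.execute(
    "SELECT p.product FROM products p "
    "LEFT JOIN products_by_product_areas a ON p.product = a.product "
    "WHERE a.product IS NULL"
).fetchall()
print(rows)  # [('b',)]
```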
Anton Ivanov wrote:
Hi,  I'm trying to figure out how to apply these from the manual, but to
no avail.  Is it possible that my version (4.0.18) does not implement
these?
I have two tables: products and products_by_product_areas.  Both have a
field product.  I try
SELECT product from products WHERE NOT EXISTS (SELECT DISTINCT * from
products_by_product_areas WHERE products_by_product_areas.product =
products.product);
Both of these queries run fine on their own.  It looks to me that I'm
simply adapting from the manual, but all I get is
ERROR 1064: You have an error in your SQL syntax.  Check the manual that
corresponds to your MySQL server version for the right syntax to use
near 'EXISTS (SELECT DISTINCT * from products_by_product_areas WHERE
OK, so I grab the example verbatim from the manual:
SELECT DISTINCT store_type FROM Stores
  WHERE EXISTS (SELECT * FROM Cities_Stores
WHERE Cities_Stores.store_type = Stores.store_type);
And run it.  Same error.  Never mind that I don't have these tables: the
query does not compile.  What is going on?


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


DBD::mysql::db do failed: Table 'xxxxx' is read only

2004-06-18 Thread Michael Klama
I hope someone can help me with a problem.  I recently changed one of my web
servers over to a new box and this server has an internal MySQL server which
is used to store several web queried databases.  My old server ran Apache
1.3.27 and my new one is running 2.0.49 -  Old MySQL was 3.2.23 and the new
is 4.0.20 - all of my databases were copied over from the old server and all
work except one.  The one that is not working is updated and called by cgi
scripts which request profiles and add profiles to the database.  I am able
to request info and get results but whenever I try to insert data I get an
error DBD::mysql::db do failed: Table 'x' is read only at
/path/to/cgi/script line xxx. referrer /path/to/cgi/script
 
I have checked all of the file permissions and they are all correct.  Can
anyone shed light on what the problem may be.
 
Thank you
Mike



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: EXISTS/NOT EXISTS

2004-06-18 Thread Anton Ivanov
Thanks, I just found that relevant section in the manual, too.  I wish
it were clearer which version the manual documents -- I am using (almost)
the latest stable release, and I would expect the manual to document
that.


-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 18, 2004 2:38 PM
To: Anton Ivanov
Cc: [EMAIL PROTECTED]
Subject: Re: EXISTS/NOT EXISTS

Subqueries require MySQL 4.1.

The manual offers some suggestions on rewriting subqueries as JOINs 
http://dev.mysql.com/doc/mysql/en/Rewriting_subqueries.html.

In your case, you want something like:

   SELECT product FROM products p
   LEFT JOIN products_by_product_areas a ON p.product = a.product
   WHERE a.product IS NULL;

Michael

Anton Ivanov wrote:

 Hi,  I'm trying to figure out how to apply these from the manual, but
to
 no avail.  Is it possible that my version (4.0.18) does not implement
 these?
 I have two tables: products and products_by_product_areas.  Both have a
 field product.  I try
 
 SELECT product from products WHERE NOT EXISTS (SELECT DISTINCT * from
 products_by_product_areas WHERE products_by_product_areas.product =
 products.product);
 
 Both of these queries run fine on their own.  It looks to me that I'm
 simply adapting from the manual, but all I get is
 
 ERROR 1064: You have an error in your SQL syntax.  Check the manual
that
 corresponds to your MySQL server version for the right syntax to use
 near 'EXISTS (SELECT DISTINCT * from products_by_product_areas WHERE
 
 OK, so I grab the example verbatim from the manual:
 SELECT DISTINCT store_type FROM Stores
   WHERE EXISTS (SELECT * FROM Cities_Stores
 WHERE Cities_Stores.store_type = Stores.store_type);
 
 And run it.  Same error.  Never mind that I don't have these tables:
the
 query does not compile.  What is going on?
 
 




REPOST: DBD::mysql::db do failed: Table 'abcd' is read only

2004-06-18 Thread Michael Klama
Sorry about the characters in the last post.  Spambots didn't like it

I hope someone can help me with a problem.  I recently changed one of my web
servers over to a new box and this server has an internal MySQL server which
is used to store several web queried databases.  My old server ran Apache
1.3.27 and my new one is running 2.0.49 -  Old MySQL was 3.2.23 and the new
is 4.0.20 - all of my databases were copied over from the old server and all
work except one.  The one that is not working is updated and called by cgi
scripts which request profiles and add profiles to the database.  I am able
to request info and get results but whenever I try to insert data I get an
error DBD::mysql::db do failed: Table 'abcd' is read only at
/path/to/cgi/script line abc. referrer /path/to/cgi/script
 
I have checked all of the file permissions and they are all correct.  Can
anyone shed light on what the problem may be.
 
Thank you
Mike



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)

2004-06-18 Thread Kevin Brock
We have a table with a primary index which is INT NOT NULL 
AUTO_INCREMENT.

After inserting ~87,000,000 entries, we started seeing error 1062, 
ER_DUP_ENTRY.

We can get going again after doing an ALTER TABLE to reset the 
auto_increment starting point, but this takes about an hour...

I've seen a couple of places where how to get around this problem was 
discussed, but nobody seems to discuss *why* this occurs in the first 
place.

Does anyone know why MySQL would start failing to increment an 
auto_increment index properly when it's nowhere near the upper limit?  
Does anyone know a way to get things functioning again without a couple 
of hours downtime?

Hoping there's an answer out there somewhere...
Kevin Brock
[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)

2004-06-18 Thread Scott Haneda
on 06/18/2004 05:16 PM, Kevin Brock at [EMAIL PROTECTED] wrote:

 We have a table with a primary index which is INT NOT NULL
 AUTO_INCREMENT.
 
 After inserting ~87,000,000 entries, we started seeing error 1062,
 ER_DUP_ENTRY.
 
 We can get going again after doing an ALTER TABLE to reset the
 auto_increment starting point, but this takes about an hour...
 
 I've seen a couple of places where how to get around this problem was
 discussed, but nobody seems to discuss *why* this occurs in the first
 place.
 
 Does anyone know why MySQL would start failing to increment an
 auto_increment index properly when it's nowhere near the upper limit?
 Does anyone know a way to get things functioning again without a couple
 of hours downtime?
 
 Hoping there's an answer out there somewhere...


While I do not know why, I would suggest you simply drop the PK and recreate
it; this should be a whole lot faster than the ALTER.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Fastest way to load a master table removing duplicates - Not Answered

2004-06-18 Thread Paul Chu
Appreciate any help at all

Thanks, Paul
 
 

-Original Message-
From: Paul Chu [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 18, 2004 10:16 AM
To: [EMAIL PROTECTED]
Subject: Fastest way to load a master table removing duplicates

Hi all,

 

I want to load a master table with millions of rows from other sources, with a
unique index on e.g. SSN (social security number).

If inserted records have a duplicate SSN, I don't want to insert those
but put them in a duplicate table or flag them.

.

The primary key will be an auto-increment field. There will be other
indexes, such as zipcode.

 

What is the fastest way to load these rows and remove duplicates ?

Assume I load the rows to be inserted into another table.

 

1.  Check if the ssn already exists before inserting the row ?

 

2.  Insert the row and ignore duplicates using 

insert into master ( .)  

  select .. From loaddata 

 

I have lots of  files with data that can be saved to load tables and
then inserted into the master table.
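Option 2 above (insert and let the unique index reject duplicates) can be sketched end-to-end. This is an illustration only: Python's sqlite3 and its INSERT OR IGNORE stand in for MySQL's INSERT IGNORE, and the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE master ("
            "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
            "  ssn TEXT UNIQUE)")
cur.execute("CREATE TABLE loaddata (ssn TEXT)")
cur.executemany("INSERT INTO loaddata VALUES (?)",
                [("111",), ("222",), ("111",)])

# Rows whose ssn is already in master are skipped by the unique index.
cur.execute("INSERT OR IGNORE INTO master (ssn) SELECT ssn FROM loaddata")
master_count = cur.execute("SELECT COUNT(*) FROM master").fetchone()[0]

# Duplicates can then be found separately for flagging.
dupes = cur.execute(
    "SELECT ssn, COUNT(*) FROM loaddata GROUP BY ssn HAVING COUNT(*) > 1"
).fetchall()
print(master_count, dupes)  # 2 [('111', 2)]
```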

 

 

Regards, Paul

 

 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Full Text Index on Large Tables - Not Answered

2004-06-18 Thread Paul Chu
Appreciate any help at all

Thanks, Paul
  
 

-Original Message-
From: Paul Chu [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 18, 2004 10:16 AM
To: [EMAIL PROTECTED]
Subject: Full Text Index on Large Tables

Hi,

I have a table with 100-200 million rows and I want to search
for records with specific characteristics.

Ex. 
Skills varchar(300) 
Skill id's 10   15  
Accounting finance etc.

Is it advisable to create a field with skill ids and then use the
Skills column in a full-text index?

Thanks for your help, Paul




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Clustered Index - Where is data inserted ? Not Answered

2004-06-18 Thread Paul Chu
Appreciate any help at all

Thanks, Paul
  

-Original Message-
From: Paul Chu [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 18, 2004 10:16 AM
To: [EMAIL PROTECTED]
Subject: Clustered Index - Where is data inserted ?

Hi,

Can someone explain how the data is stored in a table using a clustered
index.

Does this mean that the data is inserted in the .myd file in sorted
index order ?

If so, how is space made to insert the new records ? 
Does this make inserting records slow because data is being inserted in
physical sort order in the table ?

Thanks, Paul



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Too Many Connections

2004-06-18 Thread Jeff Smelser
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Friday 18 June 2004 12:52 pm, Michael McTernan wrote:
 Dear Mark,

 The best way to fix this is by correctly setting your from address in your
 mailer to [EMAIL PROTECTED]

Your email software seems to be wrong. All these people can't be doing 
something wrong.

I think you need to take this off list as well.

- -- 
You go Uruguay, I'll go mine.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFA06Ehld4MRA3gEwYRAqrzAJ9q7BmXrcPJmq5a84LOcr4qi293vgCfZqa+
nC0Ck3uV8agimCIqWlL6JMI=
=2qNu
-END PGP SIGNATURE-

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: DBD::mysql problem

2004-06-18 Thread Jeff Smelser
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Friday 18 June 2004 03:00 pm, Kairam, Raj wrote:
 After unzipping and untarring the DBD-mysql-2.9003.tar.gz ( obtained from
 CPAN ), in the DBD-mysql-2.9003 directory I tried the following
 mkdir /tmp/mysql-static
 cp /usr/lib/mysql/*.a  /tmp/mysql-static
 perl Makefile.PL --libs=-L/tmp/mysql-static -lmysqlclient
 make

Any reason you're not using the DBD RPM? You probably don't have all the
header files.

You didn't send the error message, or I would tell you which header files
to install.
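[One other thing to check in the quoted command: `--libs=-L/tmp/mysql-static
-lmysqlclient` is unquoted, so the shell splits it into two arguments and
Makefile.PL never sees `-lmysqlclient` as part of `--libs`. A small sketch of
the difference; the helper function is just for illustration:]

```shell
# count_args is a throwaway helper that reports how many arguments it got.
count_args() { echo "$#"; }

count_args --libs=-L/tmp/mysql-static -lmysqlclient    # shell splits this into 2 args
count_args --libs="-L/tmp/mysql-static -lmysqlclient"  # quoting keeps it as 1 arg
```

So the quoted form would be:
`perl Makefile.PL --libs="-L/tmp/mysql-static -lmysqlclient"`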

- -- 
Sorry if I looked interested. I'm not.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFA06GZld4MRA3gEwYRAgK1AJ4txXzJw0Kb57OgQM7YLXWXjlA0XwCfQQud
FxUBSbovN99ZYiJxaOC2pMk=
=Onoz
-END PGP SIGNATURE-

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Question about MySQL 4.0.20 and make test failure on Linux

2004-06-18 Thread Tom Williams
Hi!  I'm trying to build MySQL 4.0.20 on RedHat 5.2 (I think) Linux 
system with glibc-2.2.5 and gcc-3.4.0 (which I recently upgraded to).   
The compile runs smoothly, but make test fails.  Here is my configure 
command:

$ ./configure --prefix=/usr/local/mysql-4.0.20 --enable-assembler 
--enable-thread-safe-client --with-mysqld-user=mysql

Here is the output from make test:
[EMAIL PROTECTED] mysql-4.0.20]$ make test
cd mysql-test ; ./mysql-test-run
Installing Test Databases
Removing Stale Files
Installing Master Databases
running  ../sql/mysqld --no-defaults --bootstrap --skip-grant-tables --basedir=. --datadir=./var/master-data --skip-innodb --skip-bdb --skip-warnings --language=../sql/share/english/
040619  7:39:33  ../sql/mysqld: Shutdown Complete

Installing Slave Databases
running  ../sql/mysqld --no-defaults --bootstrap --skip-grant-tables --basedir=. --datadir=./var/slave-data --skip-innodb --skip-bdb --skip-warnings --language=../sql/share/english/
040619  7:39:33  ../sql/mysqld: Shutdown Complete

Manager disabled, skipping manager start.
Loading Standard Test Databases
Starting Tests
TEST   RESULT
--
alias  [ pass ]
alter_table[ pass ]
analyse[ pass ]
ansi   [ pass ]
auto_increment [ pass ]
backup [ pass ]
bdb-alter-table-1  [ skipped ]
bdb-alter-table-2  [ skipped ]
bdb-crash  [ skipped ]
bdb-deadlock   [ skipped ]
bdb[ skipped ]
bdb_cache  [ skipped ]
bench_count_distinct   [ pass ]
bigint [ pass ]
binary [ pass ]
bool   [ pass ]
bulk_replace   [ pass ]
case   [ pass ]
cast   [ pass ]
check  [ pass ]
comments   [ pass ]
compare[ pass ]
constraints[ pass ]
convert[ pass ]
count_distinct [ pass ]
count_distinct2[ pass ]
create [ pass ]
ctype_cp1251   [ pass ]
ERROR: /home/tom/mysql-4.0.20/mysql-test/var/run/master.pid was not 
created in 30 seconds;  Aborting
make: *** [test] Error 1
[EMAIL PROTECTED] mysql-4.0.20]$

I've been trying to find info on the test suite in the online doc and I 
decided to post this question here while I continue pouring over the 
doc.  If make test is failing, does that mean I shouldn't use the 
server I just built?
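[The abort means mysqld never wrote its pid file within the harness's
30-second window; the real startup error usually lands in
mysql-test/var/log/master.err under the build tree, so that is the file to
read. Conceptually the harness is doing something like this sketch; the
function name and arguments are illustrative:]

```shell
# Poll for a pid file, giving up after a fixed number of one-second tries --
# roughly what mysql-test-run's "was not created in 30 seconds" check does.
wait_for_pidfile() {
  pidfile=$1
  tries=$2
  while [ "$tries" -gt 0 ]; do
    if [ -f "$pidfile" ]; then
      return 0   # server wrote its pid file; startup succeeded
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1       # timed out; check the server's error log
}
```

If the file never shows up, check mysql-test/var/log/master.err for the
reason mysqld refused to start.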

Thanks in advance for your time and assistance!  :)
Peace...
Tom
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Question about MySQL 4.0.20 and make test failure on Linux

2004-06-18 Thread Jeff Smelser
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Friday 18 June 2004 11:48 pm, Tom Williams wrote:
 Hi!  I'm trying to build MySQL 4.0.20 on RedHat 5.2 (I think) Linux
 system with glibc-2.2.5 and gcc-3.4.0 (which I recently upgraded to).
 The compile runs smoothly, but make test fails.  Here is my configure
 command:

Whoa.. 2.2.5? Red Hat 5.2? You realize Red Hat 5.2 was released back in the
late '90s?

Anyway

 $ ./configure --prefix=/usr/local/mysql-4.0.20 --enable-assembler
 --enable-thread-safe-client --with-mysqld-user=mysql

 Here is the output from make test:

snip

 ERROR: /home/tom/mysql-4.0.20/mysql-test/var/run/master.pid was not
 created in 30 seconds;  Aborting
 make: *** [test] Error 1
 [EMAIL PROTECTED] mysql-4.0.20]$

Are you trying to install this as a normal user? That's what you're doing.

I don't think I've ever run make test. :) You should have rights, since it's
your home directory.

- -- 
 You know you've been root too long when it's easier
  to change a password than to remember it.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFA08hDld4MRA3gEwYRAmRLAKDhgTLhvQaZW7NGpiVH6Fl7EiuewQCfesIA
ZGQYT2LfYwj7XqKFBgTpcRc=
=3kQ9
-END PGP SIGNATURE-

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]