Capturing milestone data in a table

2015-03-04 Thread Phil
Hi mysql experts,

I feel like I'm missing something.

I'm trying to capture 'milestone' data when users pass certain metrics or
scores. The score data is held on the user_credits table and changes daily.
Currently there are just over 3M users in the table, and their scores can
range from 0 up to the hundreds of millions. Scores only increase (or remain
the same), never decrease.

So I'm trying to insert into a new table to capture when they pass 100, 200,
500, 1000, 1M, etc.

Currently I do this with the following statement looping around each
milestone point I've defined ($mile)

insert ignore into user_milestone (select cpid,'$curdate',$mile from
user_credits where metric1 > $mile and (metric1 - lastupdate) < $mile)
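The crossing test is easy to sanity-check outside MySQL. A minimal sketch with Python's built-in sqlite3 standing in for MySQL (table and column names follow the post; the data, the date, and the assumption that `lastupdate` holds the credit gained since the previous update, so that `metric1 - lastupdate` is the previous score, are mine):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_credits (cpid TEXT, metric1 REAL, lastupdate REAL)")
# Primary key makes the "insert ignore" meaningful (sqlite spells it INSERT OR IGNORE)
con.execute("CREATE TABLE user_milestone (cpid TEXT, stat_date TEXT, milestone REAL, "
            "PRIMARY KEY (cpid, milestone))")
con.executemany("INSERT INTO user_credits VALUES (?,?,?)",
                [("a", 150, 60),   # 90 -> 150: crosses 100
                 ("b", 201, 102),  # 99 -> 201: crosses both 100 and 200
                 ("c", 95, 10)])   # 85 -> 95: crosses nothing
for mile in (100, 200, 500):
    con.execute(
        "INSERT OR IGNORE INTO user_milestone "
        "SELECT cpid, '2015-03-04', ? FROM user_credits "
        "WHERE metric1 > ? AND metric1 - lastupdate < ?",
        (mile, mile, mile))
rows = sorted(con.execute("SELECT cpid, milestone FROM user_milestone"))
print(rows)  # [('a', 100.0), ('b', 100.0), ('b', 200.0)]
```

Note that the 99-to-201 jump is captured for both the 100 and 200 milestones, which is exactly the case a simple lower-bound filter would miss.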

This certainly works but it's getting slower and slower. Explaining the
statement gives the
following.

mysql> explain extended select 1 from stats.user_credits where metric1 >
100 and (metric1 - lastupdate) < 100\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: user_credits
 type: range
possible_keys: score
  key: score
  key_len: 8
  ref: NULL
 rows: 3114186
 filtered: 100.00
Extra: Using where; Using index
1 row in set, 1 warning (0.00 sec)

The 'score' index is on metric1,cpid (which is unique)

So it's having to look at all the rows in the table, since lastupdate is
essentially random across users.

I can put in a 'high' bound which helps restrict the data, say metric1 <
200, but then it would not capture the 100 milestone if the jump was from
99 to 201.

One option would be to create a trigger for each milestone to generate the
data instead. That could be a lot of triggers; I'm not sure if it could be
done in a single trigger, plus then I would have to maintain the trigger
when adding new milestones.

Any other options I'm missing ??

Regards

Phil


Re: RV: independent tables

2011-05-06 Thread Phil
Why not just use a union ?

select userID,NULL as clientID from user where userCodeDrivingLicense =
'321321321' union select NULL as userID,clientID from client where
clientCodeDrivingLicense = '321321321';
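The union approach can be sketched quickly with sqlite3 standing in for MySQL (table and column names follow the thread; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (userID INTEGER, userCodeDrivingLicense TEXT)")
con.execute("CREATE TABLE client (clientID INTEGER, clientCodeDrivingLicense TEXT)")
con.execute("INSERT INTO user VALUES (1, '111111111')")
con.execute("INSERT INTO client VALUES (7, '321321321')")

# One row comes back either way; whichever column is non-NULL tells you
# what kind of person the licence belongs to.
row = con.execute(
    "SELECT userID, NULL AS clientID FROM user "
    "WHERE userCodeDrivingLicense = '321321321' "
    "UNION "
    "SELECT NULL AS userID, clientID FROM client "
    "WHERE clientCodeDrivingLicense = '321321321'").fetchone()
print(row)  # (None, 7): a client, not a user
```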



2011/5/6 Rocio Gomez Escribano r.go...@ingenia-soluciones.com

 Tables client and user are quite similar, but they don't have any
 intersection; I mean, if somebody is a client, he or she can't be a user.
 So, I have his or her driving license and I need to know what kind of
 person it is.

 I'm trying some left and right joins, but I'm unable to get it!!

 Rocío Gómez Escribano
 r.go...@ingenia-soluciones.com


 Polígono Campollano C/F, nº21T
 02007 Albacete (España)
 Tlf:967-504-513  Fax: 967-504-513
 www.ingenia-soluciones.com

 -Mensaje original-
 De: Halász Sándor [mailto:h...@tbbs.net]
 Enviado el: miércoles, 04 de mayo de 2011 22:43
 Para: Rocio Gomez Escribano
 CC: mysql@lists.mysql.com
 Asunto: Re: RV: independent tables

  2011/05/04 16:57 +0200, Rocio Gomez Escribano 
 I suppose my solution is an Join, but they have no intersection
 
 Really?

 Your examples are very much like a simple join, a special case of

 ... client OUTER JOIN user ON clientCodeDrivingLicense =
 userCodeDrivingLicense

 What is wrong with that? (Although MySQL does not actually support full
 outer joins; you have to emulate one with a union of a left join and a
 right join, taking care that the inner-join rows appear in only one of
 them.)

 Actually, your tables client and user look like the same table with
 field names changed, no other difference. Field names have nothing to do
 with intersection.




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=pchap...@nc.rr.com




-- 
Distributed Computing stats
http://stats.free-dc.org


Is it possible to update a column based on a REGEXP on another column?

2011-01-21 Thread Phil
I have a table which contains a username column which may be constructed
something like

somename[A] or [DDD]someothername

The A or DDD can be anything at all.

I've added a new column to the table to which I'd like to populate with the
value within the square brackets.

I could write something in Perl or PHP to run through each row and update
it, but was wondering if there is a way to do this within mysql itself?  The
REGEXP operator only returns a boolean, so I can't see how to use that.
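For what it's worth, the bracketed value can be pulled out with plain string functions instead of a regexp; in MySQL, `SUBSTRING_INDEX(SUBSTRING_INDEX(username, '[', -1), ']', 1)` handles both layouts. A runnable sketch of the same idea with sqlite3's substr/instr (the `users` table and `tag` column names are mine):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (username TEXT, tag TEXT)")
con.executemany("INSERT INTO users (username) VALUES (?)",
                [("somename[A]",), ("[DDD]someothername",)])

# Take everything between the first '[' and the next ']'.
con.execute("""
    UPDATE users SET tag = substr(
        username,
        instr(username, '[') + 1,
        instr(username, ']') - instr(username, '[') - 1)
    WHERE username LIKE '%[%]%'
""")
rows = con.execute("SELECT username, tag FROM users").fetchall()
print(rows)
# [('somename[A]', 'A'), ('[DDD]someothername', 'DDD')]
```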

Regards

Phil


-- 
Distributed Computing stats
http://stats.free-dc.org


Trying to remove a filesort.

2010-09-09 Thread Phil
I wonder if anyone could help with a query which I've been unable to prevent
from using a filesort. Might be something obvious I'm overlooking!

I have a table which tracks milestones in distributed computing projects

Create Table: CREATE TABLE `boinc_milestone` (
  `proj` char(6) NOT NULL,
  `id` int(11) NOT NULL,
  `stat_date` date NOT NULL DEFAULT '0000-00-00',
  `milestone_type` char(1) NOT NULL DEFAULT '0',
  `milestone` double NOT NULL DEFAULT '0',
  `cpid` varchar(32) DEFAULT NULL,
  `team` int(11) DEFAULT NULL,
  PRIMARY KEY (`proj`,`id`,`stat_date`,`milestone`),
  KEY `two` (`proj`,`stat_date`,`id`,`milestone`),
  KEY `cpid` (`cpid`,`proj`,`id`,`stat_date`,`milestone`),
  KEY `team` (`proj`,`team`,`id`,`stat_date`,`milestone`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

These are added to on a daily basis as users pass the various credit
milestones so for instance you can end up with rows for
1000,5000,10000,50000,1000000 etc on different dates as time goes on.

Now on one page for display I want to show the latest milestone for each
project for a particular cpid. The query I use is as follows:

select a.proj,a.id,max(stat_date),max(a.milestone) as
milestone,b.description
 from boinc_milestone a join boinc_projects b on a.proj = b.proj
 where cpid = '$cpid'
 group by proj
 order by stat_date desc

The order by causes the filesort and I can't find an easy way around it.
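A toy reproduction of the query with sqlite3 standing in for MySQL shows what the GROUP BY produces (project names, credits, and dates are invented; the ORDER BY spells out MAX(stat_date), which is what the bare stat_date effectively means here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE boinc_milestone "
            "(proj TEXT, id INTEGER, stat_date TEXT, milestone REAL, cpid TEXT)")
con.execute("CREATE TABLE boinc_projects (proj TEXT PRIMARY KEY, description TEXT)")
con.executemany("INSERT INTO boinc_milestone VALUES (?,?,?,?,?)",
    [("seti", 1, "2010-01-01", 1000, "x"),
     ("seti", 1, "2010-03-01", 5000, "x"),   # later milestone for the same project
     ("abc",  1, "2010-02-01", 1000, "x")])
con.executemany("INSERT INTO boinc_projects VALUES (?,?)",
    [("seti", "SETI@home"), ("abc", "ABC@home")])

# Latest milestone per project for one cpid, most recent project first.
rows = con.execute("""
    SELECT a.proj, MAX(a.stat_date), MAX(a.milestone) AS milestone, b.description
    FROM boinc_milestone a JOIN boinc_projects b ON a.proj = b.proj
    WHERE a.cpid = 'x'
    GROUP BY a.proj
    ORDER BY MAX(a.stat_date) DESC
""").fetchall()
print(rows)
# [('seti', '2010-03-01', 5000.0, 'SETI@home'), ('abc', '2010-02-01', 1000.0, 'ABC@home')]
```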

mysql> explain select a.proj,a.id,max(stat_date),max(a.milestone) as
milestone,b.description from boinc_milestone a join boinc_projects b on
a.proj = b.proj where cpid = 'XXX' group by proj order by stat_date\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: a
 type: ref
possible_keys: PRIMARY,two,cpid,team
  key: cpid
  key_len: 35
  ref: const
 rows: 1
Extra: Using where; Using index; Using temporary; Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: b
 type: eq_ref
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 10
  ref: stats.a.proj
 rows: 1
Extra: Using where
2 rows in set (0.00 sec)

I could just remove the order by altogether and perform the sort in php
afterwards I guess but any other ideas?

Thanks

Phil

-- 
Distributed Computing stats
http://stats.free-dc.org


Re: Trying to remove a filesort.

2010-09-09 Thread Phil
On average it would be between 10 and 40, certainly no more than 100.



On Thu, Sep 9, 2010 at 2:57 PM, Michael Dykman mdyk...@gmail.com wrote:

 The filesort is probably necessary because of the number of rows in
 the result set to be ordered.  How many rows do you get out of this
 query?

  - michael dykman

 On Thu, Sep 9, 2010 at 1:53 PM, Phil freedc@gmail.com wrote:
  [original post quoted in full; trimmed]



 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.




-- 
Distributed Computing stats
http://stats.free-dc.org


Re: Trying to remove a filesort.

2010-09-09 Thread Phil
Even prior to the group by it's still not likely to ever be more than 200 or
so maximum.

I have the sort_buffer_size at 256Mb so I don't believe it's that either :(

On Thu, Sep 9, 2010 at 3:16 PM, Michael Dykman mdyk...@gmail.com wrote:

 How many rows before the GROUP BY?  Group by is, in effect a sorting
 process..  perhaps that contains enough data to justify going to disk.

 What is the value of the variable sort_buffer_size?
 show variables like '%sort%';

  - md

 On Thu, Sep 9, 2010 at 3:04 PM, Phil freedc@gmail.com wrote:
  On average it would be between 10 and 40, certainly no more than 100.
 
 
  On Thu, Sep 9, 2010 at 2:57 PM, Michael Dykman mdyk...@gmail.com
 wrote:
 
  The filesort is probably necessary because of the number of rows in
  the result set to be ordered.  How many rows do you get out of this
  query?
 
   - michael dykman
 
  On Thu, Sep 9, 2010 at 1:53 PM, Phil freedc@gmail.com wrote:
   [original post quoted in full; trimmed]
  
 
 
 
  --
   - michael dykman
   - mdyk...@gmail.com
 
   May the Source be with you.
 
 
 
  --
  Distributed Computing stats
  http://stats.free-dc.org
 



 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.




-- 
Distributed Computing stats
http://stats.free-dc.org


Re: Trying to remove a filesort.

2010-09-09 Thread Phil
It's in my.cnf. There is 12Gb in the database server and I watch it fairly
carefully and have not gone into swap yet in the past few years.

On Thu, Sep 9, 2010 at 3:43 PM, Ananda Kumar anan...@gmail.com wrote:

 Have you set sort_buffer_size at the session level or in my.cnf?
 Setting a high value in my.cnf will cause mysql to run out of MEMORY, and
 paging will happen.

 regards
 anandkl

 On Fri, Sep 10, 2010 at 1:10 AM, Phil freedc@gmail.com wrote:

 Even prior to the group by it's still not likely to ever be more than 200
 or
 so maximum.

 I have the sort_buffer_size at 256Mb so I don't believe it's that either
 :(

 On Thu, Sep 9, 2010 at 3:16 PM, Michael Dykman mdyk...@gmail.com wrote:

  How many rows before the GROUP BY?  Group by is, in effect a sorting
  process..  perhaps that contains enough data to justify going to disk.
 
  What is the value of the variable sort_buffer_size?
  show variables like '%sort%';
 
   - md
 
  On Thu, Sep 9, 2010 at 3:04 PM, Phil freedc@gmail.com wrote:
   [earlier messages quoted in full; trimmed]
 
 
 
  --
   - michael dykman
   - mdyk...@gmail.com
 
   May the Source be with you.
 



 --
 Distributed Computing stats
 http://stats.free-dc.org





-- 
Distributed Computing stats
http://stats.free-dc.org


Re: Trying to remove a filesort.

2010-09-09 Thread Phil
Thanks! I did not know that.

Just tried it, and indeed Created_tmp_disk_tables did not increase; just
Created_tmp_tables increased by 1. Still not perfect, but it's better than I
thought, and at least in memory.

And for the previous mails, I'm not sure why I ever had the sort_buffer_size
that high, have reduced it now.

On Thu, Sep 9, 2010 at 6:30 PM, Travis Ard travis_...@hotmail.com wrote:

 When the explain output says Using filesort, it doesn't necessarily mean
 it is sorting on disk.  It could still be sorting in memory and, thus, be
 reasonably fast.  You might check the value of Created_tmp_disk_tables
 before and after your query to see for sure.

 -Travis

 -Original Message-
 From: Phil [mailto:freedc@gmail.com]
 Sent: Thursday, September 09, 2010 11:54 AM
 To: mysql
 Subject: Trying to remove a filesort.

  [original post quoted in full; trimmed]




-- 
Distributed Computing stats
http://stats.free-dc.org


Re: How to put table definition into another table using SQL?

2010-05-11 Thread Phil
create table TableDEF like TableX;

Or am I missing something ??
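If the goal is rows of name/type rather than a clone, MySQL 5 exposes the definitions in `information_schema.columns`, so a single `INSERT INTO TableDef SELECT column_name, column_type FROM information_schema.columns WHERE table_name = 'TableX'` does it. A runnable sketch of the same idea using SQLite's catalog function in place of information_schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableX (First_Name CHAR(15), Last_Name CHAR(20), "
            "Start_Date DATE, Salary DOUBLE)")
con.execute("CREATE TABLE TableDef (ColName TEXT, ColType TEXT)")

# SQLite's pragma_table_info plays the role of information_schema.columns.
con.execute("INSERT INTO TableDef "
            "SELECT name, type FROM pragma_table_info('TableX')")
rows = con.execute("SELECT * FROM TableDef").fetchall()
print(rows)
# [('First_Name', 'CHAR(15)'), ('Last_Name', 'CHAR(20)'),
#  ('Start_Date', 'DATE'), ('Salary', 'DOUBLE')]
```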

On Tue, May 11, 2010 at 11:36 AM, mos mo...@fastmail.fm wrote:

 I'd like to get the field names and data types of a table, say TableX, and
 put it into TableDef using nothing but SQL. I know I can list the table
 definition using Describe Table and then loop through the results and
 insert the first two columns Field and Type into TableDef, but is there a
 way to do it using just SQL?

 Example:
 Describe TableX:

 First_Name  Char(15)  
 Last_Name   Char(20) ...
 Start_Date  Date  ..
 Salary  Double  ..


 And I'd like TableDef to have these rows:

 ColName ColType
 --- --
 First_Name  Char(15)
 Last_Name   Char(20)
 Start_Date  Date
 Salary  Double

 Is there a way to do this with one SQL statement? I'm really looking for
 the MySQL internal table where it stores the table definitions.

 TIA
 Mike






-- 
Distributed Computing stats
http://stats.free-dc.org


Re: Database tables for Exchange rates

2010-05-10 Thread Phil
For the exchange rates only you don't really need more than one table. I
work with an enterprise financial system and we have exchange rate tables
which are updated with data every day.

Something like

BASE_CURR char(3)
NONBASE_CURR char(3)
EFF_DATE   DATE
EXCH_RATE  DECIMAL(15,6)   -- or however much precision you need

We have other columns storing tolerances but that should be enough. First 3
columns are your key.

Separate tables for the currency codes themselves.
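A sketch of that layout plus the usual "latest rate on or before a date" lookup, with sqlite3 standing in for MySQL (rates and dates invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE exch_rate (
    base_curr TEXT, nonbase_curr TEXT, eff_date TEXT,
    exch_rate REAL,
    PRIMARY KEY (base_curr, nonbase_curr, eff_date))""")
con.executemany("INSERT INTO exch_rate VALUES (?,?,?,?)",
    [("USD", "EUR", "2010-05-01", 0.79),
     ("USD", "EUR", "2010-05-10", 0.78),
     ("USD", "GBP", "2010-05-10", 0.67)])

# Most recent USD->EUR rate effective on or before 2010-05-09.
rate = con.execute("""
    SELECT exch_rate FROM exch_rate
    WHERE base_curr = 'USD' AND nonbase_curr = 'EUR'
      AND eff_date <= '2010-05-09'
    ORDER BY eff_date DESC LIMIT 1
""").fetchone()[0]
print(rate)  # 0.79
```

The three-column key means each day's load is a plain insert, and historical conversions stay reproducible.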

On Mon, May 10, 2010 at 7:28 AM, Mimi Cafe mimic...@googlemail.com wrote:

 I am designing a database to store exchange rates and other information.
 The tables for the exchange rates will store exchange rates for all
 currencies.

 Can I have any suggestions about the number of tables to use for the
 exchange rate? I think I will need at least 2 tables, but I am looking for
 suggestions on how to achieve maximum speed when querying the database and
 also avoid redundant tables or rows.

 Mimi






-- 
Distributed Computing stats
http://stats.free-dc.org


Re: 2d line plot graph vs. time

2009-11-16 Thread Phil
Try Chartdirector, available in many languages..

http://www.advsofteng.com

Regards

Phil

On Mon, Nov 16, 2009 at 12:38 PM, Mikie k...@mikienet.com wrote:

 Hello MySQL list people!

 I need software that will graphically plot 2d line charts vs. time for
 various data points from a MySQL database.

 Any ideas?





-- 
Distributed Computing stats
http://stats.free-dc.org


Re: trigger

2009-11-04 Thread Phil
You are missing a BEGIN in the trigger

delimiter |

CREATE TRIGGER greylist AFTER INSERT on greylist
BEGIN
delete from greylist where first_seen < NOW()-60*60*24*5;
END;
|
delimiter ;
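The delete itself is easy to sanity-check. A sketch with sqlite3, treating first_seen as epoch seconds to match the int(11) column (with epoch seconds, the minus-432000 arithmetic really does mean "five days ago"; the sample timestamps are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE greylist (id INTEGER PRIMARY KEY, first_seen INTEGER)")
now = 1_257_000_000                      # pretend "now", epoch seconds
con.executemany("INSERT INTO greylist VALUES (?,?)",
    [(1, now - 6 * 86400),               # older than 5 days: purged
     (2, now - 1 * 86400)])              # recent: kept

# The purge the trigger is trying to run (60*60*24*5 = 432000 seconds):
con.execute("DELETE FROM greylist WHERE first_seen < ?", (now - 5 * 86400,))
kept = [r[0] for r in con.execute("SELECT id FROM greylist")]
print(kept)  # [2]
```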

Phil


On Wed, Nov 4, 2009 at 2:28 PM, Stefan Onken supp...@stonki.de wrote:

 Hello,

 I am new to using triggers in mysql. I am using mysql 5.1.37  and would
 like to setup a trigger like:

 CREATE TRIGGER greylist AFTER INSERT on greylist
 delete from greylist where first_seen < NOW()-60*60*24*5;
 END;

 When typing this into mysql I am getting an error. Where is my mistake?


mysql> show fields from greylist;
 +---+---+--+-+-+
 | Field | Type  | Null | Key | Default |
 +---+---+--+-+-+
 | id| int(11)   | NO   | PRI | NULL|
 | SenderIP  | varchar(15)   | NO   | MUL | NULL|
 | SenderAddress | varchar(1024) | NO   | MUL | NULL|
 | first_seen| int(11)   | NO   | | NULL|
 +---+---+--+-+-+
 4 rows in set (0,00 sec)

 I would like to achieve that after every insert in the greylist table I am
 purging the oldest xx records.

 Stefan



 www.stonki.de : My, myself and I
 www.kbarcode.net : barcode solution for KDE
 www.krename.net : renamer for KDE
 www.proftpd.de : a FTP server...





-- 
Distributed Computing stats
http://stats.free-dc.org


Re: trigger

2009-11-04 Thread Phil
ah, yes I'd missed the 'for each row' when I posted.

But for the date math part, look at the column, it's an int() not a date.
Puzzled me a little at the time so I tried it..

mysql> select now()-60*60*24*5 from dual;
+---+
| now()-60*60*24*5  |
+---+
| 20091103730524.00 |
+---+
1 row in set (0.00 sec)

It does give back a number at least, rather than a date, so I assumed the
poster was storing his value in first_seen as a number (which remains to be
seen :))

Not the way I would do it, but to each their own!
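The pitfall with the numeric form is easy to reproduce: early in a month, subtracting 432000 from a YYYYMMDDHHMMSS number underflows the day field instead of going five days back. A quick check with Python's datetime standing in for MySQL's NOW() (the example timestamp is mine):

```python
from datetime import datetime, timedelta

def mysql_numeric(dt):
    # NOW() coerced to a number, MySQL-style: YYYYMMDDHHMMSS
    return int(dt.strftime("%Y%m%d%H%M%S"))

now = datetime(2009, 11, 2, 12, 0, 0)          # early in the month
wrong = mysql_numeric(now) - 60 * 60 * 24 * 5  # the trigger's arithmetic
right = mysql_numeric(now - timedelta(days=5)) # what NOW() - INTERVAL 5 DAY means
print(wrong)  # 20091101688000: not a valid date at all
print(right)  # 20091028120000
```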

Phil



On Wed, Nov 4, 2009 at 5:40 PM, Gavin Towey gto...@ffn.com wrote:

 Oops, one more mistake:

 NOW()-60*60*24*5 isn't the way to do date math.  It should be: NOW() -
 INTERVAL 5 DAY

 -Original Message-
 From: Gavin Towey
 Sent: Wednesday, November 04, 2009 2:33 PM
 To: 'Phil'; Mysql; 'Stefan Onken'
 Subject: RE: trigger

 1. Triggers must have FOR EACH ROW -- it's described in the manual:
 http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html

 So the correct syntax would be:
 CREATE TRIGGER greylist_ins AFTER INSERT on greylist FOR EACH ROW delete
 from greylist where first_seen < NOW()-60*60*24*5;

 BEGIN/END and DELIMITER are not needed for single-statement triggers.

 2. However you still can't do that.  You can't update the table used in the
 trigger.  What you really want is either a separate cron process, or a mysql
 event (if using 5.1)

 Regards
 Gavin Towey


 -Original Message-
 From: freedc@gmail.com [mailto:freedc@gmail.com] On Behalf Of Phil
 Sent: Wednesday, November 04, 2009 11:42 AM
 To: Mysql
 Subject: Re: trigger

 You are missing a BEGIN in the trigger

 delimiter |

 CREATE TRIGGER greylist AFTER INSERT on greylist
 BEGIN
 delete from greylist where first_seen < NOW()-60*60*24*5;
 END;
 |
 delimiter ;

 Phil


 On Wed, Nov 4, 2009 at 2:28 PM, Stefan Onken supp...@stonki.de wrote:

  Hello,
 
  I am new to using triggers in mysql. I am using mysql 5.1.37  and would
  like to setup a trigger like:
 
  CREATE TRIGGER greylist AFTER INSERT on greylist
  delete from greylist where first_seen  NOW()-60*60*24*5;
  END;
 
  When typing this into mysql I am getting an error. Where is my mistake?
 
 
  mysql show fields from greylist;
  +---+---+--+-+-+
  | Field | Type  | Null | Key | Default |
  +---+---+--+-+-+
  | id| int(11)   | NO   | PRI | NULL|
  | SenderIP  | varchar(15)   | NO   | MUL | NULL|
  | SenderAddress | varchar(1024) | NO   | MUL | NULL|
  | first_seen| int(11)   | NO   | | NULL|
  +---+---+--+-+-+
  4 rows in set (0,00 sec)
 
  I would like to archive that after every insert in the greylist table I
 am
  purging the oldest xx records.
 
  Stefan
 
 
 
  www.stonki.de : My, myself and I
  www.kbarcode.net : barcode solution for KDE
  www.krename.net : renamer for KDE
  www.proftpd.de : a FTP server...
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql?unsub=pchap...@nc.rr.com
 
 


 --
 Distributed Computing stats
 http://stats.free-dc.org

 The information contained in this transmission may contain privileged and
 confidential information. It is intended only for the use of the person(s)
 named above. If you are not the intended recipient, you are hereby notified
 that any review, dissemination, distribution or duplication of this
 communication is strictly prohibited. If you are not the intended recipient,
 please contact the sender by reply email and destroy all copies of the
 original message.





-- 
Distributed Computing stats
http://stats.free-dc.org


Group by question

2009-01-07 Thread Phil
A question on grouping I've never been able to solve...

create table j (proj char(3), id int, score double,cpid char(32),team
char(10));

insert into j values('aaa',1,100,'a','team1');
insert into j values('bbb',2,200,'a','team1');
insert into j values('ccc',3,300,'a','team2');
insert into j values('aaa',4,100,'b','team2');
insert into j values('bbb',5,300,'b','team1');
insert into j values('ccc',6,400,'b','team1');
insert into j values('aaa',7,101,'c','team1');
insert into j values('bbb',8,302,'c','team2');
insert into j values('ccc',9,503,'c','team2');

mysql> select * from j;
+--+--+---+---+---+
| proj | id   | score | cpid  | team  |
+--+--+---+---+---+
| aaa  |1 |   100 | a | team1 |
| bbb  |2 |   200 | a | team1 |
| ccc  |3 |   300 | a | team2 |
| aaa  |4 |   100 | b | team2 |
| bbb  |5 |   300 | b | team1 |
| ccc  |6 |   400 | b | team1 |
| aaa  |7 |   101 | c | team1 |
| bbb  |8 |   302 | c | team2 |
| ccc  |9 |   503 | c | team2 |
+--+--+---+---+---+
9 rows in set (0.00 sec)

mysql> select cpid,sum(score),team from j group by cpid;
+---++---+
| cpid  | sum(score) | team  |
+---++---+
| a |600 | team1 |
| b |800 | team2 |
| c |906 | team1 |
+---++---+
3 rows in set (0.00 sec)

Using MAX or MIN on the team gives different but not necessarily closer
results.

mysql> select cpid,sum(score),max(team) from j group by cpid;
+---++---+
| cpid  | sum(score) | max(team) |
+---++---+
| a |600 | team2 |
| b |800 | team2 |
| c |906 | team2 |
+---++---+
3 rows in set (0.00 sec)

mysql> select cpid,sum(score),min(team) from j group by cpid;
+---++---+
| cpid  | sum(score) | min(team) |
+---++---+
| a |600 | team1 |
| b |800 | team1 |
| c |906 | team1 |
+---++---+
3 rows in set (0.00 sec)

Given that cpid 'b' has two rows with team1 and only one with team2, the
original query nevertheless gives team2, and rightly so, as it just uses the
first row under MySQL's slightly illegal (but useful!) extension of allowing
columns in the select list that are not in the group by.

The question is, is there any way to modify this query so that it would
return the team having the most entries?

What I would theoretically like:

| cpid  | sum(score) | team  |
+---++---+
| a |600 | team1 |
| b |800 | team1 |
| c |906 | team2 |


If not, is there an easy way to add another column, say mostteam char(10),
and run an update statement on the whole table that would put the correct
value in?
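One way to get the most-frequent team per cpid is a correlated subquery ordered by count. A sketch against the sample data above, with sqlite3 standing in for MySQL (the same SQL should run on MySQL 5; the tie-break on team name is my choice):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE j (proj TEXT, id INTEGER, score REAL, cpid TEXT, team TEXT)")
data = [("aaa",1,100,"a","team1"), ("bbb",2,200,"a","team1"), ("ccc",3,300,"a","team2"),
        ("aaa",4,100,"b","team2"), ("bbb",5,300,"b","team1"), ("ccc",6,400,"b","team1"),
        ("aaa",7,101,"c","team1"), ("bbb",8,302,"c","team2"), ("ccc",9,503,"c","team2")]
con.executemany("INSERT INTO j VALUES (?,?,?,?,?)", data)

# Per-cpid totals, plus the team with the most rows for that cpid
# (ties broken alphabetically by team name).
result = con.execute("""
    SELECT cpid, SUM(score),
           (SELECT team FROM j AS i WHERE i.cpid = o.cpid
            GROUP BY team ORDER BY COUNT(*) DESC, team LIMIT 1)
    FROM j AS o GROUP BY cpid ORDER BY cpid
""").fetchall()
print(result)
# [('a', 600.0, 'team1'), ('b', 800.0, 'team1'), ('c', 906.0, 'team2')]
```

This matches the "theoretical" output in the post: 'b' now reports team1 even though its first physical row is team2.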

Regards

Phil

-- 
Distributed Computing stats
http://stats.free-dc.org


Re: Limit within groups

2009-01-06 Thread Phil
How about something like

select account,customer,max(total) from (select
account,customer,sum(sale_amount) as total from tablename group by
customer) as y group by account;

Seems to work in my test case..

Regards

Phil



On Tue, Jan 6, 2009 at 3:13 PM, Jerry Schwartz
jschwa...@the-infoshop.comwrote:

 Here's a bit of business that is baffling me, and probably shouldn't. My
 table looks like this:



 account

 customer

 sale_amount



 Each account has multiple customers, and each customer has multiple sales.
 I
 want to get the top 20 customers for each account.



 If I simply do GROUP BY account, customer LIMIT 20, I'll get the first 20
 customers for the first account. If I try GROUP BY account, customer ORDER
 BY SUM(sale_amount) DESC LIMIT 20, I'll get the top 20 customers.



 What am I missing?



 Regards,



 Jerry Schwartz

 The Infoshop by Global Information Incorporated

 195 Farmington Ave.

 Farmington, CT 06032



 860.674.8796 / FAX: 860.674.8341



  http://www.the-infoshop.com www.the-infoshop.com

  http://www.giiexpress.com www.giiexpress.com

 www.etudes-marche.com






-- 
Distributed Computing stats
http://stats.free-dc.org


Re: Limit within groups

2009-01-06 Thread Phil
Yes you're right. I went off on a complete tangent with my thoughts on this
and it does not do what you wanted at all...Sorry!

I can't think of any way to do this via sql only as it's almost a
group_limit_by that you'd want. It seems much more like a procedural call,
so I'd expect you'd need some form of cursor wrapped around the group by to
get the top20 for each account.

Sorry about that!
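For completeness: on servers with window-function support (MySQL 8.0 and later; SQLite 3.25+ used in this sketch) the group-limited top-N does become a single query. Invented sample data, with a top-2 limit standing in for the top-20:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (account TEXT, customer TEXT, sale_amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?,?,?)", [
    ("A", "c1", 50), ("A", "c1", 70), ("A", "c2", 30), ("A", "c3", 10),
    ("B", "c4", 500), ("B", "c5", 20), ("B", "c6", 40),
])
query = """
WITH totals AS (   -- per-customer totals within each account
  SELECT account, customer, SUM(sale_amount) AS total
  FROM sales GROUP BY account, customer
), ranked AS (     -- rank customers inside their own account
  SELECT account, customer, total,
         ROW_NUMBER() OVER (PARTITION BY account ORDER BY total DESC) AS rk
  FROM totals
)
SELECT account, customer, total FROM ranked
WHERE rk <= 2                 -- top 2 per account; use 20 for the real case
ORDER BY account, rk
"""
top = cur.execute(query).fetchall()
print(top)
```

The `PARTITION BY account` restarts the numbering for each account, which is exactly the "group_limit_by" behaviour the thread was after.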

Phil

On Tue, Jan 6, 2009 at 3:59 PM, Jerry Schwartz
jschwa...@the-infoshop.comwrote:



 -Original Message-
 From: freedc@gmail.com [mailto:freedc@gmail.com] On Behalf Of
 Phil
 Sent: Tuesday, January 06, 2009 3:41 PM
 To: Jerry Schwartz
 Cc: mysql@lists.mysql.com
 Subject: Re: Limit within groups
 
 How about something like
 
 select account,customer,max(total) from (select
 account,customer,sum(sale_amount) as total from tablename group by
 customer) as y group by account;
 
 Seems to work in my test case..
 
 [JS] This would return a record for each customer, wouldn't it? I don't see
 anything in there to limit the number of records returned for each account.
 Regards
 
 Phil
 
 
 
 On Tue, Jan 6, 2009 at 3:13 PM, Jerry Schwartz
 jschwa...@the-infoshop.comwrote:
 
  Here's a bit of business that is baffling me, and probably shouldn't.
 My
  table looks like this:
 
 
 
  account
 
  customer
 
  sale_amount
 
 
 
  Each account has multiple customers, and each customer has multiple
 sales.
  I
  want to get the top 20 customers for each account.
 
 
 
  If I simply do GROUP BY account, customer LIMIT 20, I'll get the
 first 20
  customers for the first account. If I try GROUP BY account, customer
 ORDER
  BY SUM(sale_amount) DESC LIMIT 20, I'll get the top 20 customers.
 
 
 
  What am I missing?
 
 
 
  Regards,
 
 
 
  Jerry Schwartz
 
  The Infoshop by Global Information Incorporated
 
  195 Farmington Ave.
 
  Farmington, CT 06032
 
 
 
  860.674.8796 / FAX: 860.674.8341
 
 
 
   http://www.the-infoshop.com www.the-infoshop.com
 
   http://www.giiexpress.com www.giiexpress.com
 
  www.etudes-marche.com
 
 
 
 
 
 
 --
 Distributed Computing stats
 http://stats.free-dc.org






-- 
Distributed Computing stats
http://stats.free-dc.org


Re: Help with query

2008-12-15 Thread Phil
Am I totally missing something? Why do you believe the two queries should
return the same number of rows? The first one has a qualification of
proj_adv_date < '2008-12-16' whilst the second one does not...
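A related trap with this schema: the dates live in a char column, so they compare lexically, and any value that is not zero-padded (like the '2009-7-00' row in the data below) sorts out of chronological order. A quick illustration:

```python
# Zero-padded ISO dates compare correctly as strings:
ok = "2009-07-01" < "2009-12-31"
# An unpadded month breaks the ordering ('7' > '1' as characters):
broken = "2009-7-00" < "2009-12-31"
print(ok, broken)
```

This is a good argument for storing such values in a real DATE column, where malformed entries are rejected or normalized up front.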

On Mon, Dec 15, 2008 at 12:12 PM, Néstor rot...@gmail.com wrote:

 I have a char fiel where I am keeping dates formatted as year-month-day
 (2006-10-09)
 Now I am trying to find all the records between 2 strings (2 dates). The
 2 queries below should return the same number of records but they do not.

 My query is this:
  SELECT COUNT(*) FROM proj where proj_archive=0 AND (proj_adv_date >=
 '2008-01-01' AND proj_adv_date < '2008-12-16') order by proj_type,
 proj_adv_date, proj_bid_date, proj_name ASC;
 +-+---+
 | proj_id | proj_adv_date |
 +-+---+
 | 181 | 2008-11-25|
 | 217 | 2008-10-27|
 | 136 | 2008-12-01|
 | 219 | 2008-12-08|
 | 225 | 2008-12-11|
 +-+---+
 5 rows in set (0.00 sec)

 I get only 5 records returned but if I do this query:
 SELECT COUNT(*) FROM proj where proj_archive=0 AND (proj_adv_date >
 '2008-01-01') order by proj_type, proj_adv_date, proj_bid_date, proj_name
 ASC;
 +-+---+
 | proj_id | proj_adv_date |
 +-+---+
 | 181 | 2008-11-25|
 | 221 | 2008-12-23|
 | 108 | 2009-01-00|
 | 173 | 2009-03-00|
 | 149 | 2009-10-00|
 | 143 | 2009-7-00 |
 | 179 | 2010-04-00|
 | 217 | 2008-10-27|
 | 136 | 2008-12-01|
 | 219 | 2008-12-08|
 | 225 | 2008-12-11|
 | 187 | 2009-01-00|
 | 199 | 2009-01-01|
 | 177 | 2009-02-01|
 |  69 | 2009-03-00|
 |  70 | 2009-03-00|
 |  71 | 2009-03-00|
 | 142 | 2009-03-00|
 | 122 | 2009-04-00|
 | 124 | 2009-04-00|
 | 207 | 2009-04-01|
 |  72 | 2009-07-00|
 |  73 | 2009-07-00|
 |  82 | 2009-07-00|
 | 209 | 2009-10-01|
 | 211 | 2009-10-01|
 | 213 | 2010-03-01|
 +-+---+
 27 rows in set (0.00 sec)

 thanks,

 Rotsen :-)




-- 
Distributed Computing stats
http://stats.free-dc.org


Re: mysql big table select speed

2008-09-24 Thread Phil
Just a wild guess but, did you perhaps change the filesystem to a
journalling filsystem when moving to the different server?

I once accidently moved my database from an ext2 to an ext3 partition and it
took me a while to figure out the degradation of queries..

Phil

On Wed, Sep 24, 2008 at 6:16 PM, Carles Pina i Estany [EMAIL PROTECTED]wrote:


 Hello,

 I have a database with a big table: 350 million records. The table
 is an ISAM table, very simple:

 mysql describe stadistics;
 +-+--+--+-+-++
 | Field   | Type | Null | Key | Default | Extra
 |
 +-+--+--+-+-++
 | id  | int(11) unsigned | NO   | PRI | NULL| auto_increment
 |
 | param_name  | smallint(11) | NO   | | |
 |
 | param_value | smallint(6)  | YES  | | NULL|
 |
 | date| datetime | NO   | MUL | |
 |
 +-+--+--+-+-++
 4 rows in set (0.00 sec)

 mysql

 I had this database in one server and I moved to another server and now
 the queries are slower (from 12-14 seconds the query that I will show to
 2 min. 50 seconds). Servers hardware are quite similar, and servers
 software installation are similar too (Debian, ext3).

 Mysql version:
 mysql select version();
 +--+
 | version()|
 +--+
 | 5.0.32-Debian_7etch6-log |
 +--+
 1 row in set (0.00 sec)

 While I'm doing this select:
 select count(*) from stadistics where date > '2008-09-01' and
 date < '2008-09-05' and param_name=124 and param_value=0;
 (very simple)

 In the explain select there isn't any surprise:
 mysql> explain select count(*) from stadistics where date > '2008-09-01'
 and date < '2008-09-02' and param_name=124 and param_value=0;

 ++-++---+---++-+--+-+-+
 | id | select_type | table  | type  | possible_keys | key|
 key_len | ref  | rows| Extra   |

 +-++---+---++-+--+-+-+
 |  1 | SIMPLE  | stadistics | range | date_index| date_index | 8
 | NULL | 1561412 | Using where |

 ++-++---+---++-+--+-+-+
 1 row in set (0.00 sec)

 Well, maybe somebody doesn't like the rows value (1561412) but it's what
 we have :-)

 Checking vmstat 1 in the new server doing the query is:

 procs ---memory-- ---swap-- -io -system-- cpu
  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
  1  1 36 374684  79952 26163600  1524 0  480  100  1  3 50 46
  0  1 36 372760  79952 26291200  1300 0  488   74  1  2 50 47
  0  2 36 370764  79956 26450800  154016  559  258  1  3 49 48
  0  1 36 368580  79956 26590400  1468 0 1211 1681  7 11 36 46
  0  2 36 367308  79964 26689600   944   236  575  463  3  3 40 56
  0  1 36 365076  79964 26855200  1584 0  493   85  1  3 50 46
  0  1 36 363320  79964 26985200  128416  471   80  1  2 50 47
  0  2 36 361112  79968 27142000  158416  530  221  2  2 44 53


 This is very confusing for me! The CPU is in IO-wait state 50% of the
 time, yet the 'bi' column is very low. For this hard disk I can reach much
 higher 'bi' values (doing an hdparm, for example):
  1  1 36  73124 136968 45116400 56444 0 1140  977  0  6 58 35
  1  0 36  12612 196792 45076000 72704 0 1873 2273  0 10 48 41
  0  1 36   9304 211072 43965600 71552   248 1481 1609  0 11 43 45

 Same query in the same database but in the old server is:
  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
  4  0 48  14268  58756 72480800 1306828  594 1042 54 27 14  5
  2  0 48  15596  58304 72410400 12068   196  578  754 70 24  0  5
 38  0 48  17564  55448 71560400  6320   248  966 1731 58 39  0  2
  8  0 48  22536  51856 71616800  3796 0  933 3765 71 28  0  0
  2  0 48  23808  51868 72308400  6992 0  550 2959 74 21  0  5
  2  0 48  14932  51880 73217200  9080   200  525  409 64 20  0 16
  2  0 48  13680  51576 73415600 1072432 1263 1577 70 27  0  3

 Here we have a better execution time, bi is higher, wa is lower. Also, sy
 is
 higher...

 Both systems has the database in a ext3 partition. In the new server I
 stopped the services and blocked writes to that table to avoid problems from
 outside.

 Hdparm results are:
 hdparm -tT /dev/sda

 /dev/sda:
  Timing cached reads:   2262 MB in  2.00 seconds = 1131.52 MB/sec
  Timing buffered disk reads:  210 MB

Re: convert week of the year into a date string

2008-08-20 Thread Phil
I did something similar to this recently. I ended up using the following

select date_sub(curdate(), interval(dayofweek(curdate()) + (($week - week) *
7) - 1) DAY) as mydate.

This was in PHP, and ahead of time I set $week via select week(curdate()).

It could easily be extended with year.
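As a cross-check outside MySQL, Python's `datetime` exposes the ISO-8601 week mapping directly (note this is the ISO "first Thursday" definition discussed later in the thread; MySQL's WEEK() modes may disagree, so verify against your server's settings):

```python
from datetime import date

# Monday of ISO week 34 of 2008 (date.fromisocalendar needs Python 3.8+)
monday = date.fromisocalendar(2008, 34, 1)
print(monday)
# Round-trip: the date maps back to the same (year, week, weekday)
year, week, weekday = monday.isocalendar()
print(year, week, weekday)
```

Comparing this against SELECT week(...) output for a handful of dates is a quick way to confirm which week-numbering definition your MySQL configuration is using.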

Phil

On Wed, Aug 20, 2008 at 5:31 AM, Joerg Bruehe [EMAIL PROTECTED] wrote:

 Pintér Tibor wrote:


 Ananda Kumar írta:

 Hi All,
 I think i worked on this and found the results.


 I did the below.

 1. Multiplied the week_of_the_year with 7 (7 days per week), to get the
 total number of days from begning of the year.

 2. used mysql function makedate
   makedate(year,number of days from the start of the year)
  makedate(2008,224)
 select makedate(2008,224);
 ++
 | makedate(2008,224) |
 ++
 | 2008-08-11 |


  this is definitely wrong, since you don't care about the fact that the
  first day of the year is not always Monday


 I agree it is wrong, but for a slightly different reason:
   What is the definition of week of year ?
 One problem is that the weekday of January 1 varies, the other is that the
 definition of week may not be universal (does it start with Sunday or
 Monday ?).

 I know of one widespread definition that (AFAIR) is (loosely)
 The first week which has more than half of its days in a given year is
 called 'week 1' of that year.
 If you take Sunday as the start of the week, this translates to
 Week 1 is the week which contains the first Wednesday of a year.
 (If your week starts Monday, the first Thursday determines it.)

 There is another definition that (loosely) says
 The first week which has all its days in a given year is called 'week 1'
 of that year.
 Again, it is a separate question whether your weeks start Sunday or Monday.

 By both definitions, January 1 need not belong to week 1, it may belong to
 the last week of the previous year.

 See these lines quoted from Linux man date:

 ~ man date | grep week
   ...
   %g last two digits of year of ISO week number (see %G)
   %G year of ISO week number (see %V); normally useful only with
 %V
   %u day of week (1..7); 1 is Monday
   %U week number of year, with Sunday as first day of week
 (00..53)
   %V ISO week number, with Monday as first day of week (01..53)
   %w day of week (0..6); 0 is Sunday
   %W week number of year, with Monday as first day of week
 (00..53)


 According to Stevens (Advanced Programming in the Unix environment, page
 158), the %U and %W codes seem to use the all days (second)
 definition.

 I *guess* that the separate ISO reference implies that the ISO definition
 uses the more than half (first) definition, but I propose you check
 yourself.  From some references, I take it that is ISO 8601.


 To return to the original question:
 Sorry, I do not know a MySQL function to do that mapping.
 Your application language might offer something:

 From C, strftime() and/or strptime() might help.
 From Perl, I assume you can find something in CPAN.
 From other languages, I have no idea off-hand.


 But before coding anything, you have to check your definition of week
 number, there are several to choose from.


 Regards,
 Jörg

 --
 Joerg Bruehe,  MySQL Build Team,  [EMAIL PROTECTED]   (+49 30) 417 01 487
 Sun Microsystems GmbH,   Sonnenallee 1,   D-85551 Kirchheim-Heimstetten
 Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
 Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Help build our city at http://free-dc.myminicity.com !


Major Performance Degradation after replacing Hard Drive

2008-07-21 Thread Phil
Hi All,


Given a fairly simple table as follows

CREATE TABLE `common_userx2` (
  `t_proj` char(6) default NULL,
  `t_id` int(11) NOT NULL default '0',
  `t_nick` varchar(50) character set latin1 collate latin1_bin NOT NULL
default '',
  `t_country` varchar(50) NOT NULL default '',
  `t_cpid` varchar(50) NOT NULL default '',
  `t_url` varchar(50) default NULL,
  `t_create_date` int(11) default NULL,
  `t_create_time` bigint(20) NOT NULL,
  `t_has_profile` char(1) NOT NULL,
  `t_team0` int(11) default NULL,
  `t_metric1` double(20,6) NOT NULL default '0.00',
  `t_metric2` double NOT NULL default '0',
  `t_metric3` double NOT NULL default '0',
  `t_metric4` double default NULL,
  `t_active` char(1) NOT NULL default '',
  `t_rev_metric1` double(20,6) NOT NULL default '100.00',
  `t_projrank0` int(11) default NULL,
  `t_rev_metric2` double(20,6) NOT NULL default '100.00',
  `t_racrank0` int(11) default NULL,
  `t_teamrank0` int(11) default NULL,
  `t_countryrank0` int(11) default NULL,
  `t_createdaterank0` int(11) default NULL,
  PRIMARY KEY  (`t_id`),
  KEY `prank` (`t_rev_metric1`,`t_id`),
  KEY `trank` (`t_team0`,`t_rev_metric1`,`t_id`),
  KEY `countryrank` (`t_country`,`t_rev_metric1`,`t_id`),
  KEY `createdaterank` (`t_create_date`,`t_rev_metric1`,`t_id`),
  KEY `racrank` (`t_rev_metric2`,`t_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

I have a ranking update statement as follows

set @rank = 0;
update common_userx2 set t_projrank0 = @rank:=@rank+1 order by
t_rev_metric1,t_id;

For the largest case this has close to 1M rows.
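(The @rank user-variable counter is a MySQL idiom; on engines with window functions the same backfill can be a single statement. A sketch in SQLite, trimmed to the three columns involved; the correlated form shown is O(n²), so it is illustrative only, not something to run on 1M rows:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE common_userx2 (
    t_id INTEGER PRIMARY KEY, t_rev_metric1 REAL, t_projrank0 INTEGER)""")
cur.executemany("INSERT INTO common_userx2 (t_id, t_rev_metric1) VALUES (?,?)",
                [(1, 5.0), (2, 1.0), (3, 3.0)])
# Number the rows by (t_rev_metric1, t_id) and write the position back,
# mirroring: set @rank = 0;
#            update ... set t_projrank0 = @rank:=@rank+1 order by ...
cur.execute("""
UPDATE common_userx2 SET t_projrank0 = (
  SELECT rk FROM (
    SELECT t_id, ROW_NUMBER() OVER (ORDER BY t_rev_metric1, t_id) AS rk
    FROM common_userx2
  ) AS r WHERE r.t_id = common_userx2.t_id
)""")
ranks = cur.execute(
    "SELECT t_id, t_projrank0 FROM common_userx2 ORDER BY t_projrank0").fetchall()
print(ranks)
```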

For weeks it was taking around 10 seconds to do this. Yesterday I replaced
the main data drive in the machine with a faster SATA Raptor drive. No
problems occurred, but since then (and the subsequent reboot of the machine)
this particular query is taking 45 minutes!

I can't, for the life of me figure out why performance would be degraded so
much. At first I thought perhaps it might be just disk/mysql caching but the
performance has not increased any in subsequent runs.

Any advice on where to look ?

Phil

-- 
Help build our city at http://free-dc.myminicity.com !


Re: Major Performance Degradation after replacing Hard Drive

2008-07-21 Thread Phil
Nothing else running and no queries go against that table, it's effectively
created just for this, so I would expect the table lock.

Show (full) processlist has nothing but this running..

Confirmed the faster disks by copying 5Gb files between two of the same type
of disk (I installed two of them). 2xfaster than previous disks.

my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
key_buffer_size=3072M
max_allowed_packet=16M
thread_stack=128K
thread_cache_size=64
thread_concurrency=8
sort_buffer_size=32M
join_buffer_size=3M
read_buffer_size=16M
query_cache_size=64M
query_cache_limit=8M
table_cache=300
max_connections=500
max_heap_table_size=1024M
tmp_table_size=1024M
myisam_sort_buffer_size=128M
wait_timeout=3000

set-variable=long_query_time=6
log-slow-queries=/var/log/mysql-slow-queries.log

8Gb Ram on this machine which is an intel quad core.

Anything else I'm missing? It's *possible* a colleague had changed the
my.cnf in the last few weeks and mysqld hadn't been restarted, but I don't
see anything obvious in there and he can't remember.

:(



On Mon, Jul 21, 2008 at 7:08 AM, Ananda Kumar [EMAIL PROTECTED] wrote:

 Is there any other job running while the update is happening. Because,
 myisam does a table level lock. Please check the show full processlist.

 Also run mysqladmin -uroot -pxxx status. This would write lock information
 into the machine.err log file. Check in this file also if there is any
 locking happening.

 R u sure, this disk is a FASTER disk then the earlier one.




 On 7/21/08, Phil [EMAIL PROTECTED] wrote:

 Hi All,


 Given a fairly simple table as follows

 CREATE TABLE `common_userx2` (
 `t_proj` char(6) default NULL,
 `t_id` int(11) NOT NULL default '0',
 `t_nick` varchar(50) character set latin1 collate latin1_bin NOT NULL
 default '',
 `t_country` varchar(50) NOT NULL default '',
 `t_cpid` varchar(50) NOT NULL default '',
 `t_url` varchar(50) default NULL,
 `t_create_date` int(11) default NULL,
 `t_create_time` bigint(20) NOT NULL,
 `t_has_profile` char(1) NOT NULL,
 `t_team0` int(11) default NULL,
 `t_metric1` double(20,6) NOT NULL default '0.00',
 `t_metric2` double NOT NULL default '0',
 `t_metric3` double NOT NULL default '0',
 `t_metric4` double default NULL,
 `t_active` char(1) NOT NULL default '',
 `t_rev_metric1` double(20,6) NOT NULL default '100.00',
 `t_projrank0` int(11) default NULL,
 `t_rev_metric2` double(20,6) NOT NULL default '100.00',
 `t_racrank0` int(11) default NULL,
 `t_teamrank0` int(11) default NULL,
 `t_countryrank0` int(11) default NULL,
 `t_createdaterank0` int(11) default NULL,
 PRIMARY KEY  (`t_id`),
 KEY `prank` (`t_rev_metric1`,`t_id`),
 KEY `trank` (`t_team0`,`t_rev_metric1`,`t_id`),
 KEY `countryrank` (`t_country`,`t_rev_metric1`,`t_id`),
 KEY `createdaterank` (`t_create_date`,`t_rev_metric1`,`t_id`),
 KEY `racrank` (`t_rev_metric2`,`t_id`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1

 I have a ranking update statement as follows

 set @rank = 0;
 update common_userx2 set t_projrank0 = @rank:=@rank+1 order by
 t_rev_metric1,t_id;

 For the largest case this has close to 1M rows.

 For weeks it was taking around 10seconds to do this. Yesterday I replaced
 the main data drive in the machine with a faster SATA Raptor drive. No
 problems occurred, but since then (and the subsequent reboot of the
 machine)
 this particular query is taking 45 minutes!

 I can't, for the life of me figure out why performance would be degraded
 so
 much. At first I thought perhaps it might be just disk/mysql caching but
 the
 performance has not increased any in subsequent runs.

 Any advice on where to look ?

 Phil

 --
 Help build our city at http://free-dc.myminicity.com !





-- 
Help build our city at http://free-dc.myminicity.com !


Re: Major Performance Degradation after replacing Hard Drive

2008-07-21 Thread Phil
Possibly..

top - 07:52:58 up 18:04,  3 users,  load average: 4.98, 4.09, 3.20
Tasks: 165 total,   3 running, 162 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.0% us,  0.0% sy, 100.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0%
si
Cpu1  :  0.0% us,  0.3% sy,  0.0% ni,  0.0% id, 96.3% wa,  0.7% hi,  2.7% si
Cpu2  :  0.0% us,  0.3% sy, 99.7% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu3  :  0.0% us,  0.0% sy,  0.0% ni,  0.0% id, 100.0% wa,  0.0% hi,  0.0%
si
Mem:   8168104k total,  7694556k used,   473548k free,31040k buffers
Swap:  2008084k total,  160k used,  2007924k free,  6044284k cached

Phil


On Mon, Jul 21, 2008 at 9:41 AM, Ananda Kumar [EMAIL PROTECTED] wrote:

 when you run this update, what is the IO WAIT  from the top command.

 regards
 anandkl


 On 7/21/08, Phil [EMAIL PROTECTED] wrote:
 
  Nothing else running and no queries go against that table, it's
 effectively
  created just for this, so I would expect the table lock.
 
  Show (full) processlist has nothing but this running..
 
  Confirmed the faster disks by copying 5Gb files between two of the same
  type
  of disk (I installed two of them). 2xfaster than previous disks.
 
  my.cnf
 
  [mysqld]
  datadir=/var/lib/mysql
  socket=/var/lib/mysql/mysql.sock
  user=mysql
  # Default to using old password format for compatibility with mysql 3.x
  # clients (those using the mysqlclient10 compatibility package).
  old_passwords=1
  key_buffer_size=3072M
  max_allowed_packet=16M
  thread_stack=128K
  thread_cache_size=64
  thread_concurrency=8
  sort_buffer_size=32M
  join_buffer_size=3M
  read_buffer_size=16M
  query_cache_size=64M
  query_cache_limit=8M
  table_cache=300
  max_connections=500
  max_heap_table_size=1024M
  tmp_table_size=1024M
  myisam_sort_buffer_size=128M
  wait_timeout=3000
 
  set-variable=long_query_time=6
  log-slow-queries=/var/log/mysql-slow-queries.log
 
  8Gb Ram on this machine which is an intel quad core.
 
  Anything else I'm missing? It's *possible* a colleague had changed the
  my.cnf in the last few weeks and mysqld hadn't been restarted, but I
 don't
  see anything obvious in there and he can't remember.
 
  :(
 
 
 
  On Mon, Jul 21, 2008 at 7:08 AM, Ananda Kumar [EMAIL PROTECTED] wrote:
 
   Is there any other job running while the update is happening. Because,
   myisam does a table level lock. Please check the show full
 processlist.
  
   Also run mysqladmin -uroot -pxxx status. This would write lock
  information
   into the machine.err log file. Check in this file also if there is any
   locking happening.
  
   R u sure, this disk is a FASTER disk then the earlier one.
  
  
  
  
   On 7/21/08, Phil [EMAIL PROTECTED] wrote:
  
   Hi All,
  
  
   Given a fairly simple table as follows
  
   CREATE TABLE `common_userx2` (
   `t_proj` char(6) default NULL,
   `t_id` int(11) NOT NULL default '0',
   `t_nick` varchar(50) character set latin1 collate latin1_bin NOT NULL
   default '',
   `t_country` varchar(50) NOT NULL default '',
   `t_cpid` varchar(50) NOT NULL default '',
   `t_url` varchar(50) default NULL,
   `t_create_date` int(11) default NULL,
   `t_create_time` bigint(20) NOT NULL,
   `t_has_profile` char(1) NOT NULL,
   `t_team0` int(11) default NULL,
   `t_metric1` double(20,6) NOT NULL default '0.00',
   `t_metric2` double NOT NULL default '0',
   `t_metric3` double NOT NULL default '0',
   `t_metric4` double default NULL,
   `t_active` char(1) NOT NULL default '',
   `t_rev_metric1` double(20,6) NOT NULL default '100.00',
   `t_projrank0` int(11) default NULL,
   `t_rev_metric2` double(20,6) NOT NULL default '100.00',
   `t_racrank0` int(11) default NULL,
   `t_teamrank0` int(11) default NULL,
   `t_countryrank0` int(11) default NULL,
   `t_createdaterank0` int(11) default NULL,
   PRIMARY KEY  (`t_id`),
   KEY `prank` (`t_rev_metric1`,`t_id`),
   KEY `trank` (`t_team0`,`t_rev_metric1`,`t_id`),
   KEY `countryrank` (`t_country`,`t_rev_metric1`,`t_id`),
   KEY `createdaterank` (`t_create_date`,`t_rev_metric1`,`t_id`),
   KEY `racrank` (`t_rev_metric2`,`t_id`)
   ) ENGINE=MyISAM DEFAULT CHARSET=latin1
  
   I have a ranking update statement as follows
  
   set @rank = 0;
   update common_userx2 set t_projrank0 = @rank:=@rank+1 order by
   t_rev_metric1,t_id;
  
   For the largest case this has close to 1M rows.
  
   For weeks it was taking around 10seconds to do this. Yesterday I
  replaced
   the main data drive in the machine with a faster SATA Raptor drive. No
   problems occurred, but since then (and the subsequent reboot of the
   machine)
   this particular query is taking 45 minutes!
  
   I can't, for the life of me figure out why performance would be
 degraded
   so
   much. At first I thought perhaps it might be just disk/mysql caching
 but
   the
   performance has not increased any in subsequent runs.
  
   Any advice on where to look ?
  
   Phil
  
   --
   Help build our city at http://free-dc.myminicity.com !
  
  
  
 
 
  --
  Help

Re: Insert ... select ... On Duplicate Update Question

2008-07-21 Thread Phil
So just use REPLACE instead of INSERT...

http://dev.mysql.com/doc/refman/5.0/en/replace.html
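One caveat worth keeping in mind: REPLACE is delete-then-insert, not update-in-place, so columns you don't supply revert to their defaults (and DELETE triggers fire, foreign-key cascades run). A quick demonstration, sketched in SQLite, whose REPLACE shares these semantics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE t (
    k INTEGER PRIMARY KEY, v TEXT, extra TEXT DEFAULT 'default')""")
cur.execute("INSERT INTO t VALUES (1, 'old', 'precious')")
# The old row is deleted and a fresh one inserted in its place:
cur.execute("REPLACE INTO t (k, v) VALUES (1, 'new')")
row = cur.execute("SELECT k, v, extra FROM t").fetchone()
print(row)   # 'precious' has been lost; extra is back to its default
```

If preserving unlisted columns matters, ON DUPLICATE KEY UPDATE (or an upsert) is the safer tool despite the verbosity.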

On Mon, Jul 21, 2008 at 11:44 AM, mos [EMAIL PROTECTED] wrote:

 At 08:23 PM 7/20/2008, Perrin Harkins wrote:

 On Sun, Jul 20, 2008 at 12:12 AM, mos [EMAIL PROTECTED] wrote:
  Is there a way to get Insert ... select  ... On Duplicate Update to
 update
  the row with the duplicate key?

 That's what it does.

  Why can't it do this?

 What makes you think it can't?

 - Perrin


 Perrin,
   I can't specify all of the columns in a Set statement in the
 OnDuplicate clause because I don't know what the column names are and there
 could be 100 columns. I'd like to use something like:

 insert into Table2 select * from table1 on duplicate key update;

 but this gives me a syntax error.

 Error Code : 1064
 You have an error in your SQL syntax; check the manual that corresponds to
 your MySQL server version for the right syntax to use near '' at line 1

 So it is looking for an Update expression. I'd like it to update all the
 columns in the Select statement to the row with the matching key. After all,
 this is what the statement was trying to do in the first place. I don't see
 why I have to explicitly specify all of the value assignments in the On
 Duplicate phrase over again.

 Mike
 MySQL 5.0.24

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Help build our city at http://free-dc.myminicity.com !


Re: Major Performance Degradation after replacing Hard Drive

2008-07-21 Thread Phil
Thanks all for the suggestions. I *think* I've finally figured it out.

At the end of the day I think it was because the new drives were formatted
with ext3. I recreated them in ext2 and performance is back to where it was
on those queries..

I hadn't realised that ext3 would give that much of a degradation with
journaling!

This page gives a rough indication of the time spent in various queries on
one of my runs.

http://stats.free-dc.org/stats.php?page=statsrun&proj=fah

You can see after 12pm on Sunday 20th, when I installed the drives as ext3,
and then 8pm tonight (PST), when I put them back to ext2. Users RankRAC and
rankinteams were dramatically different.

Phil



On Mon, Jul 21, 2008 at 12:39 PM, Wm Mussatto [EMAIL PROTECTED] wrote:

 On Mon, July 21, 2008 09:14, Brent Baisley wrote:
  Copying 5GB files shows you what kind of performance you would get for
  working with say video, or anything with large contiguous files.
  Database access tends to be random, so you want a drive with faster
  random access, not streaming speed. Try copying thousands of small
  files and compare the speeds.
 
  One odd thing to check is if the old drive supports command queueing
  and the new one does not. I assume that are both SATA drives. All SCSI
  drives support command queueing and it can make a huge difference
  depending on access patterns.
 Also that its turned ON.
  Brent
 
  On Mon, Jul 21, 2008 at 8:42 AM, Phil [EMAIL PROTECTED] wrote:
  Nothing else running and no queries go against that table, it's
  effectively
  created just for this, so I would expect the table lock.
 
  Show (full) processlist has nothing but this running..
 
  Confirmed the faster disks by copying 5Gb files between two of the same
  type
  of disk (I installed two of them). 2xfaster than previous disks.
 
  my.cnf
 
  [mysqld]
  datadir=/var/lib/mysql
  socket=/var/lib/mysql/mysql.sock
  user=mysql
  # Default to using old password format for compatibility with mysql 3.x
  # clients (those using the mysqlclient10 compatibility package).
  old_passwords=1
  key_buffer_size=3072M
  max_allowed_packet=16M
  thread_stack=128K
  thread_cache_size=64
  thread_concurrency=8
  sort_buffer_size=32M
  join_buffer_size=3M
  read_buffer_size=16M
  query_cache_size=64M
  query_cache_limit=8M
  table_cache=300
  max_connections=500
  max_heap_table_size=1024M
  tmp_table_size=1024M
  myisam_sort_buffer_size=128M
  wait_timeout=3000
 
  set-variable=long_query_time=6
  log-slow-queries=/var/log/mysql-slow-queries.log
 
  8Gb Ram on this machine which is an intel quad core.
 
  Anything else I'm missing? It's *possible* a colleague had changed the
  my.cnf in the last few weeks and mysqld hadn't been restarted, but I
  don't
  see anything obvious in there and he can't remember.
 
  :(
 
 
 
  On Mon, Jul 21, 2008 at 7:08 AM, Ananda Kumar [EMAIL PROTECTED]
 wrote:
 
  Is there any other job running while the update is happening. Because,
  myisam does a table level lock. Please check the show full
  processlist.
 
  Also run mysqladmin -uroot -pxxx status. This would write lock
  information
  into the machine.err log file. Check in this file also if there is any
  locking happening.
 
  R u sure, this disk is a FASTER disk then the earlier one.
 
 
 
 
  On 7/21/08, Phil [EMAIL PROTECTED] wrote:
 
  Hi All,
 
 
  Given a fairly simple table as follows
 
  CREATE TABLE `common_userx2` (
  `t_proj` char(6) default NULL,
  `t_id` int(11) NOT NULL default '0',
  `t_nick` varchar(50) character set latin1 collate latin1_bin NOT NULL
  default '',
  `t_country` varchar(50) NOT NULL default '',
  `t_cpid` varchar(50) NOT NULL default '',
  `t_url` varchar(50) default NULL,
  `t_create_date` int(11) default NULL,
  `t_create_time` bigint(20) NOT NULL,
  `t_has_profile` char(1) NOT NULL,
  `t_team0` int(11) default NULL,
  `t_metric1` double(20,6) NOT NULL default '0.00',
  `t_metric2` double NOT NULL default '0',
  `t_metric3` double NOT NULL default '0',
  `t_metric4` double default NULL,
  `t_active` char(1) NOT NULL default '',
  `t_rev_metric1` double(20,6) NOT NULL default '100.00',
  `t_projrank0` int(11) default NULL,
  `t_rev_metric2` double(20,6) NOT NULL default '100.00',
  `t_racrank0` int(11) default NULL,
  `t_teamrank0` int(11) default NULL,
  `t_countryrank0` int(11) default NULL,
  `t_createdaterank0` int(11) default NULL,
  PRIMARY KEY  (`t_id`),
  KEY `prank` (`t_rev_metric1`,`t_id`),
  KEY `trank` (`t_team0`,`t_rev_metric1`,`t_id`),
  KEY `countryrank` (`t_country`,`t_rev_metric1`,`t_id`),
  KEY `createdaterank` (`t_create_date`,`t_rev_metric1`,`t_id`),
  KEY `racrank` (`t_rev_metric2`,`t_id`)
  ) ENGINE=MyISAM DEFAULT CHARSET=latin1
 
  I have a ranking update statement as follows
 
  set @rank = 0;
  update common_userx2 set t_projrank0 = @rank:=@rank+1 order by
  t_rev_metric1,t_id;
 
  For the largest case this has close to 1M rows.
 
  For weeks it was taking around 10seconds to do this. Yesterday I

Insert into...on duplicate key problem

2008-07-09 Thread Phil
Is it possible to do an INSERT INTO with a subselect and GROUP BY, plus an
additional ON DUPLICATE KEY UPDATE?

CREATE TABLE NEW_TABLE (
 `a` varchar(10),
 `b` double
) engine=MyISAM;


INSERT INTO NEW_TABLE (select old.x,sum(old.y) from OLD_TABLE old group by
old.a)
 on duplicate key
 update b=sum(old.y);

I get invalid group by clause on that.

Currently I achieve the same thing using two separate queries, but wondered
if I could consolidate as they take ~ 30mins in total (much more complex
tables).

Anyway, more curious than anything on why it's disallowed and if it's just
something silly I'm missing.

Phil


-- 
Help build our city at http://free-dc.myminicity.com !


Re: Insert into...on duplicate key problem

2008-07-09 Thread Phil
Sorry, that was just a typo,

should have been

INSERT INTO NEW_TABLE (select old.x,sum(old.y) from OLD_TABLE old group by
old.x)
  on duplicate key
  update b=sum(old.y);

but this gives

ERROR  (HY000): Invalid use of group function

INSERT INTO NEW_TABLE (select old.x,sum(old.y) from OLD_TABLE old group by
old.x)

works fine...

Any ideas ?

Phil
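The "Invalid use of group function" above comes from putting SUM() inside the ON DUPLICATE KEY branch, where aggregates are not allowed. The usual workaround is to aggregate in a derived table first and have the update branch reference the incoming row (in MySQL, via VALUES(b)). A sketch of the same shape using SQLite's upsert syntax, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE old_table (x TEXT, y REAL)")
cur.execute("CREATE TABLE new_table (a TEXT PRIMARY KEY, b REAL)")
cur.executemany("INSERT INTO old_table VALUES (?,?)",
                [("p", 1), ("p", 2), ("q", 5)])
cur.execute("INSERT INTO new_table VALUES ('p', 99)")   # pre-existing row
# Aggregate first in a derived table, then upsert its rows; the conflict
# branch references the incoming row (excluded.b), not the aggregate.
# (The "WHERE true" is SQLite's required disambiguation for SELECT+upsert.)
cur.execute("""
INSERT INTO new_table (a, b)
SELECT * FROM (SELECT x, SUM(y) FROM old_table GROUP BY x)
WHERE true
ON CONFLICT(a) DO UPDATE SET b = excluded.b
""")
result = sorted(cur.execute("SELECT a, b FROM new_table").fetchall())
print(result)
```

The pre-existing 'p' row is overwritten with the freshly aggregated sum, and 'q' is inserted as new.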

On Wed, Jul 9, 2008 at 1:07 PM, Ananda Kumar [EMAIL PROTECTED] wrote:

 you should say group by old.x and not old.a


 On 7/9/08, Arthur Fuller [EMAIL PROTECTED] wrote:

 I think that you need to select old.a otherwise you cannot group by it.

 Arthur

 On 7/9/08, Phil [EMAIL PROTECTED] wrote:
 
  Is it possible to do an insert into with subselect and group by with an
  additional on duplicate insert ?
 
  CREATE TABLE NEW_TABLE (
`a` varchar(10),
`b` double
  ) engine=MyISAM;
 
 
  INSERT INTO NEW_TABLE (select old.x,sum(old.y) from OLD_TABLE old group
 by
  old.a)
on duplicate key
update b=sum(old.y);
 
  I get invalid group by clause on that.
 
  Currently I achieve the same thing using two seperate queries, but
 wondered
  if I could consolidate as they take ~ 30mins in total (much more complex
  tables).
 
  Anyway, more curious than anything on why it's disallowed and if it's
 just
  something silly I'm missing.
 
  Phil
 





-- 
Help build our city at http://free-dc.myminicity.com !


Re: Joining a table to itself

2008-07-08 Thread Phil
I think you have just got your table names confused. Try this one

SELECT

 cats.CatId,

 cats.cat As cat,

 cats1.catid AS catid1,

 cats1.cat As cat1,

 cats2.catid AS catid2,

 cats2.cat AS cat2,

 cats3.catid AS catid3,

 cats3.cat AS cat3

FROM

 vb_ldcats as cats

LEFT JOIN vb_ldcats As cats1 ON cats1.ParentId = cats.catid

LEFT JOIN vb_ldcats As cats2 ON cats2.ParentId = cats1.catid

LEFT JOIN vb_ldcats As cats3 ON cats3.ParentId = cats2.catid

Phil



On Tue, Jul 8, 2008 at 3:45 PM, Jim MacDiarmid [EMAIL PROTECTED]
wrote:

 I'm hoping someone can help me with this.  I have a table of categories
 that
 I'm trying to join to itself, but I keep getting the error unknown column:
 Cats1.parentid  in on clause.





 Here is the SQL for the table:

 CREATE TABLE `vb_ldcats` (

  `catid`int(10) AUTO_INCREMENT NOT NULL,

  `cattypeid`int(10) NOT NULL,

  `cat`  varchar(50),

  `parentid` int,

  `relatedid`int,

  `description`  text,

  /* Keys */

  PRIMARY KEY (`catid`)

 ) ENGINE = MyISAM;





 And here is the query:



 SELECT

  cats.CatId,

  cats.cat As cat,

  cats1.catid AS catid1,

  cats1.cat As cat1,

  cats2.catid AS catid2,

  cats2.cat AS cat2,

  cats3.catid AS catid3,

  cats3.cat AS cat3

 FROM

  vb_ldcats as cats3

 LEFT JOIN vb_ldcats As cats ON cats1.ParentId = cats.catid

 LEFT JOIN vb_ldcats As cats1 ON cats2.ParentId = cats1.catid

 LEFT JOIN vb_ldcats As cats2 ON cats3.ParentId = cats2.catid



 Thanks in advance!  J



 Jim






-- 
Help build our city at http://free-dc.myminicity.com !


Filesort on query

2008-06-09 Thread Phil
I have a table as follows containing approx 1.5M rows. I pull data from it
based on the CPID and insert into an 'overall' table. After breaking down
the statement somewhat, an explain still gives a filesort. Is there any way
to avoid a filesort for this query ?

mysql> explain select a.cpid, sum(a.metric1), sum(a.metric2), 0, 0, a.country,
min(a.create_date), min(a.create_time), a.nick from boinc_user a group by
a.cpid order by a.cpid;
+----+-------------+-------+------+---------------+------+---------+------+---------+---------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows    | Extra                           |
+----+-------------+-------+------+---------------+------+---------+------+---------+---------------------------------+
|  1 | SIMPLE      | a     | ALL  | NULL          | NULL | NULL    | NULL | 1443130 | Using temporary; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+---------+---------------------------------+
1 row in set (0.00 sec)


CREATE TABLE `boinc_user` (
  `proj` char(6) NOT NULL,
  `id` int(11) NOT NULL default '0',
  `stat_date` date NOT NULL default '0000-00-00',
  `nick` varchar(50) character set latin1 collate latin1_bin NOT NULL
default '',
  `country` varchar(50) NOT NULL default '',
  `cpid` varchar(50) NOT NULL default '',
  `url` varchar(50) default NULL,
  `create_date` int(11) NOT NULL,
  `create_time` bigint(20) NOT NULL,
  `has_profile` char(1) NOT NULL,
  `team0` int(11) default NULL,
  `team1` int(11) default NULL,
  `metric1` double NOT NULL default '0',
  `metric2` double NOT NULL default '0',
  `metric3` double NOT NULL default '0',
  `metric4` double default NULL,
  `today` double default '0',
  PRIMARY KEY  (`proj`,`id`),
  KEY `trank` (`proj`,`team0`,`metric1`,`id`),
  KEY `forstats` (`proj`,`metric1`,`id`),
  KEY `name` (`proj`,`id`),
  KEY `racrank` (`proj`,`metric2`,`id`),
  KEY `cpid` (`cpid`,`proj`),
  KEY `today` (`proj`,`today`,`id`),
  KEY `prank` (`proj`,`projrank0`,`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

Regards

Phil
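One way to avoid the temporary table and filesort is a covering index on the GROUP BY column plus the aggregated columns, so the grouping can walk the index in order. A small sketch in SQLite via Python's sqlite3 (the thread is MySQL, whose planner treats a covering index similarly; the table and rows here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE boinc_user (cpid TEXT, metric1 REAL);
INSERT INTO boinc_user VALUES ('a', 1), ('a', 2), ('b', 5);
-- Covering index: GROUP BY cpid can walk it in order, no sort pass needed.
CREATE INDEX by_cpid ON boinc_user (cpid, metric1);
""")

query = ("SELECT cpid, SUM(metric1) FROM boinc_user "
         "GROUP BY cpid ORDER BY cpid")
totals = conn.execute(query).fetchall()

# SQLite's plan typically reads "SCAN boinc_user USING COVERING INDEX by_cpid".
plan = " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + query))
```

Whether this beats a full scan plus sort depends on row count and index width, which is the trade-off Gerald raises in the reply below.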


-- 
Help build our city at http://free-dc.myminicity.com !


Re: Filesort on query

2008-06-09 Thread Phil
Any settings which would let mysql do this in memory ? There is 8Gb Ram on
the box dedicated for mysql.

my.cnf snippet as follows

old_passwords=1
key_buffer_size=2048M
max_allowed_packet=16M
thread_stack=128K
thread_cache_size=64
thread_concurrency=8
sort_buffer_size=32M
join_buffer_size=3M
read_buffer_size=16M
query_cache_size=64M
query_cache_limit=8M
table_cache=100
max_connections=250
max_heap_table_size=64M
myisam_sort_buffer_size=64M
wait_timeout=3000


On Mon, Jun 9, 2008 at 3:54 PM, Gerald L. Clark 
[EMAIL PROTECTED] wrote:

 Phil wrote:

 I have a table as follows containing approx 1.5M rows. I pull data from it
 based on the CPID and insert into an 'overall' table. After breaking down
 the statement somewhat, an explain still gives a filesort. Is there any
 way
 to avoid a filesort for this query ?

 mysql explain select a.cpid ,sum(a.metric1)
 ,sum(a.metric2),0,0,a.country,min(a.create_date),min(a.create_time),a.nick
 from boinc_user a group by a.cpid order by a.cpid;

 ++-+---+--+---+--+-+--+-+-+
 | id | select_type | table | type | possible_keys | key  | key_len | ref
  |
 rows| Extra   |

 ++-+---+--+---+--+-+--+-+-+
 |  1 | SIMPLE  | a | ALL  | NULL  | NULL | NULL| NULL
 |
 1443130 | Using temporary; Using filesort |

 ++-+---+--+---+--+-+--+-+-+
 1 row in set (0.00 sec)


 CREATE TABLE `boinc_user` (
  `proj` char(6) NOT NULL,
  `id` int(11) NOT NULL default '0',
  `stat_date` date NOT NULL default '0000-00-00',
  `nick` varchar(50) character set latin1 collate latin1_bin NOT NULL
 default '',
  `country` varchar(50) NOT NULL default '',
  `cpid` varchar(50) NOT NULL default '',
  `url` varchar(50) default NULL,
  `create_date` int(11) NOT NULL,
  `create_time` bigint(20) NOT NULL,
  `has_profile` char(1) NOT NULL,
  `team0` int(11) default NULL,
  `team1` int(11) default NULL,
  `metric1` double NOT NULL default '0',
  `metric2` double NOT NULL default '0',
  `metric3` double NOT NULL default '0',
  `metric4` double default NULL,
  `today` double default '0',
  PRIMARY KEY  (`proj`,`id`),
  KEY `trank` (`proj`,`team0`,`metric1`,`id`),
  KEY `forstats` (`proj`,`metric1`,`id`),
  KEY `name` (`proj`,`id`),
  KEY `racrank` (`proj`,`metric2`,`id`),
  KEY `cpid` (`cpid`,`proj`),
  KEY `today` (`proj`,`today`,`id`),
  KEY `prank` (`proj`,`projrank0`,`id`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1

 Regards

 Phil



 With no where clause, and aggregate functions, it is faster to do a full
 table read, and the sort the aggregated results.
 --
 Gerald L. Clark
 Sr. V.P. Development
 Supplier Systems Corporation
 Unix  since 1982
 Linux since 1992




-- 
Help build our city at http://free-dc.myminicity.com !


Re: Incorrect information in file: './maindb/users.frm'

2008-06-04 Thread Phil
Just a very quick guess, but is the InnoDB engine running?

SHOW STATUS like '%inno%'

On Wed, Jun 4, 2008 at 6:44 PM, Stut [EMAIL PROTECTED] wrote:

 On 4 Jun 2008, at 23:10, Stut wrote:

 HELP!!

 Our database just died. SHOW TABLE STATUS shows the message in the status
 line for every table except the one that's MyISAM - the rest are InnoDB.

 Is there any way to rebuild the .frm files for the InnoDB tables?


 Can anyone help? I know I haven't waited very long for an answer but this
 is (obviously) a massive problem for us. Do I need to resort to restoring
 the last full backup or is there a way to rebuild the .frm files? I've
 googled my butt off but can't find anything related to this.

 Thanks.


 -Stut

 --
 http://stut.net/

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Help build our city at http://free-dc.myminicity.com !


Re: DESC index column

2008-05-21 Thread Phil
What I've done in the past is to create extra columns which contain the
reverse of a number/date used previously in an index.

So, for instance if it's a simple INT column (A) and you know the max would
be 999 for example, create an extra column and populate that with
(1000 - A) and use it as an ASC index.

Same can be done with dates.

Not always applicable, but it works and is fairly easy to implement.

Phil
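The trick above can be sketched concretely. Assuming a known ceiling of 999,999,999 as in the example, ordering ascending on the reverse column is equivalent to ordering descending on the original, so a plain ASC index on the reverse column stands in for the missing DESC index (SQLite via Python's sqlite3 for illustration):

```python
import sqlite3

MAXVAL = 1_000_000_000  # the thread's assumed ceiling on the value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, rev_a INTEGER)")
for a in (5, 17, 3, 99):
    conn.execute("INSERT INTO t VALUES (?, ?)", (a, MAXVAL - a))
conn.execute("CREATE INDEX t_rev ON t (rev_a)")  # plain ASC index

# Ascending on rev_a gives the same row order as descending on a.
asc_on_rev = [r[0] for r in conn.execute("SELECT a FROM t ORDER BY rev_a")]
plain_desc = [r[0] for r in conn.execute("SELECT a FROM t ORDER BY a DESC")]
```

Dates work the same way, e.g. storing the number of days remaining until some far-future date.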

On Tue, May 20, 2008 at 2:20 PM, Bof [EMAIL PROTECTED] wrote:

 Hi all -
 Is there a good workaround for mysql's lack of 'DESC'
 functionality when creating an index?

 I'm looking at migrating an Oracle RAC database to
 mysql (InnoDB or Cluster - testing both at the
 moment), and the Oracle database uses a lot of
 multi-column indexes with some columns indexed in
 descending order.

 If I can't emulate the descending index colums somehow
 it's likely to seriously impact performance and
 possibly derail the prospect of migration - help!

 cheers
 Iain
 --










 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Help build our city at http://free-dc.myminicity.com !


Re: DESC index column

2008-05-21 Thread Phil
yes, you'd have to alter the queries to use the new index. As I say it's
very application dependent and does not always apply, but you can normally
shoehorn any application to use it.

Phil

On Wed, May 21, 2008 at 9:22 AM, Bof [EMAIL PROTECTED] wrote:

 Hi Phil -

 Thanks for the suggestion. Might that involve possibly
 changing queries in the web application hitting the
 database so that it uses the new column, or would the
 indexing on the new column take care of speeding up
 the existing queries?

 cheers
 Iain
 --


 --- Phil [EMAIL PROTECTED] wrote:

  What I've done in the past is to create extra
  columns which contain the
  reverse of a number/date used previously in an
  index.
 
  So, for instance if it's a simple INT column (A) and
  you know the max would
  be 999 for example, create an extra column and
  populate that with
  (1000 - A) and use it as an ASC index.
 
  Same can be done with dates.
 
  Not always applicable, but it works and is fairly
  easy to implement.
 
  Phil
 
  On Tue, May 20, 2008 at 2:20 PM, Bof
  [EMAIL PROTECTED] wrote:
 
   Hi all -
   Is there a good workaround for mysql's lack of
  'DESC'
   functionality when creating an index?
  
   I'm looking at migrating an Oracle RAC database to
   mysql (InnoDB or Cluster - testing both at the
   moment), and the Oracle database uses a lot of
   multi-column indexes with some colums indexed in
   descending order.
  
   If I can't emulate the descending index colums
  somehow
   it's likely to seriously impact performance and
   possibly derail the prospect of migration - help!
  
   cheers
   Iain
   --
  
  
  
  
  
  
  
  
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
 
 http://lists.mysql.com/[EMAIL PROTECTED]
  
 
 
 
  --
  Help build our city at http://free-dc.myminicity.com
  !
 







-- 
Help build our city at http://free-dc.myminicity.com !


Re: Performance

2008-04-22 Thread Phil
I'm sure that if you created an index on
client_id, client_unit_id, transaction_date (optionally with something else to
make it unique) it would improve performance.

What does an EXPLAIN give you?

Phil
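A sketch of why that column order helps: with equality on client_id and client_unit_id followed by the range on transaction_date, the whole WHERE clause can be answered by a single index range scan. Illustrated in SQLite via Python's sqlite3 with made-up rows (MySQL's optimizer applies the same leftmost-prefix rule):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (
    client_id INTEGER, client_unit_id INTEGER,
    transaction_date TEXT, transaction_amount REAL
);
CREATE INDEX tx_idx ON transactions
    (client_id, client_unit_id, transaction_date);
INSERT INTO transactions VALUES
    (1, 1, '2008-01-05', 10),
    (1, 1, '2008-02-10', 20),
    (2, 1, '2008-01-07', 5);
""")

where = ("WHERE client_id = 1 AND client_unit_id = 1 "
         "AND transaction_date >= '2008-01-01' "
         "AND transaction_date <= '2008-01-31'")
rows = conn.execute("SELECT * FROM transactions " + where).fetchall()
# The plan should show a SEARCH using tx_idx, not a full scan.
plan_text = " ".join(
    r[-1] for r in conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM transactions " + where))
```

The range column goes last in the index because every column after a range predicate can no longer be used for seeking.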

On Tue, Apr 22, 2008 at 11:41 AM, Bruno B. B. Magalhães 
[EMAIL PROTECTED] wrote:

 Hi everybody,

 I am back to this list after a long period away due to work time
 restrictions... I have great news and a few interesting applications that I
 will release to the mysql community very soon, most probably as open source.

 But now I have a performance problem with a client of mine, that I was not
 able to solve... The problem is that I have a very large table in terms of
 data, about 7.000.000 financial transactions records, with the following
 table (translated from portuguese):

 CREATE TABLE `transactions` (
  `client_id` int(5) unsigned zerofill NOT NULL default '00000',
  `client_unit_id` int(4) unsigned zerofill NOT NULL default '0000',
  `client_property_id` int(6) unsigned zerofill NOT NULL default '000000',
  `transaction_id` int(6) unsigned zerofill NOT NULL default '000000',
  `transaction_account_id` int(3) unsigned zerofill NOT NULL default '000',
  `transaction_classification_id` int(3) unsigned NOT NULL default '0',
  `transaction_category_id` int(4) unsigned zerofill NOT NULL default '0000',
  `transaction_complement` varchar(200) NOT NULL,
  `transaction_date` date default NULL,
  `transaction_amount` decimal(16,2) NOT NULL,
  `transaction_parcel` varchar(8) NOT NULL,
  `transaction_nature` varchar(1) NOT NULL,
  KEY `transactions_idx_1` (`client_id`,`client_unit_id`,`client_property_id`,`transaction_account_id`,`transaction_classification_id`,`transaction_category_id`,`transaction_id`,`transaction_date`,`transaction_nature`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8

 And most the queries are similar to this one:

  SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
   transactions.transaction_complement AS complement,
   transactions.transaction_parcel AS parcel,
   transactions.transaction_amount AS amount,
   transactions.transaction_nature AS nature,
   transactions_categories.transaction_category_description
 AS category_description
  FROM transactions AS transactions
 LEFT JOIN transactions_categories AS transactions_categories
ON transactions.transaction_category_id =
 transactions_categories.transaction_category_id
 WHERE transactions.client_id = :client
   AND transactions.client_unit_id = :unit
   AND transactions.transaction_date >= :start_date
   AND transactions.transaction_date <= :stop_date
 ORDER BY transactions.transaction_date,
   transactions.transaction_id ASC

 So the most important indexes are client_id, client_unit_id,
 client_property_id, transaction_account_id, transaction_classification_id,
 transaction_category_id, transaction_id, transaction_date and
 transaction_nature, and most of the time they are used together. I think
 the most problematic part of those queries is the date range; should I
 use a separate index for just that column to keep the main index small?

 Most of the financials reports today takes about 8 to 12 seconds to be
 generated for one month (course that I have to sum previous months totals to
 give the balance).

 Thanks in advance...

 Regards,
 Bruno B B Magalhães

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Help build our city at http://free-dc.myminicity.com !


Re: a strange problem

2008-04-22 Thread Phil
Not knowing your MySQL version, perhaps it's the form of your LIMIT clause.

try LIMIT 0,10 instead.

Phil

2008/4/22 liaojian_163 [EMAIL PROTECTED]:

 hi,all.
 In my mysql server,I have a strange problem.
 can someone help me?
 Thank you.

 mysql> select id,classid,newstime from phome_ecms_zhichang where
 classid=41 and id > 2500 order by id desc limit 10;
 +--+-+-+
 | id   | classid | newstime|
 +--+-+-+
 | 2543 |  41 | 2008-04-22 21:55:22 |
 | 2540 |  41 | 2008-04-19 12:29:30 |
 | 2537 |  41 | 2008-04-18 17:38:50 |
 | 2536 |  41 | 2008-04-18 17:37:56 |
 | 2534 |  41 | 2008-04-18 12:22:24 |
 | 2533 |  41 | 2008-04-18 01:19:49 |
 | 2532 |  41 | 2008-04-18 01:18:42 |
 | 2527 |  41 | 2008-04-16 18:45:34 |
 | 2526 |  41 | 2008-04-16 18:43:03 |
 | 2523 |  41 | 2008-04-16 08:47:16 |
 +--+-+-+
 10 rows in set (0.00 sec)

 mysql> select id,classid,newstime from phome_ecms_zhichang where
 classid=41 order by id desc limit 10;
 +--+-+-+
 | id   | classid | newstime|
 +--+-+-+
 | 2540 |  41 | 2008-04-19 12:29:30 |
 | 2537 |  41 | 2008-04-18 17:38:50 |
 | 2536 |  41 | 2008-04-18 17:37:56 |
 | 2534 |  41 | 2008-04-18 12:22:24 |
 | 2533 |  41 | 2008-04-18 01:19:49 |
 | 2532 |  41 | 2008-04-18 01:18:42 |
 | 2527 |  41 | 2008-04-16 18:45:34 |
 | 2526 |  41 | 2008-04-16 18:43:03 |
 | 2523 |  41 | 2008-04-16 08:47:16 |
 | 2522 |  41 | 2008-04-15 15:34:55 |
 +--+-+-+


 mysql> select id,classid,newstime from phome_ecms_zhichang where
 classid=41 and id > 0 order by id desc limit 10;
 Empty set (0.00 sec)


 desc  phome_ecms_zhichang;

 +-+--+--+-+-++
 | Field   | Type | Null | Key | Default | Extra
|

 +-+--+--+-+-++
 | id  | int(11)  | NO   | PRI | NULL|
 auto_increment |
 | classid | smallint(6)  | NO   | MUL | 0   |
|
 | onclick | int(11)  | NO   | | 0   |
|
 | newspath| varchar(50)  | NO   | | |
|
 | keyboard| varchar(255) | NO   | | |
|
 | keyid   | varchar(255) | NO   | | |
|
 | userid  | int(11)  | NO   | | 0   |
|
 | username| varchar(30)  | NO   | | |
|
 | ztid| varchar(255) | NO   | | |
|
 | checked | tinyint(1)   | NO   | MUL | 0   |
|
 | istop   | tinyint(4)   | NO   | | 0   |
|
 | truetime| int(11)  | NO   | MUL | 0   |
|
 | ismember| tinyint(1)   | NO   | | 0   |
|
 | dokey   | tinyint(1)   | NO   | | 0   |
|
 | userfen | int(11)  | NO   | | 0   |
|
 | isgood  | tinyint(1)   | NO   | | 0   |
|
 | titlefont   | varchar(50)  | NO   | | |
|
 | titleurl| varchar(200) | NO   | | |
|
 | filename| varchar(60)  | NO   | | |
|
 | filenameqz  | varchar(28)  | NO   | | |
|
 | fh  | tinyint(1)   | NO   | | 0   |
|
 | groupid | smallint(6)  | NO   | | 0   |
|
 | newstempid  | smallint(6)  | NO   | | 0   |
|
 | plnum   | int(11)  | NO   | | 0   |
|
 | firsttitle  | tinyint(1)   | NO   | | 0   |
|
 | isqf| tinyint(1)   | NO   | | 0   |
|
 | totaldown   | int(11)  | NO   | | 0   |
|
 | title   | varchar(200) | NO   | | |
|
 | newstime| datetime | NO   | MUL | 0000-00-00 00:00:00 |
|
 | titlepic| varchar(200) | NO   | | |
|
 | closepl | tinyint(1)   | NO   | | 0   |
|
 | havehtml| tinyint(1)   | NO   | | 0   |
|
 | lastdotime  | int(11)  | NO   | | 0   |
|
 | haveaddfen  | tinyint(1)   | NO   | | 0   |
|
 | infopfen| int(11)  | NO   | | 0   |
|
 | infopfennum | int(11)  | NO   | | 0   |
|
 | votenum | int(11)  | NO   | | 0   |
|
 | ftitle  | varchar(200

Re: select does too much work to find rows where primary key does not match

2008-04-15 Thread Phil
I would have thought your != condition, though, is matching far more rows
each time.

I would look into using where not exists as a subselect

delete from bar where not exists (select 'y' from foo where foo.phone =
bar.phone);

something like that.
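A sketch of that NOT EXISTS form, using SQLite via Python's sqlite3 with made-up phone numbers. For each row of bar the subquery is a primary-key lookup into foo, which is exactly the access pattern the original != join fails to get:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo (phone TEXT PRIMARY KEY);
CREATE TABLE bar (phone TEXT PRIMARY KEY);
INSERT INTO foo VALUES ('111'), ('222');
INSERT INTO bar VALUES ('111'), ('333');
""")

# Delete bar rows that have no matching phone in foo.
conn.execute("""
DELETE FROM bar
WHERE NOT EXISTS (SELECT 1 FROM foo WHERE foo.phone = bar.phone)
""")
remaining = [r[0] for r in conn.execute("SELECT phone FROM bar ORDER BY phone")]
```

Note the != join matches every foo/bar pair with *different* phones (a near cross product), which is why it scans millions of rows; NOT EXISTS asks the intended question.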

On Tue, Apr 15, 2008 at 5:00 PM, Patrick J. McEvoy [EMAIL PROTECTED]
wrote:

 I have two MyISAM tables; each uses 'phone' as a primary key. Finding rows
 where the primary keys match is efficient:

 mysql> explain select bar.phone from foo,bar where foo.phone=bar.phone;
 +----+-------------+-------+--------+---------------+---------+---------+---------------+-------+-------------+
 | id | select_type | table | type   | possible_keys | key     | key_len | ref           | rows  | Extra       |
 +----+-------------+-------+--------+---------------+---------+---------+---------------+-------+-------------+
 |  1 | SIMPLE      | bar   | index  | PRIMARY       | PRIMARY | 10      | NULL          | 77446 | Using index |
 |  1 | SIMPLE      | foo   | eq_ref | PRIMARY       | PRIMARY | 10      | ssa.bar.phone | 1     | Using index |
 +----+-------------+-------+--------+---------------+---------+---------+---------------+-------+-------------+
 2 rows in set (0.00 sec)


 Finding rows in one table that do not match a row in the other table is
 wildly inefficient:

 mysql> explain select bar.phone from foo,bar where foo.phone!=bar.phone;
 +----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+
 | id | select_type | table | type  | possible_keys | key     | key_len | ref  | rows    | Extra                    |
 +----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+
 |  1 | SIMPLE      | bar   | index | NULL          | PRIMARY | 10      | NULL | 77446   | Using index              |
 |  1 | SIMPLE      | foo   | index | NULL          | PRIMARY | 10      | NULL | 3855468 | Using where; Using index |
 +----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+
 2 rows in set (0.00 sec)

 (This is the same for 'NOT', '!=', or '<>'.)

 The amount of work should be identical in both cases: grab a row, look up
 by primary key in the other table, proceed.

 My real goal is to delete rows in the smaller table if there is no match
 in the larger table:

delete from bar using foo,bar where not bar.phone=foo.phone;

 but it runs for hours. I suppose I could SELECT INTO a new table and
 rename the tables, but that seems dorky.

 Is there any way to force SELECT/DELETE to look up the primary key rather
 than scan the entire index?

 Thanks.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Help build our city at http://free-dc.myminicity.com !


Group by function and avg on char

2008-03-27 Thread Phil
Hi all,

got a simple problem I'm trying to solve without success.

Given the following table

CREATE TABLE `scores` (
  `proj` char(3) NOT NULL default '',
  `id` int(11) NOT NULL default '0',
  `score` double default NULL,
  `cpid` char(32) default NULL,
  `team` char(20) default NULL,
  PRIMARY KEY  (`proj`,`id`),
  KEY `cpid` (`cpid`,`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

insert into scores values ('a',1,100,'aaa','X');
insert into scores values ('b',2,50,'aaa','X');
insert into scores values ('c',2,500,'aaa','Y');

I have the following sql to get the sum of scores for the cpid (cross
project id)

select cpid,sum(score) from scores group by cpid;

This is simple enough and works fine. However, I also wish to select the team:
in this case I'd like to get 'X', as there are two instances of 'X' and only
one of 'Y'.

Is this possible in the same SQL statement, something like an AVG for a
string, or a median perhaps?

Regards

Phil
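What's wanted is the mode (most frequent value) of a string column per group, and one way to get it in a single statement is a correlated subquery ordered by COUNT(*). A sketch using the sample rows from the message, run through SQLite via Python's sqlite3 (the same subquery works in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scores (proj TEXT, id INTEGER, score REAL, cpid TEXT, team TEXT,
                     PRIMARY KEY (proj, id));
INSERT INTO scores VALUES ('a', 1, 100, 'aaa', 'X');
INSERT INTO scores VALUES ('b', 2, 50,  'aaa', 'X');
INSERT INTO scores VALUES ('c', 2, 500, 'aaa', 'Y');
""")

# For each cpid: total score, plus the team that appears most often.
rows = conn.execute("""
SELECT cpid, SUM(score),
       (SELECT team FROM scores s2 WHERE s2.cpid = s.cpid
        GROUP BY team ORDER BY COUNT(*) DESC LIMIT 1)
FROM scores s
GROUP BY cpid
""").fetchall()
```

The correlated subquery runs once per output group, so on large tables it's worth an index on (cpid, team).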







-- 
Help build our city at http://free-dc.myminicity.com !


Re: how to search apostrophes in sql

2008-03-25 Thread Phil
You need to escape the apostrophe first

so

select count(*) from table where field like '%\'%'
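For illustration, here are both the escaped-literal form and the (generally safer) bound-parameter form, in SQLite via Python's sqlite3 with made-up rows. Note that standard SQL doubles the apostrophe inside a string literal (''), which SQLite requires, whereas MySQL also accepts the backslash escape shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("O'Brien",), ("plain",), ("it's",)])

# Escaped literal: '' inside a single-quoted string means one apostrophe.
n_literal = conn.execute(
    "SELECT COUNT(*) FROM t WHERE field LIKE '%''%'").fetchone()[0]

# Bound parameter: no escaping needed at all.
n_param = conn.execute(
    "SELECT COUNT(*) FROM t WHERE field LIKE ?", ("%'%",)).fetchone()[0]
```

Parameter binding also sidesteps the injection risk that hand-escaped apostrophes invite.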

On Tue, Mar 25, 2008 at 2:37 PM, Saravanan [EMAIL PROTECTED] wrote:

 hi lists,

 I want to count the number of rows containing an apostrophe (') in a
 particular field. I tried with

 select count(*) from table where field like %'%

 i get only 0 counts. but I am sure that exists in many rows.

 how to query them

 Saravanan



  
 
 Be a better friend, newshound, and
 know-it-all with Yahoo! Mobile.  Try it now.
 http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Help build our city at http://free-dc.myminicity.com !


Re: Optimize db update

2008-03-20 Thread Phil
Are the table structures identical ?

If so, you could just move the data files themselves.

Otherwise consider using

unload from table B into TAB separated format (mysql load format)
truncate table A
load data infile into table A



On Thu, Mar 20, 2008 at 2:20 PM, Daniel Brown [EMAIL PROTECTED] wrote:

 On Thu, Mar 20, 2008 at 1:41 PM, Velen [EMAIL PROTECTED] wrote:
 
   Actually I am updating TableA in DatabaseA with values from TableB in
 DatabaseB.  Database B is on a stand alone PC.  I'm using VB6 to create a
 .txt file containing data from TableB then using VB6 once more to
 recronstruct the table in DatabaseA then remove all data which are already
 in TableA and insert the remaining.
 [snip!]
   How can I optimise this process? and What are the alternatives
 available ?

 If you don't absolutely need to use VB6, why not use something
 with native support like PHP?

 <?php
 function dba_query($sql) { // Simply return the connection resource ID
     // Select the primary database
     $dba_conn = mysql_connect('hostname_a','username_a','password_a') or die(mysql_error());
     $dba_db = mysql_select_db('database_a',$dba_conn);
     $r = mysql_query($sql,$dba_conn) or die(mysql_error());
     return $r;
 }

 function dbb_query($sql) { // Simply return the connection resource ID
     // Select the secondary database
     $dbb_conn = mysql_connect('hostname_b','username_b','password_b') or die(mysql_error());
     $dbb_db = mysql_select_db('database_b',$dbb_conn);
     $r = mysql_query($sql,$dbb_conn) or die(mysql_error());
     return $r;
 }

 $sql = "SELECT field1,field2,field3,field4 FROM table_a";
 $result = dba_query($sql) or die(mysql_error());
 while ($row = mysql_fetch_array($result)) {
     $ssql = "INSERT INTO table_b (field1,field2,field3,field4)
              VALUES (
                  '".$row['field1']."',
                  '".$row['field2']."',
                  '".$row['field3']."',
                  '".$row['field4']."'
              )";
     dbb_query($ssql) or die(mysql_error());
 }
 ?>

If you decide to go that route, I recommend subscribing to the
 PHP-DB list at http://php.net/mailinglists (referred to there as
 Databases and PHP).  You should see a significant gain in
 performance using a native client as opposed to what you're now using
 (probably an ODBC DSN, MyODBC, or a JDBC hack).

 --
 /Daniel P. Brown
 Forensic Services, Senior Unix Engineer
 1+ (570-) 362-0283

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Help build our city at http://free-dc.myminicity.com !


Re: Auto Fill blank Rows

2008-03-12 Thread Phil
you could do something like

select dummy.row_id,real.reference from dummy left join real on
real.row_id=dummy.row_id;

would give NULL on the 'missing' rows.
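A sketch of that left join, with a COALESCE added to turn the NULLs into blanks. SQLite via Python's sqlite3; the dummy table holds row ids 1 to 5 here rather than 1 to 15, and the data table is named real_rows since "real" is a type keyword:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dummy (row_id INTEGER PRIMARY KEY);
CREATE TABLE real_rows (row_id INTEGER PRIMARY KEY, reference TEXT);
INSERT INTO real_rows VALUES (1, 'Reference Line 1'), (3, 'Reference Line 3');
""")
conn.executemany("INSERT INTO dummy VALUES (?)", [(i,) for i in range(1, 6)])

# Every dummy row survives the LEFT JOIN; missing references become ''.
rows = conn.execute("""
    SELECT dummy.row_id, COALESCE(r.reference, '')
    FROM dummy LEFT JOIN real_rows AS r ON r.row_id = dummy.row_id
    ORDER BY dummy.row_id
""").fetchall()
```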

On Wed, Mar 12, 2008 at 12:50 PM, roger.maynard 
[EMAIL PROTECTED] wrote:

 Anyone got any bright ideas of how to solve this one?

 I have documents which can contain up to 15 rows of information.
 Each row as it is added to the document reference can have any ROW_ID
 from 1 to 15 when it is added.

 I want to be able to Auto Fill the blank rows on a SELECT.

 eg

 data in table is

 ROW_ID|Reference
 ===
 1 |Reference Line 1
 3 |Reference Line 3
 9 |Reference Line 9
 11|Reference Line 11
 15|Reference Line 15

 RESULT REQUIRED is

 ROW_ID|Reference
 ===
 1 |Reference Line 1
 2 |
 3 |Reference Line 3
 4 |
 5 |
 6 |
 7 |
 8 |
 9 |Reference Line 9
 10|
 11|Reference Line 11
 12|
 13|
 14|
 15|Reference Line 15

 I've been playing about with joins on a dummy table containing just
 rows 1 to 15, but am stuck in my thinking at the moment.

 Any help gratefully received

 Roger




-- 
Help build our city at http://free-dc.myminicity.com !


Re: Migrate HUGE Database

2008-03-10 Thread Phil
mysqldump from the command line. You are most likely running into PHP
execution time limits using phpMyAdmin.

Or you could probably just copy the underlying files: .frm, .MYI and .MYD.

I've successfully done that with MyISAM databases going from version 4 to 5
on tables exceeding 50M rows. Not sure about InnoDB though.




On Mon, Mar 10, 2008 at 1:29 PM, Terry Babbey [EMAIL PROTECTED] wrote:

 Hello,

 I have a huge database that I would like to migrate from a server
 running 4.0.16 to a server running the Windows version 5.0.45. The
 database is approximately 3,500,000 records. I get timeout errors using
 PHPMyAdmin to export the data.



 Does anyone have any suggestions for how I can do this?



 Thanks,

 Terry



 Terry Babbey

 Infrastructure Specialist

 Information Technology, Lambton College of Applied Arts & Technology

 [EMAIL PROTECTED], 519.542.7751 x3293






-- 
Help build our city at http://free-dc.myminicity.com !


Reverse index

2008-03-06 Thread Phil
In my never ending quest for speed ups I've been trying the following..

I pull in xml data for roughly (at the peak) 1.8M hosts from the BOINC
[EMAIL PROTECTED] hosts files.

Each host will have a unique id, a score, createdate and possibly a country
 team (as well as a number of other characteristics)

These have to be ranked in multiple ways.

A basic ranking is just by the score which I hold as a double, I index this
along with the id of the host computer.

A more complex ranking is for score within teams.

I use some sql as follows for this, fastest I've found to date

set @rank = 0, @pos = 0, @team := null, @score := null;
  update host_table set teamrank=
    greatest( @rank := if(@team = team and @score = rev_score, @rank,
                          if(@team <> team, 1, @rank+1)),
              least(0, @pos := if(@team = team, @pos+1, 1)),
              least(0, @team := team))
  order by team, rev_score, id;

Now note that the column is rev_score. Because mysql does not support
descending indexes, I added a column for which I subtract the score from
1,000,000,000 and use that as an index.

(score is unlikely to get above that anytime soon)

My question is, is this worth it? It certainly seems to be faster to me, but
not as much as I expected.

I did try originally subtracting from 0, but that caused the rankings to be
incorrect.

Regards
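For reference, what the user-variable UPDATE computes is a dense rank within each team, highest score first, with tied scores sharing a rank. A plain-Python restatement of that logic on made-up hosts makes the intent easier to check against the SQL:

```python
from itertools import groupby

# Made-up hosts as (team, score); goal: rank within team,
# highest score = rank 1, equal scores sharing a rank (dense rank).
hosts = [
    ("red", 50), ("red", 50), ("red", 10),
    ("blue", 99), ("blue", 7),
]

def team_ranks(rows):
    """Return {team: [(score, rank), ...]} ordered by descending score."""
    out = {}
    ordered = sorted(rows, key=lambda r: (r[0], -r[1]))
    for team, members in groupby(ordered, key=lambda r: r[0]):
        rank, prev = 0, None
        for _, score in members:
            if score != prev:        # new score value => next rank
                rank, prev = rank + 1, score
            out.setdefault(team, []).append((score, rank))
    return out

ranks = team_ranks(hosts)
```

Sorting on -score here plays the same role as the rev_score column: it turns a descending order into an ascending sort key.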


Re: Debugging mysql limits

2008-03-04 Thread Phil
Just inheritance from an old design that has passed its limits.

I actually have a development version which does just that, but there is a
lot of work to convert many php scripts and sql to include the new column.
It's some way away from live though, so the problem I outlined still exists.

Phil

On Tue, Mar 4, 2008 at 4:03 AM, Thufir [EMAIL PROTECTED] wrote:

 On Thu, 28 Feb 2008 11:19:40 -0500, Phil wrote:

  I have 50 plus tables lets call them A_USER, B_USER, C_USER etc which I
  daily refresh with updated (and sometimes new) data.
 
  I insert the data into a temporary table using LOAD DATA INFILE. This
  works great and is very fast.


 May I ask why you have fifty plus tables with, apparently, the same
 schema?  Why not have one table with an extra column user?



 -Thufir


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Debugging mysql limits

2008-02-29 Thread Phil
Just a little more info on this.

I tried setting all of this up on a home server with, as far as I can see,
more or less identical specs with the exception being that it's a 64bit
linux build rather than 32bit.

Same insert on duplicate update takes 3 mins.

I spent all day yesterday trying to figure out what limits are being hit
without success.

Would certainly appreciate any pointers to look at..

Phil

On Thu, Feb 28, 2008 at 11:19 AM, Phil [EMAIL PROTECTED] wrote:

 I'm trying to figure out which limits I'm hitting on some inserts.

 I have 50 plus tables lets call them A_USER, B_USER, C_USER etc which I
 daily refresh with updated (and sometimes new) data.

 I insert the data into a temporary table using LOAD DATA INFILE. This
 works great and is very fast.

 Then I do an

 INSERT INTO A_USER (Select col1,col2,col3...,col 20, 0,0,0,0,0,0,etc etc
 from A_TEMP) on DUPLICATE KEY UPDATE col1=A_TEMP.col1,col2= etc

 The sizes in the tables range from 500 entries up to 750,000.

 two of them in the 200,000 range take 2-3 mins for this to complete, the
 largest at 750,000 takes over an hour.

 a sampling of my cnf file is

 old_passwords=1
 max_connections = 50
 max_user_connections = 50
 table_cache=2000
 open_files_limit=4000
 log-slow-queries = /var/log/mysql-slow.log
 long_query_time = 12
 log-queries-not-using-indexes
 thread_cache_size = 100
 query_cache_size = 64M
 key_buffer_size = 512M
 join_buffer_size = 24M
 sort_buffer_size = 64M
 read_buffer_size = 4M
 tmp_table_size = 64M
 max_heap_table_size = 64M

 There is 2Gb Ram in the server which I would gladly increase if I knew I
 could tweak these settings to fix this?

 Any ideas what I should do to figure out what is causing it?

 Regards

 Phil




Debugging mysql limits

2008-02-28 Thread Phil
I'm trying to figure out which limits I'm hitting on some inserts.

I have 50 plus tables lets call them A_USER, B_USER, C_USER etc which I
daily refresh with updated (and sometimes new) data.

I insert the data into a temporary table using LOAD DATA INFILE. This works
great and is very fast.

Then I do an

INSERT INTO A_USER (Select col1,col2,col3...,col 20, 0,0,0,0,0,0,etc etc
from A_TEMP) on DUPLICATE KEY UPDATE col1=A_TEMP.col1,col2= etc

The sizes in the tables range from 500 entries up to 750,000.

two of them in the 200,000 range take 2-3 mins for this to complete, the
largest at 750,000 takes over an hour.

a sampling of my cnf file is

old_passwords=1
max_connections = 50
max_user_connections = 50
table_cache=2000
open_files_limit=4000
log-slow-queries = /var/log/mysql-slow.log
long_query_time = 12
log-queries-not-using-indexes
thread_cache_size = 100
query_cache_size = 64M
key_buffer_size = 512M
join_buffer_size = 24M
sort_buffer_size = 64M
read_buffer_size = 4M
tmp_table_size = 64M
max_heap_table_size = 64M

There is 2Gb Ram in the server which I would gladly increase if I knew I
could tweak these settings to fix this?

Any ideas what I should do to figure out what is causing it?

Regards

Phil


Re: Insert...on duplicate with aggregate

2008-02-27 Thread Phil
Awesome!

Thanks Baron, works perfectly..

Phil

On Tue, Feb 26, 2008 at 10:06 PM, Baron Schwartz [EMAIL PROTECTED] wrote:

 Hi!

 On Tue, Feb 26, 2008 at 7:04 PM, Phil [EMAIL PROTECTED] wrote:
  I have a table countrystats defined as
 
   CREATE TABLE IF NOT EXISTS `countrystats` (
`proj` char(6) NOT NULL default '',
`country` char(50) NOT NULL default '',
`score` double default NULL,
`nusers` int(11) default NULL,
`RAC` double default NULL,
`last_update` double default NULL,
PRIMARY KEY  (`proj`,`country`)
   ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
   To get the data I can do the following
 
   select 'rsi',country,sum(metric1) as total,count(*),sum(metric2) from
   user_table group by country;
 
   This works fine, but then I tried  to populate the table with
 
   INSERT INTO countrystats (select 'rsi', country,sum(metric1) as
   total,count(*) as count,sum(metric2) as sumrac,0 from user_table group
 by
   country) on duplicate key update last_update=total -
   score,score=total,nusers=count,RAC=sumrac;
 
   which gives me
 
   ERROR 1054 (42S22): Unknown column 'total' in 'field list'
 
   now the insert on it's own without the on duplicate works just fine..
 so why
   does the update not like the 'named' column??
 
   Any ideas? Can it be done in a single statement?

 That is an odd error.  But you can work around it this way:

 INSERT ...
 SELECT * FROM (
  SELECT... GROUP BY
 ) AS derived_table
 ON DUPLICATE KEY 
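Spelled out against the countrystats schema from the original post, the derived-table workaround might look like this (a sketch, untested):

```sql
INSERT INTO countrystats (proj, country, score, nusers, RAC, last_update)
SELECT * FROM (
    SELECT 'rsi' AS proj, country, SUM(metric1) AS total,
           COUNT(*) AS cnt, SUM(metric2) AS sumrac, 0 AS lu
    FROM user_table
    GROUP BY country
) AS d
ON DUPLICATE KEY UPDATE
    last_update = d.total - score,
    score  = d.total,
    nusers = d.cnt,
    RAC    = d.sumrac;
```

Wrapping the GROUP BY in a derived table turns the aliases into real columns of d, which the ON DUPLICATE KEY UPDATE clause can then reference.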



Re: joining and grouping

2008-02-27 Thread Phil
I'm confused as to why you need the subselect at all?

As it's all the same table why can't you just use

  select candidate, count(*) as total from vote where voter <> '$me'
group by candidate order by total desc;

On Wed, Feb 27, 2008 at 9:04 AM, Olav Mørkrid [EMAIL PROTECTED]
wrote:

 hello

 i have a table vote which has the columns voter and candidate. i
 would like to make a list of the most popular candidates *except*
 those who are on my favorite list. using a sub-select, it's easy:

 my favorites:
 select candidate from vote where voter = '$me';

 most popular:
 select candidate from vote group by candidate order by count(*) desc;

 sub-select:
 select candidate from vote where candidate not in (select candidate
 from vote where voter = '$me') group by candidate order by count(*)
 desc;

 however, sub-selects are very slow, so i need to find a speedy way.
 i'm familiar with joins, but don't know how to use it for this case
 where grouping is involved.

 please get in touch if you know how to solve it.

 thanks!

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: joining and grouping

2008-02-27 Thread Phil
Ok then, so

  select candidate, count(*) as total from vote where (voter <> '$me' and
vote = 1) group by candidate order by total desc;

On Wed, Feb 27, 2008 at 9:37 AM, Olav Mørkrid [EMAIL PROTECTED]
wrote:

 hi phil, i forgot to mention one thing.

 the table also has a column called vote which is either 0 (no vote
 given) or 1 (vote given). this column is required for other purposes.

 my favorites:
 select candidate from vote where voter = '$me' and vote = 1;

 most popular:
 select candidate from vote where vote = 1
 group by candidate order by count(*) desc;

 when generating the desired list (most popular minus my favorites) it
 is important that a candidate is excluded from the result set if *i*
 voted for him -- even if a million other people voted for him.

 is this clearer?

 On 27/02/2008, Phil [EMAIL PROTECTED] wrote:
  I'm confused as to why you need the subselect at all?
 
  As it's all the same table why can't you just use
 
   select candidate, count(*) as total from vote where voter <> '$me'
  group by candidate order by total desc;
 
  On Wed, Feb 27, 2008 at 9:04 AM, Olav Mørkrid [EMAIL PROTECTED]
  wrote:
 
   hello
  
   i have a table vote which has the columns voter and candidate. i
   would like to make a list of the most popular candidates *except*
   those who are on my favorite list. using a sub-select, it's easy:
  
   my favorites:
   select candidate from vote where voter = '$me';
  
   most popular:
   select candidate from vote group by candidate order by count(*) desc;
  
   sub-select:
   select candidate from vote where candidate not in (select candidate
   from vote where voter = '$me') group by candidate order by count(*)
   desc;
  
   however, sub-selects are very slow, so i need to find a speedy way.
   i'm familiar with joins, but don't know how to use it for this case
   where grouping is involved.
  
   please get in touch if you know how to solve it.
  
   thanks!
  
  
 

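For completeness, the subquery from the original question (exclude every candidate '$me' voted for, however popular they are) is usually rewritten on older MySQL as an exclusion LEFT JOIN; a sketch against the vote table from the thread:

```sql
-- Candidates ranked by votes, excluding any candidate '$me' voted for.
SELECT v.candidate, COUNT(*) AS total
FROM vote AS v
LEFT JOIN vote AS mine
       ON mine.candidate = v.candidate
      AND mine.voter = '$me'
      AND mine.vote = 1
WHERE v.vote = 1
  AND mine.voter IS NULL   -- no matching row: I never voted for this candidate
GROUP BY v.candidate
ORDER BY total DESC;
```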



Insert...on duplicate with aggregate

2008-02-26 Thread Phil
I have a table countrystats defined as

CREATE TABLE IF NOT EXISTS `countrystats` (
  `proj` char(6) NOT NULL default '',
  `country` char(50) NOT NULL default '',
  `score` double default NULL,
  `nusers` int(11) default NULL,
  `RAC` double default NULL,
  `last_update` double default NULL,
  PRIMARY KEY  (`proj`,`country`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

To get the data I can do the following

select 'rsi',country,sum(metric1) as total,count(*),sum(metric2) from
user_table group by country;

This works fine, but then I tried  to populate the table with

INSERT INTO countrystats (select 'rsi', country,sum(metric1) as
total,count(*) as count,sum(metric2) as sumrac,0 from user_table group by
country) on duplicate key update last_update=total -
score,score=total,nusers=count,RAC=sumrac;

which gives me

ERROR 1054 (42S22): Unknown column 'total' in 'field list'

Now the insert on its own, without the ON DUPLICATE clause, works just fine, so
why does the update not like the 'named' column?

Any ideas? Can it be done in a single statement?


Re: How to delete duplicates with full row comapring

2008-02-07 Thread Phil
  Hello!
 
  I am looking for an easy solution for eliminating duplicates, but on a row
  level.
 
  I am having 2 tables. 1 destination for all not duplicated info (a)
  and 1 for input table (b) which might have duplicates related to table
  a. Now I am using this kind of insert:
 
  INSERT INTO a
  SELECT fields
  FROM b
  WHERE ... NOT EXISTS (
  SELECT * FROM a WHERE (a.a,a.b,a.c,a.d)=(b.a,b.b,b.c,b.d)
  )
 
  Looks like it works but is it any solution for row level compare
  without naming all fields? For example WHERE (a.*) = (b.*) instead of
  currently used (a.a,a.b,...)=(b.a,b.b,...).

Have you tried

create table B as select * from A group by a, b, c, d;
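An alternative that avoids the GROUP BY trick is an exclusion join over the full column tuple; a sketch using the a/b/c/d columns from the question (the IS NULL test assumes column a.a is never NULL in a real row):

```sql
INSERT INTO a (a, b, c, d)
SELECT b.a, b.b, b.c, b.d
FROM b
LEFT JOIN a
       ON a.a = b.a AND a.b = b.b AND a.c = b.c AND a.d = b.d
WHERE a.a IS NULL;   -- no identical row exists in a yet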


Very slow update

2008-02-07 Thread Phil
I'm trying to write an update which generates ranking data for a table.

Table is as follows

CREATE TABLE `A` ( `id` INT NOT NULL ,
`score` DOUBLE NOT NULL ,
`projrank` INT NOT NULL ,
`other` VARCHAR( 10 ) NOT NULL
) ENGINE = MYISAM

Real table actually contains 30 or so more fields but it gives a similar
issue

Score changes often, so 4 times per day I want to re-rank the data.

Primary index is on score desc, id

So I run the following

set @rank := 0;
update A set projrank = @rank := @rank + 1 order by score desc, id;

For 20,000 rows the update takes 0.8 seconds
For 50,000 rows it takes 1.9 seconds
For 140,000 rows it takes ~ 5 seconds

Scale up to 400,000 and it takes 7 minutes??

I'm sure it's probably some setting in my.cnf, but I've been tweaking them to no
avail. Currently I have them set as follows:

key_buffer_size=256M
max_allowed_packet=16M
thread_stack=128K
thread_cache_size=8
sort_buffer_size=48M
join_buffer_size=3M
read_buffer_size=4M
query_cache_size=4M
query_cache_limit=4M
table_cache=100
max_connections=20
max_heap_table_size=64M
myisam_sort_buffer_size=64M

which is probably too high in places.

Ideas? The full table actually has 1.7M rows in it, which takes over an hour
to update. I've been puzzling over this for weeks now.

Server is a 4core opteron 275 with 2Gb ram

Thanks
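One approach that sometimes scales better than a single ordered UPDATE over millions of rows is to materialise the ranks into a keyed temporary table and then update by join (multi-table UPDATE, available since MySQL 4.0); a sketch against table A from the post:

```sql
SET @rank := 0;

-- Compute the ranks once, in index order, into a small keyed temp table.
CREATE TEMPORARY TABLE ranks (PRIMARY KEY (id))
SELECT id, (@rank := @rank + 1) AS r
FROM A
ORDER BY score DESC, id;

-- Apply the ranks with a join instead of a huge ordered update.
UPDATE A, ranks
   SET A.projrank = ranks.r
 WHERE A.id = ranks.id;

DROP TEMPORARY TABLE ranks;
```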


Re: How do I get off this list that I do not remember joining in the first place!!!!

2006-05-25 Thread Phil Robbins

I've read the notice AND tried to unsubscribe TWICE.  I still get the mail.



++
Phil Robbins
Auckland
New Zealand
++

_
Need more speed? Get Xtra Broadband @ 
http://jetstream.xtra.co.nz/chm/0,,202853-1000,00.html






How do I get off this list that I do not remember joining in the first place!!!!

2006-05-24 Thread Phil Robbins





++
Phil Robbins
Auckland
New Zealand
++

_
Discover fun and games at  @  http://xtramsn.co.nz/kids





Re: Looking for free MySQL Administrator

2006-05-21 Thread Phil Robbins
Why am I suddenly getting huge volumes of mail about SQL?  What forum is 
this, and how do I get out of it?









Re: How to find out about SSL connection?

2006-05-21 Thread Phil Robbins


Why am I suddenly getting huge volumes of mail about SQL?  What forum is 
this, and how do I get out of it?








No Database Selected error - only one .asp page affected

2005-08-26 Thread Phil Jones
This is quite odd. I have five .asp pages all using
the exact same connection code. For some reason, one
of the pages is getting a "No Database Selected"
error, yet the other four are not, which leads me to
believe my DSN config works fine. Below are the
details.

 Getting the following error:  

Microsoft OLE DB Provider for ODBC Drivers error
'80004005' 
No Database Selected 
line 64

  Here is line 64:  
Set rs = conn.execute(sql)

 Here are env details:  

MySQL ODBC Driver: 3.51.11.1
Lang: ASP
O/S: Win Server 2003
Webserver: IIS

 Here is entire connection code:  

set conn = Server.CreateObject("ADODB.Connection")
set rs = Server.CreateObject("ADODB.Recordset")

conn.open "DRIVER={MySQL ODBC 3.51 Driver};SERVER=localhost;" & _
          "UID=b-r;PWD=d-r;DSN=b-c;"

sql = "select state_idx, state_abbr from state where active_ind = 'A'"

Set rs = conn.execute(sql)

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 




Installing Mysql beta on Debian

2005-06-22 Thread hameau . phil
Hi,

I've been successfully using mysql 5.0.x on my win32 development machine.

I would like to install it on a server running stable Debian. Unfortunately,
there are no .deb packages for the MySQL 5.0 series.

I have tried to use the linux precompiled binaries but my debian complains about
not being able to connect using /var/run/.../mysqld.sock and the
/etc/init.d/mysql start script will tell me something went wrong.
I have accidentally deleted /etc/mysql/debian-start and replaced it with an
empty file :(.

Now, what I would like to do is to either
1) use the precompiled binaries without breaking apt's database (ie overwriting
the files that already exist) and using canonical debian path conventions.
2) compile by myself, but I *really* need some help with all the options
./configure provides, in order, again, to use the debian path conventions.

thanks in advance,
Phil




Re: B-tree index question

2004-10-21 Thread Phil Bitis
From: Sergei Golubchik [EMAIL PROTECTED]
But for auto_increment field (on BIGINT, I believe ?),
you'll have hundreds of keys on one key page, so logarithm base will be
few hundreds, and log N should be just 3-5. That is, it should be only
~3-5 times slower as compared to the table with one hundred rows.
Hi again, does the key page size differ depending on the type of the column 
(BIGINT, INT, etc)? Is there any way I can work out the key page size, or 
configure it?

Cheers,
-Phil 




Re: B-tree index question

2004-10-21 Thread Phil Bitis
Hi Gary,
Yeah... we thought about those. Sergei said:
you'll have hundreds of keys on one key page, so logarithm base will be
few hundreds, and log N should be just 3-5. That is, it should be only
~3-5 times slower as compared to the table with one hundred rows.
So say the key base is 200: log_200(10^9) = 3.91.
Splitting it into 10 smaller tables would make log_200(10^8) = 3.47, which
isn't a huge amount of difference I guess.
Still, worth testing to see how it performs in practice I guess. Thanks for
the tip :)
Cheers,
-Phil
- Original Message - 
From: Gary Richardson [EMAIL PROTECTED]
To: Phil Bitis [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, October 21, 2004 2:45 AM
Subject: Re: B-tree index question


If you are using MyISAM tables, have you thought about using MERGE
tables instead? You could partition your table into several smaller
tables. I don't know how the performance would be on a billion record
table, but from my understanding it would shrink your index down.
http://dev.mysql.com/doc/mysql/en/MERGE.html
On Wed, 20 Oct 2004 11:09:43 +0100, Phil Bitis [EMAIL PROTECTED] 
wrote:
Thanks for the informative reply Sergei,
We're actually just using an INT field at the moment, we were going to 
move
over to BIGINT when we start using 64-bit MySQL (soon).
Do you know where I should look for information on writing our own table
handler?

Thanks,
-Phil



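The MERGE-table partitioning suggested above looks roughly like this; a sketch, with table and column names invented for illustration:

```sql
-- Identically-defined MyISAM shards...
CREATE TABLE records_0 (
    id BIGINT NOT NULL,
    payload VARCHAR(255),
    KEY (id)
) ENGINE=MyISAM;
CREATE TABLE records_1 LIKE records_0;

-- ...and a MERGE table presenting them as one.
CREATE TABLE records (
    id BIGINT NOT NULL,
    payload VARCHAR(255),
    KEY (id)
) ENGINE=MERGE UNION=(records_0, records_1) INSERT_METHOD=LAST;
```

Each shard keeps its own, smaller B-tree, which is where the log-base saving discussed in this thread comes from.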


Re: B-tree index question

2004-10-20 Thread Phil Bitis
Thanks for the reply. We're actually using  DELAY_KEY_WRITE on our tables.
The manual says (7.2.15):
Declaring a MyISAM table with the DELAY_KEY_WRITE=1 table option makes 
index updates faster because they are not flushed to disk until the table is 
closed.

Does this work with all types of indexes, including primary keys?
- Original Message - 
From: mos [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 20, 2004 4:20 AM
Subject: Re: B-tree index question


Phil,
The fastest method to load data into a table is to use Load Data 
Infile. If the table is empty when the command is executed, then the 
index is not updated until after the command completes. Otherwise if you 
are loading a lot of data, you may want to drop the index and rebuild it 
later. Unfortunately Alter Table table_name disable keys won't work on 
unique indexes (primary).

Mike
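The drop-and-rebuild approach described above is usually written with DISABLE KEYS around the bulk load; a sketch (the table name and file path are placeholders, and as noted it does not help unique or primary keys):

```sql
ALTER TABLE big_table DISABLE KEYS;   -- suspend non-unique index updates

LOAD DATA INFILE '/tmp/big.csv' INTO TABLE big_table
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

ALTER TABLE big_table ENABLE KEYS;    -- rebuild non-unique indexes in one pass
```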


Re: B-tree index question

2004-10-20 Thread Phil Bitis
Thanks for the informative reply Sergei,
We're actually just using an INT field at the moment, we were going to move 
over to BIGINT when we start using 64-bit MySQL (soon).
Do you know where I should look for information on writing our own table 
handler?

Thanks,
-Phil
- Original Message - 
From: Sergei Golubchik [EMAIL PROTECTED]
To: Phil Bitis [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, October 20, 2004 9:23 AM
Subject: Re: B-tree index question


Hi!
On Oct 23, Phil Bitis wrote:
Hello,
We want to be able to insert records into a table containing a billion
records in a timely fashion.
The table has one primary key, which I understand is implemented using
B-trees, causing insertion to slow by log N.
Correct.
But for auto_increment field (on BIGINT, I believe ?),
you'll have hundreds of keys on one key page, so logarithm base will be
few hundreds, and log N should be just 3-5. That is, it should be only
~3-5 times slower as compared to the table with one hundred rows.
The key field is an auto_increment field.
The table is never joined to other tables.
Is there any way we could implement the index ourselves, by modifying
the MyISAM table handler perhaps? Or writing our own?
Hmm, MyISAM can only do B-tree indexes. It won't be easy to add a
completely different index algorithm to it.
Writing your own table handler could be easier.
In our setup record n is always the nth record that was inserted in
the table, it would be nice to just skip n * recordsize to get to the
record.
Right, assuming all records have the same length, you can just write nth
record at the offest n * recordsize on inserts, and use the value of n
as a key on reads. Of course, it's a very specialized storage engine,
not that much of general use - but it's very specialized to handle your
case, so it can be the fastest solution.
Also, could someone shed some light on how B-tree indexes work. Do
they behave well when values passed in are sequential (1, 2, 3, ...)
rather than random values?
Yes.
B-tree is always balanced: http://www.nist.gov/dads/HTML/btree.html
Regards,
Sergei
--
  __  ___ ___   __
 /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
/ /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
  ___/  www.mysql.com




B-tree index question

2004-10-19 Thread Phil Bitis
Hello,

We want to be able to insert records into a table containing a billion records in a 
timely fashion.
The table has one primary key, which I understand is implemented using B-trees, 
causing insertion to slow by log N.
The key field is an auto_increment field.
The table is never joined to other tables.
Is there any way we could implement the index ourselves, by modifying the MyISAM table 
handler perhaps? Or writing our own?
In our setup record n is always the nth record that was inserted in the table; it 
would be nice to just skip n * recordsize bytes to get to the record.

Also, could someone shed some light on how B-tree indexes work. Do they behave well 
when values passed in are sequential (1, 2, 3, ...) rather than random values?

Thanks in advance,
-Phil

LOAD DATA INFILE

2004-07-22 Thread Phil Ewington - 43 Plc
Hi All,

Tearing my hair out with this one! I am trying to load data from a text file
from the command line, this is the syntax...

LOAD DATA INFILE 'users_1.csv' INTO TABLE 'ma0133' FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\' LINES TERMINATED BY '\r\n';

The error I get is...

ERROR 1049: Unknown database '\n''
ERROR:
Can't connect to the server

Hmm, I accessed the server using...

mysql -u root -p databasename

What am I doing wrong here?

TIA

Phil.
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.725 / Virus Database: 480 - Release Date: 19/07/2004





RE: LOAD DATA INFILE

2004-07-22 Thread Phil Ewington - 43 Plc
Resolved, used this syntax...

LOAD DATA INFILE '/path/from/root/to/file.csv' INTO TABLE ma0133 FIELDS
TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\' LINES
TERMINATED BY '\r\n'


- Phil. 





Access denied for user: 'root@localhost'

2004-05-05 Thread Phil Ewington - 43 Plc
Hi All,

I have just installed mysql-4.0.18 on my Cobalt RaQ4 and for the first time
seemed to actually get somewhere! However, my existing PHP scripts failed to
connect to localhost as the password had not been set after the install. I
ran the following command (obviously having seriously misunderstood the docs):

mysqladmin -u root password new-password

I thought the above syntax would ask me for the password, but it didn't. Now
all commands return

'Access denied for user: '[EMAIL PROTECTED]' (Using password: NO)'

How do I actually set a password, and more importantly, how do I reset the
password for root, and what did I do? Any help will be greatly appreciated.

TIA

Phil.


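For the archives: the usual recovery on that era of MySQL was to restart the server with grant checks disabled and set the password directly; a sketch (the new password is a placeholder):

```sql
-- After restarting the server with: mysqld_safe --skip-grant-tables &
UPDATE mysql.user SET Password = PASSWORD('new-password') WHERE User = 'root';
FLUSH PRIVILEGES;
-- Then restart mysqld normally, with grant checks back on.
```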



Installing MySQL-3.23.54-1 pkg on RaQ4i

2004-04-15 Thread Phil Ewington - 43 Plc
Hi All,

I am trying to install MySQL-3.23.54-1 pkg on a RaQ4i (from pgkmaster.com),
which is something I have done on numerous occasions without problem.
However, this RaQ for some reason does not have /tmp/mysql.sock installed
when the package is installed. When I uninstall the pkg, the script warns
that /tmp/mysql.sock could not be removed as it does not exist. I have
installed the pkg and uninstalled and each time the same thing, no
/tmp/mysql.sock!! Can I simply copy this from one of my other servers into
/home/mysql and then create a sym link to it in /tmp??

Any assistance will be greatly appreciated.

TIA

Phil.





RE: AddressBook CMS

2004-03-19 Thread Phil
Yeah, I realize that asking any of you to guide me through this kind of project
would be asking way too much. That's why I said point me in the right
direction. I'd just like some recommendations as to what is required to
setup something like this. The last thing I want is to slave over a setup
for months only to realize that because of my initial lack of knowledge I
should have done things differently. Maybe these questions can help clarify
what I'm asking of you:

What are the considerations behind this? Should I use an Apache/PHP type
config or maybe there's a client/server software I don't know of...

Should I forget about importing from Outlook and start from scratch to
ensure the db is clean?

Maybe you know of something similar to MyGroupWare, phpGroupWare or
moregroupware that only deals with contacts like some sort of glorified db
driven Phonebook.

I'm just looking for insight from people that know databases... that's all.

Thanks,
Phil
 

 -Original Message-
 From: Rhino [mailto:[EMAIL PROTECTED] 
 Sent: Friday, March 19, 2004 12:34 PM
 To: Philippe LeCavalier
 Subject: Re: AddressBook CMS
 
 I don't understand your question. Are you asking us to tell 
 you how to design a database for your contact information or 
 how to write a program to access your database? Surely you 
 realize that these questions are far too big to be answered 
 in a simple email.  Or are you asking for recommendations for 
 courses that teach these things?
 
 We'll help if we can but I think you'll need to clarify just 
 what you're asking first.
 
 Rhino
 
 
 - Original Message -
 From: Philippe LeCavalier [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Friday, March 19, 2004 12:06 PM
 Subject: AddressBook CMS
 
 
  Hi Everyone,
 
  First off, I know nothing about databases and even less 
 about web design.
  What I do know is that I want to move my Outlook contacts 
 (~10,000) to a
  real database :)
  I've looked at FileMaker Pro and MyGroupWare and the likes 
 but I just want
  something simple where I can export my contacts and sort them into
  templates. And those have either too much cost attached to them or simply
  try to offer too many features for what I want.
 
  Can someone point me in the right direction? I know I need 
 to learn MySQL
  and that I will (I'm getting more and more comfortable with 
 PHPmyAdmin).
 My
  main concern is how to access and manage the content in a 
 user-friend way.
 
  Cheers,
  Phil
 
 
 





Re: How to determine when a MySQL database was last modified?

2004-02-07 Thread Phil
Thanks for checking this out. However, with InnoDB tables you only get
the .frm files under a database directory and these are only modified
when you change the definition of a table. The data itself is kept in a
big binary file(s) one directory up, whose modification time cannot be
used to deduce that a particular database has changed since it must be
shared by all databases that have InnoDB tables in them... unless you only
have one database with InnoDB tables! Cheers,
Phil


On Sat, 2004-02-07 at 17:41, Brian Reichert wrote:
 On Fri, Feb 06, 2004 at 08:18:10PM +, Phil wrote:
  Doesn't seem to change the mtime on table files. It appears that for
  InnoDB tables these files are only updated when the definition of a
  table is changed. The content of the all InnoDB tables is kept in one or
  two massive files directly under the 'data' directory!
 
 At _some_ point your data has to end up on disk.  I haven't read
 up on the caching that MySQL does.  I know you can manually FLUSH
 TABLES, but that's of no use to you: you want to passively detect
 when the file's changed.
 
 I just did a one-record update to a test database:
 
 Before:
 
   # ls -ld user.*
   -rw-rw  1 mysql  mysql  18168 Dec 22 16:58 user.MYD
   -rw-rw  1 mysql  mysql  20480 Dec 26 18:00 user.MYI
   -rw-rw  1 mysql  mysql   8794 Dec 11 14:20 user.frm
 
 After:
 
   # ls -ld user.*
   -rw-rw  1 mysql  mysql  18168 Feb  7 12:33 user.MYD
   -rw-rw  1 mysql  mysql  20480 Feb  7 12:33 user.MYI
   -rw-rw  1 mysql  mysql   8794 Dec 11 14:20 user.frm
 
 I updated that same record again:
 
   # ls -ld user.*
   -rw-rw  1 mysql  mysql  18168 Feb  7 12:34 user.MYD
   -rw-rw  1 mysql  mysql  20480 Feb  7 12:34 user.MYI
   -rw-rw  1 mysql  mysql   8794 Dec 11 14:20 user.frm
 
 This is with MySQL 3.23.58 and MyISAM tables.
 
 So, in my (very) limited testing; changing a table's content does
 promptly correspond to an updated mtime of the data and index
 columns.
 
 I do see that you're using InnoDB tables; I'll try to set another
 test environment.
 
 Good luck, anyway...
 
 -- 
 Brian Reichert[EMAIL PROTECTED]
 37 Crystal Ave. #303  Daytime number: (603) 434-6842
 Derry NH 03038-1713 USA   BSD admin/developer at large





How to determine when a MySQL database was last modified?

2004-02-06 Thread Phil
Hi,

I have many smallish, discrete MySQL databases, each of which I would
like to backup individually (mysqldump seems fine for this). However,
there's no point re-backing up a database that has not changed since the
last time it was backed up. So how can I tell when a MySQL database
was last modified, so that I can decide whether to run mysqldump on it
again or not? Any help with this would be much appreciated.

Thanks,
Phil





Re: How to determine when a MySQL database was last modified?

2004-02-06 Thread Phil
Thanks. But I would have thought that such information would have been
kept automatically somewhere by the server, and it's just a case of how
to get at it. I have quite a few tables in each database so I don't
really want to have to maintain a timestamp on each update, and then go
around all of them at backup time :(

Anyone got any other ideas?


On Fri, 2004-02-06 at 14:09, gerald_clark wrote:
 Add a timestamp field to each table.
 
 Phil wrote:
 
 Hi,
 
 I have many smallish, discrete MySQL databases, each of which I would
 like to backup individually (mysqldump seems fine for this). However,
 there's no point re-backing up a database that has not changed since the
 last time it was backed up. So how can I tell if when a MySQL database
 was last modified, so that I can decide whether to run mysqldump on it
 again or not? Any help with this would be much appreciated.
 
 Thanks,
 Phil
 
 
   
 
 


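The per-table timestamp suggested above relies on MySQL's self-updating TIMESTAMP behaviour: in that era, the first TIMESTAMP column of a table is set automatically on every insert and update unless explicitly assigned. A sketch with placeholder names:

```sql
ALTER TABLE mytable ADD COLUMN last_modified TIMESTAMP;

-- Newest change in this table; repeat per table and take the maximum.
SELECT MAX(last_modified) FROM mytable;
```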



RE: How to determine when a MySQL database was last modified?

2004-02-06 Thread Phil
Thanks. I'm using InnoDB tables (for transactions) and there's no sign
of any .MYD files for them.

I'm starting to think that maybe this information isn't available :(
Anyone any other ideas?


On Fri, 2004-02-06 at 14:17, Dan Greene wrote:
 I'm not 100% sure on this, but what about the .myd file timestamp?
 
  -Original Message-
  From: gerald_clark [mailto:[EMAIL PROTECTED]
  Sent: Friday, February 06, 2004 9:09 AM
  To: Phil
  Cc: [EMAIL PROTECTED]
  Subject: Re: How to determine when a MySQL database was last modified?
  
  
  Add a timestamp field to each table.
  
  Phil wrote:
  
  Hi,
  
  I have many smallish, discrete MySQL databases, each of which I would
  like to backup individually (mysqldump seems fine for this). However,
  there's no point re-backing up a database that has not 
  changed since the
  last time it was backed up. So how can I tell if when a 
  MySQL database
  was last modified, so that I can decide whether to run 
  mysqldump on it
  again or not? Any help with this would be much appreciated.
  
  Thanks,
  Phil
  
  

  
  
  
  
  
  





RE: How to determine when a MySQL database was last modified?

2004-02-06 Thread Phil
Nice try... but 'show table status' just displays NULL for Update_time -
maybe because they're InnoDB tables. Besides, I didn't really want to
have to squirrel around all the tables to see if the DB itself has been
changed.

Since what I want to do doesn't seem possible I'll carry on as usual...
backing everything up :( Also, I'll suggest it as an enhancement.
Thanks.


On Fri, 2004-02-06 at 14:28, Schwartz, Evelyn wrote:
 You can try the 'show table status' from mysql.   There is an update_time that lists 
 the last modified date for the table.
  
 I also found out that these types of commands work with perl DBD::mysql.  You can 
 treat the command like a normal sql statement and the results are returned like any 
 other sql.  Pretty cool.
  
 IMHO I wouldn't bother with this.  Just take the backup.  As long as you only keep 
 the most recent backup online I don't see the harm.  Why do the extra work and risk 
 not having backups?
  
 Evelyn
 
   -Original Message- 
   From: Phil [mailto:[EMAIL PROTECTED] 
   Sent: Fri 2/6/2004 9:27 AM 
   To: gerald_clark 
   Cc: [EMAIL PROTECTED] 
   Subject: Re: How to determine when a MySQL database was last modified?
   
   
 
   Thanks. But I would have thought that such information would have been
   kept automatically somewhere by the server, and it's just a case of how
   to get at it. I have quite a few tables in each database so I don't
   really want to have to maintain a timestamp on each update, and then go
   around all of them at backup time :(
   
   Anyone got any other ideas?
   
   
   On Fri, 2004-02-06 at 14:09, gerald_clark wrote:
Add a timestamp field to each table.
   
Phil wrote:
   
Hi,

I have many smallish, discrete MySQL databases, each of which I would
like to backup individually (mysqldump seems fine for this). However,
there's no point re-backing up a database that has not changed since the
last time it was backed up. So how can I tell when a MySQL database
was last modified, so that I can decide whether to run mysqldump on it
again or not? Any help with this would be much appreciated.

Thanks,
Phil


 

   
   
   
   
   
 





RE: How to determine when a MySQL database was last modified?

2004-02-06 Thread Phil
Thanks Gowtham and Ed. However, even this solution seems a bit dodgy
when it comes to backing up... I'll stick with backing up all databases
for now, and put in an enhancement request. Thanks,
Phil


On Fri, 2004-02-06 at 16:35, [EMAIL PROTECTED] wrote:
 You could try the following:
 1) Perform normal backup.
 2) Run sql command flush status.--Resets most status
 variables to zero.
 3) Next week prior to backup, run sql commands:
   show status like 'Handler_delete'
   show status like 'Handler_update'
   show status like 'Handler_write'
If any of these values are greater than zero then a table has been
 modified.
You should also note the server start date just in case a server crashed
or restarted, which will also reset the status variables to zero.
 
 Handler_delete - Number of times a row was deleted from a table.
 Handler_update - Number of requests to update a row in a table.
 Handler_write  - Number of requests to insert a row in a table.
 
 Ed
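Ed's counter check can be sketched in a few lines. This is only an illustration: it assumes the `SHOW STATUS LIKE 'Handler_%'` rows have already been fetched through some client library, and the sample rows below are invented values standing in for real server output.

```python
# Decide whether a backup is needed based on the Handler_* counters that
# Ed describes: after FLUSH STATUS, any non-zero value means the server
# has deleted, updated, or inserted rows since the last backup.
SAMPLE_STATUS_ROWS = [            # hypothetical SHOW STATUS output
    ("Handler_delete", "0"),
    ("Handler_update", "3"),
    ("Handler_write", "17"),
]

WATCHED = {"Handler_delete", "Handler_update", "Handler_write"}

def needs_backup(status_rows):
    """Return True if any watched handler counter is greater than zero."""
    return any(name in WATCHED and int(value) > 0
               for name, value in status_rows)

result = needs_backup(SAMPLE_STATUS_ROWS)
```

As Ed notes, a server restart also zeroes these counters, so a real script would check the server uptime as well.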
 
 -Original Message-
 From: Michael Stassen [mailto:[EMAIL PROTECTED] 
 Sent: Friday, February 06, 2004 9:18 AM
 To: Gowtham Jayaram
 Cc: Phil; Schwartz, Evelyn; [EMAIL PROTECTED]
 Subject: Re: How to determine when a MySQL database was last modified?
 
 
 mysqlshow gives the same results as SHOW TABLE STATUS, which, 
 unfortunately, doesn't seem to give created/updated dates for InnoDB
 tables.
 
 Michael
 
 Gowtham Jayaram wrote:
  If offline tools works for you try './mysqlshow.exe
  -vi db_name'.  This provides useful information such
  as 'Create Time' 'Update Time' and 'Check Time'.
  
  Gowtham.
  
  --- Phil [EMAIL PROTECTED] wrote:
  
 Nice try... but 'show table status' just displays
 NULL for Update_time -
 maybe because they're InnoDB tables. Besides, I
 didn't really want to
 have to squirrel around all the tables to see if the
 DB itself has been
 changed.
 
 Since what I want to do doesn't seem possible I'll
 carry on as usual...
 backing everything up :( Also, I'll suggest it as an
 enhancement.
 Thanks.
 
 
 On Fri, 2004-02-06 at 14:28, Schwartz, Evelyn wrote:
 
 You can try the 'show table status' from mysql.  
 
 There is an update_time that lists the last modified
 date for the table.
 
  
 I also found out that these types of commands work
 
 with perl DBD::mysql.  You can treat the command
 like a normal sql statement and the results are
 returned like any other sql.  Pretty cool.
 
  
 IMHO I wouldn't bother with this.  Just take the
 
 backup.  As long as you only keep the most recent
 backup online I don't see the harm.  Why do the
 extra work and risk not having backups?
 
  
 Evelyn
 
 
 
  
  
  
  
 
 
 





Re: How to determine when a MySQL database was last modified?

2004-02-06 Thread Phil
Doesn't seem to change the mtime on table files. It appears that for
InnoDB tables these files are only updated when the definition of a
table is changed. The content of all InnoDB tables is kept in one or
two massive files directly under the 'data' directory!



On Fri, 2004-02-06 at 18:13, Brian Reichert wrote:
 On Fri, Feb 06, 2004 at 09:17:32AM -0500, Dan Greene wrote:
  I'm not 100% sure on this, but what about the .myd file timestamp?
 
 Well, it depends on which table type, obviously.  There are several
 files for each database, see what the mtime is on each of them, to
 determine what's a live file.
 
 If you're using rsync for backups, I wouldn't worry; it's good at
 moving partial files around.
 
 -- 
 Brian Reichert[EMAIL PROTECTED]
 37 Crystal Ave. #303  Daytime number: (603) 434-6842
 Derry NH 03038-1713 USA   BSD admin/developer at large
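The mtime approach discussed in this thread can be sketched as follows. It is a hedged sketch only: as Phil points out above, it works for per-table files (MyISAM's .MYD/.MYI) but not for InnoDB, whose data lives in shared tablespace files; the directory and file names here are invented for the demo.

```python
# Re-dump a database directory only if some file under it changed since
# the last backup, judged by file modification times.
import os
import tempfile
import time

def modified_since(directory, last_backup_ts):
    """True if any file under `directory` has an mtime newer than the timestamp."""
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if os.path.getmtime(os.path.join(root, name)) > last_backup_ts:
                return True
    return False

# Tiny self-contained demo with a fake "database directory".
dbdir = tempfile.mkdtemp()
backup_time = time.time() - 3600            # pretend we backed up an hour ago
with open(os.path.join(dbdir, "t.MYD"), "w") as f:
    f.write("row")                          # fresh write -> newer mtime
changed = modified_since(dbdir, backup_time)
```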





RE: Installing MySQL on Fedora

2003-11-29 Thread Phil Ellett
As far as I am aware mysqladmin is a server control program so it comes with
the mysql-server RPM .. The one listed below is purely a mysql client and
shared library package.

Regards,

Phil.


From: Todd Cary [mailto:[EMAIL PROTECTED] 
Sent: 30 November 2003 01:02
To: [EMAIL PROTECTED]
Subject: Installing MySQL on Fedora

I tried installing MySQL on Fedora and all seemed to go fine, but there is
no mysqladmin.   Have I missed something here?

And I am not sure where the mysql executable resides.  The version is

MySQL-3.23.58-1.i386.rpm

Todd
-- 







corrupt table - need some guru help!

2003-10-15 Thread Phil Swenson
We've had problems with a production database having indexes getting
corrupt.  There are about 17 million records (500,000 new records a day)
in the table (MyISAM) and we run a delete script against it every night
to delete old data so the table doesn't get too big.  We are running
mysql 3.23.56 on Red Hat 7.2 using the JDBC driver.
 
Periodically we get a corrupt table with an error message like:
 
From mysqlcheck kenobi --check-only-changed
 
kenobi.dg_analysis_reading
warning  : Table is marked as crashed
error: Key in wrong position at page 1335379968
error: Corrupt
 
Doing a repair quick or a repair on the table doesn't solve the
problem. Doing more extensive repairs would take too long; I've
found dropping the indexes and recreating is the only reasonable way to
solve it.  But that's a pain and can take hours.
 
My questions are:
 
1)   Is there any way we can utilize that page # in the error
message to fix the problem?
2)   Why does the table get corrupted in the first place (happens
about one a month)?
3)   Would upgrading to MySQL 4.0 help?
 
 
I'd really appreciate help on this one... we are stumped!
 


MySQL access issue

2003-09-15 Thread Phil Perrin
Hello everyone,
 
Kind of an oddball question but I'll try to make it as clear as
possible. 
We have a Solaris server, that we have root access to. It houses mysql
db's and information.
I was not the admin for the db's and frankly I'm not a MySQL buff, to be
honest.
 
Our DB Admin is gone now, I need access to mysql db's, but I don't know
what the username/password was for them. What are my options here to be
able to not lose this info and get root access into the db's in
mysql?
 
Thank you,
 
~Phil


Cannot Connect to Server

2003-09-01 Thread Phil Ewington - 43 Plc
Hi All,

Having problems with connecting to mysql server as we have had to change all
the IP addresses on the server. I am assuming that localhost is the problem,
how can I manually change localhost to be an IP address?

The error message is:
Warning: mysql_connect(): Can't connect to local MySQL server through
socket '/var/tmp/mysql.sock' (111) in /home/sites/.. on line 3
Unable to select database

TIA

Phil.


Phil Ewington - Technical Director
--
43 Plc
35 Broad Street, Wokingham
Berkshire RG40 1AU
T: +44 (0)118 978 9500
F: +44 (0)118 978 4994
E: mailto:[EMAIL PROTECTED]
W: http://www.43plc.com






RE: Cannot Connect to Server

2003-09-01 Thread Phil Ewington - 43 Plc
Could not read response from last sender :o(



-Original Message-
From: Phil Ewington - 43 Plc [mailto:[EMAIL PROTECTED]
Sent: 01 September 2003 11:35
To: Mysql
Subject: Cannot Connect to Server


Hi All,

Having problems with connecting to mysql server as we have had to change all
the IP addresses on the server. I am assuming that localhost is the problem,
how can I manually change localhost to be an IP address?

The error message is:
Warning: mysql_connect(): Can't connect to local MySQL server through
socket '/var/tmp/mysql.sock' (111) in /home/sites/.. on line 3
Unable to select database

TIA

Phil.


Phil Ewington - Technical Director
--
43 Plc
35 Broad Street, Wokingham
Berkshire RG40 1AU
T: +44 (0)118 978 9500
F: +44 (0)118 978 4994
E: mailto:[EMAIL PROTECTED]
W: http://www.43plc.com









Import from Excel to MySQL

2003-08-19 Thread Phil Perrin
I'm a relative newbie to MySql, and I've got a small project I'm working
on and hopefully you folks can either point me in the proper direction
or give me a little help.
 
I have multiple spreadsheets in Excel format and in .csv format too,
that I would like to possibly import to a mysql database. All of the
excel and csv files have is one huge column and they are only 1 cell on
each line - an estimated 7k-8k domain names I need to run a whois on.
 
Basically I need a way to import them from the Excel sheet to the
database so at that point I can manipulate the data and use a php script
of some sort to run a whois after extracting the domain name, and then
return the results to the database and have it attached to the domain
name.
 
Any help would be greatly appreciated!
 
~Phil
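The import step Phil describes can be sketched in Python. This is only an illustration: it uses SQLite so the snippet is self-contained, and the domain names are invented sample data. With a MySQL driver the same loop applies unchanged, or MySQL's LOAD DATA INFILE could do the load in one statement.

```python
# Read a one-column CSV of domain names and load it into a table,
# leaving a whois column to be filled in later by a separate script.
import csv
import io
import sqlite3

csv_text = "example.com\nexample.org\nexample.net\n"   # stand-in for the real file

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE domains (name TEXT PRIMARY KEY, whois TEXT)")

reader = csv.reader(io.StringIO(csv_text))
rows = [(line[0],) for line in reader if line]          # skip blank lines
conn.executemany("INSERT INTO domains (name) VALUES (?)", rows)

loaded = conn.execute("SELECT COUNT(*) FROM domains").fetchone()[0]
```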


Functions as default values

2003-08-07 Thread Phil Ewington - 43 Plc
Hi All,

I am trying to use a function as a default value for a column but do not
seem to get the desired result. I want to use the NOW() function for a
last_updated column, here is my code...

CREATE TABLE test_table (
  last_updated datetime NOT NULL default `NOW()`
) TYPE=MyISAM;

This gives an error;

CREATE TABLE test_table (
  last_updated datetime NOT NULL default 'NOW()'
) TYPE=MyISAM;

Now the table shows a default value of 0000-00-00 00:00:00; when I add a new
row the value of last_updated is also 0000-00-00 00:00:00. I am using MySQL
3.23.37, can anyone help?

TIA


Phil Ewington - Technical Director
--
43 Plc
35 Broad Street, Wokingham
Berkshire RG40 1AU
T: +44 (0)118 978 9500
F: +44 (0)118 978 4994
E: mailto:[EMAIL PROTECTED]
W: http://www.43plc.com






RE: Functions as default values

2003-08-06 Thread Phil Ewington - 43 Plc
No functions as default values is a bummer, but TIMESTAMP will do the trick,
so thanks for your help; much appreciated.


Phil.
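The workaround agreed on above - let a timestamp-style column fill itself in rather than putting NOW() in the DEFAULT clause - can be sketched with SQLite, which accepts DEFAULT CURRENT_TIMESTAMP and keeps the demo self-contained. The table and column names are invented; MySQL's TIMESTAMP column auto-initialisation behaves similarly in spirit but differs in detail by version.

```python
# A last_updated column that fills itself in on INSERT, with no function
# call needed in the DDL's DEFAULT clause.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE test_table (
           id INTEGER PRIMARY KEY,
           note TEXT,
           last_updated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
       )"""
)
conn.execute("INSERT INTO test_table (note) VALUES ('hello')")
row = conn.execute("SELECT last_updated FROM test_table").fetchone()
# last_updated was populated automatically by the default
```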

-Original Message-
From: Cybot [mailto:[EMAIL PROTECTED]
Sent: 06 August 2003 15:37
To: [EMAIL PROTECTED]
Subject: Re: Functions as default values


 I am trying to use a function as a default value for a column but do not
 seem to get the desired result. I want to use the NOW() function for a
 last_updated column, here is my code...

 CREATE TABLE test_table (
   last_updated datetime NOT NULL default `NOW()`
 ) TYPE=MyISAM;

 This gives an error;

 CREATE TABLE test_table (
   last_updated datetime NOT NULL default 'NOW()'
 ) TYPE=MyISAM;

  Now the table shows a default value of 0000-00-00 00:00:00, when I add a new
  row the value of last_updated is also 0000-00-00 00:00:00. I am using MySQL
 3.23.37, can anyone help?

function as default value is not allowed AFAIK

but timestamp will help you to get what you want!
http://www.mysql.com/doc/en/DATETIME.html


--
Sebastian Mendel

www.sebastianmendel.de
www.tekkno4u.de
www.nofetish.com









Re: Create Temporary Table

2003-07-23 Thread Phil Bitis
For what it's worth, I get the same problem with 4.0.13, and have posted the
same question a few times with no response.
It works fine at the command line, but not through the mysql++ API.
It doesn't work on my home/work machines (both running XP), though my
colleague doesn't experience the problem at all even though he's using
4.0.13 on 2000/XP.

- Original Message -
From: Russell R Snella [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, July 23, 2003 5:57 PM
Subject: Create Temporary Table


 Hello,

 Mysql Version 4.0.11a-gamma

 When I try and run the following query


 create temporary table russ1 as

 select rcust.*

 from rcust;


 and I receive the error message:

 Error Code : 1044

 Access denied for user: '[EMAIL PROTECTED]' to database 'customer'

 (0 ms taken)



 I can run the query from the command line of the server and the query
 works correctly.

 I have run the query GRANT ALL PRIVILEGES ON customer.* TO
 [EMAIL PROTECTED]



 In addition, I flushed the privileges, and I stopped and restarted the
 database.



 Thank you, for you help in advance.





 Russell R. Snella










Looking up duplicate record or adding new unique record

2003-07-23 Thread Phil Bitis
CREATE TABLE tbllayer (
  LayerID int(11) NOT NULL default '0',
  LayerSize int(11) NOT NULL default '0',
  IceTypeID int(11) NOT NULL default '0',
  Fingerprint char(16) binary default NULL,
  PRIMARY KEY  (LayerID),
  UNIQUE KEY Fingerprint (Fingerprint),
  KEY IceTypeID (IceTypeID)
) TYPE=MyISAM;

We have an internet monitoring application which stores objects in the above
table, with the fingerprint an MD4 of the object. In general about 30% of
the time an object monitored is already in the table, in which case we don't
want to re-insert it, we just want to find out it's ID. The percentage may
vary between 10% and 50% though.

Currently we have a cache in our application which works like this:

- object is monitored and its fingerprint taken
- is the fingerprint in the cache? if so, take its ID from there
- if not, do a SELECT on the table - if a match is found add it to the cache
and use its ID
- if not, INSERT the record into the table, use its ID and possibly add it
to the cache too

Ok. In general, is it better to:

- do a SELECT to see if the record exists and if not INSERT it
or
- do an INSERT, and if it fails then do a SELECT to locate the record

What about if the duplicate ratio is high or low?

Cheers,
-Phil
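The "INSERT first, SELECT on duplicate" strategy from the post can be sketched with SQLite for a self-contained demo; MySQL's INSERT IGNORE against the unique Fingerprint key behaves similarly in spirit. The table shape loosely follows tbllayer, and the fingerprint strings below are fake stand-ins for MD4 hashes.

```python
# Insert an object's fingerprint if it is new, then return its ID either
# way, relying on the UNIQUE constraint to reject duplicates.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tbllayer ("
    " LayerID INTEGER PRIMARY KEY AUTOINCREMENT,"
    " Fingerprint TEXT UNIQUE)"
)

def layer_id_for(fingerprint):
    """Insert the fingerprint if new, then return its LayerID either way."""
    conn.execute(
        "INSERT OR IGNORE INTO tbllayer (Fingerprint) VALUES (?)",
        (fingerprint,),
    )
    row = conn.execute(
        "SELECT LayerID FROM tbllayer WHERE Fingerprint = ?",
        (fingerprint,),
    ).fetchone()
    return row[0]

first = layer_id_for("md4-aaaa")
second = layer_id_for("md4-bbbb")
again = layer_id_for("md4-aaaa")   # duplicate: same ID back, no new row
```

When the duplicate ratio is high, this saves the failed-INSERT round trip that the SELECT-first variant avoids by design; which wins in practice depends on the actual ratio, as the post asks.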








Re: URGENT : Benchmark

2003-07-22 Thread Phil Bitis
Does your university have a webpage indicating what advice is acceptable and
not considered plagiarism?

- Original Message -
From: Antonio Jose Rodrigues Neto [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, July 22, 2003 9:54 PM
Subject: URGENT : Benchmark


 Help me ...

 Antonio Jose Rodrigues Neto [EMAIL PROTECTED]
wrote:Hi All,

 I am almost finished with my MSc thesis, which is on iSCSI performance. I would
like to run tests using Fibre Channel and iSCSI with MySQL. Please, I will
need information (a cookbook) on how to implement tuning on MySQL - Linux
Red Hat 9.

 I installed sql-bench and ran tests against iSCSI and NFS (just a test),
generating the files RUN-iscsi and RUN-nfs. How do I use the
compare-results tool?

 Which book (the best book) would you recommend about MySQL?

 Please help me?








Re: More duhh! questions

2003-07-16 Thread Phil Bitis
I've been having the same problem, using mysql++ with mysql 4.0.13.
It works just fine entered into mysql-front or mysql.exe, but not through
mysql++
Can I check the CREATE TEMPORARY TABLES privilege through the API?

- Original Message -
From: Victoria Reznichenko [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, July 16, 2003 8:43 AM
Subject: Re: More duhh! questions


 Degan, George E, JR, MGSVC [EMAIL PROTECTED] wrote:
  I am attempting to create a temporary table to do a complex query and I
get an error:
  error 1044: Access denied for user: '@localhost' to database
'shopsample'
  what can I do to keep this from happening?  I am using the production
version of mySQL 4.0.13 in windows 2000.

 Does user have CREATE TEMPORARY TABLES privilege?


 --
 For technical support contracts, goto https://order.mysql.com/?ref=ensita
 This email is sponsored by Ensita.net http://www.ensita.net/
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /Victoria Reznichenko
  / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
 /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
___/   www.mysql.com













Re: Improving insertion performance by locking tables

2003-07-14 Thread Phil Bitis
Is there a limit to the number of records I can insert in a multiple-value
insert?

- Original Message -
From: Rudy Metzger [EMAIL PROTECTED]
To: Phil Bitis [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, July 14, 2003 10:18 AM
Subject: RE: Improving insertion performance by locking tables


From what I know, MySQL always locks the MyISAM table before you
insert, update or delete something from it. So the key here is not so
much whether you should lock the table, but how you insert the data (single
inserts vs multi inserts). Multi inserts are the way to go. By locking
the table you would actually lose time.

/rudy

-Original Message-
From: Phil Bitis [mailto:[EMAIL PROTECTED]
Sent: zaterdag 12 juli 2003 19:00
To: [EMAIL PROTECTED]
Subject: Improving insertion performance by locking tables

Hello,

We've got an application which does many multiple-value inserts to
different
tables.

Is it worth locking a table before doing a multiple-value insert with
say 50
records? If so, what is the number of records that makes it worthwhile?
If not, is it worth locking a table before doing 2 separate such
inserts?

How do these locks affect select statements involving the locked tables?

Thanks in advance,
-Phil













This list

2003-07-14 Thread Phil Bitis
Might it be worth looking at the mailing list manager software for this
list? ACCU's mailing lists use Majordomo and add this line to the rfc822
headers:

Reply-To: [EMAIL PROTECTED]

You can still see the sender's email address if you want to reply directly.






Re: SELECT TOP

2003-07-14 Thread Phil Bitis
Yeah, put LIMIT 20 on the end.
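The LIMIT fix suggested above can be sketched with SQLite, which shares the same LIMIT syntax, so the demo runs stand-alone; the history table here is a stripped-down stand-in for the one in Jim's query.

```python
# MySQL has no "SELECT TOP n"; "ORDER BY ... LIMIT n" does the same job.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE history (historyid INTEGER PRIMARY KEY, searchstring TEXT)"
)
conn.executemany(
    "INSERT INTO history (historyid, searchstring) VALUES (?, ?)",
    [(i, "query %d" % i) for i in range(1, 31)],
)

# Last 20 searches, newest first - LIMIT replaces "SELECT TOP 20".
last20 = conn.execute(
    "SELECT historyid FROM history ORDER BY historyid DESC LIMIT 20"
).fetchall()
```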

- Original Message -
From: Jim McAtee [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, July 14, 2003 11:12 PM
Subject: SELECT TOP


 What's wrong with the following query?  The application used to use Access
via
 ODBC, now running MySQL 3.23.xx, MyISAM table, accessed via MyODBC
3.51.01.

 // Return last 20 searches
 SELECT TOP 20 h.historyid, h.searchstring, h.serverid, s.shortname
 FROM history h
   INNER JOIN servers s ON h.serverid = s.serverid
 WHERE h.employeeid = 9
 ORDER BY h.historyid DESC

 Works fine once I remove the TOP 20 from the query.  If this isn't
supported,
 is there an equivalent?

 Thanks,
 Jim










Re: does mySQL support a boolean data type?

2003-07-13 Thread Phil Bitis
You can use BIT or BOOL, but these are currently just synonyms for
TINYINT(1). Still your best bet though.

The manual says under new features planned for 5.1: Optimise BIT type to
take 1 bit (now BIT takes 1 char)
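The space question in this thread boils down to: eight booleans fit in one byte only if something packs them at the bit level, which MySQL of this era does not do (BIT/TINYINT(1) still cost a full byte per flag). A language-level sketch of that packing, as a general technique rather than anything MySQL-specific:

```python
# Pack a list of booleans into the bits of a single integer, and read
# individual flags back out with bit masks.
flags = [True, False, True, True, False, False, True, False]

packed = 0
for i, flag in enumerate(flags):
    if flag:
        packed |= 1 << i          # set bit i

def flag_at(byte, i):
    """Read flag i back out of the packed value."""
    return bool(byte & (1 << i))

unpacked = [flag_at(packed, i) for i in range(len(flags))]
```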

- Original Message -
From: Dan Anderson [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Sunday, July 13, 2003 6:21 PM
Subject: Re: does mySQL support a boolean data type?


  why don't you use int(1) and set it to 0 or 1?

 Storage space is an issue because I am designing a very large database
 for a client.  Every byte I save per row will translate into many many
 megs.  So if all I need is a single bit for true or false I want to get
 as close to that single bit as possible.  (Although most bools end up
 being more than a single bit because of architecture issues).

 So, to put a long story short, I am trying to make every column's
 internal data structure as tiny as possible.

 -Dan










Re: does mySQL support a boolean data type?

2003-07-13 Thread Phil Bitis
int(1) takes up 4 bytes worth of space, and just displays 1 character.
BIT or TINYINT(1) take up 1 byte.

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, July 13, 2003 6:05 PM
Subject: Re: does mySQL support a boolean data type?


why don't you use int(1) and set it to 0 or 1?

-yves


-----Original Message-----
From: Dan Anderson [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, 13 July 2003 18:41
Subject: does mySQL support a boolean data type?


 I ran a search through the mySQL manual and google and could not find a
 satisfactory answer.  Does mySQL support the declaration of a boolean
 data type?  Currently I am using VARCHAR(6)s with either 'TRUE' or
 'FALSE' and would like to cut down on the storage.

 Thanks in advance,

 Dan Anderson












Re: does mySQL support a boolean data type?

2003-07-13 Thread Phil Bitis
Presumably if you don't specify a display size it defaults to the maximum.
I'm just quoting from the manual, have a look at 6.2 Column Types

- Original Message -
From: [EMAIL PROTECTED]
To: Phil Bitis [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, July 13, 2003 7:40 PM
Subject: Re: does mySQL support a boolean data type?


ehm, why do i say INT(1) then, and not just INT?

but one could also take a 1 char type and perform some AND and OR on that
one by hand. it's not really fast but it's possible right now. and it should
be possible to convert the table later (for version 5.1 maybe) and with a
little abstraction in the programme code, it shouldn't be too hard.

-yves


-----Original Message-----
From: Phil Bitis [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, 13 July 2003 19:57
Subject: Re: does mySQL support a boolean data type?


 int(1) takes up 4 bytes worth of space, and just displays 1 character.
 BIT or TINYINT(1) take up 1 byte.

 - Original Message -
 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Sunday, July 13, 2003 6:05 PM
 Subject: Re: does mySQL support a boolean data type?


 why don't you use int(1) and set it to 0 or 1?

 -yves


 -----Original Message-----
 From: Dan Anderson [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Sunday, 13 July 2003 18:41
 Subject: does mySQL support a boolean data type?


  I ran a search through the mySQL manual and google and could not find a
  satisfactory answer.  Does mySQL support the declaration of a boolean
  data type?  Currently I am using VARCHAR(6)s with either 'TRUE' or
  'FALSE' and would like to cut down on the storage.
 
  Thanks in advance,
 
  Dan Anderson
 
 
 














Re: query help!!!

2003-07-13 Thread Phil Bitis
Hiya. I take it you mean ancestors rather than descendants.

For finding descendants I've been using tables like this:

ID    ParentID    Lineage
100   0           /100
101   100         /100/101
102   100         /100/102
103   101         /100/101/103
104   103         /100/101/103/104

You can find children by using LEFT(). A problem with this scheme is you
need to know the maximum length of the IDs in the Lineage field. I've been
using 8 hex digits as the width of the IDs there just to be safe. Not sure
if this would work for what you need.

Another thing you could do is create another table to hold hierarchy
relationships.
The data above would appear in this table like so:

ID   AncestorID
101 100
102 100
103 101
103 100
104 100
104 101
104 103

As you can see, the deeper a record is in the hierarchy, the more records it
would generate.
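The Lineage scheme above can be sketched with SQLite for a self-contained demo: the descendants of a node are exactly the rows whose Lineage starts with that node's Lineage plus a separator, which a LIKE prefix match finds directly. The IDs are copied from the sample table in the post.

```python
# Materialised-path ("Lineage") hierarchy: find all descendants of a node
# with a single prefix match, no recursion needed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tree (ID INTEGER PRIMARY KEY, ParentID INTEGER, Lineage TEXT)"
)
conn.executemany(
    "INSERT INTO tree VALUES (?, ?, ?)",
    [
        (100, 0,   "/100"),
        (101, 100, "/100/101"),
        (102, 100, "/100/102"),
        (103, 101, "/100/101/103"),
        (104, 103, "/100/101/103/104"),
    ],
)

def descendants(node_id):
    """All IDs whose path passes through node_id, at any depth."""
    (lineage,) = conn.execute(
        "SELECT Lineage FROM tree WHERE ID = ?", (node_id,)
    ).fetchone()
    rows = conn.execute(
        "SELECT ID FROM tree WHERE Lineage LIKE ?",
        (lineage + "/%",),
    ).fetchall()
    return sorted(r[0] for r in rows)

desc_of_101 = descendants(101)   # 103 and 104, but not sibling 102
```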

- Original Message -
From: bruce [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, July 14, 2003 2:29 AM
Subject: query help!!!


 Hi...

 I have a question... a problem actually!!!

 I'm trying to figure out what the descendants of a given child
 would be. The table defs are...:

 CREATE TABLE ItemTbl
 (
 id int(5) NOT NULL auto_increment,
 itemname varchar(25) NOT NULL default '',
 unique (itemname),
 PRIMARY KEY  (id)
 ) TYPE=MyISAM;


 #
 # assettypetbl
 #
 # b. douglas july 10,03
 # added status. allows user from web to modify the asset obj
 # type. user is able to set items within an asset type
 # to be added/deleted from asset type.
 #
 #
 # Used to store the name/parentid of a given item. the table is used
 # in conjunction with the ItemTbl. This table permits duplicate
 # item names, as a given item can be a child of multiple parent items.
 # In other words, the item can belong to multiple item structures
 #
 #
 # The table contains the following information:
 #
 # itemname varchar(25) NOT NULL default '',(this gets the image/thumb)
 # parent_id int(5) NOT NULL default '',
 #

 CREATE TABLE atbl
 (
 itemname varchar(25) NOT NULL default '', #(this gets the image/thumb)
 parent_id int(5) NOT NULL default '',
 ) TYPE=MyISAM;


 #
 # test data
 #
 insert into itemtbl (itemname) values ('tire');
 insert into itemtbl (itemname) values ('rim');
 insert into itemtbl (itemname) values ('hub');
 insert into itemtbl (itemname) values ('wheel');
 insert into itemtbl (itemname) values ('car');
 insert into itemtbl (itemname) values ('engine');
 insert into itemtbl (itemname) values ('window');
 insert into itemtbl (itemname) values ('airplane');

 insert into atbl (itemname, parent_id) values ('tire', 4);
 insert into atbl (itemname, parent_id) values ('rim', 4);
 insert into atbl (itemname, parent_id) values ('hub', 4);
 insert into atbl (itemname, parent_id) values ('wheel', 5);
 insert into atbl (itemname, parent_id) values ('car', 0);
 insert into atbl (itemname, parent_id) values ('engine', 5);
 insert into atbl (itemname, parent_id) values ('window', 5);
 insert into atbl (itemname, parent_id) values ('airplane', 0);
 insert into atbl (itemname, parent_id) values ('wheel', 8);

 The atbl contains the parent item relationships... for the items...

 Basically, I need a way of determining what the
 parents/grandparents/great-grandparents/etc... are for a given item. My
 concern is that a user might add an item and a parent, and I would get
into
 an item being a parent of itself...

 So..does anyone have a good way that I could create a query to generate
the
 descendants of a given item??

 Thanks for any pointers/information that might help!!!

 Regards,

 Bruce
 [EMAIL PROTECTED]
 (925) 866-2790















Improving insertion performance by locking tables

2003-07-12 Thread Phil Bitis
Hello,

We've got an application which does many multiple-value inserts to different
tables.

Is it worth locking a table before doing a multiple-value insert with say 50
records? If so, what is the number of records that makes it worthwhile?
If not, is it worth locking a table before doing 2 separate such inserts?

How do these locks affect select statements involving the locked tables?

Thanks in advance,
-Phil
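The MyISAM-era rule of thumb (per the MySQL manual of the day) was that a single multi-value INSERT already holds the table lock for its whole duration, so LOCK TABLES buys nothing for one statement; it pays off when batching several INSERT statements, because the key cache is flushed once at UNLOCK TABLES rather than after every statement. While a WRITE lock is held, SELECTs on that table wait until it is released, so batches should stay short. A rough sketch of the batching pattern, using SQLite transactions in Python as a stand-in for LOCK TABLES ... WRITE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")

rows = [(f"event {i}",) for i in range(200)]

# Several 50-row inserts inside one lock/transaction -- the analogue of
# LOCK TABLES log WRITE; ...inserts...; UNLOCK TABLES; on a MyISAM table.
with conn:  # opens a transaction, commits (releases the "lock") on exit
    for start in range(0, len(rows), 50):
        chunk = rows[start:start + 50]
        conn.executemany("INSERT INTO log (msg) VALUES (?)", chunk)

count = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 200
```

The 50-row threshold in the question has no magic value; the win comes from amortizing per-statement overhead across the batch, so measure with your own row sizes.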






columns to rows

2003-07-11 Thread Phil Evans
Hi there. I am a rank amateur at this, trying to make sense out of a growing
heap of data.

I have a resultset with this structure:

no    data    date
1     uyt     d1
1     klh     d2
1     oiu     d3
2     kjh     d1
2     kljh    d2
2     asd     d3

that I wish to convert to this structure.

no    d1      d2      d3
1     uyt     klh     oiu
2     kjh     kljh    asd

Given that the original has over 100,000 records, I was hoping to find some reasonable 
way of doing it.

Thanking you,

PhilE
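The usual trick for this (MySQL has no PIVOT) is conditional aggregation: GROUP BY the key column and pick each date's value out with MAX(CASE ... END). A sketch in Python/SQLite with the sample rows from the post; the table name `t` is assumed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (no INTEGER, data TEXT, date TEXT);
    INSERT INTO t VALUES
        (1, 'uyt', 'd1'), (1, 'klh', 'd2'), (1, 'oiu', 'd3'),
        (2, 'kjh', 'd1'), (2, 'kljh', 'd2'), (2, 'asd', 'd3');
""")

# One output row per "no"; each date value becomes its own column.
rows = conn.execute("""
    SELECT no,
           MAX(CASE WHEN date = 'd1' THEN data END) AS d1,
           MAX(CASE WHEN date = 'd2' THEN data END) AS d2,
           MAX(CASE WHEN date = 'd3' THEN data END) AS d3
    FROM t
    GROUP BY no
    ORDER BY no
""").fetchall()

for row in rows:
    print(row)
# (1, 'uyt', 'klh', 'oiu')
# (2, 'kjh', 'kljh', 'asd')
```

With many distinct dates, the CASE arms are usually generated in application code from a first `SELECT DISTINCT date FROM t`; 100,000 rows is no problem for this query as long as `no` is indexed.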



Create Temporary Table problem

2003-07-09 Thread Phil Bitis
As a way of getting around the lack of subselect (I'm aware this is coming
soon) we're parsing sql queries, running subselects and storing their
results in a temporary table and replacing the subselect in the sql with the
temporary table name.

This has been working fine, but on upgrading to v4.0.13 the CREATE
TEMPORARY TABLE statement is failing with this error:

Access denied for user: '@localhost' to database 'uclan_database'

Any ideas?






Re: Create Temporary Table problem

2003-07-09 Thread Phil Bitis
Further to this, I should point out everything works fine in mysql-front or
at the mysql console.

The problem shows up when using mysql++, a BadQuery exception is thrown.

 query.reset();
 query << "CREATE TEMPORARY TABLE " << sTemporary << " TYPE=HEAP MAX_ROWS=1 " << subselect;

 try
 {
query.parse();
query.execute();
 }



- Original Message -
From: Phil Bitis [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, July 09, 2003 12:41 PM
Subject: Create Temporary Table problem


 As a way of getting around the lack of subselect (I'm aware this is coming
 soon) we're parsing sql queries, running subselects and storing their
 results in a temporary table and replacing the subselect in the sql with
the
 temporary table name.

 This has been working fine, but on upgrading to v4.0.13 the CREATE
 TEMPORARY TABLE statement is failing with this error:

 Access denied for user: '@localhost' to database 'uclan_database'

 Any ideas?







Re: Create Temporary Table problem

2003-07-09 Thread Phil Bitis
  Further to this, I should point out everything works fine in mysql-front
  or at the mysql console.
 
  The problem shows up when using mysql++, a BadQuery exception is thrown.
 
   query.reset();
   query << "CREATE TEMPORARY TABLE " << sTemporary << " TYPE=HEAP MAX_ROWS=1 " << subselect;
 
   try
   {
  query.parse();
  query.execute();
   }
 

 This is simple to solve.

 As recommended in the MySQL++ manual, use the stream only for queries
returning a result set. For queries like the one above, use the exec() method.

Point taken, changed it to just use exec() and the same problem occurs. It
was working previously with the code above though (mysql 3), and it works
fine entered at the mysql console.






Create Temporary Table

2003-07-06 Thread Phil Dowson
Hi,

PHP Version 4.3.1
Mysql Version 4.0.13

When I try and run the following query

CREATE TEMPORARY TABLE temptable AS SELECT * FROM permtable;

on each database, the first one will run it with no problems at all. The
second returns the error:

[server] ERROR 1044: Access denied for user: '[EMAIL PROTECTED]' to database
'database'

I have tried running the same query from MySQL at my site, via the PHP
interface I am working with, and also via PHPMyAdmin; each returns the same
error.

One thing I should note: the CREATE TEMPORARY TABLE syntax does not work;
however, if I were to use the CREATE TABLE syntax, it works fine. But I'd
rather use TEMPORARY tables.

This problem appears to be intermittent, it will work fine for a few days,
then fail for an undetermined amount of time. I have checked with my ISP,
they are sure that the rights haven't changed, and I have CREATE TEMPORARY
TABLE rights.

Any ideas why this is intermittent?

Thanks for your help!

Phil Dowson
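For readers hitting the same thing: MySQL 4.0.2 split temporary tables out into a separate CREATE TEMPORARY TABLES privilege, so an account carried over from older grants can have CREATE but not the temporary-table right; and if the ISP serves the site from more than one MySQL server, only some of them having the grant would look exactly like this intermittent failure. The grant the ISP would need (user and database names here are placeholders) is along these lines:

```sql
-- Privilege needed on MySQL 4.0.2+ (names are placeholders):
GRANT CREATE TEMPORARY TABLES ON database.* TO 'user'@'localhost';
FLUSH PRIVILEGES;
```

SHOW GRANTS FOR the account on each server is the quickest way to confirm whether the privilege is present everywhere.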






Re: [PHP-DB] Create Temporary Table

2003-07-06 Thread Phil Dowson
The IP address CAN change, but doesn't change often enough to cause a
problem. You see, the issue also appears whether I am [EMAIL PROTECTED]
or [EMAIL PROTECTED]; it doesn't matter where I am coming from.

Thx


- Original Message - 
From: Stephen March [EMAIL PROTECTED]
To: Phil Dowson [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, July 06, 2003 2:29 am
Subject: Re: [PHP-DB] Create Temporary Table


 Does the ip address of where you are trying to access the mysql database
 from (the client) change? If you have a dynamic ip address,
 your ISP should have a grant something to the effect of:

 grant all on database.* to [EMAIL PROTECTED] identified by 'password';

 This would allow your client to be any address on the 10.1.X.X network.
 You can just grant access to the database to [EMAIL PROTECTED] if you want that
 user to be able to access the database from any ip address. This might
 have some security implications for you, if it's wide open on the
 Internet. That's the only reason I can think of for intermittent problems.

 Cheers,
 ~Steve

 Phil Dowson wrote:

 Hi,
 
 PHP Version 4.3.1
 Mysql Version 4.0.13
 
 When I try and run the following query
 
 CREATE TEMPORARY TABLE temptable AS SELECT * FROM permtable;
 
 on each database, the first one will run it with no problems at all. The
 second returns the error:
 
 [server] ERROR 1044: Access denied for user: '[EMAIL PROTECTED]' to database
 'database'
 
 I have tried running the same query from MySQL at my site, from via the
PHP
 interface I am working with and also via PHPMyAdmin, each returns the
same
 error.
 
 One thing I should note. The CREATE TEMPORARY TABLE syntax does not work,
 however if I were to use the CREATE TABLE syntax, it works fine. But I'd
 rather use the TEMPORARY tables.
 
 This problem appears to be intermittent, it will work fine for a few
days,
 then fail for an undetermined amount of time. I have checked with my ISP,
 they are sure that the rights haven't changed, and I have CREATE
TEMPORARY
 TABLE rights.
 
 Any ideas why this is intermittent?
 
 Thanks for your help!
 
 Phil Dowson
 
 
 
 
 










Stopped working after update

2003-07-03 Thread Phil Rotsky
I've just installed various updates to SuSE 8.2 via SuSE's web site. One of 
these was an update to MySQL to fix a security bug. Now MySQL doesn't work!

During boot-up, I get the message that MySQL failed. In the log it says:
starting service mysql
failed
[..]S13mysql start exits with status 1

When I try to start MySQL manually, it also fails. When I looked in mysqld.log 
it seems to think mysql is already running because port 3306 is taken. In 
YaST runlevel editor I looked and, sure enough, it believes mysql is up and 
running. However, when I try to use it I get:

'Can't connect to local MySQL server through socket 
'/var/lib/mysql/mysql.sock'

And sure enough, mysql.sock doesn't exist in that directory. In fact, I can't 
find it anywhere on the machine! Why would it just vanish?

Before I really start screwing things up, any thoughts where I should 
start...?

-- 
a+
Steve



