Re: replace / insert into with sum not working

2007-04-18 Thread Tanner Postert

I've tried a flat insert without the () around the SELECT; I only put them
there for readability for a coworker. I get the same result from the
query:

insert into summary

select sum(x) as x, ...
from table
where condition = 'true'
group by whatever


error: FUNCTION: database_name.sum is not a function. Is this a bug?


On 4/17/07, Michael Dykman [EMAIL PROTECTED] wrote:


In most common places where a VALUES list is specified, you can
substitute a SELECT statement which produces such a list; SQL is an
algebra, after all.  The only thing wrong with the original poster's
syntax was the parentheses around the SELECT statement itself.
Drop them and it should work fine.

Now, the REPLACE might be more elegantly handled with an INSERT ... ON
DUPLICATE KEY UPDATE construct, but that wasn't the question...
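
For example, a minimal sketch of that construct, assuming summary has a
unique key on id (table and column names illustrative):

INSERT INTO summary (id, total)
SELECT id, SUM(x)
FROM detail
GROUP BY id
ON DUPLICATE KEY UPDATE total = VALUES(total);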

On 4/17/07, Baron Schwartz [EMAIL PROTECTED] wrote:
 Hi Tanner,

 Tanner Postert wrote:
  i'm using the following query:
 
REPLACE INTO vviews_total(
SELECT uuser_id, sum( vviews.views ) AS views, sum( vviews.embeds ) AS embeds,
sum( vviews.plinks ) AS plinks, sum( vviews.`30d` ) AS `30d`, sum( vviews.`7d` ) AS `7d`,
sum( vviews.`24h` ) AS `24h`, sum( vviews.site30d ) AS site30d, sum( site7d ) AS site7d,
sum( vviews.site24h ) AS site24h, sum( vviews.click ) AS click, now( ) AS last_dt
FROM vviews
JOIN video ON ( video.id = vviews.video_id )
WHERE video.nsfw = 0
GROUP BY vviews.uuser_id )
 
 
the inner query works fine: it grabs the SUM for each of the fields and sums them
to insert into the other table. But when I try to run the above query, I
get an error:
 
  FUNCTION db.sum does not exist.


 I think the syntax you want is

 REPLACE INTO tbl (col, col, ... col) SELECT...

 Not

 REPLACE INTO tbl(SELECT)
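
For example, abbreviated from your query (extend the column list to match
the SELECT):

REPLACE INTO vviews_total (uuser_id, views, embeds, plinks, last_dt)
SELECT uuser_id, SUM(views), SUM(embeds), SUM(plinks), NOW()
FROM vviews
JOIN video ON (video.id = vviews.video_id)
WHERE video.nsfw = 0
GROUP BY vviews.uuser_id;

Note there are no parentheses around the SELECT.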

 Baron





--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong.  Some models are useful.



Results taking over 15 seconds!

2007-04-18 Thread icoste

Hi, I am having problems with result times. I run a search from my website
and it takes around 15 seconds to run the MySQL query and then 40
seconds to load the page!

Does anyone have any ideas on this? It is racking my brains!

The address is http://www.icoste.com

Thanks in advance.





Re: Results taking over 15 seconds!

2007-04-18 Thread Ananda Kumar

Can you please let us know the MySQL query that gets generated during your
search?
Also, get the EXPLAIN plan of the SQL.
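
For example (table and column names illustrative):

EXPLAIN SELECT * FROM products WHERE name LIKE '%term%';

The type, key, and rows columns of the output show whether an index is used
and roughly how many rows are examined.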

regards
anandkl


On 4/18/07, icoste [EMAIL PROTECTED] wrote:



Hi, I am having problems with result times. I run a search from my website
and it takes around 15 seconds to run the MySQL query and then 40
seconds to load the page!

Does anyone have any ideas on this? It is racking my brains!

The address is http://www.icoste.com

Thanks in advance.




RE: Millisecond time stamp

2007-04-18 Thread Tim Lucia


 -Original Message-
 From: John Comerford [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, April 17, 2007 10:50 PM
 To: mysql@lists.mysql.com
 Subject: Millisecond time stamp
 
 Hi Folks,
 
 I am putting together a table to hold log entries.  I was going to index
 it on a field of type TIMESTAMP.  Unfortunately this will not
 suit my needs because I could have more than one log entry per second.
 As far as my (limited) knowledge goes, I have two options to get around
 this:
 
 A) One character field containing a string I build from the date with the
 milliseconds tagged onto the end
 B) Two separate fields: Field1 TIMESTAMP, Field2 milliseconds

WHY are you indexing the Timestamp?  It should not be your primary key
because, as you point out, it might allow duplicates.  Even case B is not
foolproof.  If you're indexing it for searching, then your application
should be prepared to handle multiple log records for any value.

For uniqueness, add an auto_increment primary key column.  That way, you can
tell two+ log records apart even when they have the same timestamp.
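
A minimal sketch of such a log table (names illustrative):

CREATE TABLE log (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- unique even for same-second entries
  logged_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  message TEXT,
  KEY idx_logged_at (logged_at)  -- non-unique index, for searching only
);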

Tim
 
 I am leaning towards approach B, but that's more gut instinct
 than anything else.  My questions are:
 
 1) Is there a better way to achieve what I want?
 2) Which of the two methods above would/do you use?
 
 I am using PHP to access the database and I am fairly new to both PHP
 and MySQL.
 
 TIA,
   JC
 







Re: Problem on millions of records in one table?

2007-04-18 Thread Brent Baisley
It all depends on how complicated your data and searches are. I've got tables that add 2-3 million records per day and I don't have
performance problems, although we only retain at most 500 million records, not a full year's worth.


That said, you can get horrible performance out of MySQL with tables as small as 100,000 records if you don't structure your queries
correctly or use a good table structure. If I know the tables are going to grow quickly and I don't need the entire dataset all the
time, I'll use MERGE tables. This makes it easy to remove old data from the default table set.
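
A minimal sketch of that setup, assuming identical MyISAM tables (names
illustrative):

CREATE TABLE log_2007_03 (dt DATETIME, msg VARCHAR(255)) ENGINE=MyISAM;
CREATE TABLE log_2007_04 (dt DATETIME, msg VARCHAR(255)) ENGINE=MyISAM;

CREATE TABLE log_all (dt DATETIME, msg VARCHAR(255))
  ENGINE=MERGE UNION=(log_2007_03, log_2007_04) INSERT_METHOD=LAST;

-- Removing a month of old data is then just a metadata change:
ALTER TABLE log_all UNION=(log_2007_04);
DROP TABLE log_2007_03;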


- Original Message - 
From: He, Ming Xin PSE NKG [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, April 17, 2007 11:03 PM
Subject: Problem on millions of records in one table?


Hi all,

The number of records in one table increases constantly. As
estimated, it will grow to at least 30 million within one
year, so we worry about whether MySQL can handle such a large number of
records with good performance, or whether we need some other solution to
avoid this problem, such as partitioning or dividing the big table. Any
help or ideas would be greatly appreciated.

Best Regards
mingxin





Re: Results taking over 15 seconds!

2007-04-18 Thread Ananda Kumar

Not the variable;
it's the SQL statements which get fired at the database when you do the
search in your application.
Maybe you should check your app server for these SQL statements.

regards
anandkl


On 4/18/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Can you please let us know the MySQL query that gets generated during your
search?
Also, get the EXPLAIN plan of the SQL.

regards
anandkl


 On 4/18/07, icoste [EMAIL PROTECTED] wrote:


 Hi, I am having problems with result times. I run a search from my
 website
 and it takes around 15 seconds to run the MySQL query and then 40
 seconds to load the page!

 Does anyone have any ideas on this? It is racking my brains!

 The address is http://www.icoste.com

 Thanks in advance.







Re: Problem on millions of records in one table?

2007-04-18 Thread Jay Pipes

Brent Baisley wrote:
It all depends on how complicated your data and searches are. I've got
tables that add 2-3 million records per day and I don't have performance
problems, although we only retain at most 500 million records, not a
full year's worth.

That said, you can get horrible performance out of MySQL with tables as
small as 100,000 records if you don't structure your queries correctly
or use a good table structure. If I know the tables are going to grow
quickly and I don't need the entire dataset all the time, I'll use MERGE
tables. This makes it easy to remove old data from the default
table set.


Hi!  Have you tried out the new partitioning features of MySQL 5.1 to do
this?  It would be cool if you had some performance numbers comparing the
older MERGE table method with the newer partitioning...
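
For comparison, a minimal 5.1 sketch (names illustrative):

CREATE TABLE log_all (
  dt DATETIME NOT NULL,
  msg VARCHAR(255)
)
PARTITION BY RANGE (TO_DAYS(dt)) (
  PARTITION p200703 VALUES LESS THAN (TO_DAYS('2007-04-01')),
  PARTITION p200704 VALUES LESS THAN (TO_DAYS('2007-05-01'))
);

-- Removing old data becomes a metadata operation:
ALTER TABLE log_all DROP PARTITION p200703;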


Cheers!

Jay

- Original Message - From: He, Ming Xin PSE NKG 
[EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, April 17, 2007 11:03 PM
Subject: Problem on millions of records in one table?


Hi all,

The number of records in one table increases constantly. As
estimated, it will grow to at least 30 million within one
year, so we worry about whether MySQL can handle such a large number of
records with good performance, or whether we need some other solution to
avoid this problem, such as partitioning or dividing the big table. Any
help or ideas would be greatly appreciated.

Best Regards
mingxin








Suddenly : mysqld: Can't create/write to file PID

2007-04-18 Thread Dave Shariff Yadallee - System Administrator a.k.a. The Root of the Problem
I am suddenly seeing this in my error log:

070418 08:43:57  mysqld started
070418  8:43:57 [ERROR] /usr/contrib/libexec/mysqld: Can't create/write to file
'/usr/contrib/var/doctor.nl2k.ab.ca.pid' (Errcode: 13)
070418  8:43:57 [ERROR] Can't start server: can't create PID file: Undefined 
error: 0
070418 08:43:57  mysqld ended 


How is this fixed?






RE: Results taking over 15 seconds!

2007-04-18 Thread Jerry Schwartz
The sheer number of tables and rows might be part of the problem with
displaying the page, although it's hard for me to say. At one point I put
some HTML comments inside a PHP loop, so they were generated for every line
item on a page, and was astonished at how long it took to load and render.

As for the query, you didn't give us anything to go on. What kind of WHERE
are you doing? Are you using a full-text search, or a LIKE?
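
The difference matters: a leading-wildcard LIKE cannot use an index, while a
FULLTEXT index (MyISAM tables only) can. For example (names illustrative):

SELECT id FROM products WHERE description LIKE '%widget%';  -- scans every row

ALTER TABLE products ADD FULLTEXT (description);
SELECT id FROM products WHERE MATCH(description) AGAINST ('widget');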

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: icoste [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, April 18, 2007 4:37 AM
 To: mysql@lists.mysql.com
 Subject: Results taking over 15 seconds!


 Hi, I am having problems with result times. I run a search
 from my website
 and it takes around 15 seconds to run the MySQL query and then 40
 seconds to load the page!

 Does anyone have any ideas on this? It is racking my brains!

 The address is http://www.icoste.com

 Thanks in advance.











Re: Millisecond time stamp

2007-04-18 Thread David T. Ashley

On 4/17/07, John Comerford [EMAIL PROTECTED] wrote:


Hi Folks,

A) One character field containing a string I build from the date with the
milliseconds tagged onto the end
B) Two separate fields: Field1 TIMESTAMP, Field2 milliseconds

I am leaning towards approach B, but that's more gut instinct
than anything else.  My questions are:

1) Is there a better way to achieve what I want?
2) Which of the two methods above would/do you use?



What you are looking for is the notion of a
server-globally-unique identifier (SGUID): you want something that can't
occur again.

Milliseconds aren't precise enough.  You want microseconds or nanoseconds.

You also need to consider these things:

a) With PHP (assuming web access), multiple processes can be active at the
same time.  Thus, even with microseconds, it is possible for two different
processes to get exactly the same timestamp (especially since the
microsecond counters are typically updated only on hardware timer
interrupts, which don't occur every microsecond).  Thus, you need more than
just time to ensure uniqueness.

b) It is helpful if the string sorting order of the field is also the time
order.

The best approach I've found is to use the following fields, fixed length,
concatenated:

a) Integer time (seconds).

b) Microtime (microseconds or nanoseconds).

c) PID of the process.

with the provision that the code must execute a spinlock to wait for the
microtime to change (otherwise, the next call could get the same identifier,
or--although no practical system is this fast--the current process could end
and another could run with the same PID and get the same time).

Using the three fields above with the spinlock, the string generated is
guaranteed unique for the lifetime of the server (assuming that nobody
tampers with the machine time), because no two processes can have the same
PID at the same time.

Here is some sample code:

http://fboprime.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/sguid.inc?rev=1.4content-type=text/vnd.viewcvs-markup

You can figure out how to navigate from the above to find the other include
files referenced.

If you have any questions and I don't seem to reply to something you post on
this list, please write me at [EMAIL PROTECTED] and reply to the server's reply
so it makes it through my spam filtering.  I don't always watch this mailing
list closely.

Dave


Re: Suddenly : mysqld: Can't create/write to file PID

2007-04-18 Thread Gerald L. Clark
Dave Shariff Yadallee - System Administrator a.k.a. The Root of the 
Problem wrote:

I am suddenly seeing this in my error log:

070418 08:43:57  mysqld started
070418  8:43:57 [ERROR] /usr/contrib/libexec/mysqld: Can't create/write to file
'/usr/contrib/var/doctor.nl2k.ab.ca.pid' (Errcode: 13)
070418  8:43:57 [ERROR] Can't start server: can't create PID file: Undefined 
error: 0
070418 08:43:57  mysqld ended 



How is this fixed?

The mysql user must have execute and write privileges on /usr/contrib/var
and all subdirectories.
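
For example, as root: chown -R mysql /usr/contrib/var and then
chmod -R u+rwX /usr/contrib/var (user name and path taken from the error
above; adjust to your installation). Errcode 13 is "Permission denied", as
perror 13 reports.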


--
Gerald L. Clark
Supplier Systems Corporation




Re: Millisecond time stamp

2007-04-18 Thread David T. Ashley

On 4/18/07, Tim Lucia [EMAIL PROTECTED] wrote:


-Original Message-
From: John Comerford [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 17, 2007 10:50 PM
To: mysql@lists.mysql.com
Subject: Millisecond time stamp

Hi Folks,

I am putting together a table to hold log entries.  I was going to index
it on a field of type TIMESTAMP.  Unfortunately this will not
suit my needs because I could have more than one log entry per second.
As far as my (limited) knowledge goes, I have two options to get around
this:

A) One character field containing a string I build from the date with the
milliseconds tagged onto the end
B) Two separate fields: Field1 TIMESTAMP, Field2 milliseconds


WHY are you indexing the Timestamp?  It should not be your primary key
because, as you point out, it might allow duplicates.  Even case B is not
foolproof.  If you're indexing it for searching, then your application
should be prepared to handle multiple log records for any value.

For uniqueness, add an auto_increment primary key column.  That way, you can
tell two+ log records apart even when they have the same timestamp.

Tim

-

Dave Ashley's note:  Tim's solution is better than the one I proposed if
this is your only problem.  In my applications, the need for unique
identifiers typically comes up in many places, so I'm used to using that
style of solution.  However, adding an auto-increment primary key column
will get the same effect with a lot less work.  It is also probably more
efficient, due to the absence of a spinlock.


Re: replace / insert into with sum not working

2007-04-18 Thread Tanner Postert

I am using 5.0.22.

But I found the problem, and I'm an idiot: there was a space in one of my
sum(x) entries (it read sum (x)). When I removed that, it worked fine.

Thanks.
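
For the archives, the reason: unless the IGNORE_SPACE SQL mode is enabled, a
space between a built-in function name and its opening parenthesis makes the
parser treat the name as a stored function, hence the "FUNCTION db.sum does
not exist" error. Illustrated on a throwaway table t:

SELECT SUM (x) FROM t;  -- parsed as a call to a stored function named sum
SELECT SUM(x) FROM t;   -- parsed as the built-in aggregate

SET sql_mode = 'IGNORE_SPACE';  -- makes the server tolerate the space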

On 4/18/07, Tanner Postert [EMAIL PROTECTED] wrote:


5.0.22, does anyone know a workaround?

On 4/18/07, Michael Dykman [EMAIL PROTECTED] wrote:

 Yeah, it sure looks like a bug.  What version of MySQL are you
 running?  I haven't used REPLACE in a long time, but the plain old
 SELECT like you have specified above is a form I use very often and I
 have never seen it fail.

 - michael




 --
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.





Re: Suddenly : mysqld: Can't create/write to file PID

2007-04-18 Thread Dave Shariff Yadallee - System Administrator a.k.a. The Root of the Problem
On Wed, Apr 18, 2007 at 12:42:56PM -0500, Gerald L. Clark wrote:
 Dave Shariff Yadallee - System Administrator a.k.a. The Root of the 
 Problem wrote:
 I am suddenly seeing this in my error log:
 
 070418 08:43:57  mysqld started
 070418  8:43:57 [ERROR] /usr/contrib/libexec/mysqld: Can't create/write to 
 file
 '/usr/contrib/var/doctor.nl2k.ab.ca.pid' (Errcode: 13)
 070418  8:43:57 [ERROR] Can't start server: can't create PID file: 
 Undefined error: 0
 070418 08:43:57  mysqld ended 
 
 
 How is this fixed?
 
 The mysql user must have execute and write privilege on /usr/contrib/var 
 and all subdirectories.


Fixed.

Also, Gerald, for your information: your reply was flagged by my spam filter.

SpamCheck: spam,
SpamAssassin (not cached, score=13.112, required 5, BOTNET 5.00,
BOTNET_CLIENT 0.00, BOTNET_CLIENTWORDS 0.00,
BOTNET_IPINHOSTNAME 0.00, FH_HOST_EQ_D_D_D_D 1.99, RDNS_DYNAMIC 0.10,
RELAY_CHECKER 6.00, RELAY_CHECKER_IPHOSTNAME 0.01,
RELAY_CHECKER_KEYWORDS 0.01)   
 -- 
 Gerald L. Clark
 Supplier Systems Corporation
 
 
 





variables in grant statement

2007-04-18 Thread CJ
I would like to be able to use variables for user and table names in GRANT
statements executed in a stored procedure. It looks like GRANT statements
are not compatible with PREPARE. Does anyone know a good way to do this?
  Thanks
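
For example, the straightforward attempt fails in 5.0 (names illustrative):

SET @sql = CONCAT('GRANT SELECT ON mydb.', @tbl,
                  ' TO ''', @usr, '''@''localhost''');
PREPARE stmt FROM @sql;
-- ERROR 1295 (HY000): This command is not supported in the
-- prepared statement protocol yet

One workaround is to build and issue the GRANT from the application layer
rather than from inside the stored procedure.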

   

Re: Millisecond time stamp

2007-04-18 Thread John Comerford
Thanks for the replies, guys. I'm banging my head against the wall for not
thinking of using an auto-increment integer to handle the sequence; I've
got to cut back on those Friday night beers.



On 4/17/07, John Comerford [EMAIL PROTECTED] wrote:


Hi Folks,

A) One character field containing a string I build from the date with the
milliseconds tagged onto the end
B) Two separate fields: Field1 TIMESTAMP, Field2 milliseconds

I am leaning towards approach B, but that's more gut instinct
than anything else.  My questions are:

1) Is there a better way to achieve what I want?
2) Which of the two methods above would/do you use?





Re: Millisecond time stamp

2007-04-18 Thread John Meyer

John Comerford wrote:
Thanks for the replies, guys. I'm banging my head against the wall for not
thinking of using an auto-increment integer to handle the sequence;
I've got to cut back on those Friday night beers.



Okay, color me confused, but what exactly are you trying to do, anyway?





RE: Problem on millions of records in one table?

2007-04-18 Thread He, Ming Xin PSE NKG
Hi Baisley,

How did you configure your DB? Did you specify one single file or several
files to store it? I am not sure whether that affects the performance
as well.

And could you give more details about your merge tables? Your experience
is very useful for us; thanks a lot.

-Original Message-
From: Brent Baisley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 18, 2007 9:37 PM
To: He, Ming Xin PSE NKG; mysql@lists.mysql.com
Subject: Re: Problem on millions of records in one table?

It all depends on how complicated your data and searches are. I've got
tables that add 2-3 million records per day and I don't have
performance problems, although we only retain at most 500 million
records, not a full year's worth.

That said, you can get horrible performance out of MySQL with tables as
small as 100,000 records if you don't structure your queries
correctly or use a good table structure. If I know the tables are going
to grow quickly and I don't need the entire dataset all the
time, I'll use MERGE tables. This makes it easy to remove old data
from the default table set.

- Original Message - 
From: He, Ming Xin PSE NKG [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Tuesday, April 17, 2007 11:03 PM
Subject: Problem on millions of records in one table?


Hi all,

The number of records in one table increases constantly. As
estimated, it will grow to at least 30 million within one
year, so we worry about whether MySQL can handle such a large number of
records with good performance, or whether we need some other solution to
avoid this problem, such as partitioning or dividing the big table. Any
help or ideas would be greatly appreciated.

Best Regards
mingxin





Re: Millisecond time stamp

2007-04-18 Thread John Comerford
I am writing a type of message stack that is built while a task is
processing and presented to the user at the end of processing.  I do it
using a function like

Logit(Blah, blah, blah);

which records the SessionID, Sequence, and Message; the messages are
presented to the user in sequence at the end of processing.


I was thinking of indexing on the timestamp, which led to my first post.
But as Tim pointed out, I am better off using an auto-increment column
(hence the banging my head against the wall for not thinking of that
myself).

So as far as I am concerned, problem solved.



John Meyer wrote:

John Comerford wrote:
Thanks for the replies, guys. I'm banging my head against the wall for not
thinking of using an auto-increment integer to handle the sequence;
I've got to cut back on those Friday night beers.



Okay, color me confused, but what exactly are you trying to do, anyway?







RE: Problem on millions of records in one table?

2007-04-18 Thread He, Ming Xin PSE NKG
Hi Pipes,
Is it reliable to use MySQL 5.1 in a commercial product now, since it is
still a beta version?

-Original Message-
From: Jay Pipes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 18, 2007 10:04 PM
To: Brent Baisley
Cc: He, Ming Xin PSE NKG; mysql@lists.mysql.com
Subject: Re: Problem on millions of records in one table?

Brent Baisley wrote:
 It all depends on how complicated your data and searches are. I've got
 tables that add 2-3 million records per day and I don't have performance
 problems, although we only retain at most 500 million records, not a
 full year's worth.

 That said, you can get horrible performance out of MySQL with tables as
 small as 100,000 records if you don't structure your queries correctly
 or use a good table structure. If I know the tables are going to grow
 quickly and I don't need the entire dataset all the time, I'll use MERGE
 tables. This makes it easy to remove old data from the default
 table set.

Hi!  Have you tried out the new partitioning features of MySQL 5.1 to do
this?  It would be cool if you had some performance numbers comparing the
older MERGE table method with the newer partitioning...

Cheers!

Jay

 - Original Message - From: He, Ming Xin PSE NKG 
 [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Tuesday, April 17, 2007 11:03 PM
 Subject: Problem on millions of records in one table?
 
 
 Hi all,

 The number of records in one table increases constantly. As
 estimated, it will grow to at least 30 million within one
 year, so we worry about whether MySQL can handle such a large number of
 records with good performance, or whether we need some other solution to
 avoid this problem, such as partitioning or dividing the big table. Any
 help or ideas would be greatly appreciated.
 
 Best Regards
 mingxin
 
 

