RE: converting numeric to date-time?

2014-09-02 Thread Ed Mierzwa (emierzwa)
SET @tz=@@session.time_zone ;
SET SESSION time_zone = '+0:00' ;
SELECT DATE_FORMAT(
 FROM_UNIXTIME(1409304102.153)  /*your epoch column here*/
 ,'%Y-%m-%d %a %H:%i:%s.%f GMT');
SET SESSION time_zone = @tz ;

 2014-08-29 Fri 09:21:42.153000 GMT


 (or)


SELECT DATE_FORMAT(
 FROM_UNIXTIME(1409304102.153) - INTERVAL ( 
TIMESTAMPDIFF(SECOND,UTC_TIMESTAMP(),NOW()) ) SECOND
 ,'%Y-%m-%d %a %H:%i:%s.%f GMT') ;

 2014-08-29 Fri 09:21:42.153000 GMT
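The same conversion can be sketched outside the database, in plain Python. This is only an illustration, not part of the SQL above; it assumes the C locale's English day abbreviations for %a, and splits off the milliseconds to avoid float rounding:

```python
from datetime import datetime, timedelta, timezone

def epoch_ms_to_gmt(epoch_ms):
    """Format an epoch value given in milliseconds as a GMT string."""
    # Integer seconds first, then add the millisecond remainder exactly.
    dt = datetime.fromtimestamp(epoch_ms // 1000, tz=timezone.utc)
    dt += timedelta(milliseconds=epoch_ms % 1000)
    return dt.strftime('%Y-%m-%d %a %H:%M:%S.%f GMT')

print(epoch_ms_to_gmt(1409304102153))  # 2014-08-29 Fri 09:21:42.153000 GMT
```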


-Original Message-
From: Philip Amadeo Saeli [mailto:psa...@zorodyne.com] 
Sent: Monday, September 01, 2014 5:51 PM
To: Rajeev Prasad
Cc: MYSQL General List
Subject: Re: converting numeric to date-time?

* Rajeev Prasad rp.ne...@yahoo.com [2014-09-01 17:55]:
 I have a column in a table which is epoch time including milliseconds.
 
 e.g. = 1409304102153 
 
 
 now i want to display all fields in the table but this field as: 2014-8-29 
 Fri 09:21:42: GMT  (whatever comes in )
 
 
 and i am not finding anything on web about how to do that.
 
 can anyone help please.
 
 ty.
 Rajeev

I do not know how to do it directly in MySQL, but if you can dump the
table and post-process, this may be helpful on Linux:  The date(1) cmd
can translate between formats, e.g. (taking the above value),

  date -d @1409304102.153 +'%Y-%m-%d %a %H:%M:%S.%N'
  2014-08-29 Fri 04:21:42.153000000

--Phil

-- 
Philip Amadeo Saeli
openSUSE, CentOS, RHEL
psa...@zorodyne.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql





RE: Does MySQL have RETURNING in the language?

2008-10-16 Thread emierzwa
No, not unique to PostgreSQL. Microsoft SQL Server has the OUTPUT
clause. The major difference is that MS has an extra feature that allows
the OUTPUT, or some form of it, to be reused in yet another INSERT.
This would allow you to DELETE a block of rows from a table and insert
them into a work table for further processing.

http://msdn.microsoft.com/en-us/library/ms177564(SQL.90).aspx 

Returns information from, or expressions based on, each row affected by
an INSERT, UPDATE, or DELETE statement. These results can be returned to
the processing application for use in such things as confirmation
messages, archiving, and other such application requirements.
Alternatively, results can be inserted into a table or table variable.

Ed

-Original Message-
From: D. Dante Lorenso [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 15, 2008 3:11 PM
To: [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Subject: Re: Does MySQL have RETURNING in the language?

D. Dante Lorenso wrote:
 There's an awesome feature that was added to PostgreSQL a while back 
 called RETURNING that allows you to make an INSERT, UPDATE, and DELETE

 statement behave like a SELECT statement.
 ...
 Does RETURNING exist in any current release of MySQL or is it on the 
 TODO list even?  If it's not, how can I go about asking to have it put

 on there?

For more information on RETURNING for INSERT statements, read a little 
of this from the PostgreSQL documentation:

   http://www.postgresql.org/docs/8.3/interactive/sql-insert.html

 From what I can tell this is unique to PostgreSQL.  I really want this 
functionality in MySQL.  Where do I go to ask for it?

-- Dante

--
D. Dante Lorenso
[EMAIL PROTECTED]





RE: REGEXP vs LIKE/OR

2008-08-15 Thread emierzwa
It looks like LIKE is only slightly faster (on my XP box), hardly worth
mentioning. Go with whichever is easier for you to read, or with what
you need for portability. IMHO

set @a='gfdueruie baz hdhrh';
select BENCHMARK(500, (select 1 from dual
  WHERE @a LIKE '%foo%' OR @a LIKE '%bar%' OR @a LIKE '%baz%')) as elapse_time;
# average 750ms


set @a='gfdueruie baz hdhrh';
select BENCHMARK(500, (select 1 from dual
  WHERE @a REGEXP 'foo|bar|baz' != 0)) as elapse_time;
# average 770ms
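The same comparison can be sketched outside MySQL. Here is a rough Python analogue (the function names are purely illustrative): LIKE '%x%' corresponds to plain substring search, and REGEXP 'foo|bar|baz' to a compiled alternation, so both should agree on every input:

```python
import re

def matches_like(s):
    # LIKE '%foo%' OR LIKE '%bar%' OR LIKE '%baz%' is substring matching
    return 'foo' in s or 'bar' in s or 'baz' in s

# REGEXP 'foo|bar|baz' is an alternation over the same three substrings
PATTERN = re.compile('foo|bar|baz')

def matches_regexp(s):
    return PATTERN.search(s) is not None

# The two predicates are equivalent; only their speed differs.
for s in ('gfdueruie baz hdhrh', 'plain text', 'foobar'):
    assert matches_like(s) == matches_regexp(s)
```

As in the benchmark above, the real difference is readability and portability, not speed.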


Ed

-Original Message-
From: Morten Primdahl [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 15, 2008 5:12 AM
To: mysql
Subject: REGEXP vs LIKE/OR


Hi,

I want to retrieve all records where the field value contains either  
foo, bar or baz. Like so:

SELECT id FROM table WHERE value LIKE '%foo%' OR value LIKE '%bar%' OR  
value LIKE '%baz%';

But then I stumbled upon REGEXP, and can do the same this way:

SELECT id FROM table WHERE value REGEXP 'foo|bar|baz' != 0;

Any opinions on what's the better approach and why?

Thanks

Morten








RE: Table aliasing

2008-08-01 Thread emierzwa
I think what you want is CREATE VIEW test.v AS SELECT * FROM t; 

http://dev.mysql.com/doc/refman/5.0/en/create-view.html
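A minimal sketch of the view-as-alias idea, illustrated here with SQLite via Python's sqlite3 module; MySQL's CREATE VIEW behaves the same way for a simple SELECT * view (and in MySQL such a view is even updatable):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (id INTEGER, name TEXT)')
con.execute("INSERT INTO t VALUES (1, 'widget')")
# The view gives table t a second name for queries.
con.execute('CREATE VIEW v AS SELECT * FROM t')
assert con.execute('SELECT name FROM v').fetchall() == [('widget',)]
```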

Ed

-Original Message-
From: Martijn Tonies [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 01, 2008 6:34 AM
To: mysql@lists.mysql.com
Subject: Re: Table aliasing

Hello Richard,

 Is there a way in MySQL to define an alias for a table, so in effect
 it has two names? For migration purposes.

No, it does not support aliasses.

But I think you could use the merge storage engine to do pretty
much the same, have a look here:

http://dev.mysql.com/doc/refman/5.1/en/merge-storage-engine.html


Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, NexusDB,
Oracle 
MS SQL Server
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





RE: recursion

2008-02-27 Thread emierzwa
I'm not aware of MySQL supporting this feature. Microsoft does and calls
it a common table expression (CTE). The UNION is necessary, as it is the
part that links the anchor query (Part 1 of the UNION) to the recursive
query (Part 2). Part 2 of the UNION must reference the produced temporary
table, called ReachableFrom in your example. The Part 2 query is run in
an internal loop, constantly inserting new rows into the CTE and then
referencing them in the next iteration. Once the loop no longer generates
new rows, the CTE stops and the final query is executed.
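That anchor-then-loop behavior can be sketched in Python. This is a hand-rolled transitive closure, not MySQL code, and the flight data is made up for illustration:

```python
def reachable_from(flights, source):
    """Transitive closure of (src, dst) pairs, mimicking the recursive CTE:
    the anchor query seeds the working table with the direct flights, then
    the recursive part keeps joining in new rows until none appear."""
    reachable = set(flights)          # anchor: Part 1 of the UNION
    while True:                       # internal loop: Part 2 of the UNION
        new = {(a, d)
               for (a, b) in reachable
               for (c, d) in flights
               if b == c} - reachable
        if not new:                   # no new rows: the recursion stops
            break
        reachable |= new
    return sorted(d for (s, d) in reachable if s == source)

flights = {('Portland', 'Denver'), ('Denver', 'Chicago'), ('Chicago', 'Boston')}
print(reachable_from(flights, 'Portland'))
```

Without the UNION's recursive part, only the direct flights from Portland would be returned, which is why the simplified query in the question is not equivalent.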

Ed

-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Thufir
Sent: Wednesday, February 27, 2008 12:28 AM
To: mysql@lists.mysql.com
Subject: recursion

I'm reading SQL for dummies and one of the more interesting sections 
was on recursion.  The example query was something like:

WITH RECURSIVE
   ReachableFrom (Source, Destination)
   AS (SELECT Source, Destination
  FROM FLIGHT
   UNION
   SELECT in.Source, out.Destination
  FROM ReachableFrom in, FLIGHT out
  WHERE in.Destination = out.Source
   )
SELECT * FROM ReachableFrom
WHERE Source = 'Portland';

I'm a bit thrown by the union.  Can this be simplified to:


WITH RECURSIVE
   ReachableFrom (Source, Destination)
   AS (SELECT Source, Destination
  FROM FLIGHT
   )
SELECT * FROM ReachableFrom
WHERE Source = 'Portland';



thanks,

Thufir





RE: Reset a auto increment field?

2007-08-29 Thread emierzwa
 To change the value of the AUTO_INCREMENT  counter to be used for new
rows, do this:

ALTER TABLE t2 AUTO_INCREMENT = value;

You cannot reset the counter to a value less than or equal to any that
have already been used. For MyISAM, if the value is less than or equal
to the maximum value currently in the AUTO_INCREMENT column, the value
is reset to the current maximum plus one. For InnoDB, you can use ALTER
TABLE ... AUTO_INCREMENT = value as of MySQL 5.0.3, but if the value is
less than the current maximum value in the column, no error message is
given and the current sequence value is not changed. 


Ed 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 29, 2007 11:02 AM
To: Jason Pruim
Cc: MySQL List
Subject: Re: Reset a auto increment field?

 Is there a way to reset an auto incrementing field count? I have a 
 database that currently has 935 records in it but because I have 
 deleted a few the current number used for NEW records is 938 :) How 
 can I get it to count the records and assign a record number based on 
 the total count?

 Hope that makes sense! Thanks for looking! :)

 --

 Jason Pruim
 Raoset Inc.
 Technology Manager
 MQC Specialist
 3251 132nd ave
 Holland, MI, 49424
 www.raoset.com
 [EMAIL PROTECTED]

AFAIK, you need to drop and then recreate the auto-increment field, 
otherwise you'll get holes when you delete a record.

David






RE: Reset a auto increment field?

2007-08-29 Thread emierzwa
Yes, for a MyIsam type table.

Ed 

-Original Message-
From: Jason Pruim [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 29, 2007 11:53 AM
To: emierzwa
Cc: [EMAIL PROTECTED]; mysql@lists.mysql.com
Subject: Re: Reset a auto increment field?

If I understand you correctly, if my table is MyISAM, after I did a  
delete query I could just: ALTER TABLE t2 AUTO_INCREMENT=1; and  
that would cause the auto increment value to be set to 901 (Assuming  
900 total current records) on the next insert?


On Aug 29, 2007, at 1:48 PM, [EMAIL PROTECTED]  
[EMAIL PROTECTED] wrote:

  To change the value of the AUTO_INCREMENT  counter to be used for new
 rows, do this:

 ALTER TABLE t2 AUTO_INCREMENT = value;

 You cannot reset the counter to a value less than or equal to any that
 have already been used. For MyISAM, if the value is less than or equal
 to the maximum value currently in the AUTO_INCREMENT column, the value
 is reset to the current maximum plus one. For InnoDB, you can use  
 ALTER
 TABLE ... AUTO_INCREMENT = value as of MySQL 5.0.3, but if the  
 value is
 less than the current maximum value in the column, no error message is
 given and the current sequence value is not changed.


 Ed

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, August 29, 2007 11:02 AM
 To: Jason Pruim
 Cc: MySQL List
 Subject: Re: Reset a auto increment field?

 Is there away to reset an auto incrementing field count? I have a
 database that currently has 935 records in it but because I have
 deleted a few the current number used for NEW records is 938 :) How
 can I get it to count the records and assign a record number based on
 the total count?

 Hope that makes sense! Thanks for looking! :)

 --

 Jason Pruim
 Raoset Inc.
 Technology Manager
 MQC Specialist
 3251 132nd ave
 Holland, MI, 49424
 www.raoset.com
 [EMAIL PROTECTED]

 AFAIK, you need to drop and then recreate the auto-increment field,
 otherwise you'll get holes when you delete a record.

 David






--

Jason Pruim
Raoset Inc.
Technology Manager
MQC Specialist
3251 132nd ave
Holland, MI, 49424
www.raoset.com
[EMAIL PROTECTED]






RE: Trigger OLD question

2007-05-09 Thread emierzwa
You might try:
 
INSERT INTO table_track SELECT OLD.*;

-Original Message-
From: Olaf Stein [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 09, 2007 2:59 PM
To: MySql
Subject: Trigger OLD question

Hi All,

I have a table with 205 columns. When an update statement updates a row
in
this table I want a trigger that creates a record of the old row in a
separate table.
The following works fine when not too many columns need to be written
into
the other table

CREATE TRIGGER track_table
BEFORE UPDATE ON table FOR EACH ROW
BEGIN
INSERT INTO table_track (value1,value2) VALUES (OLD.value1,old.value2);
END;

Is the any way of using OLD for all columns or generally another way of
doing this.

Thanks
Olaf





RE: myisamcheck

2007-02-27 Thread emierzwa
Perhaps you can add the SQL equivalent of myisamchk to the server
init file so it runs at startup: where you would have run myisamchk
tbl_name, put a line such as CHECK TABLE tbl_name; in the file.

[mysqld]
init_file=/path/to/data-directory/mysqld_init.sql 

Ed

-Original Message-
From: Payne [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 27, 2007 5:32 AM
To: mysql@lists.mysql.com
Subject: myisamcheck

Hi,

I am currently using SuSE and would like to be able to use myisamchk
when starting up mysql. Is there a command that I can add to my init.d
script that would do that.

Thanks,

Payne




RE: only update if values different

2006-12-11 Thread emierzwa
I agree, you should just update it, since MySQL's standard behavior is
to apply an update only if the value is actually changing.

http://dev.mysql.com/doc/refman/5.0/en/update.html
If you set a column to the value it currently has,
 MySQL notices this and does not update it.
 
Ed

-Original Message-
From: Peter [mailto:[EMAIL PROTECTED] 
Sent: Saturday, December 09, 2006 3:42 PM
To: Nick Meyer
Cc: mysql@lists.mysql.com
Subject: Re: only update if values different

Hello,
 with good indexes, 100 000 rows is basically nothing.

Give it a try.

Peter

Nick Meyer wrote:
 What is the best way to UPDATE a row only if values are different? We
 have a mainframe extract that literally has 100,000 rows and am
worried
 about the performance of just running INSERTs each night.
 
 Is there a simple comparison command or would you have to nest a
SELECT
 statement? Thank you,
 
 Nick
 
 




RE: UNIQUE KEY vs NULLs

2006-12-11 Thread emierzwa
It is expected behavior; you can make the unique key a primary key
instead. Since primary key columns cannot be NULL, this should prevent
the situation.
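SQLite shares this SQL-standard treatment of NULLs in unique indexes, so the behavior is easy to reproduce outside MySQL. A sketch (column names follow the thread):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE target (imsi TEXT, imei TEXT, UNIQUE (imsi, imei))')
con.execute("INSERT INTO target VALUES (NULL, '35195600126418')")
# No constraint violation: NULL is not equal to NULL, so the two rows
# are not considered duplicates by the unique index.
con.execute("INSERT INTO target VALUES (NULL, '35195600126418')")
assert con.execute('SELECT COUNT(*) FROM target').fetchone()[0] == 2
```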

Ed 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 11, 2006 7:42 AM
To: mysql@lists.mysql.com
Subject: UNIQUE KEY vs NULLs

Hi,

I have an InnoDB table similar to this:

CREATE TABLE Target
(IMSI VARCHAR(15) ASCII,
 IMEI VARCHAR(15) ASCII,
 UNIQUE KEY (IMSI, IMEI));

After playing a bit with it, I managed to add duplicate records, if one
of
the fields was a NULL:

+-+-+
| IMSI| IMEI|
+-+-+
| NULL| 35195600126418  |
| NULL| 35195600126418  |
+-+-+


Is this a bug, or a feature? :-)

If it is a feature, then how can I ensure uniqueness for a table in a
sense that won't allow such duplicates?

Thx

ImRe






RE: Concatenate a column in multiple rows into one column?

2006-11-16 Thread emierzwa
Try this...

select a.title, group_concat(c.name SEPARATOR ' & ') as name
from album a
 join albumartist b on b.albumid=a.albumid
 join artist c on c.artistid=b.artistid
group by a.title; 
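A sketch of the same query using SQLite via Python's sqlite3 module. Note that SQLite spells the separator as a second argument to group_concat() rather than MySQL's SEPARATOR keyword, and it does not guarantee concatenation order:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE album (albumid INTEGER, title TEXT);
CREATE TABLE artist (artistid INTEGER, name TEXT);
CREATE TABLE albumartist (albumid INTEGER, artistid INTEGER);
INSERT INTO album VALUES (1, 'A Meeting by the River');
INSERT INTO artist VALUES (1, 'Ry Cooder');
INSERT INTO artist VALUES (2, 'V.M. Bhatt');
INSERT INTO albumartist VALUES (1, 1);
INSERT INTO albumartist VALUES (1, 2);
""")
title, artists = con.execute("""
    SELECT a.title, group_concat(c.name, ' & ')
    FROM album a
    JOIN albumartist b ON b.albumid = a.albumid
    JOIN artist c ON c.artistid = b.artistid
    GROUP BY a.title
""").fetchone()
assert title == 'A Meeting by the River'
assert sorted(artists.split(' & ')) == ['Ry Cooder', 'V.M. Bhatt']
```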

Ed

-Original Message-
From: James Eaton [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 15, 2006 9:11 PM
To: mysql@lists.mysql.com
Subject: Concatenate a column in multiple rows into one column?

I have a database with roughly the following structure:

album
--
albumid INT
title   VARCHAR(255)

artist
--
artistidINT
nameVARCHAR(255)

albumartist
--
albumid INT
artistidINT


From the above, you can see that any given album can have more than one 
artist.  For instance:

album title: A Meeting by the River
artist name: Ry Cooder
artist name: V.M. Bhatt

I'd like to run a query that returns one row per album, but somehow also
returns all of the artists associated with the album (ideally
concatenated with an ampersand separator).  Can this be done?

titleartists
---  --
A Meeting by the River   Ry Cooder & V.M. Bhatt





RE: Lost Connection executing query

2006-01-04 Thread emierzwa
What are your wait_timeout and/or interactive_timeout values set to?
Does the server crash and auto restart? Check server's up time.
Do both servers have the exact table schema? Same column datatypes and
indexes to be specific.
Although your data volume may be similar, could the actual data be
problematic?
Can you rewrite the UPDATE statement as a SELECT statement to see if you
can target the rows you are expecting to target?

You can check section A.2.8. MySQL server has gone away in the online
manual, which also covers your message, for list of things to try.
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html

Ed
-Original Message-
From: Tripp Bishop [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 03, 2006 4:45 PM
To: mysql@lists.mysql.com
Subject: Lost Connection executing query

Howdy all,

First off, We're running 5.0.15.

There's a particular update statement that we run that
updates data in several tables. On our mac OSX test
server (also running 5.0.15) the query executes
quickly and without any errors or warnings.

On our linux box, which is our production box, we get
the following error as soon as the query is executed:

ERROR 2013 (HY000): Lost connection to MySQL server
during query

The databases have a similar amount of data in them.

I've googled on the error but mostly get pages
containing questions about the error when generated by
stored procedures and mostly on 5.0.3. We're not using
stored procedures. This is just a straight-up query.

Here's the query:

UPDATE customer_indicator 
INNER JOIN customer_search_pref ON 
customer_search_pref.customer_id =
customer_indicator.customer_id AND 
customer_search_pref.office_id =
customer_indicator.office_id 
LEFT JOIN contact_log ON 
contact_log.customer_id =
customer_indicator.customer_id 
LEFT JOIN sent ON sent.pref_id =
customer_search_pref.pref_id 
SET customer_indicator.office_id = 33, 
customer_search_pref.office_id =33, 
customer_indicator.agent_id = 105, 
sent.office_id = 33, 
contact_log.office_id = 33, 
customer_indicator.next_message_id = 4403 
WHERE customer_indicator.customer_id = 78437 AND 
  customer_indicator.office_id = 34;

The approximate sizes of the tables involved are:

customer_indicator: 40K records
customer_search_pref: 45K
contact_log: 390K
sent: 20M (million)

So my question is, what are some possible causes of
this error? Why would trying to execute this query
cause the connection to be lost? Why would the query
work fine on our mac system and fail on the production
box?

Thanks,

Tripp








RE: Lost Connection executing query

2006-01-04 Thread emierzwa
Your import or CHECK TABLE sound like your best bets.

Interesting about your SELECT statement conversion, though: the
optimization section of the manual suggests you may still have a problem.
Can you run your EXPLAIN SELECT on your MAC for comparison?

7.2.4. How MySQL Optimizes WHERE Clauses
Early detection of invalid constant expressions. MySQL quickly detects
that some SELECT statements are impossible and returns no rows.  

Ed


-Original Message-
From: Tripp Bishop [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 04, 2006 11:01 AM
To: emierzwa; mysql@lists.mysql.com
Subject: RE: Lost Connection executing query

The query fails instantly so I don't think it's a
timeout issue. 

The wait_timeout and interactive_timeout variables are
set to 28800.

The server seems to be crashing and auto restarting
because as you suggested the uptime is small when I do
a show status right after attempting the query.

The schemas are identical and most of the data is the
same.

When I try to rewrite the update as a select I get an
impossible where clause when I do an explain on the
select.

It can't be a max packet issue because the actual
query is really small.

The query runs fine on the MAC and takes about 1
second to run.

I could break this update statement up into 4 seperate
update statements but I'd prefer to keep it as one. I
did notice that the MySQL manual suggests running
CHECK TABLE on the table(s) involved but no other
queries that operate against these tables seem to be
having trouble so it seems unlikely that table
corruption would be a problem.

We did recently upgrade the server from 4.0.40 to
5.0.15 and we did not dump the tables and reimport
them. On the MAC we did do a dump and reimport. I
wonder if that could be the cause of this problem. I
had forgotten about that important difference.

Cheers,

Tripp


--- [EMAIL PROTECTED] wrote:

 What are your wait_timeout and/or
 interactive_timeout values set to?
 Does the server crash and auto restart? Check
 server's up time.
 Do both servers have the exact table schema? Same
 column datatypes and
 indexes to be specific.
 Although your data volumn may be similar, can the
 actual data be
 problamatic?
 Can you rewrite the UPDATE statement as a SELECT
 statement to see if you
 can target the rows you are expecting to target?
 
 You can check section A.2.8. MySQL server has gone
 away in the online
 manual, which also covers your message, for list of
 things to try.

http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
 
 Ed
 -Original Message-
 From: Tripp Bishop [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, January 03, 2006 4:45 PM
 To: mysql@lists.mysql.com
 Subject: Lost Connection executing query
 
 Howdy all,
 
 First off, We're running 5.0.15.
 
 Theres a particular update statement that we run
 that
 updates data in several tables. On our mac OSX test
 server (also running 5.0.15) the query executes
 quickly and without any errors or warnings.
 
 On our linux box, which is our production box, we
 get
 the following error as soon as the query is
 executed:
 
 ERROR 2013 (HY000): Lost connection to MySQL server
 during query
 
 The databases have a similar amount of data in them.
 
 I've googled on the error but mostly get pages
 containing questions about the error when generated
 by
 stored procedures and mostly on 5.0.3. We're not
 using
 stored procedures. This is just a straight-up query.
 
 Here's the query:
 
 UPDATE customer_indicator 
 INNER JOIN customer_search_pref ON 
 customer_search_pref.customer_id =
 customer_indicator.customer_id AND 
 customer_search_pref.office_id =
 customer_indicator.office_id 
 LEFT JOIN contact_log ON 
 contact_log.customer_id =
 customer_indicator.customer_id 
 LEFT JOIN sent ON sent.pref_id =
 customer_search_pref.pref_id 
 SET customer_indicator.office_id = 33, 
 customer_search_pref.office_id =33, 
 customer_indicator.agent_id = 105, 
 sent.office_id = 33, 
 contact_log.office_id = 33, 
 customer_indicator.next_message_id = 4403 
 WHERE customer_indicator.customer_id = 78437 AND 
   customer_indicator.office_id = 34;
 
 The approximate sizes of the tables involved are:
 
 customer_indicator: 40K records
 customer_search_pref: 45K
 contact_log: 390K
 sent: 20M (million)
 
 So my question is, what are some possible causes of
 this error? Why would trying to execute this query
 cause the connection to be lost? Why would the query
 work fine on our mac system and fail on the
 prodcution
 box?
 
 Thanks,
 
 Tripp
 
 
   
 __ 
 Yahoo! DSL - Something to write home about. 
 Just $16.99/mo. or less. 
 dsl.yahoo.com 
 
 
 
 





RE: set sql mode

2005-12-29 Thread emierzwa
An error is only thrown during an INSERT or UPDATE statement; otherwise
you just get a warning.

- ERROR_FOR_DIVISION_BY_ZERO 

Produce an error in strict mode (otherwise a warning) when we encounter
a division by zero (or MOD(X,0)) during an INSERT or UPDATE. If this
mode is not given, MySQL instead returns NULL for divisions by zero. If
used in INSERT IGNORE or UPDATE IGNORE, MySQL generates a warning for
divisions by zero, but the result of the operation is NULL. 
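The NULL-on-division behavior in a plain SELECT is easy to see outside MySQL as well; SQLite happens to do the same thing. A sketch (this illustrates only the NULL result, not sql_mode itself):

```python
import sqlite3

con = sqlite3.connect(':memory:')
# In a plain SELECT (i.e. not inside a strict-mode INSERT or UPDATE),
# division by zero yields NULL rather than raising an error.
assert con.execute('SELECT 1/0').fetchone()[0] is None
```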
 
Ed

-Original Message-
From: wangxu [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 28, 2005 7:23 PM
To: Gleb Paharenko; mysql@lists.mysql.com
Subject: Re: set sql mode

My sql_mode is
STRICT_TRANS_TABLES,ANSI_QUOTES,ERROR_FOR_DIVISION_BY_ZERO.

But when I execute select 1/0 from ht_detail, the result is NULL.

No error is thrown.

Why?
- Original Message - 
From: Gleb Paharenko [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, December 28, 2005 9:26 PM
Re: set sql mode


 Hello.
 
 But now i wish sql mode only include REAL_AS_FLOAT,PIPES_AS_CONCAT.
 
 Do you want this:
 
 mysql> set @@sql_mode='REAL_AS_FLOAT,PIPES_AS_CONCAT';
 Query OK, 0 rows affected (0.00 sec)
 
 mysql> select @@sql_mode;
 +-------------------------------+
 | @@sql_mode                    |
 +-------------------------------+
 | REAL_AS_FLOAT,PIPES_AS_CONCAT |
 +-------------------------------+
 
 
 
 It shows
 REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,ANSI when I
 set the sql mode to ansi.
 But now I wish the sql mode to include only REAL_AS_FLOAT,PIPES_AS_CONCAT.
 Can I achieve it?
 
 
 -- 
 For technical support contracts, goto
https://order.mysql.com/?ref=ensita
 This email is sponsored by Ensita.NET http://www.ensita.net/
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
  / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
 /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
___/   www.mysql.com
 
 
 
 



RE: Benchmark()

2005-12-15 Thread emierzwa
The documentation says it evaluates the 'expr', so I'm guessing it
handles only function calls and arithmetic evaluation. I have tried
selects without any luck. It could probably be documented a little more
clearly, or even better yet... be a supported feature.

Ed

-Original Message-
From: Raimundo Sierra [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 14, 2005 9:37 PM
To: mysql@lists.mysql.com
Subject: Benchmark()

It is not clear to me what benchmark really does. Example:

SELECT SQL_NO_CACHE * FROM mytable where myVarcharRow like 'sometext%' 
or otherVarcharRow like 'someothertext';

takes approx. 0.3 seconds. If you benchmark this query, which to my 
understanding should run and not just parse the query, it is faster??

SELECT BENCHMARK(1000, SELECT SQL_NO_CACHE * FROM mytable where 
myVarcharRow like 'sometext %' or otherVarcharRow like 'othertext %');

returns 1 row in set (0.12 sec)

Another similar example is:

SELECT * FROM Table;
returns: 59770 rows in set (0.23 sec)
SELECT BENCHMARK(10, SELECT * FROM Table);
returns: 1 row in set (0.01 sec)




RE: Intelligent Converters product: MSSQL-to-MySQL

2005-09-13 Thread emierzwa
I believe the (16) is the size of the offset pointer used to locate the
text data elsewhere on disk. They are all 16 bytes... Sybase does this
as well, and it is an odd (OK, it's stupid) notation. This is one of
those things you need to adjust by hand when converting from one db to
another.
Ed

-Original Message-
From: Ryan Stille [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 13, 2005 7:38 AM
To: [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Subject: RE: Intelligent Converters product: MSSQL-to-MySQL

 OK, you confused me. How can 16 be bigger than 16? For character-based
 fields, the number in the parentheses shows the length of the field,
 does it not? The first part, the text or char or varchar or ...,
 tells the engine what kind of data a field will contain. 

 How can TEXT(16) hold more data than VARCHAR(16) ? 

I'm not sure how the (16) is used on a TEXT field.  MS SQL has TEXT
fields and VARCHAR fields.  A VARCHAR(16) in MSSQL would be the same as
a VARCHAR(16) in MySQL.   But in my MSSQL database I have data with
thousands of characters in a TEXT(16) field.

-Ryan




RE: RE: MSSQL to MySQL mapping

2005-08-22 Thread emierzwa
This is the old Sybase/MSSQL syntax for an OUTER JOIN, and not ANSI.
Use an explicit LEFT JOIN instead:

select a.userID 
from USers a
 left join UserGroups b on b.UserGroupID=a.UserGroupID;
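A sketch of the LEFT JOIN semantics, illustrated with SQLite via Python's sqlite3 module (table and column names follow the thread; the data is made up):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE users (userid INTEGER, usergroupid INTEGER)')
con.execute('CREATE TABLE usergroups (usergroupid INTEGER, name TEXT)')
con.execute('INSERT INTO users VALUES (1, 10)')
con.execute('INSERT INTO users VALUES (2, 99)')   # no matching group
con.execute("INSERT INTO usergroups VALUES (10, 'admins')")
rows = con.execute("""
    SELECT a.userid, b.name
    FROM users a
    LEFT JOIN usergroups b ON b.usergroupid = a.usergroupid
    ORDER BY a.userid
""").fetchall()
# The unmatched user is kept, with NULL for the group name, which is
# exactly what the old *= syntax meant.
assert rows == [(1, 'admins'), (2, None)]
```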


-Original Message-
From: Rajesh Mehrotra [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 22, 2005 7:50 AM
To: John c; mysql@lists.mysql.com
Subject: RE: RE: MSSQL to MySQL mapping


Use % instead of *  

-Original Message-
From: John c [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 22, 2005 9:48 AM
To: mysql@lists.mysql.com
Subject: FW: RE: MSSQL to MySQL mapping


Also mysql complaints when I use the *= syntax:

select a.userID from USers a, UserGroups b where
a.UserGroupID*=b.UserGroupID;

Any ideas? What is the equivalent syntax for MySQL.

Thank you
John C

From: John c [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Subject: FW: RE: MSSQL to MySQL mapping
Date: Mon, 22 Aug 2005 13:18:43 +


I am trying to adjust sql_mode but with no success. I use MySQL Query 
Browser from a remote mashine, login as root and I run set global 
sql_mode='ansi';

Then I run MySQLCommand line client from the MySQL server and the 
sql_mode is unchanged.

I also tried to set the sql_mode in the My.ini file but it does not 
seem to have any effect. We have a web application that connects to 
MySQL. The application was written to communicate with SQL Server, and the 
MySQL default mode cannot interpret the sql queries.

How do I set the sql_mode, with what tool, how do I ensure what sql 
mode is currently set?

Thank you for your help
John C


From: Arjan Hulshoff [EMAIL PROTECTED]
To: John c [EMAIL PROTECTED]
CC: mysql@lists.mysql.com
Subject: RE: MSSQL to MySQL mapping
Date: Thu, 11 Aug 2005 09:20:08 +0200


Have you adjusted your queries? MSSQL uses a different dialect 
(T-SQL) than MySQL does. There are quite some differences, although 
some queries might still work.

Arjan.


-Original Message-
From: John c [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 10, 2005 08:12 PM
To: mysql@lists.mysql.com
Subject: MSSQL to MySQL mapping

We have a web based application running on IIS 5.0 using MS SQL Server

2000 as the DBMS; we use ODBC to connect to the DB. We migrated our DB

to MySQL and used the MySQL ODBC driver. It appears that some of the 
SQL statements that are accepted by SQL Server 2000 are not accepted 
by MySQL. Is there a mapping from MSSQL to MySQL statements?

Thank you
John C.

_
Express yourself instantly with MSN Messenger! Download today - it's 
FREE!
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/








_
Don't just search. Find. Check out the new MSN Search! 
http://search.msn.click-url.com/go/onm00200636ave/direct/01/




_
Express yourself instantly with MSN Messenger! Download today - it's
FREE! 
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/









RE: Deleted rows

2005-08-18 Thread emierzwa
I believe the below solution will not detect rows missing from the very
top of the table,
try this...
 
select ifnull((select max(a.test_id) +1
from tests a
where a.test_id < b.test_id),1) as 'from'
  ,b.test_id -1 as 'to'
from tests b 
 left outer join tests x
on x.test_id=b.test_id -1
where x.test_id is NULL and b.test_id > 1
order by 1 

Ed



From: Peter Brawley [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 18, 2005 10:18 AM
To: [EMAIL PROTECTED]
Cc: Scott Hamm; 'Mysql '
Subject: Re: Deleted rows


Scott, Shawn,

The OP would like to detect that 4,5,6, and 7 are missing from the 
sequence. Your query would have only found that 7 was missing.

Right! For sequences longer than 1 you need something like...

SELECT 
  a.id+1 AS 'Missing From', 
  MIN(b.id)-1 AS 'To'
FROM test AS a, test AS b
WHERE a.id < b.id
GROUP BY a.id
HAVING a.id + 1 < MIN(b.id)
ORDER BY 1;

PB
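Peter's self-join (with the comparison operators the archive stripped restored) can be sanity-checked against Scott's sample data. A minimal sketch using SQLite through Python; the table and values are taken from the thread:

```python
import sqlite3

# Scott's sample table from the thread; IDs 3, 5 and 8 were deleted.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, foo INTEGER)")
con.executemany("INSERT INTO test VALUES (?, ?)",
                [(1, 12345), (2, 12346), (4, 12348),
                 (6, 12349), (7, 12388), (9, 12390)])

# Peter's self-join: each gap starts just past an existing id and
# ends just before the next id that actually exists.
gaps = con.execute("""
    SELECT a.id + 1 AS gap_from, MIN(b.id) - 1 AS gap_to
    FROM test AS a, test AS b
    WHERE a.id < b.id
    GROUP BY a.id
    HAVING a.id + 1 < MIN(b.id)
    ORDER BY 1
""").fetchall()
print(gaps)  # [(3, 3), (5, 5), (8, 8)]
```

Unlike the NOT EXISTS version, this reports a multi-row gap as a single from/to pair, which also answers Shawn's 1,2,3,8,9,10 objection.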

-

[EMAIL PROTECTED] wrote: 


Peter, 

Your query may work for data with single-row gaps (like his
example data) but it will not work if the sequence skips more than one
number. 

Look at this sequence: 1,2,3,8,9,10 

The OP would like to detect that 4,5,6, and 7 are missing from
the sequence. Your query would have only found that 7 was missing. 

Nice try, but sorry. It just won't meet the need. 

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine 

Peter Brawley [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED]  wrote on 08/18/2005 10:56:34 AM:

 Scott,
 
 How do I execute a query that shows missing ID's like so:
 
 SELECT id AS i 
 FROM tbl
WHERE i > 1 AND NOT EXISTS( 
   SELECT id FROM tbl WHERE id = i - 1
 );
 
 PB
 
 -
 
 Scott Hamm wrote: 
 If I got a table as follows:
 
 
 ID foo
 1 12345
 2 12346
 4 12348
 6 12349
 7 12388
 9 12390
 How do I execute a query that shows missing ID's like so:
 
 3
 5
 8
 
 I wouldn't expect for it to show deleted data that was
deleted, just show 
 the skipped ID's.
 
 That way I determine if operator deleted too much (cheating at
QC)
 
 Is it possible?
   
 
 
 
 




  



RE: use of indexes

2005-07-22 Thread emierzwa
I believe the conflict here is the OR. Try this...

select * from table
  where field1 = 'VALUE1' and field2 like 'VALUE2%'
union
select * from table
  where field3 = 'VALUE1' and field2 like 'VALUE2%'

Currently, MySql can't utilize two indexes on the same table at
the same time, but it is on their list of to-dos; this will be
a cool feature. The UNION will allow you to use both composite
indexes at the same time because it is two queries.

Ed
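Ed's OR-to-UNION rewrite can be illustrated end to end. This is a sketch in SQLite via Python (a stand-in engine, not MySQL itself); the column names and placeholder values are the ones from the thread:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field1 TEXT, field2 TEXT, field3 TEXT)")
con.execute("CREATE INDEX ix12 ON t (field1, field2)")  # composite index 1
con.execute("CREATE INDEX ix32 ON t (field3, field2)")  # composite index 2
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("VALUE1", "VALUE2x", "other"),   # matches the first OR branch
    ("other",  "VALUE2y", "VALUE1"),  # matches the second OR branch
    ("other",  "nomatch", "other"),   # matches neither
])

# Original OR form...
or_rows = set(con.execute("""
    SELECT * FROM t
    WHERE (field1 = 'VALUE1' AND field2 LIKE 'VALUE2%')
       OR (field3 = 'VALUE1' AND field2 LIKE 'VALUE2%')
""").fetchall())

# ...and Ed's UNION rewrite, which lets each branch use its own index.
union_rows = set(con.execute("""
    SELECT * FROM t WHERE field1 = 'VALUE1' AND field2 LIKE 'VALUE2%'
    UNION
    SELECT * FROM t WHERE field3 = 'VALUE1' AND field2 LIKE 'VALUE2%'
""").fetchall())

print(or_rows == union_rows, len(union_rows))
```

Note that UNION deduplicates a row that happens to match both branches, which matches the OR semantics; UNION ALL would not.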

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 22, 2005 6:04 AM
To: Chris Faulkner
Cc: mysql@lists.mysql.com
Subject: Re: use of indexes

The system cannot use the index on field2 because it is the second half
of the index in both cases, and it can only use indexes in order. It 
cannot use the separate indexes on field 1 and field 2 because they are 
ORed together.

If you rephrase your query

SELECT * from table 
  WHERE field2 LIKE 'VALUE2%'
AND ((field1 = 'VALUE1') OR (field3 = 'VALUE3')) ;

it becomes obvious that an index on field2 will be used, followed by 
searches of the results field1 and field3 .

As a matter of interest, what numbers of hits do you expect on each of
the 
three terms separately? If the field2 hit is is pretty selective, it
does 
not really matter what the others do.

Alec

Chris Faulkner [EMAIL PROTECTED] 
22/07/2005 12:46
Please respond to
Chris Faulkner [EMAIL PROTECTED]


To
mysql@lists.mysql.com
cc

Subject
Re: use of indexes

Hi 

field2 is indexed. I have 2 indexes. One is on field1 and field2, the
second indexes field3 and field2.

You mean a separate index which only indexes field2 ? Ithought that
the type of query I am doing is a good reason for doing composite
indexes.


Chris

On 7/22/05, Eugene Kosov [EMAIL PROTECTED] wrote:
 Chris Faulkner wrote:
  HI
 
  I have a query like this
 
  select * from table where (
  ( field1 = 'VALUE1' and field2 like 'VALUE2%' )
  OR
  ( field3 = 'VALUE1' and field2 like 'VALUE2%' )
  )
 
  I have created two composite indexes - one on field1 + field2 and
one
  on field3 + field2. Explain on the SQL indicates that the indexes
are
  possibly used. The query takes an age to run and looking at my log
  indicates a full table scan.
 
  I have also tried indexing just field1 and field3 separately but
this
  doesn't help. I have run an analyze.
 
  Chris
 
 
 Mysql use an index only if indexed field(s) present(s) in both OR 
arguments..
 Sorry, but i can't find it in docs right now, so i can't give you any 
helpful link.
 
 I think index on field2 may help you here..
 






RE: Avoiding full table scans

2005-06-30 Thread emierzwa
You could leave your separate tables by region and create a MERGE
table over them. Your application only needs to reference the MERGE
table name, eliminating the need for your recompiles, and MySql will
handle the underlying tables. You can add or remove tables from the
MERGE at any time. I did notice a user comment in the online doc
http://dev.mysql.com/doc/mysql/en/merge-storage-engine.html
that said there was a 128 table limit, but I never tried that many.

Ed
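The MERGE approach can be approximated on engines without it by putting a UNION ALL view over the per-region tables; the application still references a single name. A sketch in SQLite via Python (the table and region names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical per-region tables, as in David's setup.
con.execute("CREATE TABLE txn_east (id INTEGER, amount REAL)")
con.execute("CREATE TABLE txn_west (id INTEGER, amount REAL)")
con.executemany("INSERT INTO txn_east VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
con.executemany("INSERT INTO txn_west VALUES (?, ?)", [(3, 30.0)])

# The application only ever references `txn`; adding a region later
# means recreating this one view, not recompiling the application.
con.execute("""
    CREATE VIEW txn AS
        SELECT 'east' AS region, id, amount FROM txn_east
        UNION ALL
        SELECT 'west' AS region, id, amount FROM txn_west
""")
total, = con.execute("SELECT COUNT(*) FROM txn").fetchone()
print(total)  # 3
```

Unlike a real MERGE table, a view like this is read-only here; inserts still go to the individual regional tables.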

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 30, 2005 2:56 AM
To: [EMAIL PROTECTED]
Cc: mysql
Subject: Re: Avoiding full table scans

David Kagiri [EMAIL PROTECTED] wrote on 30/06/2005 09:44:11:

 Hi
 I our database there is one table that will grow into tetrabytes 
 within a short time.
 
 I would like to know how i can reduce full table scans.
 
 I have used separate tables as per region for now but the problem is
 if i create a new table i am forced to recompile the application.
 
 I can easily avoid this by using one table and an extra field to 
 flag which region a paricular transaction happened.However i need 
 ideas of how i can avoid full table scans because slow queries are 
 unacceptable. it will also give me more flexibility in writing reports

 
 I will gladly appreciate any links that are specific to this problem
 and case studies.There people who need to be convinced

The answer to your question is Indexes. Ensure that you have indexes on 
your tables for all the different searches you do. Use the EXPLAIN
command 
to find out which SELECTs are doing full table scans, and add Indexes as

appropriate.

Alec








RE: MERGE Table Question

2005-06-10 Thread emierzwa
If you're concerned about subtle data type differences you can also
run one   show create table myMergeTable
statement for each source table and diff them.

Ed 
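The show-create-and-diff idea looks like this as a quick SQLite/Python sketch (with MySQL you would feed SHOW CREATE TABLE output to the diff instead; the partition names and the type drift are invented for the demo):

```python
import sqlite3
import difflib

con = sqlite3.connect(":memory:")
# Two would-be MERGE members whose column types have drifted apart.
con.execute("CREATE TABLE part_a (k1 TEXT, k2 INTEGER, total_amt INTEGER)")
con.execute("CREATE TABLE part_b (k1 TEXT, k2 INTEGER, total_amt REAL)")

def ddl(table):
    # SQLite keeps the CREATE statement in sqlite_master; in MySQL
    # you would capture the output of SHOW CREATE TABLE instead.
    (sql,) = con.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = ?",
        (table,)).fetchone()
    # Mask the table name so only structural differences survive the diff.
    return sql.replace(table, "<table>").splitlines()

diff = list(difflib.unified_diff(ddl("part_a"), ddl("part_b"), lineterm=""))
print("\n".join(diff))  # the INTEGER vs REAL drift shows up in the diff
```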

-Original Message-
From: mos [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 10, 2005 8:35 AM
To: mySQL list
Subject: Re: MERGE Table Question

At 09:00 AM 6/10/2005, you wrote:
Hello everyone,
 We're going to be using the MERGE TABLE option of Mysql when we 
 build our Data Warehouse. Basically (for you Oracle folks), we're
going 
 to mimic PARTITIONS. My question is this: Other than dumping syntax 
 through a utility and comparing text, is there any way I can read the 
 structure directly through the *.FRM or *.MYD files to make certain
that 
 ALL tables match exactly so the MERGE will work?

Regards,
George


George,
 You could try things like:

show tables like myMergeTable%

to get a list of tables making up the merge. (Assuming the name of the 
merge tables all start with the similar name)

Then with the results of that query execute for each table name
returned:

show columns from tablex

and

show index from tablex

You could then compare each tablex definition with each other, probably 
using a program  or even SQL if that is the route you wanted to take.

Mike 







RE: why NOT NULL in PRIMARY key??

2005-04-26 Thread emierzwa
Not every DBMS...

MSSQL:
Create Unique Index
Microsoft(r) SQL Server(tm) checks for duplicate values when the index
is created (if data already exists) and checks each time data is added
with an INSERT or UPDATE statement. If duplicate key values exist, the
CREATE INDEX statement is canceled and an error message giving the first
duplicate is returned. Multiple NULL values are considered duplicates
when UNIQUE index is created.


SYBASE:
Create Unique Index
Prohibits duplicate index (also called key) values. The system checks
for duplicate key values when the index is created (if data already
exists), and checks each time data is added with an insert or update. If
there is a duplicate key value or if more than one row contains a null
value, the command is aborted and an error message giving the duplicate
is printed. 


 Ed

-Original Message-
From: Jay Blanchard [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 26, 2005 6:50 AM
To: Jigal van Hemert; mysql@lists.mysql.com
Subject: RE: why NOT NULL in PRIMARY key??

Since NULL is the absence of a value and PRIMARY keys must have a value,
a NULL column cannot be included as a portion of a PRIMARY key. AFAIK
this is the case with every RDBMS out there. Asking the development team
might get you a more informative response.

There is a vast difference between a zero value, blank value and NULL
(absence of value).
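The distinction is easy to demonstrate. A sketch in SQLite via Python; SQLite, like MySQL, treats NULLs as distinct in a UNIQUE index, so the two constraints really do behave differently:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A UNIQUE index: SQLite (like MySQL) treats NULLs as distinct,
# so several "duplicate" NULL rows can coexist...
con.execute("CREATE TABLE u (a INTEGER UNIQUE)")
con.execute("INSERT INTO u VALUES (NULL)")
con.execute("INSERT INTO u VALUES (NULL)")
nulls, = con.execute("SELECT COUNT(*) FROM u").fetchone()
print(nulls)  # 2

# ...but a PRIMARY KEY must actually identify the row, so NULL is out.
con.execute("CREATE TABLE p (a TEXT NOT NULL PRIMARY KEY)")
rejected = False
try:
    con.execute("INSERT INTO p VALUES (NULL)")
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

As Ed's quotes from the MSSQL and Sybase manuals show, other engines make the opposite choice and treat NULLs as duplicates in a unique index, so this behavior is engine-specific.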






RE: why NOT NULL in PRIMARY key??

2005-04-26 Thread emierzwa
Martin, Shawn, you are correct. An oversight on my part...this is why I
still follow this list, I am always able to learn something and never
cease to be humbled.

Ed

-Original Message-
From: Martijn Tonies [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 26, 2005 8:01 AM
To: emierzwa; mysql@lists.mysql.com
Subject: Re: why NOT NULL in PRIMARY key??



 Not every DBMS...

 MSSQL:
 Create Unique Index
 Microsoft(r) SQL Server(tm) checks for duplicate values when the index
 is created (if data already exists) and checks each time data is added
 with an INSERT or UPDATE statement. If duplicate key values exist, the
 CREATE INDEX statement is canceled and an error message giving the
first
 duplicate is returned. Multiple NULL values are considered duplicates
 when UNIQUE index is created.


 SYBASE:
 Create Unique Index
 Prohibits duplicate index (also called key) values. The system checks
 for duplicate key values when the index is created (if data already
 exists), and checks each time data is added with an insert or update.
If
 there is a duplicate key value or if more than one row contains a null
 value, the command is aborted and an error message giving the
duplicate
 is printed.

An unique index is not a primary key constraint.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS
SQL Server
Upscene Productions
http://www.upscene.com





Re: Temporal databases MySQL

2005-04-15 Thread emierzwa
What you describe makes sense and would certainly work; I don't know
that I would call it a temporal solution. The ENUM (I, U, D) seems a bit
redundant with time. This model resembles a traditional application log
or trace file, which is highly desirable for a records-keeping system,
like for a phone company or auto dealership. But if you're looking for a
dynamic time-based model to support systems that might track plants and
animals that have lived or live on earth that show extinction,
reintroduction and evolution...you're stuck with start times and end times
marking valid entries and a large WHERE clause using BETWEEN st_date
AND en_date. If you stick with the T_HIST model you may want to check
out the MERGE tables; they can help segment your older histories while
still giving you VIEW-like access, assuming you can't go to the MySql
5 release.

Ed

-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Daniel BODEA
Sent: Saturday, April 09, 2005 8:46 PM
To: mysql@lists.mysql.com
Subject: Re: Temporal databases  MySQL

I was thinking about the following model for the application I'm working
on:

Any given table T holds the conventional data associated with instant
NOW,
no temporal data at all. There are tables T_HIST for every table T with
an
identical structure plus one date column which is set to NOW on every
insert
(these tables are only inserted into) and another ENUM column holding an
identifier for the operation (I, U or D). An index would be created over
both (date, operation).

An INSERT in T is duplicated in T_HIST. An UPDATE is first performed in
T
and the resulting row is INSERTed in T_HIST. A DELETE first copies the
column into T_HIST and the row is deleted from T.

As long as all T tables have PKs that are guaranteed not to be recycled,
I
suppose the benefits would be the following:

---
T tables can be kept compact and fast for all mundane operations.

Since the history can only grow and is logically separated, it can use
separate storage strategies better fit for much larger amounts of data
compared to tables T.

Temporal data remains a minimal addition while allowing for all (?)
temporal
queries to be performed. I initially thought there would be
circumstances
where queries would have to perform a join to the corresponding T tables
but
then the ENUM column should fully replace the join.

Index usage for temporal queries in the MySQL context should be optimal
when
using the date column as the main index on a table that is naturally
guaranteed to have this column ordered at all times.
---

Views and triggers would be simulated by the application which should
not be
too incumbent considering that the application needs to provide some
easy
means of changing the querying instant anyway.

I'm in no way favoring any temporal model if it's not for its ability to
perform best on a given SQL engine (MySQL in this case). Not being that
familiar with the inner workings of MySQL though, I can only submit the
module above to the attention of MySQL specialists who may have the time
to
post back their thoughts.

Thanks,
Daniel
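On MySQL 4.1 Daniel's application has to mirror each write by hand, but on an engine with triggers the T/T_HIST scheme collapses to three triggers. A sketch in SQLite via Python (the table and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT);
    -- History table: same columns plus a timestamp and an I/U/D marker.
    CREATE TABLE t_hist (
        id  INTEGER,
        val TEXT,
        at  TEXT DEFAULT CURRENT_TIMESTAMP,
        op  TEXT CHECK (op IN ('I', 'U', 'D')));
    CREATE TRIGGER t_i AFTER INSERT ON t BEGIN
        INSERT INTO t_hist (id, val, op) VALUES (NEW.id, NEW.val, 'I');
    END;
    CREATE TRIGGER t_u AFTER UPDATE ON t BEGIN
        INSERT INTO t_hist (id, val, op) VALUES (NEW.id, NEW.val, 'U');
    END;
    CREATE TRIGGER t_d AFTER DELETE ON t BEGIN
        INSERT INTO t_hist (id, val, op) VALUES (OLD.id, OLD.val, 'D');
    END;
""")
con.execute("INSERT INTO t VALUES (1, 'a')")
con.execute("UPDATE t SET val = 'b' WHERE id = 1")
con.execute("DELETE FROM t WHERE id = 1")
history = con.execute("SELECT id, val, op FROM t_hist").fetchall()
print(history)  # [(1, 'a', 'I'), (1, 'b', 'U'), (1, 'b', 'D')]
```

The T table stays compact for the mundane NOW queries while T_HIST only ever grows, which is the separation of storage strategies Daniel lists as a benefit.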

Daniel BODEA [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Hi Shawn,

 I really meant temporal and not temporary. Temporal as in TSQL2.
Databases
 that on the one hand accumulate all changes to data over time along
with
 accurate time information and on the other hand provide varying
degrees of
 transparency in querying this data based on the theory of instants and
 aggregated intervals of time.

 Most of the resources available online are largely academic though.

 Google :
 http://www.google.com/search?hl=enq=temporal+database

 Troels' links has a good temporal databases section :
 http://troels.arvin.dk/db/rdbms/links/

 The TAU Project that has some experimental code for several engines of
which
 MySQL :
 http://www.cs.arizona.edu/tau/

 I need to use this fully in a project that uses MySQL 4.1.latest and
in a
 way that's independent of the structure of tables comprising the
 application.

 I'm not looking for TSQL2 implementations for MySQL or other types of
 esoteric implementations at the SQL level. I was just interested in
hearing
 from people who have used MySQL to implement this model in a
production
 environment and what they could say about both the storage of temporal
data
 and the optimization of queries on past instants and intervals.

 There are several partially incompatible ways of doing this in a
generic
 relational context but as always, only one is most fit for a given SQL
 engine and I'm currently asking about it for MySQL.

 I can't possibly be the first one to push this thing onto MySQL based
on
 production-quality requirements.

 Thanks,
 Daniel

  I am not familiar with the use of the adjective temporal with the
word
  database. To me temporal means of or having to do with time or
its
  measurement. Could you have meant temporary which means to me
  non-permanent or transitory in nature.?
 
  Even if you had meant 

RE: Re: Temporal databases MySQL

2005-04-06 Thread emierzwa
There was a magazine, Database Programming & Design, now defunct...too
much detail for the average IT Manager so it didn't sell well...that did
a nice multi-part series on the subject, including the SQL3 connection.
I could only locate a couple of fragments online...it used to be all
online if you were a subscriber and had your password...I'll have a look
around for mine and see if I can locate more info. You'll find some
references to and from Rick Snodgrass in the links; most anything he
writes I find worth reading. The short of it is, at this point you have
to roll your own...you can implement the NOW time slice using a VIEW
over a main table(s) where the view's WHERE clause has its
end_date set to a magic value to indicate infinity or current state
(1/1/ or 1/1/) if your engine accepts the value. The main table
or tables would contain a start_time & end_time representing the time
slice. For most engines, using multiple indexes concurrently on the same
table is not possible so you must either use self joins or store the
start and end date in separate tables with a common id. Cluster tables
can help significantly with date extraction as long as you are not
updating the rows and causing significant page splits. This is where
separate tables for the start & end times would allow you to create two
clustered indexes, one for each table.

We use this type of environment, a little, on our test floor to represent
all our running equipment and their current state. Production is mostly
concerned with the NOW view of things and engineering tends to be more
interested in various times in the past to help make decisions for the
future. To speed up the NOW view for production we strategically place
triggers and some procs to funnel the state/info from dozens of tables
to just one small table (wide but short) representing key data. This
table is used directly for many reports and is heavily indexed to allow
good joins back to the source tables when needed.

It's hard to get too complicated with this time-slice thing and still
have your average app developer keep up with everything and use it
correctly...so for the most part we wait for full support by the DBMS.

Good luck
Ed

http://www.dbpd.com/vault/9810/temporal.html

http://www.dbpd.com/vault/9810snod.html
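Ed's NOW-view-over-a-time-sliced-table pattern can be sketched in SQLite via Python. The magic end-date literal was garbled in the archive, so '9999-12-31' below is an assumed placeholder, and the table mirrors his test-floor example only loosely:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE machine_state (
        machine TEXT,
        state   TEXT,
        st_date TEXT,
        en_date TEXT);   -- '9999-12-31' marks the open, current slice
    -- The NOW view: current rows are those whose end date is still
    -- set to the magic infinity value.
    CREATE VIEW machine_now AS
        SELECT machine, state FROM machine_state
        WHERE en_date = '9999-12-31';
""")
con.executemany("INSERT INTO machine_state VALUES (?, ?, ?, ?)", [
    ("m1", "idle",    "2005-04-01", "2005-04-10"),
    ("m1", "running", "2005-04-10", "9999-12-31"),
])
now = con.execute("SELECT * FROM machine_now").fetchall()
print(now)  # [('m1', 'running')]

# An as-of query over a past instant uses BETWEEN st_date AND en_date,
# the large-WHERE-clause shape the thread mentions.
past = con.execute("""
    SELECT state FROM machine_state
    WHERE machine = 'm1' AND '2005-04-05' BETWEEN st_date AND en_date
""").fetchall()
print(past)  # [('idle',)]
```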
  

-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Daniel BODEA
Sent: Tuesday, April 05, 2005 11:11 AM
To: mysql@lists.mysql.com
Subject: Re: Temporal databases  MySQL

Hi Shawn,

I really meant temporal and not temporary. Temporal as in TSQL2.
Databases
that on the one hand accumulate all changes to data over time along with
accurate time information and on the other hand provide varying degrees
of
transparency in querying this data based on the theory of instants and
aggregated intervals of time.

Most of the resources available online are largely academic though.

Google :
http://www.google.com/search?hl=enq=temporal+database

Troels' links has a good temporal databases section :
http://troels.arvin.dk/db/rdbms/links/

The TAU Project that has some experimental code for several engines of
which
MySQL :
http://www.cs.arizona.edu/tau/

I need to use this fully in a project that uses MySQL 4.1.latest and in
a
way that's independent of the structure of tables comprising the
application.

I'm not looking for TSQL2 implementations for MySQL or other types of
esoteric implementations at the SQL level. I was just interested in
hearing
from people who have used MySQL to implement this model in a production
environment and what they could say about both the storage of temporal
data
and the optimization of queries on past instants and intervals.

There are several partially incompatible ways of doing this in a generic
relational context but as always, only one is most fit for a given SQL
engine and I'm currently asking about it for MySQL.

I can't possibly be the first one to push this thing onto MySQL based on
production-quality requirements.

Thanks,
Daniel

 I am not familiar with the use of the adjective temporal with the
word
 database. To me temporal means of or having to do with time or
its
 measurement. Could you have meant temporary which means to me
 non-permanent or transitory in nature.?

 Even if you had meant temporary, I rarely hear it used as a database
 design term except when used with the word table as in temporary
 table. (http://dev.mysql.com/doc/mysql/en/create-table.html)

 However, if the TAU project is doing research on databases that are
 displaced or movable through time, this may be something I want to get
 involved with. What is their URL?

 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine









RE: Use of limit with IN on subquery

2004-12-03 Thread emierzwa
You could do something like this, not sure what your intent is if among
the top total_amt is a single exact amount that occurred 30 or 40
times...are you implying the top 10 items or the top 10 distinct items?

select *
from table_z a
where 10 >= (select count(*)
   from table_z b
   where b.k1=a.k1
 and b.total_amt >= a.total_amt)

Ed
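Ed's correlated-count query (with the >= operators the archive stripped put back) can be run over Roger's sample data from later in this thread; N = 2 here just to keep the output short. A sketch in SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE table_z (
    k1 TEXT, k2 INTEGER, total_amt INTEGER,
    PRIMARY KEY (k1, k2))""")
con.executemany("INSERT INTO table_z VALUES (?, ?, ?)", [
    ("a", 1, 412), ("a", 2, 142), ("a", 3, 123), ("a", 4, 312),
    ("b", 1, 441), ("b", 2, 251), ("b", 3, 421), ("b", 4, 331),
])
# Keep a row when at most N rows of the same group have an
# equal-or-larger amount; this is the top-N-per-group trick.
rows = con.execute("""
    SELECT k1, total_amt FROM table_z a
    WHERE 2 >= (SELECT COUNT(*) FROM table_z b
                WHERE b.k1 = a.k1 AND b.total_amt >= a.total_amt)
    ORDER BY k1, total_amt DESC
""").fetchall()
print(rows)  # [('a', 412), ('a', 312), ('b', 441), ('b', 421)]
```

As Ed's own question hints, duplicated amounts inflate the count, so exact ties near the cutoff can shrink the result set.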

-Original Message-
From: Rick Robinson [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 03, 2004 9:14 AM
To: 'Roger Baklund'; 'Mysql'
Subject: RE: Use of limit with IN on subquery


Hi Roger-
Thanks for responding so quickly.

Hmm.  I like it.  It would get a bit hairy if I wanted top 50 or top
100.  And
if I wanted the top # to be dynamic, I'll need to construct the query on
the
fly...but that may be very workable.  I need to play with it a bit to
see how it
will perform.  I have a table with about 500,000 rows (not really too
big), so
I'm hopeful.

Thanks again. 
Best regards,
Rick

-Original Message-
From: Roger Baklund [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 03, 2004 10:49 AM
To: Mysql
Cc: [EMAIL PROTECTED]
Subject: Re: Use of limit with IN on subquery

Rick Robinson wrote:
 I'm using MySQL 4.1.7, trying to do a subquery that's apparently 
 unsupported - I'm hoping someone can provide a quick alternative for
me.
  
 I have a simple table Z with 3 columns, k1, k2, and total_amt, where 
 k1 and k2 make up the primary key.  I want to create a report that 
 lists the top 10 total_amt for each k1.

Hm... top 10... group-wise maximum is maybe not the best approach. 
Maybe a self-join is better in this case. A small experiment:

use test;
create table tt2 (
   k1 char(1) not null,
   k2 int not null,
   total_amt int,
   primary key(k1,k2));
insert into tt2 values('a',1,412),('a',2,142),('a',3,123),('a',4,312);
insert into tt2 values('b',1,441),('b',2,251),('b',3,421),('b',4,331);
select * from tt2;

+++---+
| k1 | k2 | total_amt |
+++---+
| a  |  1 |   412 |
| a  |  2 |   142 |
| a  |  3 |   123 |
| a  |  4 |   312 |
| b  |  1 |   441 |
| b  |  2 |   251 |
| b  |  3 |   421 |
| b  |  4 |   331 |
+++---+
8 rows in set (0.02 sec)


select t1.k1,
   max(t1.total_amt) first,
   max(t2.total_amt) second,
   max(t3.total_amt) third
from
   tt2 t1
left join tt2 t2 on
   t2.k1 = t1.k1 and
   t2.total_amt < t1.total_amt
left join tt2 t3 on
   t3.k1 = t1.k1 and
   t3.total_amt < t2.total_amt
group by
   t1.k1;

++---++---+
| k1 | first | second | third |
++---++---+
| a  |   412 |312 |   142 |
| b  |   441 |421 |   331 |
++---++---+
2 rows in set (0.05 sec)

--
Roger










RE: How can I avoid warm-up time?

2004-09-10 Thread emierzwa
You didn't mention your release number, but on 4.1 you can use LOAD
INDEX INTO CACHE. You can also use options like --init-file and enter
your startup sql there for your warmup.

Ed

-Original Message-
From: Andy Eastham [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 10, 2004 7:36 AM
To: Mysql List
Subject: RE: How can I avoid warm-up time?


Tamar,

The only way to fill the caches up is to execute queries.  If there is a
delay between your server coming up and the application being used, try
executing the queries that your application will use from a start-up
script (you'll need to work these out carefully).  This way the caches
will be full of the correct data when the application makes its first
queries. If the application starts straight away at boot time, you're
stuck.

As for the system going slow after being inactive, this sounds like your
system could be swapping.  This could be caused by your caches taking up
all
or most of the available memory, leaving none for the operating system
and
other things running.  When your application goes idle, the operating
system
will copy its cache memory into swap space on the disk, and read its own
and
other applications memory back from the swap space into memory.  The
solutions to this are
1) Add more memory
2) Reduce the size of your caches so there is enough memory left for the
rest of the system.

Of course, the initial delay could be related to swapping too.

Andy

 -Original Message-
 From: Tamar Rosen [mailto:[EMAIL PROTECTED]
 Sent: 10 September 2004 12:56
 To: [EMAIL PROTECTED]
 Subject: How can I avoid warm-up time?
 
 Hi,
 
 
 
 We are now in the last phases of testing a new system on Mysql. We are
 using all MyISAM tables, on Linux.
 
 
 
 We found that after the mysql server is restarted, our application is
very
 slow for a couple of minutes. We understand that this is because the
 caches have to fill up first - the mysql key cache (we made it big
enough
 to hold all the indexes) and the OS cache.
 
 
 
 My question: is there any way we can preload the caches so that we
don't
 experience this warm-up time?
 
 
 
 Also, we found that if the server is not active for some time, say
 overnight, then again we experience the warm-up time, even though the
 mysql server was not restarted. This leads to totally unpredictable
 performance. At very low loads, the times are the worst!!!
 
 
 
 If anyone had a similar experience and/or possible suggestions on how
to
 solve this problem, it will be greatly appreciated.
 
 
 
 Thanks,
 
 
 
 Tamar Rosen
 
 www.gurunet.com
 
 
 
 








RE: Dump question: transactions vs. locking

2004-09-10 Thread emierzwa
Maybe MyISAM is still a better choice for this use...?

For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary
column (or three columns in your case) in a multiple-column index. In
this case, the generated value for the AUTO_INCREMENT column is
calculated as MAX(auto_increment_column)+1 WHERE prefix=given-prefix.
This is useful when you want to put data into ordered groups. 

See the link:
http://dev.mysql.com/doc/mysql/en/example-AUTO_INCREMENT.html

Ed
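For comparison with the LOCK TABLES version in Csongor's message below, here is the transactional shape of the same MAX(subid)+1 pattern, sketched in SQLite via Python. BEGIN IMMEDIATE plays the role of the write lock here; under InnoDB one would typically use SELECT ... FOR UPDATE for the same effect:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("""CREATE TABLE invoices (
    mainid INTEGER, subid INTEGER, userid INTEGER,
    PRIMARY KEY (mainid, subid, userid))""")

def add_invoice(mainid, userid):
    # BEGIN IMMEDIATE takes the write lock up front, so the MAX(subid)
    # read and the INSERT happen atomically with respect to other writers.
    con.execute("BEGIN IMMEDIATE")
    try:
        (cur,) = con.execute(
            "SELECT COALESCE(MAX(subid), 0) FROM invoices "
            "WHERE mainid = ? AND userid = ?", (mainid, userid)).fetchone()
        con.execute("INSERT INTO invoices VALUES (?, ?, ?)",
                    (mainid, cur + 1, userid))
        con.execute("COMMIT")
    except Exception:
        con.execute("ROLLBACK")
        raise

for _ in range(3):
    add_invoice(1, 7)
subids = con.execute("SELECT subid FROM invoices ORDER BY subid").fetchall()
print(subids)  # [(1,), (2,), (3,)]
```

The unique primary key over (mainid, subid, userid) is the safety net either way: a race that slips through produces a duplicate-key error rather than a silent duplicate invoice number.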

-Original Message-
From: Fagyal Csongor [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 10, 2004 4:53 AM
To: [EMAIL PROTECTED]
Subject: Dump question: transactions vs. locking


Hi,

I think I have some problems understanding how transactions work, and 
how they relate to locking. Can someone please explain?
The question is this:

I have a table where I have a column into which I insert numbers in 
sequential order - it is not set to auto_increment, though, because the 
field is only unique with two other columns. The fields are: 
mainid,subid and userid (in the table invoices).
Before I changed to InnoDB, using MyISAM I used to lock the table, get 
MAX(subid), do an insert, then unlock. It was something like this (this 
is Perl):

# lock the table
$dbh->do('LOCK TABLES invoices WRITE');
# next subid is MAX(subid)
my ($nextsubid) = $dbh->selectrow_array('SELECT MAX(subid) FROM invoices 
WHERE userid=? AND mainid=?', undef, $userid, $mainid);
# increment by 1
$nextsubid++;
# insert all stuff with new $subid
$dbh->do('INSERT INTO invoices ');   # set subid here
# unlock
$dbh->do('UNLOCK TABLES');

So what if I change to transactions? Should I simply substitute 
LOCK/UNLOCK with BEGIN/COMMIT, or should I still use LOCK/UNLOCK? If the 
latter, what is the correct order: BEGIN and then LOCK, or LOCK and then 
BEGIN? Also, am I right that, since in transactional mode I always need a 
commit, I MUST use BEGIN/COMMIT, and cannot keep the original simple 
LOCK/UNLOCK?

Thank you,
- Csongor
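
(One hedged sketch of the InnoDB alternative, assuming a transaction with
SELECT ... FOR UPDATE replaces the table lock; the locking granularity
differs from LOCK TABLES, so test this under concurrency. Values are
hypothetical:)

START TRANSACTION;
-- lock the matching index range so concurrent sessions serialize here
SELECT @next := COALESCE(MAX(subid), 0) + 1
  FROM invoices
 WHERE userid = 1 AND mainid = 5
   FOR UPDATE;
INSERT INTO invoices (userid, mainid, subid) VALUES (1, 5, @next);
COMMIT;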

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Why won't mysql use the index? WAS: strange table speed issue

2004-06-25 Thread emierzwa
The general rule of thumb (in Sybase; not sure for MySQL) is that if using
an index would hit more than 10% of the table's total volume, it is
faster/cheaper/less invasive on the drive to just table-scan the table,
as opposed to reading an index row, getting the corresponding table-row
pointer, and seeking the table for each qualifying row. Your approximate
ratio is .6m/3m = 20%; again, 10% is just a rule of thumb, and many other
things come into play...

The reason your count(*) used the index is that it doesn't request/return
any data-table columns. The index and the data table have the same number
of rows, and your where clause only uses indexed columns, so it is faster
to scan the index rows: each one is the org varchar(80) shorter than a
table row, so each disk I/O can read/buffer more index rows than table
rows in the same size disk read.

My free humble opinion,
Ed
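
The covering-index effect described above can be sketched directly
(hypothetical values; FORCE INDEX, available in MySQL 4.0.9+, overrides
the optimizer's scan choice, whereas USE INDEX is only a hint):

-- only indexed columns referenced: the index alone answers the query
EXPLAIN SELECT COUNT(*) FROM ip2org
 WHERE ip_start <= 1094799892 AND ip_end >= 1094799892;

-- fetching org forces a row lookup per match; FORCE INDEX makes the
-- optimizer use the index even when it estimates a scan is cheaper
EXPLAIN SELECT org FROM ip2org FORCE INDEX (dex)
 WHERE ip_start <= 1094799892 AND ip_end >= 1094799892;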

-Original Message-
From: MerchantSense [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 25, 2004 10:56 AM
To: [EMAIL PROTECTED]
Subject: Why won't mysql use the index? WAS: strange table speed issue


This is crazy.  If someone can help me out, I'll pay them!

A table:

+--+-+--+-+-+---+
| Field| Type| Null | Key | Default | Extra |
+--+-+--+-+-+---+
| ip_start | bigint(20)  |  | MUL | 0   |   |
| ip_end   | bigint(20)  |  | | 0   |   |
| org  | varchar(80) |  | | |   |
+--+-+--+-+-+---+

Populated with numbers for the 1st 2 fields,  about 2.9 million records

Indexes as such:
mysql> show index from ip2org;
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
| Table  | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Comment |
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
| ip2org |          1 | dex      |            1 | ip_start    | A         |     2943079 |     NULL | NULL   |         |
| ip2org |          1 | dex      |            2 | ip_end      | A         |     2943079 |     NULL | NULL   |         |
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+

I do this query:
mysql> explain SELECT org from ip2org use index (dex) where
ip_start<=1094799892 and ip_end>=1094799892;
+--------+------+---------------+------+---------+------+---------+------------+
| table  | type | possible_keys | key  | key_len | ref  | rows    | Extra      |
+--------+------+---------------+------+---------+------+---------+------------+
| ip2org | ALL  | dex           | NULL |    NULL | NULL | 2943079 | where used |
+--------+------+---------------+------+---------+------+---------+------------+

And it will not use the index, but if I do this (a count):
mysql> explain SELECT count(*) from ip2org use index (dex) where
ip_start<=1094799892 and ip_end>=1094799892;
+--------+-------+---------------+------+---------+------+--------+-------------------------+
| table  | type  | possible_keys | key  | key_len | ref  | rows   | Extra                   |
+--------+-------+---------------+------+---------+------+--------+-------------------------+
| ip2org | range | dex           | dex  |       8 | NULL | 594025 | where used; Using index |
+--------+-------+---------------+------+---------+------+--------+-------------------------+

It will use the index.

WHY can't I get it to use the index on a query with simple fields with
numeric values?  The query takes about 12 seconds... in fact when I do
the count, it still takes that long (maybe it just *thinks* it's using
the indexes!).  This should return a value in less than  sec... I've
used tables this big without this problem before... what's up?  No matter
how I screw around with the indexes, I can't make it work...

Help! :)




-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 24, 2004 11:41 PM
To: Marc Slemko
Cc: MerchantSense; [EMAIL PROTECTED]
Subject: Re: strange table speed issue

I'm not certain, but I don't think a multi-column index will help here.
The manual is unclear on how a multi-column index is used when you are
comparing the first key part to a range rather than to a constant, but I
get the impression it doesn't use the second key part in that case.  For
you, that would mean your multi-column index is no better than your
single column indexes.

The problem is that with either column, the range of matches is large
enough that the optimizer judges a table scan will be quicker than all
those key lookups.  You can see this in the EXPLAIN output: type = ALL
and rows = the size of your table.  Both indicate a full table scan.

You may be able to do better if you know something about the ranges
defined by ip_start and ip_end, particularly if ip2org is relatively
static.  You

RE: Some BLOB help please.

2004-06-23 Thread emierzwa
You might save some space if you compress() before storing. Depending on
file content, I'm seeing 0-50% savings.

select length(load_file('c:/temp/SomeFile.pdf')) as old_size
  ,length(compress(load_file('c:/temp/SomeFile.pdf'))) as new_size

Ed
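
A quick round-trip check of that idea (hypothetical path; COMPRESS()
requires MySQL 4.1+ built with zlib, and UNCOMPRESS() must restore the
original bytes):

SET @raw = LOAD_FILE('c:/temp/SomeFile.pdf');
SELECT LENGTH(@raw)                        AS old_size,
       LENGTH(COMPRESS(@raw))             AS new_size,
       UNCOMPRESS(COMPRESS(@raw)) = @raw  AS round_trip_ok;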
-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 22, 2004 11:01 PM
To: Justin Crone
Cc: [EMAIL PROTECTED]
Subject: Re: Some BLOB help please.



Justin Crone wrote:
snip
 
 So as I said, I am rather pleased with the performance and the ease with
 which I was able to get this up and running. However... the problem is I
 do have limits, and one of those is disk space. Those 10,000 files are
 taking up 21 GB of space in the database, while the actual space required
 by the files is around 5 GB on the file system. The average file size is
 about 1.9 MB, so it would seem that each row inserted into the database
 is conforming to that 1.9 MB average, giving me this 21 GB table.

Could you explain that again?  If average file size is 1.9Mb, then 21 Gb
for 
10,780 files is about right.  On the other hand, if the total is 5 Gb,
then 
5Gb/10,780 yields about .47 Mb average per file.  So which is it?  Do
your 
files average 1.9Mb, in which case we must wonder how you stored them in

only 5Gb, or do your files average .47 Mb, in which case we must figure
out 
why they are roughly 4 times as big when stored as BLOBs?  (The manual 
http://dev.mysql.com/doc/mysql/en/Storage_requirements.html says BLOBs
take length + 2 bytes to store.)

 I would like to know if there is something that I can change to get
these
 numbers in line with each other, or if this is just the way of things.
 Current projections for the total documents needed to complete the
rotation
 of these files is 720,000 documents. Which if the 1.9MB average keeps,
that
 puts me in the neighborhood of 1.4TB of storage.

Even at .47Mb per file, that's about 330 Gb of storage required just for
the 
720,000 files.

Michael


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Where are BLOBs / TEXTs stored?

2004-06-17 Thread emierzwa
Found this at http://dev.mysql.com/doc/mysql/en/BLOB.html, basically
it's not stored in the table row along with any other columns in the
same row.

12.4.2 The BLOB and TEXT Types
Each BLOB or TEXT value is represented internally by a separately
allocated object. This is in contrast to all other column types, for
which storage is allocated once per column when the table is opened. 

Ed

-Original Message-
From: Luis R. Rodriguez [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 16, 2004 6:37 PM
To: [EMAIL PROTECTED]
Subject: Where are BLOBs / TEXTs stored?

[ Please CC me ]

Hi,

http://dev.mysql.com/doc/mysql/en/Storage_requirements.html

Page says:
---
The maximum size of a row in a MyISAM table is 65,534 bytes. Each BLOB
and TEXT column accounts for only five to nine bytes toward this size.
---

Question: So where are BLOBs and TEXTs entries stored?

Luis

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Where are BLOBs / TEXTs stored?

2004-06-17 Thread emierzwa
When looking at the directory that stores all the files for a database,
I see my table that has a TEXT column. I see tbl.frm, tbl.myi, and
tbl.myd. This is the same for tables that do not have TEXT columns, so I
can rule out separate files as a possible storage method for TEXT. There
are also no extra files in the directory which could have been used as a
generic storage area for TEXT data. We already know the row size is not
increased by the size of the TEXT column, only by the size of the pointer
used to point to the location of the TEXT (1, 2, 3 or 4 bytes). This
means the TEXT space allocation occurs when the row is inserted and is
placed in the *.myd file at some proprietary location, and the relative
pointer to this area is placed in the TEXT field of the original
containing row.

Ed

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 17, 2004 8:36 AM
To: emierzwa
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: Where are BLOBs / TEXTs stored?

I am curious about this, too. However, I don't think that you answer the
original question.

Are BLOBs stored as separate files, one file per object? Are they
combined
into a single large BLOB file? are they aggregated into several medium
sized files? Answering  where are they stored on the disk may be a
better
response as the docs state that they are not stored in the MyISAM table
itself but as separate objects (which means what, exactly?)

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



 

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Date: 06/17/2004 09:17 AM
Subject: RE: Where are BLOBs / TEXTs stored?

Found this at http://dev.mysql.com/doc/mysql/en/BLOB.html, basically
it's not stored in the table row along with any other columns in the
same row.

12.4.2 The BLOB and TEXT Types
Each BLOB or TEXT value is represented internally by a separately
allocated object. This is in contrast to all other column types, for
which storage is allocated once per column when the table is opened.

Ed

-Original Message-
From: Luis R. Rodriguez [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 16, 2004 6:37 PM
To: [EMAIL PROTECTED]
Subject: Where are BLOBs / TEXTs stored?

[ Please CC me ]

Hi,

http://dev.mysql.com/doc/mysql/en/Storage_requirements.html

Page says:
---
The maximum size of a row in a MyISAM table is 65,534 bytes. Each BLOB
and TEXT column accounts for only five to nine bytes toward this size.
---

Question: So where are BLOBs and TEXTs entries stored?

 Luis

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Where are BLOBs / TEXTs stored?

2004-06-17 Thread emierzwa
Here's an excerpt from the MySQL internal doc, the format is better in
html but not appropriate for this forum. The last paragraph sums it up:


Hexadecimal Display of Table1.MYD file
F1 61 62 63 00 F5 64 00 66 00 ... .abc..d e. 


Here's how to read this hexadecimal-dump display:


The hexadecimal numbers F1 61 62 63 00 F5 64 20 66 00 are byte values
and the column on the right is an attempt to show the same bytes in
ASCII. 
The F1 byte means that there are no null fields in the first row. 
The F5 byte means that the second column of the second row is NULL. 
(It's probably easier to understand the flag setting if you restate F5
as 11110101 binary, and (a) notice that the third flag bit from the
right is on, and (b) remember that the first flag bit is the X bit.) 


There are complications -- the record header is more complex if there
are variable-length fields -- but the simple display shown in the
example is exactly what you'd see if you looked at the MySQL Data file
with a debugger or a hexadecimal file dumper. 


So much for the fixed format. Now, let's discuss the dynamic format. 


The dynamic file format is necessary if rows can vary in size. That will
be the case if there are BLOB columns, or true VARCHAR columns.
(Remember that MySQL may treat VARCHAR columns as if they're CHAR
columns, in which case the fixed format is used.) A dynamic row has more
fields in the header. The important ones are the actual length, the
unused length, and the overflow pointer. The actual length is the
total number of bytes in all the columns. The unused length is the total
number of bytes between one physical record and the next one. The
overflow pointer is the location of the rest of the record if there are
multiple parts. 


For example, here is a dynamic row: 
  03, 00     start of header
  04         actual length
  0c         unused length
  01, fc     flags + overflow pointer
  ...        data in the row
  ...        unused bytes
  --         next row starts here

 


In the example, the actual length and the unused length are short (one
byte each) because the table definition says that the columns are short
-- if the columns were potentially large, then the actual length and the
unused length could be two bytes each, three bytes each, and so on. In
this case, actual length plus unused length is 10 hexadecimal (sixteen
decimal), which is a minimum. 

As for the third format -- packed -- we will only say briefly that: 

Numeric values are stored in a form that depends on the range (start/end
values) for the data type. 
All columns are packed using either Huffman or enum coding. 
For details, see the source files /myisam/mi_statrec.c (for fixed
format), /myisam/mi_dynrec.c (for dynamic format), and
/myisam/mi_packrec.c (for packed format). 

Note: Internally, MySQL uses a format much like the fixed format which
it uses for disk storage. The main differences are: 

BLOBs have a length and a memory pointer rather than being stored
inline. 
True VARCHAR (a column storage which will be fully implemented in
version 5.0) will have a 16-bit length plus the data. 
All integer or floating-point numbers are stored with the low byte
first. Point (3) does not apply for ISAM storage or internals. 

Ed


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 17, 2004 9:00 AM
To: [EMAIL PROTECTED]
Cc: emierzwa; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: Where are BLOBs / TEXTs stored?


[EMAIL PROTECTED] wrote on 17/06/2004 15:35:36:

 
 I am curious about this, too. However, I don't think that you answer
the
 original question.
 
 Are BLOBs stored as separate files, one file per object? Are they 
combined
 into a single large BLOB file? are they aggregated into several medium
 sized files? Answering  where are they stored on the disk may be a 
better
 response as the docs state that they are not stored in the MyISAM
table
 itself but as separate objects (which means what, exactly?)

This is not how I read the section of the manual. Normally, a database
row is a single object within the MyISAM file, which contains many
objects. Successive numeric fields will be stored in adjacent words of
the MyISAM file in exactly the order you see them when you do a SELECT
*. If you want to access this record, then only one disk seek is needed
to fetch it. However, because large BLOBs are rarely involved in
searches, rather than creating a single huge record with the BLOB
embedded in it, the BLOB is stored elsewhere *in the same .myd file*,
with only a pointer to the position of the blob within the file.

The upside of this is that for searches not involving the BLOB field,
and after the indexes have been exhausted, only the relatively small
non-BLOB part needs to be read and checked. The downside is that if the
search involves the BLOB field, or if the BLOB field

RE: Where are BLOBs / TEXTs stored?

2004-06-17 Thread emierzwa
Here's one more quote; it is more relevant to Alec's comment/concern
about access speed.

It is unlikely that the 'field and offset are on different pages'
unless the record contains a large BLOB.

Ed

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Sending multiple queries to MySQL (sorry, hit send by accident!)

2004-06-14 Thread emierzwa
Yes, you can save using multiple statements. We send
hundreds... thousands of statements in a single batch in our machine
automation application. The communication cost is relatively fixed
(within reason), whereas the actual work to be done (mostly inserts) was
very fast for us with our table schemas.

Ed

-Original Message-
From: Andy Hall [mailto:[EMAIL PROTECTED] 
Subject: Sending multiple queries to MySQL (sorry, hit send by
accident!)

Hi,

I have a situation where, within a PHP loop, I am creating over one
hundred SQL statements (INSERTs and UPDATEs).

Would it cut any overhead to send 20 ';'-separated queries in one
mysql_query() statement, and therefore only execute mysql_query() 5
times for 100 statements, as opposed to sending each statement
individually?

So, 

mysql_query("INSERT 1...; UPDATE 1...; INSERT 2...; UPDATE 2...");

or:

mysql_query("INSERT 1...");
mysql_query("UPDATE 1...");
mysql_query("INSERT 2...");
mysql_query("UPDATE 2...");

Do you save by not making as many connections?

Thanks

Andy.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Multiple Queries

2004-06-14 Thread emierzwa
Assuming both scenarios only had one connection open for all the
statements,
you probably will see a savings. You will need to benchmark your
situation.

You will need MySQL 4.1 or greater and your ODBC driver will need to
support
this version as well.

Ed

-Original Message-
From: Paul McNeil [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 14, 2004 1:24 PM
To: MySQL General
Subject: Multiple Queries


RE -
I have a situation where, within a PHP loop, I am creating over one
hundred SQL statement (INSERTS and UPDATES).

Would it cut any overhead to send 20 ';'-separated queries in one
mysql_query() statement, and therefore only execute mysql_query() 5
times for 100 statements, as opposed to sending each statement
individually?
--

I know that it is possible to send multiple update / insert statements.
However, it was my understanding that this is not standard SQL.  I am
using the Opta driver and it fails when I try to do this.  I have
successfully done it in MS-SQL.  I would test this thoroughly and make
sure that it's not going to fail if the DB driver or DB version is
swapped out from under the code.

God Bless

Paul C. McNeil
Developer in Java, MS-SQL, MySQL, and web technologies.

GOD BLESS AMERICA!
To God Be The Glory!

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Query help with grouping and latest date.

2004-05-20 Thread emierzwa
Since your on 4.1, give this a try...

select * 
from tbl as a
where a.recdate=(select max(b.recdate)
 from tbl as b
 where b.id=a.id and b.mount=a.mount)

Ed
-Original Message-
From: Duncan Hill [mailto:[EMAIL PROTECTED] 
Subject: Re: Query help with grouping and latest date.

On Thursday 20 May 2004 12:49, Egor Egorov might have typed:
 Duncan Hill [EMAIL PROTECTED] wrote:
  I have a table that has data that looks like:
  ++-+---+-+
 
  | id | recdate | mount | perused |
 
  ++-+---+-+
 
  |  1 | 2004-05-20 10:46:12 | QUAR  |  80 |
  |  2 | 2004-05-20 11:05:11 | QUAR  |  99 |
  |  2 | 2004-05-20 11:33:27 | ROOT  |  99 |
  |  2 | 2004-05-20 11:33:34 | QUAR  |  88 |
 

 From version 4.1 subquery helps you to get result using one query.

Thanks Egor, I'll start poking at it using subqueries, as I'm running
4.1.1.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Heap table limitations

2004-05-20 Thread emierzwa
You can't do efficient ranging on a HASH-indexed column without a table
scan...

Select * from heap_table where indx_col between 247 and 258

This would table-scan your 200 meg table even if indx_col is a PRIMARY
KEY, because of the default hash key. A hash key was the only choice
prior to 4.1, but now you can use BTREE for HEAP tables. Again, it
depends on how you use your tables.

If your server blinks, scheduled or otherwise, you lose all of your
table contents... forcing you to reload your data.

Make sure you place a practical MAX_ROWS= on your create table to
prevent your table from accidentally eating all of your available
memory.

Ed
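
A sketch pulling those three caveats together (hypothetical table;
assumes MySQL 4.1+ for the USING BTREE index clause):

CREATE TABLE heap_demo (
  indx_col INT NOT NULL,
  payload  CHAR(32),
  INDEX USING BTREE (indx_col)   -- BTREE, not the default HASH
) ENGINE=HEAP MAX_ROWS=1000000;  -- cap memory use

-- a range scan like this is only efficient with the BTREE index
SELECT * FROM heap_demo WHERE indx_col BETWEEN 247 AND 258;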

-Original Message-
From: Daniel Cummings [mailto:[EMAIL PROTECTED] 
Subject: Heap table limitations

We want to convert a 200+ meg table to a heap table.  We've gotten
pretty
good performance converting from InnoDb to MyIsam table but converting
to a
heap table would give an even bigger performance boost.

Does making a heap file this large make any sense?

Are there size limitations to heap tables?
TIA
Dan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Rename database

2004-05-20 Thread emierzwa
MERGE tables have a table_name.MRG file that contains pathed names to
the original table locations. You can carefully hand-edit these or run
an ALTER command to fix them.

ALTER TABLE merge_table_name  UNION=(table_1,table_2...)

Ed
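
For example (hypothetical names): after renaming a database directory,
the paths recorded in the .MRG file still point at the old location, and
one ALTER repoints them:

CREATE TABLE t1 (a INT) TYPE=MyISAM;
CREATE TABLE t2 (a INT) TYPE=MyISAM;
-- the MERGE table's .MRG file records this UNION list as file paths
CREATE TABLE t_all (a INT) TYPE=MERGE UNION=(t1, t2);

-- after renaming the database directory to newdb, re-declare the list
-- so it references the renamed database
ALTER TABLE newdb.t_all UNION=(newdb.t1, newdb.t2);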

-Original Message-
From: Ed Reed [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 20, 2004 2:53 PM
To: [EMAIL PROTECTED]
Subject: RE: Rename database


Is there anything wrong with just stopping the server and renaming the
database's directory in the DATA directory? I've used that method
without any problems. It also works very well for making a copy of the
database to a new differently named database. Are there any pitfalls
that I haven't encountered yet?
 
Thanks

 Bartis, Robert M (Bob) [EMAIL PROTECTED] 5/20/04 1:37:20 PM

http://dev.mysql.com/doc/mysql/en/RENAME_TABLE.html 

-Original Message-
From: Ngim Ngau - Kzresults [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 20, 2004 4:50 PM
To: [EMAIL PROTECTED] 
Subject: Rename database


Hi, Is there a way I can rename a database? or at least copy an old
database
with existing tables into
a new one?

Thanks.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Fulltext searching

2004-05-08 Thread emierzwa
I believe this is the built-in word list you were looking for, and the
word beyond is in the list. It would probably be helpful if there
were a SHOW command that listed the active built-in stopwords.

Ed

-
#include "ftdefs.h"
ulong ft_min_word_len=4;
ulong ft_max_word_len=HA_FT_MAXCHARLEN;
ulong ft_query_expansion_limit=5;
char ft_boolean_syntax[]="+ -><()~*:\"\"&|";

const HA_KEYSEG ft_keysegs[FT_SEGS]={
{
HA_KEYTYPE_VARTEXT,   /* type */
63,   /* language (will be overwritten) */
0, 0, 0,  /* null_bit, bit_start, bit_end */
HA_VAR_LENGTH | HA_PACK_KEY,  /* flag */
HA_FT_MAXBYTELEN, /* length */
HA_FT_WLEN,   /* start */
0,/* null_pos */
NULL  /* charset  */
  },
  {
/*
  Note, this (and the last HA_KEYTYPE_END) segment should NOT
  be packed in any way, otherwise w_search() won't be able to
  update key entry 'in vivo'
*/
  HA_FT_WTYPE, 63, 0, 0, 0, HA_NO_SORT, HA_FT_WLEN, 0, 0, NULL
  }
};

const struct _ft_vft _ft_vft_nlq = {
  ft_nlq_read_next, ft_nlq_find_relevance, ft_nlq_close_search,
  ft_nlq_get_relevance,  ft_nlq_reinit_search
};
const struct _ft_vft _ft_vft_boolean = {
  ft_boolean_read_next, ft_boolean_find_relevance, ft_boolean_close_search,
  ft_boolean_get_relevance,  ft_boolean_reinit_search
};


FT_INFO *ft_init_search(uint flags, void *info, uint keynr,
byte *query, uint query_len, byte *record)
{
  FT_INFO *res;
  if (flags & FT_BOOL)
res= ft_init_boolean_search((MI_INFO *)info, keynr, query, query_len);
  else
res= ft_init_nlq_search((MI_INFO *)info, keynr, query, query_len, flags,
record);
  return res;
}

const char *ft_stopword_file = 0;
const char *ft_precompiled_stopwords[] = {

#ifdef COMPILE_STOPWORDS_IN

/* This particular stopword list was taken from SMART distribution
   ftp://ftp.cs.cornell.edu/pub/smart/smart.11.0.tar.Z
   it was slightly modified to my taste, though
 */

  a's,
  able,
  about,
  above,
  according,
  accordingly,
  across,
  actually,
  after,
  afterwards,
  again,
  against,
  ain't,
  all,
  allow,
  allows,
  almost,
  alone,
  along,
  already,
  also,
  although,
  always,
  am,
  among,
  amongst,
  an,
  and,
  another,
  any,
  anybody,
  anyhow,
  anyone,
  anything,
  anyway,
  anyways,
  anywhere,
  apart,
  appear,
  appreciate,
  appropriate,
  are,
  aren't,
  around,
  as,
  aside,
  ask,
  asking,
  associated,
  at,
  available,
  away,
  awfully,
  be,
  became,
  because,
  become,
  becomes,
  becoming,
  been,
  before,
  beforehand,
  behind,
  being,
  believe,
  below,
  beside,
  besides,
  best,
  better,
  between,
  beyond,
  both,
  brief,
  but,
  by,
  c'mon,
  c's,
  came,
  can,
  can't,
  cannot,
  cant,
  cause,
  causes,
  certain,
  certainly,
  changes,
  clearly,
  co,
  com,
  come,
  comes,
  concerning,
  consequently,
  consider,
  considering,
  contain,
  containing,
  contains,
  corresponding,
  could,
  couldn't,
  course,
  currently,
  definitely,
  described,
  despite,
  did,
  didn't,
  different,
  do,
  does,
  doesn't,
  doing,
  don't,
  done,
  down,
  downwards,
  during,
  each,
  edu,
  eg,
  eight,
  either,
  else,
  elsewhere,
  enough,
  entirely,
  especially,
  et,
  etc,
  even,
  ever,
  every,
  everybody,
  everyone,
  everything,
  everywhere,
  ex,
  exactly,
  example,
  except,
  far,
  few,
  fifth,
  first,
  five,
  followed,
  following,
  follows,
  for,
  former,
  formerly,
  forth,
  four,
  from,
  further,
  furthermore,
  get,
  gets,
  getting,
  given,
  gives,
  go,
  goes,
  going,
  gone,
  got,
  gotten,
  greetings,
  had,
  hadn't,
  happens,
  hardly,
  has,
  hasn't,
  have,
  haven't,
  having,
  he,
  he's,
  hello,
  help,
  hence,
  her,
  here,
  here's,
  hereafter,
  hereby,
  herein,
  hereupon,
  hers,
  herself,
  hi,
  him,
  himself,
  his,
  hither,
  hopefully,
  how,
  howbeit,
  however,
  i'd,
  i'll,
  i'm,
  i've,
  ie,
  if,
  ignored,
  immediate,
  in,
  inasmuch,
  inc,
  indeed,
  indicate,
  indicated,
  indicates,
  inner,
  insofar,
  instead,
  into,
  inward,
  is,
  isn't,
  it,
  it'd,
  it'll,
  it's,
  its,
  itself,
  just,
  keep,
  keeps,
  kept,
  know,
  knows,
  known,
  last,
  lately,
  later,
  latter,
  latterly,
  least,
  less,
  lest,
  let,
  let's,
  like,
  liked,
  likely,
  little,
  look,
  looking,
  looks,
  ltd,
  mainly,
  many,
  may,
  maybe,
  me,
  mean,
  meanwhile,
  merely,
  might,
  more,
  moreover,
  most,
  mostly,
  much,
  must,
  my,
  myself,
  name,
  namely,
  nd,
  near,
  nearly,
  necessary,
  need,
  needs,
  neither,
  never,
  nevertheless,
  new,
  next,
  nine,
  no,
  nobody,
  non,
  none,
  noone,
  nor,
  normally,
  not,
  nothing,
  novel,
  now,
  nowhere,
  obviously,
  of,
  off,
  often,
  

RE: comparison of timestamps

2004-05-05 Thread emierzwa
My guess is that it's because 200400 is an illegal time. If you try a
legal time, January 1, midnight (2004010100), it should work.
You can also convert the time to an integer and then compare values.

select last from users where last > 2004010100 
select last from users where cast(last as SIGNED) > 200400

Ed

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
There must be a simple reason for this, but I am really stumped.

Look at this:

mysql> select last from users where last > 200400 limit 10;
++
| last   |
++
| 20040426191837 |
| 20040504191410 |
| 20040430170916 |
| 20040424005350 |
| 20040504192210 |
| 20040503210716 |
| 20040502235436 |
| 20031218112740 |  NOTICE THIS ONE
| 20040504210456 |
| 20040503190616 |
++
10 rows in set (0.00 sec)

**

Notice there is a 2003 (14-digit) timestamp included even though I
queried
the database for just timestamps greater than 200400 (14
digits).

I am stumped.

Can anyone tell me the simple reason why this is not working right?

Thanks

Ed


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: if exists

2004-05-05 Thread emierzwa
Silently ignore duplicate index inserts...

insert IGNORE into PERSON ( NAME ) values ( 'Test')

Ed
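
Two related sketches, both assuming a UNIQUE or PRIMARY KEY on NAME; the
seen counter column is hypothetical, and ON DUPLICATE KEY UPDATE needs
MySQL 4.1+:

-- silently skip rows whose key already exists
INSERT IGNORE INTO PERSON (NAME) VALUES ('Test');

-- or record how often the duplicate arrived instead of skipping it
INSERT INTO PERSON (NAME, seen) VALUES ('Test', 1)
  ON DUPLICATE KEY UPDATE seen = seen + 1;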
-Original Message-
From: Gerhard Gaupp [mailto:[EMAIL PROTECTED] 
Hello

I'm looking for something like

if not exists( select 1 from PERSON where NAME = 'Test' ) then
   insert into PERSON ( NAME ) values ( 'Test');
end if;

I did it with a Sybase database.
I need to get data from Excel and have to be sure that no data appears
more than once in the table. So I have to generate a script that tests
whether the data is already in the table or not, before inserting.

Do you understand? Sorry for my English

Greetings from Germany

Gerhard

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: mysql as a spatial database backend

2004-04-23 Thread emierzwa
James, I replied to you back in January. I tried it with an 854 KB WKT
block without any problems. You had sent me a couple of WKT samples
offline, each of which had errors in them. After I corrected them, they
worked for me. If you would like to try once more, send me a file,
offline, with your table create stmt and insert SQL, and I'll be happy
to try it again. I am using the latest source build of 4.1.2 from
BitKeeper on XP.

Ed

-Original Message-
From: James S reid [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 23, 2004 7:12 AM
To: [EMAIL PROTECTED]
Subject: Re: mysql as a spatial database backend


I've posted this query twice and got no reply - I'm sure somebody must
know the answer!

What's the field length limitation for insertion of a WKT string into a
geometry column?

yours, close to giving up

james


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: LIST function?

2004-04-20 Thread emierzwa
This will do it...

SELECT id, group_concat(distinct buddy_id)
FROM buddies
GROUP BY id

Ed

-Original Message-
Hi,

When using aggregate functions, I know you can retrieve the MAX, MIN,
SUM, etc from all the values in your specific group from the GROUP BY. 

Is there any function to simply return a list of the values in the
group?

e.g.

SELECT id, LIST(buddy_id)
FROM buddies
GROUP BY id

which would return:

idbuddy_id
1 1,3,5
2 2,3

I can't see why this wouldn't be possible but I haven't found anything yet.

Thanks for any help

Andy Hall.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Setting variables on update

2004-02-23 Thread emierzwa
Try this...works on 4.1.2

UPDATE  ControlTable SET NextID= 1 + (select @prevval := NextID)

Ed
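The user-variable trick above is MySQL-specific. For comparison, a portable sketch of the same fetch-then-increment counter, done as a single transaction with Python's standard sqlite3 module (the ControlTable schema here is assumed from the thread):

```python
import sqlite3

# In-memory stand-in for the ControlTable from the thread (assumed schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ControlTable (NextID INTEGER)")
conn.execute("INSERT INTO ControlTable VALUES (100)")

def next_id(conn):
    """Return the current NextID and advance it, in one transaction."""
    with conn:  # read and increment commit together
        (prev,) = conn.execute("SELECT NextID FROM ControlTable").fetchone()
        conn.execute("UPDATE ControlTable SET NextID = NextID + 1")
    return prev

print(next_id(conn))  # 100
print(next_id(conn))  # 101
```

The point of wrapping both statements in one transaction is the same as the point of the single-statement MySQL form: no other session can read the old NextID between the read and the increment.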

-Original Message-
From: Sasha Pachev [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 23, 2004 11:19 AM
To: Matt Fagan
Cc: [EMAIL PROTECTED]
Subject: Re: Setting variables on update


Matt Fagan wrote:
 Hello,
 
 I'm trying to run an SQL update statement like:
 
 UPDATE ControlTable SET @prevval := NextID, NextID = NextID
 + 1
 
 I've tried using a range of different syntax, but nothing
 seems to work. This is something that does work in another
 SQL server I used to use.
 
 Does anybody know if this is something MySQL can do, and if
 so, what is the correct syntax? Note that I can't use an
 autoincrement column because the IDs aren't being put into
 a database.
 
 I've searched through MySQL docs and mailing lists but
 couldn't find any mention of this. Any help would be
 appreciated.

A UDF or your own static function might work. Worst case, MySQL source
can be 
modified to make this work.


-- 
Sasha Pachev
Create online surveys at http://www.surveyz.com/






RE: A current row updating counter for a mass Update or similar?

2004-02-10 Thread emierzwa
Sure, try this. I'm using 4.1.2 in case it matters.

set @n=0;
UPDATE Ranks_table
  SET Rank= (select @n:=@n+1)
ORDER BY Score DESC;

Ed
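The @n variable trick is MySQL-specific. The same ranking can be cross-checked (or done portably) by fetching the IDs in score order and writing the row position back, sketched here with Python's standard sqlite3 module and the sample data from the question; ties break to the lower ID, as the poster wanted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Ranks_table (ID INTEGER, Score INTEGER, Rank INTEGER)")
conn.executemany("INSERT INTO Ranks_table VALUES (?, ?, 0)",
                 [(1, 20), (2, 50), (3, 10), (4, 5), (5, 97)])

# Rank 1 = highest score; ORDER BY Score DESC, ID breaks ties by lower ID.
ordered = conn.execute(
    "SELECT ID FROM Ranks_table ORDER BY Score DESC, ID").fetchall()
for rank, (id_,) in enumerate(ordered, start=1):
    conn.execute("UPDATE Ranks_table SET Rank = ? WHERE ID = ?", (rank, id_))
conn.commit()

print(conn.execute("SELECT ID, Rank FROM Ranks_table ORDER BY ID").fetchall())
# [(1, 3), (2, 2), (3, 4), (4, 5), (5, 1)]
```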

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 09, 2004 6:37 PM
To: [EMAIL PROTECTED]
Subject: A current row updating counter for a mass Update or
similar?


Hi all, I'll just get straight to the problem (simplified):

Say I have the following table/columns:
ID, Score, Rank
1, 20, 0
2, 50, 0
3, 10, 0
4, 5, 0
5, 97, 0

Is it possible, in a single query (using sub-queries if necessary), to
give
all of these ID's the correct rank... ie if such a thing existed it
might
look like:

UPDATE table_name SET rank=(current_row_updating+1) ORDER BY score DESC;

Which would, you'd hope, make the table data then read:
ID, Score, Rank
1, 20, 3
2, 50, 2
3, 10, 4
4, 5, 5
5, 97, 1

But I could find no nice looking commands that would give me this
(perhaps
that is not possible due to the nature of the database structure and the
nature of updating? Though there must be some counter for it to report
back
with how many rows matched, how many were updated etc.).

I also tried with a sub-query which the equivalent for the above example
might look something like:

UPDATE table_name SET rank=(SELECT COUNT(*) FROM table_name AS t2 WHERE
t2.scoretable_name.score);

But couldn't seem to make it work - I received MySQL error messages with
my
attempts, I think possibly because I am trying to use the table I am
trying
to update to also pull select data from during the update - and it'd
need a
little adding to it anyway - if scores were equal, it'd want to default
to
lower ID number wins the higher rank.

I can't seem to find any obvious nice way of doing this, other than the
poor
method of looping/iterating through with an Ordered select, and then
sending individual UPDATES for each and every row received by the
select
(which in the above simplified example would be 6 query calls - 1 to get
the
select, followed by 5 seperate updates sent by PHP using PHP to know
what
rank # we were currently on).

Thanks in advance, I hope somebody can point me to a possible solution
of
getting all of this with a single query as I have attempted, I'd
appreciate
any help on this matter.








RE: How to determine when a MySQL database was last modified?

2004-02-06 Thread emierzwa
You could try the following:
1) Perform normal backup.
2) Run sql command flush status.  --Resets most status
variables to zero.
3) Next week prior to backup, run sql commands:
  show status like 'Handler_delete'
  show status like 'Handler_update'
  show status like 'Handler_write'
   If any of these values are greater than zero then a table has been
modified.
   You should also note the server start date just in case the server
crashed
   or restarted, which will also reset the status variables to zero.

Handler_delete - Number of times a row was deleted from a table.
Handler_update - Number of requests to update a row in a table.
Handler_write  - Number of requests to insert a row in a table.

Ed
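The decision logic in steps 2-3 is small enough to script. A sketch of the backup-or-skip test, assuming the SHOW STATUS rows have already been fetched into name/value pairs (the fetch itself, via a client or driver, is left out; the counter names are the real ones from the post):

```python
def tables_modified(status):
    """Given SHOW STATUS name->value pairs (values arrive as strings,
    the way the client returns them), report whether any table was
    written to since the last FLUSH STATUS."""
    counters = ("Handler_delete", "Handler_update", "Handler_write")
    return any(int(status.get(name, 0)) > 0 for name in counters)

# Example values, as a SHOW STATUS result might look after a quiet week:
print(tables_modified({"Handler_delete": "0", "Handler_update": "0",
                       "Handler_write": "0"}))   # False -> skip the dump
print(tables_modified({"Handler_delete": "0", "Handler_update": "12",
                       "Handler_write": "0"}))   # True  -> run mysqldump
```

As the post warns, a server restart also zeroes these counters, so a cautious script would back up anyway whenever the uptime is shorter than the backup interval.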

-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 06, 2004 9:18 AM
To: Gowtham Jayaram
Cc: Phil; Schwartz, Evelyn; [EMAIL PROTECTED]
Subject: Re: How to determine when a MySQL database was last modified?


mysqlshow gives the same results as SHOW TABLE STATUS, which, 
unfortunately, doesn't seem to give created/updated dates for InnoDB
tables.

Michael

Gowtham Jayaram wrote:
 If offline tools works for you try './mysqlshow.exe
 -vi db_name'.  This provides useful information such
 as 'Create Time' 'Update Time' and 'Check Time'.
 
 Gowtham.
 
 --- Phil [EMAIL PROTECTED] wrote:
 
Nice try... but 'show table status' just displays
NULL for Update_time -
maybe because they're InnoDB tables. Besides, I
didn't really want to
have to squirrel around all the tables to see if the
DB itself has been
changed.

Since what I want to do doesn't seem possible I'll
carry on as usual...
backing everything up :( Also, I'll suggest it as an
enhancement.
Thanks.


On Fri, 2004-02-06 at 14:28, Schwartz, Evelyn wrote:

You can try the 'show table status' from mysql.  

There is an update_time that lists the last modified
date for the table.

 
I also found out that these types of commands work

with perl DBD::mysql.  You can treat the command
like a normal sql statement and the results are
returned like any other sql.  Pretty cool.

 
IMHO I wouldn't bother with this.  Just take the

backup.  As long as you only keep the most recent
backup online I don't see the harm.  Why do the
extra work and risk not having backups?

 
Evelyn

 -Original Message- 
 From: Phil [mailto:[EMAIL PROTECTED] 
 Sent: Fri 2/6/2004 9:27 AM 
 To: gerald_clark 
 Cc: [EMAIL PROTECTED] 
 Subject: Re: How to determine when a MySQL

database was last modified?

 
 

 Thanks. But I would have thought that such

information would have been

 kept automatically somewhere by the server, and

it's just a case of how

 to get at it. I have quite a few tables in each

database so I don't

 really want to have to maintain a timestamp on

each update, and then go

 around all of them at backup time :(
 
 Anyone got any other ideas?
 
 
 On Fri, 2004-02-06 at 14:09, gerald_clark wrote:
  Add a timestamp field to each table.
 
  Phil wrote:
 
  Hi,
  
  I have many smallish, discrete MySQL

databases, each of which I would

  like to backup individually (mysqldump seems

fine for this). However,

  there's no point re-backing up a database that

has not changed since the

  last time it was backed up. So how can I tell

if when a MySQL database

  was last modified, so that I can decide

whether to run mysqldump on it

  again or not? Any help with this would be much

appreciated.

  
  Thanks,
  Phil
  
  
   
  
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:   

 
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 __
 Do you Yahoo!?
 Yahoo! Finance: Get your refund fast by filing online.
 http://taxes.yahoo.com/filing.html
 







RE: SQL and productivity

2004-01-30 Thread emierzwa
One more consideration: if your 5 million records contain duplicates you
won't get the opportunity to load the table using REPLACE or INSERT
IGNORE without the UNIQUE index already in place. You would have to
distinct the rows first or manually remove the duplicates before
creating the index after load. If that's the case it would probably cost
you more time.

Ed

-Original Message-
From: Peter J Milanese [mailto:[EMAIL PROTECTED] 
Subject: RE: SQL and productivity

Heh..

I wasn't going by the docs when I responded... I had to reindex a 50+mil
row table... Not a good time :)  Then I started preparing the structure
a
bit before taking in the data. Seemed to work better in my circumstance.

P

-Dan Greene [EMAIL PROTECTED] wrote: -
From: Dan Greene [EMAIL PROTECTED]
Subject: RE: SQL and productivity

From what I've read ( I think in the MySQL docs, might have been here in
the list), technically it will take less time to add the indexes after
the
table creation, than the overhead of index updating per-insert.

Either way, it's gonna take a loong time.

 -Original Message-
 From: Peter J Milanese [mailto:[EMAIL PROTECTED]
 Subject: Re: SQL and productivity

 I'd start with the indexes in place. 5+mil records will take
 quite some
 time to index after the fact.

 P

 -Krasimir_Slaveykov [EMAIL PROTECTED] wrote: -
 From: Krasimir_Slaveykov [EMAIL PROTECTED]
 Subject: SQL and productivity

 Hello ,

 I must create a temp table with 5 or more million records.
 I need it for creating some reports, so I need to create indexes too.
 My question is:
 When total time for work will be smaller? :
 1-st case: I create table and create indexes immediately before
 filling data.
 2-nd case: I create table, filling data and then create indexes?
 --
 Best regards,
 Krasimir_Slaveykov
 mailto: [EMAIL PROTECTED] [EMAIL PROTECTED]





RE: Index on DATETIME column?

2004-01-22 Thread emierzwa
Paul wrote:

Is there such a thing as an index on the first 8 characters
of a DATETIME column?

No.  Indexes on partial column values are available only for string
types.


Won't PACK_KEYS=1 help in a situation like this?

From the Documentation:
6.5.3 CREATE TABLE Syntax
If you don't use PACK_KEYS, the default is to only pack strings, not
numbers. If you use PACK_KEYS=1, numbers will be packed as well. When
packing binary number keys, MySQL will use prefix compression. This
means that you will only get a big benefit from this if you have many
numbers that are the same. Prefix compression means that every key needs
one extra byte to indicate how many bytes of the previous key are the
same for the next key (note that the pointer to the row is stored in
high-byte-first order directly after the key, to improve compression).
This means that if you have many equal keys on two consecutive rows, all
following ``same'' keys will usually only take 2 bytes (including the
pointer to the row). 

Ed
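The prefix compression the quoted documentation describes can be sketched in a few lines: for sorted keys, store only how many leading bytes each key shares with the previous one, plus the differing suffix. This is an illustration of the idea, not MySQL's actual on-disk format (which also stores the row pointer and length bytes):

```python
def prefix_compress(keys):
    """Encode each key (assumed sorted) as (shared_prefix_len, suffix)
    relative to the previous key -- the idea behind PACK_KEYS prefix
    compression."""
    out, prev = [], ""
    for key in keys:
        shared = 0
        while shared < min(len(prev), len(key)) and prev[shared] == key[shared]:
            shared += 1
        out.append((shared, key[shared:]))
        prev = key
    return out

# Date-like keys compress well: long runs share most of their bytes.
keys = ["20040101", "20040102", "20040103", "20040215"]
print(prefix_compress(keys))
# [(0, '20040101'), (7, '2'), (7, '3'), (4, '0215')]
```

This also shows why the docs say the benefit depends on many equal (or near-equal) consecutive keys: the shared-prefix count does all the saving.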




RE: Efficient SQL Statement

2004-01-21 Thread emierzwa
You could just use the benchmark function?

select BENCHMARK(1000, 'dfsfsdfs' like 'F%' )   /* 0.45 sec. */
select BENCHMARK(1000, 'dfsfsdfs' between 'F' and 'Fzzz' ) /* 0.55
sec. */
select BENCHMARK(1000, left('dfsfsdfs',1)='F' ) /* 0.79 sec. */

The times go up a little if the strings are a match.

Ed

-Original Message-
From: Roger Baklund [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 21, 2004 1:11 PM
To: [EMAIL PROTECTED]
Cc: Hassan Shaikh
Subject: Re: Efficient SQL Statement


* Hassan Shaikh
 Which one of the following statements is more efficient?

 SELECT * FROM COUNTRY WHERE LEFT(CNNAME,1)='B';

 Or

 SELECT * FROM COUNTRY WHERE CNNAME LIKE 'B%';

The second statement will normally be the most effective, because the
server
doesn't need to perform a function on the column for each row. However,
the
LIKE operator is relatively heavy, the _most_ effective in this case
is
probably:

SELECT * FROM COUNTRY WHERE CNNAME>='B' AND CNNAME<'C'

 or

SELECT * FROM COUNTRY WHERE CNNAME BETWEEN 'B' AND 'Bzzz'

--
Roger







RE: Efficient SQL Statement

2004-01-21 Thread emierzwa
I'm on WinXP, 2.6ghz.  mysqld-nt Alpha 4.1.2

Either way, I was surprised to see LIKE among the top
performers and left() last.

Ed

---
* ed aka emierzwa at micron.com
 You could just use the benchmark function?

 select BENCHMARK(1000, 'dfsfsdfs' like 'F%' )   /* 0.45 sec. */
 select BENCHMARK(1000, 'dfsfsdfs' between 'F' and 'Fzzz' ) /* 0.55
 sec. */
 select BENCHMARK(1000, left('dfsfsdfs',1)='F' ) /* 0.79 sec. */

I get a different result. On my (slow) computer, the BETWEEN operator is
the
fastest:

mysql> select BENCHMARK(1000, 'dfsfsdfs' like 'F%' );
1 row in set (2.17 sec)

mysql> select BENCHMARK(1000, 'dfsfsdfs' between 'F' and 'Fzzz' );
1 row in set (2.06 sec)

mysql> select BENCHMARK(1000, left('dfsfsdfs',1)='F' );
1 row in set (2.68 sec)

mysql> select BENCHMARK(1000, 'dfsfsdfs' >= 'F' and 'dfsfsdfs' <=
'Fzzz' );
1 row in set (2.13 sec)


--
Roger




RE: spatial types

2004-01-19 Thread emierzwa
I tried it with an 854kb WKT block without any problems. A total of 64k
points in fact. Two things: did you correctly close the polygon (first
and last points must be the same)? Did you use the astext() function to
display the polygon in your select statement?

CREATE TABLE big (
g geometry NOT NULL default '',
spatial index (g(12))
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

Sample sql was 854k bytes (took 136 sec. to insert):
insert into big (g)
values(
GeomFromText('polygon((
1 1,
2 2,
3 3,
...
63999 63999,
64000 64000,
64000 1,
1 1
))')
)

The Query:
select astext(g) from big

Sample Results (took .5 sec. to return):
POLYGON((1 1,2 2,3 3,4 4,5 5,6 6,7 7,8 8,9 9,10 10,
11 11,12 12,13 13,14 14,15 15,16 16,17 17,18 18,19 19,

63998 63998,63999 63999,64000 64000,64000 1,1 1))

The big.MYD was 1meg in size.

Ed
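The "polygon must be closed" requirement is easy to enforce before the INSERT ever reaches the server. A small helper sketch (function names are mine, not from any library) that builds a single-ring WKT POLYGON and closes the ring if the caller forgot:

```python
def polygon_wkt(points):
    """Build a single-ring WKT POLYGON, closing the ring if needed."""
    pts = list(points)
    if pts[0] != pts[-1]:
        pts.append(pts[0])  # first and last point must be the same
    ring = ",".join(f"{x} {y}" for x, y in pts)
    return f"POLYGON(({ring}))"

def is_closed(wkt):
    """Check that a single-ring WKT polygon's first and last points match."""
    coords = wkt[len("POLYGON(("):-len("))")].split(",")
    return coords[0].strip() == coords[-1].strip()

wkt = polygon_wkt([(1, 1), (2, 2), (3, 3), (3, 1)])
print(wkt)             # POLYGON((1 1,2 2,3 3,3 1,1 1))
print(is_closed(wkt))  # True
```

An unclosed ring is exactly the kind of error that makes GeomFromText() quietly yield a NULL geometry, as reported further down the thread.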

-Original Message-
From: James S reid [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 13, 2004 5:59 AM
To: [EMAIL PROTECTED]
Subject: spatial types


hi - I've been playing with the OGC support for WKT but can't find data
type size constraints for GEOMETRY types. Does anybody know what they
are? I've a polygon with > 140,000 bytes as WKT but inserts all produce a
NULL geometry...

any ideas?

regards

Programming today is a race between software engineers striving 
to build bigger and better idiot-proof programs, and the Universe 
trying to produce bigger and better idiots. 
So far, the Universe is winning.

Rich Cook




RE: three highest and lowest column values

2003-12-30 Thread emierzwa
Using correlated subqueries on 4.1 you could do it all in sql:

SELECT ProductId, MarketId, Type, Price
FROM Products a
WHERE 3 >= (SELECT count(*)   --finds highest prices
  FROM Products b
  WHERE b.ProductId=a.ProductId
AND b.MarketId=a.MarketId
AND b.Type=a.Type
AND b.Price >= a.Price)
UNION   --UNION (not UNION ALL) removes duplicates between highest/lowest
SELECT ProductId, MarketId, Type, Price
FROM Products a
WHERE 3 >= (SELECT count(*)   --finds lowest prices
  FROM Products b
  WHERE b.ProductId=a.ProductId
AND b.MarketId=a.MarketId
AND b.Type=a.Type
AND b.Price <= a.Price)
ORDER BY ProductId, MarketId, Type, Price

Thanks,
Ed

Subject: Re: three highest and lowest column values
* Roland Niederbrucker
 from the following Product-table

 ProductId

 MarketId

 CustomerId

 Type

 Price

 ...

 i would like to get the three highest and lowest prices

 for each group of ProductId,MarketId  Type.

 How could i do that ???

Do you use a programming language, or are you looking for a pure SQL
solution? What version of mysql are you using? It is easy to do this
programmatically, the new mysql 5.0 with stored procedures could have
helped
you, the GROUP_CONCAT() function from 4.1 could be used, and unions from
4.0
maybe could be used too.

To do it programmatically, any mysql version (python-ish metacode):

rs = q("SELECT ProductId,MarketId,Type,Price FROM Products
  ORDER BY ProductId,MarketId,Type,Price DESC")
p = m = t = false
counter = 0
for row in rs:
  P,M,T = row["ProductId"],row["MarketId"],row["Type"]
  if ((P == p) and (M == m) and (T == t)):
    if counter < 3:
      print row["Price"],
    counter = counter + 1
  else:
    print "\n"+P,M,T,row["Price"],
    counter = 1
  p,m,t = P,M,T

That should output the three highest prices for each group, repeat
without
the DESC in the SELECT to get the three lowest prices.

--
Roger







RE: Solved - median (was Re: mean/median/mode)

2003-12-12 Thread emierzwa
Using your exact dataset listed at the bottom, I tried using derived
tables to combine your implementation under Alpha 4.1.1 as a single
statement. I expected a message saying it was too complex or out of some
resource, but the server goes to 100 percent cpu and I have to kill it. I
even got it to crash once. Funny thing is, after I kill it my
client (mysqlcc) gets the correct result set... go figure. Thanks for the
post, I really like your histogram example.
Using mysqld-nt Alpha 4.1.1 on WinXP, Dell 2.6ghz 600mb ram.

Thanks,
Ed

SELECT s.name, AVG(s.val) AS median
FROM (SELECT x.name, x.val
  FROM data x, data y
  WHERE x.name=y.name
  GROUP BY x.name, x.val
  HAVING SUM(y.val <= x.val) >= COUNT(*)/2
 AND SUM(y.val >= x.val) >= COUNT(*)/2) AS s
GROUP BY s.name

-Original Message-
On Friday, December 5, Robert Citek wrote:
 
 On Wednesday, December 3, 2003, at 06:27  PM, Robert Citek wrote:
 
 How can I calculate the mean/median/mode from a set of data using
SQL?
 
 
 After a bit of googling, I found this link:
   http://mysql.progen.com.tr/doc/en/Group_by_functions.html
 and a few answers in the comments towards the bottom.
 
 Below I've included a sample table and the solution I used to
calculate 
 the median.
 
 Regards,
 - Robert
 
 -
 
 DROP TABLE IF EXISTS data;
 
 CREATE TABLE data (
   name char(1) default NULL,
   val int(5) default NULL
 ) TYPE=MyISAM;
 
 INSERT INTO data VALUES
   ('a',2), ('a',2), ('a',2), ('a',2), ('a',20),
   ('b',4), ('b',4), ('b',4), ('b',4), ('b',40);
 
 CREATE TEMPORARY TABLE medians SELECT x.name, x.val
  FROM data x, data y
  WHERE x.name=y.name
  GROUP BY x.name, x.val
 HAVING ((COUNT(*)-(SUM(SIGN(1-SIGN(y.val-x.val))))) <=
floor((COUNT(*) 
 +1)/2)) and
  ((COUNT(*)-(SUM(SIGN(1+SIGN(y.val-x.val))))) <= floor((COUNT(*)
+1)/2));
 
 SELECT name, AVG(val) AS median FROM medians group by name;
 
  Output
 
 +------+--------+
 | name | median |
 +------+--------+
 | a    | 2.0000 |
 | b    | 4.0000 |
 +------+--------+

Robert,

I don't believe this is correct.  I think it only appears correct due to
the particular nature of your sample data.  Try it with different data 
to see what I mean.  For example,

DROP TABLE IF EXISTS data;

CREATE TABLE data (name char(1) default NULL, val int default NULL);

INSERT INTO data
VALUES ('a',1),  ('a',2),  ('a',2),  ('a',2),  ('a',3),  ('a',6),
('a',7),
('a',11), ('a',11), ('a',12), ('a',13), ('a',13), ('a',20);
INSERT INTO data VALUES ('b',2), ('b',3), ('b',4), ('b',5);

DROP TABLE IF EXISTS medians;

CREATE TEMPORARY TABLE medians SELECT x.name, x.val
FROM data x, data y
WHERE x.name=y.name
GROUP BY x.name, x.val
HAVING
((COUNT(*)-(SUM(SIGN(1-SIGN(y.val-x.val))))) <= floor((COUNT(*) +1)/2))
AND
((COUNT(*)-(SUM(SIGN(1+SIGN(y.val-x.val))))) <= floor((COUNT(*) +1)/2));

SELECT name, AVG(val) AS median FROM medians group by name;

 Output
+------+--------+
| name | median |
+------+--------+
| a    | 6.5000 |
| b    | 3.5000 |
+------+--------+

As there are an odd number of values with name = 'a', we should choose 
the middle value, 7, as the median, but we got 6.5.  (I'm assuming we 
agree that the definition of median is the middle value for N odd and 
the average of the 2 middle values for N even.)

I tried to see if I could tweak your query to get the right result, but,
between the complexity of your HAVING clause and the wrinkle of getting 
separate answers for each value in the name column, I just couldn't wrap
my head around it.  So, I went looking and found a supposed solution at 
http://www.oreilly.com/catalog/transqlcook/chapter/ch08.html.

It is overly complex, purports to get an answer in a single select, and 
is simply wrong in the case where there are an even number of values. 
(The author also gives a definition of statistical median, as opposed to
financial median, which doesn't match my Intro Stats book.)  But I 
understood what it was trying to do and came up with the following:

 To get the median of the values in a column:

DROP TABLE IF EXISTS medians;

CREATE TEMPORARY TABLE medians
SELECT x.val medians
FROM data x, data y
GROUP BY x.val
HAVING SUM(y.val <= x.val) >= COUNT(*)/2
AND SUM(y.val >= x.val) >= COUNT(*)/2;

SELECT AVG(medians) AS median FROM medians;

 Output using my sample data given above
+--------+
| median |
+--------+
| 5.0000 |
+--------+


 To get the median of the values in a column for each value in
 another column:

DROP TABLE IF EXISTS medians;

CREATE TEMPORARY TABLE medians
SELECT x.name, x.val medians
FROM data x, data y
WHERE x.name=y.name
GROUP BY x.name, x.val
HAVING SUM(y.val <= x.val) >= COUNT(*)/2
AND SUM(y.val >= x.val) >= COUNT(*)/2;

SELECT name, AVG(medians) AS median FROM medians GROUP BY name;

 Output using my sample data given above
+------+--------+
| name | median |
+------+--------+
| a    | 7.0000 |
| b    | 3.5000 |
+------+--------+
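The definition being used here (middle value for odd N, average of the two middle values for even N) is easy to cross-check against the SQL in plain Python, using the sample data from this message:

```python
def median(values):
    """Middle value for odd N, mean of the two middle values for even N --
    the definition the message settles on."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return float(ordered[mid])
    return (ordered[mid - 1] + ordered[mid]) / 2.0

# The sample data from this message, grouped by name:
a = [1, 2, 2, 2, 3, 6, 7, 11, 11, 12, 13, 13, 20]
b = [2, 3, 4, 5]
print(median(a))  # 7.0
print(median(b))  # 3.5
```

These match the per-name medians the corrected SQL produces above, which is the point: an external oracle like this makes it much easier to spot when a clever single-statement SQL median only looks right for one dataset.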

I've tested these with several different data sets, so I'm fairly