Re: How to delete all rows....

2006-09-26 Thread Chris

mos wrote:

At 08:41 PM 9/25/2006, you wrote:

Dilipkumar wrote:

Hi,
Will a DELETE FROM table do? Or, if you go for a TRUNCATE, will it
recreate the table structure?

It's better to use delete.


Can you explain why? I'd go for instant truncate rather than waiting 
around for delete to finish.


The drawback of truncate is it requires a table lock and if people are 
updating the table, the truncate will have to wait for the locks to 
complete.


So will a DELETE FROM table when it's MyISAM.


There is also RI (referential integrity) to worry about.


Good point.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Best MySQL configuration

2006-09-26 Thread Sayed Hadi Rastgou Haghi

Dear All,
What is the best configuration for a server with 2 XEON dual core CPUs,
4GB RAM and a 200GB RAID 5 hard disk?

--
Sincerely,
Hadi Rastgou


Re: Best MySQL configuration

2006-09-26 Thread Jangita @ FSA

Is it dedicated to MySQL only? What operating system does it run?

Sayed Hadi Rastgou Haghi wrote:

Dear All,
What is the best configuration for a server with 2 XEON dual core CPU and
4GB Ram and 200GB RAID 5 hard?



--
PO Box 26453 00504
Nairobi, Kenya.
TEL: +254722996532
[EMAIL PROTECTED]




Requesting help with subquery

2006-09-26 Thread Zembower, Kevin
I'm trying to use a subquery for the first time, and am having some
problems. I'm hoping someone can give me some suggestions on ways of
doing what I want, and further suggestions if there's a better way of
going about my task.

I have a database of publications in different languages. Main
categories are organized into subcategories, which contain baseitems
of publications. Each baseitem can be printed in one or more of six
languages. My SQL query so far is:

SELECT
   m.title AS 'Main Category',
   s.title AS 'Sub Category',
   b.partno AS 'Part Number',
   (SELECT lv.title
  FROM langversion AS lv
  JOIN baseitem AS b2 ON lv.baseitemid = b2.baseitemid
  WHERE lv.langid = 1 # English = 1
  AND b.baseitemid=lv.baseitemid
   ) AS 'English Title',
   IF(ISNULL(SELECT lv.langversionid
  FROM langversion AS lv
  JOIN baseitem AS b3 ON lv.baseitemid =
b3.baseitemid
  WHERE lv.langid = 1
  AND b.baseitemid = lv.baseitemid
  )), 'Y', 'N') AS 'Lang Avail'
FROM maincategory AS m
JOIN subcategory AS s ON m.maincatid=s.maincatid
JOIN baseitem AS b ON s.subcatid=b.subcatid
WHERE
   b.available = 'Y'
ORDER BY m.title, s.title;

This gives me an error at line 11, at IF(ISNULL(SELECT ... It should
give me a 'Y' if the English version exists, and an 'N' otherwise.

If I modify it like this, it works:

SELECT 
   m.title AS 'Main Category',
   s.title AS 'Sub Category',
   b.partno AS 'Part Number',
   (SELECT lv.title
  FROM langversion AS lv 
  JOIN baseitem AS b2 ON lv.baseitemid = b2.baseitemid
  WHERE lv.langid = 1
  AND b.baseitemid=lv.baseitemid
   ) AS 'English Title',
   (SELECT lv.langversionid 
  FROM langversion AS lv
  JOIN baseitem AS b3 ON lv.baseitemid =
b3.baseitemid
  WHERE lv.langid = 1 
  AND b.baseitemid = lv.baseitemid
   ) AS 'Lang Avail'
FROM maincategory AS m 
JOIN subcategory AS s ON m.maincatid=s.maincatid 
JOIN baseitem AS b ON s.subcatid=b.subcatid 
WHERE 
   b.available = 'Y'
ORDER BY m.title, s.title;

I think this demonstrates that the two subqueries are working.

What I'd ultimately like to do is produce a string like YNNYYN where Y
is printed if the language version of the baseitem exists (is not
null?). I was going to do this by creating a SELECT subquery for each
language version possible and CONCAT the Y or N together.

Can anyone suggest where I'm going wrong in my attempts? Is there a
better way overall to produce this information?

Thanks in advance for your help and suggestions.

-Kevin

Kevin Zembower
Internet Services Group manager
Center for Communication Programs
Bloomberg School of Public Health
Johns Hopkins University
111 Market Place, Suite 310
Baltimore, Maryland  21202
410-659-6139 




Re: Requesting help with subquery

2006-09-26 Thread Johan Höök

Hi Kevin,
I didn't look that closely at it, but it
should be IFNULL, not ISNULL, which is SQL Server's
version of it.

/Johan
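For what it's worth, the per-language 'Y'/'N' flag can also be written without the unbalanced ISNULL(SELECT ...) call by testing the subquery with EXISTS. A minimal sketch, using Python's sqlite3 as a stand-in for MySQL; the table and column names follow the thread, the sample data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE baseitem (baseitemid INTEGER PRIMARY KEY, partno TEXT);
CREATE TABLE langversion (langversionid INTEGER PRIMARY KEY,
                          baseitemid INTEGER, langid INTEGER, title TEXT);
INSERT INTO baseitem VALUES (1, 'P-001'), (2, 'P-002');
INSERT INTO langversion VALUES (10, 1, 1, 'English title');
""")

# EXISTS sidesteps the ISNULL(SELECT ...) parenthesis problem entirely:
# each per-language flag is a self-contained 'Y'/'N' expression, so six
# of them can later be concatenated into the YNNYYN string Kevin wants.
rows = cur.execute("""
SELECT b.partno,
       CASE WHEN EXISTS (SELECT 1 FROM langversion lv
                         WHERE lv.baseitemid = b.baseitemid
                           AND lv.langid = 1)            -- English = 1
            THEN 'Y' ELSE 'N' END AS lang_avail
FROM baseitem b
ORDER BY b.baseitemid
""").fetchall()
print(rows)  # [('P-001', 'Y'), ('P-002', 'N')]
```

In MySQL the same CASE WHEN EXISTS expression works inside the SELECT list, and CONCAT() over one such expression per langid builds the availability string.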

Zembower, Kevin skrev:

I'm trying to use a subquery for the first time, and am having some
problems. I'm hoping someone can give me some suggestions on ways of
doing what I want, and further suggestions if there's a better way of
going about my task.

I have a database of publications in different languages. main
categories are organized into sub categories with baseitems of
publications. Each baseitem can be printed in one or more of six
languages. My SQL query so far is:

SELECT
   m.title AS Main Category,
   s.title AS Sub Category,
   b.partno AS Part Number,
   (SELECT lv.title
  FROM langversion AS lv
  JOIN baseitem AS b2 ON lv.baseitemid = b2.baseitemid
  WHERE lv.langid = 1 # English = 1
  AND b.baseitemid=lv.baseitemid
   ) as English Title,
   IF(ISNULL(SELECT lv.langversionid
  FROM langversion AS lv
  JOIN baseitem AS b3 ON lv.baseitemid =
b3.baseitemid
  WHERE lv.langid = 1
  AND b.baseitemid = lv.baseitemid
 )), 'Y', 'N') AS Lang Avail
FROM maincategory AS m
JOIN subcategory AS s ON m.maincatid=s.maincatid
JOIN baseitem AS b ON s.subcatid=b.subcatid
WHERE
   b.available = Y
ORDER BY m.title, s.title;

This gives me an error at line 11, IF(ISNULL(SELECT  This should
give me a Y if the English version exists, and a N otherwise.

If I modify it like this, it works:

SELECT 
   m.title AS Main Category, 
   s.title AS Sub Category, 
   b.partno AS Part Number,

   (SELECT lv.title
  FROM langversion AS lv 
  JOIN baseitem AS b2 ON lv.baseitemid = b2.baseitemid

  WHERE lv.langid = 1
  AND b.baseitemid=lv.baseitemid
   ) as English Title,
   (SELECT lv.langversionid 
  FROM langversion AS lv

  JOIN baseitem AS b3 ON lv.baseitemid =
b3.baseitemid
  WHERE lv.langid = 1 
  AND b.baseitemid = lv.baseitemid

   ) AS Lang Avail
FROM maincategory AS m 
JOIN subcategory AS s ON m.maincatid=s.maincatid 
JOIN baseitem AS b ON s.subcatid=b.subcatid 
WHERE 
   b.available = Y 
ORDER BY m.title, s.title;


I think this demonstrates that the two subqueries are working.

What I'd ultimately like to do is produce a string like YNNYYN where Y
is printed if the language version of the baseitem exists (is not
null?). I was going to do this by creating a SELECT subquery for each
language version possible and CONCAT the Y or N together.

Can anyone suggest where I'm going wrong in my attempts? Is there a
better way overall to produce this information?

Thanks in advance for your help and suggestions.

-Kevin

Kevin Zembower
Internet Services Group manager
Center for Communication Programs
Bloomberg School of Public Health
Johns Hopkins University
111 Market Place, Suite 310
Baltimore, Maryland  21202
410-659-6139 





Re: Mysql and Perl

2006-09-26 Thread Jørn Dahl-Stamnes
On Monday 25 September 2006 20:05, you wrote:
 there are two pieces -- dbi, and dbd::mysql. you installed the former
 but not the latter.

 go to:

   http://cpan.perl.org

 and search for mysql ...

Found it, but it won't install:

# perl Makefile.PL
I will use the following settings for compiling and testing:

  cflags        (mysql_config) = -I/usr/include/mysql -g -pipe 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 
-fasynchronous-unwind-tables -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 
-D_LARGEFILE_SOURCE -fno-strict-aliasing
  embedded      (mysql_config) =
  libs          (mysql_config) = -L/usr/lib/mysql -lmysqlclient -lz -lcrypt 
-lnsl -lm -L/usr/lib -lssl -lcrypto
  mysql_config  (guessed     ) = mysql_config
  nocatchstderr (default     ) = 0
  nofoundrows   (default     ) = 0
  ssl           (guessed     ) = 1
Use of uninitialized value in printf at Makefile.PL line 172, PIPE line 103.
  test_user     (            ) =
  testdb        (default     ) = test
  testhost      (default     ) =
  testpassword  (default     ) =
  testsocket    (default     ) =
  testuser      (default     ) =

To change these settings, see 'perl Makefile.PL --help' and
'perldoc INSTALL'.

Checking if your kit is complete...
Looks good
Multiple copies of Driver.xst found 
in: /usr/lib64/perl5/site_perl/5.8.6/x86_64-linux-thread-multi/auto/DBI/ 
/usr/lib64/perl5/vendor_perl/5.8.6/x86_64-linux-thread-multi/auto/DBI/ 
at Makefile.PL line 724
Using DBI 1.52 (for perl 5.008006 on x86_64-linux-thread-multi) installed 
in /usr/lib64/perl5/site_perl/5.8.6/x86_64-linux-thread-multi/auto/DBI/
Writing Makefile for DBD::mysql

# make
cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
cp lib/Mysql.pm blib/lib/Mysql.pm
cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
cp lib/Mysql/Statement.pm blib/lib/Mysql/Statement.pm
cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
gcc -c  -I/usr/lib64/perl5/site_perl/5.8.6/x86_64-linux-thread-multi/auto/DBI 
-I/usr/include/mysql -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 
-march=i386 -mtune=pentium4 -fasynchronous-unwind-tables -D_GNU_SOURCE 
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing 
-DDBD_MYSQL_WITH_SSL -DDBD_MYSQL_INSERT_ID_IS_GOOD -g  -D_REENTRANT 
-D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include 
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -mtune=nocona   
-DVERSION=\3.0007\ -DXS_VERSION=\3.0007\ -fPIC 
-I/usr/lib64/perl5/5.8.6/x86_64-linux-thread-multi/CORE   dbdimp.c
dbdimp.c:1: error: CPU you selected does not support x86-64 instruction set
make: *** [dbdimp.o] Error 1

# uname -a
Linux hostname 2.6.11-1.1369_FC4 #1 Thu Jun 2 22:56:33 EDT 2005 x86_64 
x86_64 x86_64 GNU/Linux

It seems like it does not accept my 64-bit processor as a 64-bit processor...

I also got some hints about using cpan (perl -MCPAN -e shell), but I felt like a 
prisoner at Guantanamo being interrogated by the CIA... At one point I had 
no answer, so I could not continue.
-- 
Jørn Dahl-Stamnes
homepage: http://www.dahl-stamnes.net/dahls/




Re: Help with query

2006-09-26 Thread Visolve DB Team
Hi,

Try a FULLTEXT search.  Alter the table to make the search columns 
FULLTEXT columns (MyISAM engine) and retrieve the records using the 
MATCH keyword.

Ref:http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html

Thanks,
ViSolve DB Team.

- Original Message - 
From: Ed Curtis [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Monday, September 25, 2006 11:47 PM
Subject: Help with query


 
 I'm trying to do a keyword search within a phrase saved in a table.
 
 Here's the query:
 
 SELECT * from closedtickets WHERE
 
 keyphrase LIKE '%$keyword1%'
 
 OR keyphrase LIKE '%$keyword2%'
 
 OR keyphrase LIKE '%$keyword3%'
 
 The problem I'm having is that the query is returning every record in the
 table. I only want it to return the records where the keywords (in any
 combination) are contained within 'keyphrase'. Any ideas?
 
 Thanks,
 
 Ed
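One thing worth checking besides FULLTEXT: if any of the $keyword variables is empty, its pattern becomes LIKE '%%', which matches every row. A sketch (Python with SQLite as a stand-in; table name from the thread, sample data invented) that builds the OR list only from non-empty keywords:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE closedtickets (id INTEGER PRIMARY KEY, keyphrase TEXT);
INSERT INTO closedtickets VALUES (1, 'printer jam'), (2, 'network down');
""")

keywords = ["printer", "", ""]       # empty slots must not become LIKE '%%'
terms = [kw for kw in keywords if kw]  # drop empties before building SQL
if not terms:
    raise ValueError("no keywords given")

where = " OR ".join("keyphrase LIKE ?" for _ in terms)
params = ["%" + kw + "%" for kw in terms]
rows = cur.execute(
    "SELECT id FROM closedtickets WHERE " + where, params).fetchall()
print(rows)  # [(1,)]
```

The same guard applies verbatim when interpolating $keyword1..$keyword3 into the PHP-built query.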
 
 
 


Slow INSERT, optimization

2006-09-26 Thread malinux
Hello group,
I am currently parsing an XML file and want to write it into a database.
The situation: I read and generate the INSERTs via PHP and append them
to an array.
Once parsing is finished, I want to write the array to the database:

CREATE TABLE `result` (
  `uID` varchar(7) collate latin1_german1_ci NOT NULL,
  `cID` smallint(3) NOT NULL default '0',
  `sID` smallint(3) NOT NULL default '0',
  `matchday` smallint(2) NOT NULL default '0',
  `matchperiod` varchar(15) collate latin1_german1_ci default NULL,
  `date` datetime default NULL,
  `stattype` varchar(10) collate latin1_german1_ci default NULL,
  `stattypedata` varchar(35) collate latin1_german1_ci default NULL,
  `awayscore` smallint(2) NOT NULL default '0',
  `awayside` varchar(10) collate latin1_german1_ci NOT NULL,
  `awayteamref` varchar(10) collate latin1_german1_ci NOT NULL,
  `homescore` smallint(2) NOT NULL default '0',
  `homeside` varchar(10) collate latin1_german1_ci NOT NULL,
  `hometeamref` varchar(10) collate latin1_german1_ci NOT NULL,
  PRIMARY KEY  (`uID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_german1_ci
COMMENT='Results';

Sample row:
INSERT INTO `opt_result` (`uID`, `cID`, `sID`, `matchday`, `matchperiod`,
`date`, `stattype`, `stattypedata`, `awayscore`, `awayside`, `awayteamref`,
`homescore`, `homeside`, `hometeamref`) VALUES
('g46282', 87, 7, 1, 'FullTime', '2006-08-13 13:00:00', 'Venue', 'MSV-Arena', 1,
'Away', 't2012', 2, 'Home', 't751')


The problem is that there are 300 rows and my MySQL 5.0.18 needs
0.05 - 0.08 seconds per row, so I end up with a total time of
~15 seconds.
This time has to be reduced; ~3 seconds would still be acceptable.
Any ideas or MySQL optimizations?

Regards






odd behaviour with auto_increment

2006-09-26 Thread Jorrit Kronjee
Dear list,

I discovered something that seems to be odd behaviour.

I have a basic table with one column set to auto_increment:

mysql> DESCRIBE basic_table;
+---------------+-------------+------+-----+---------+----------------+
| Field         | Type        | Null | Key | Default | Extra          |
+---------------+-------------+------+-----+---------+----------------+
| Id            | int(11)     |      | PRI | NULL    | auto_increment |
| simple_column | varchar(11) |      |     |         |                |
+---------------+-------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

One of the values of `Id' was later changed from 1 to 0, resulting in
the following output.

mysql> SELECT * FROM basic_table;
+----+---------------+
| Id | simple_column |
+----+---------------+
|  0 | foo           |
|  2 | bar           |
+----+---------------+
2 rows in set (0.00 sec)


Now if I create a dump of the table, I get the following queries (and a
few more uninteresting locking queries, which I won't paste here):

CREATE TABLE `basic_table` (
  `Id` int(11) NOT NULL auto_increment,
  `simple_column` varchar(11) NOT NULL default '',
  PRIMARY KEY  (`Id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

INSERT INTO `basic_table` VALUES (0,'foo'),(2,'bar');

If I run this dump from the MySQL CLI with '\. dump.sql' I get the same
table as with the previous SELECT.

However, if I run these two commands from the dump directly on the mysql
CLI, the table looks as follows:

mysql> SELECT * FROM basic_table;
+----+---------------+
| Id | simple_column |
+----+---------------+
|  1 | foo           |
|  2 | bar           |
+----+---------------+
2 rows in set (0.00 sec)

Apparently MySQL thought I didn't know an Id value for the row with 'foo'
(because it was zero), so it made one up. This is kind of odd, because a
dump should always regenerate the same dataset. Am I doing anything
wrong? Is this a known bug?

- Jorrit

P.S. For this test, I've used MySQL version 4.1.21-log.

-- 
System Developer

Infopact Network Solutions
Hoogvlietsekerkweg 170
3194 AM  Rotterdam Hoogvliet
tel. +31 (0)88 - 4636700
fax. +31 (0)88 - 4636799
mob. +31 (0)6 - 14105968
[EMAIL PROTECTED]
http://www.infopact.nl/





Hanging batched statement

2006-09-26 Thread Andreas Schlicker

Hi all,

I have a Java application that fills a MyISAM table with a batched statement. 
Sometimes the statement does not finish, and it is not possible to kill it. 
Killing the application does not work, and killing the statement with 
MySQL Administrator does not help either. The only thing I can do is restart 
the MySQL server.


Has anybody had similar problems? I'm running MySQL 5.0.24a on Debian Linux 
with Kernel 2.6.13. I use Java 1.5.0_07 with MySQL Connector 5.0.3. In the 
connection URL I use the parameters:


cachePrepStmts=true&rewriteBatchedStatements=true

Could this be a problem with rewriteBatchedStatements? Any ideas?

Thanks,
Andreas




Help for query

2006-09-26 Thread Xiaobo Chen
Hi, all

The version of MySQL is 4.0.15 on Solaris 9. I am in the following situation.
There are 2 tables something like these:

Table A:
--
location|timestamp | other fields
---


Table B


location|timestamp | other fields
-

(location, timestamp) makes the primary key for each table.

The task: the locations are the same; given a timestamp from table A, I
need to find the record in table B whose timestamp is closest to the
given one.

I checked the book and researched the internet, but didn't find a hint
or a solution yet. Could anyone give me a hint, please?

Thanks in advance.

Xiaobo





Re: Slow INSERT, optimization

2006-09-26 Thread Dominik Klein

This time has to be reduced; ~3 seconds would still be acceptable.
Any ideas or MySQL optimizations?


You can try inserting all rows in one statement.

Like this, roughly:
insert into test values (1),(2),(3),...,(n);
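The multi-row form can be sketched like this (Python with SQLite as a stand-in; the table is simplified from the thread's CREATE TABLE, the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE result (uID TEXT PRIMARY KEY, homescore INT, awayscore INT)")

rows = [("g%05d" % i, i % 5, i % 3) for i in range(300)]

# One INSERT with 300 value tuples instead of 300 single-row statements:
# the per-statement overhead (parse, round trip, autocommit) is paid once
# instead of 300 times.
placeholders = ",".join(["(?,?,?)"] * len(rows))
flat = [v for row in rows for v in row]
cur.execute("INSERT INTO result VALUES " + placeholders, flat)
conn.commit()

count = cur.execute("SELECT COUNT(*) FROM result").fetchone()[0]
print(count)  # 300
```

With InnoDB the effect is even stronger, because each single-row autocommitted INSERT also pays a transaction commit; wrapping the 300 inserts in one explicit transaction helps for the same reason.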




Re: Updating two fields from an aggregate query

2006-09-26 Thread Dan Buettner

Robert, you might give insert ... select ... on duplicate key update a try:
http://dev.mysql.com/doc/refman/5.0/en/insert-select.html

something like this (untested):

INSERT INTO parent (id, maxChildAge, childCount)
SELECT parentid, MAX(age) as maxAge, COUNT(*) as ct
FROM child
WHERE parentid IN (1, 2, 3, 4)
GROUP BY parentid
ON DUPLICATE KEY UPDATE maxChildAge=maxAge, childCount=ct

That seems like it ought to work as long as the id column in the
parent table is a unique key.

One consideration is that if you are writing this query
programmatically and using a long list of parentid values in the IN
clause, the SQL could get pretty long, perhaps too long for the
default value of MAX_ALLOWED_PACKET.  You can adjust that up pretty
easily, but bear in mind you need to adjust it for both client and
server.
http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

HTH,
Dan
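A sketch of Dan's pattern using Python's sqlite3 as a stand-in (SQLite 3.24+ spells MySQL's ON DUPLICATE KEY UPDATE as ON CONFLICT ... DO UPDATE; table and column names follow the thread, the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY, maxChildAge INT, childCount INT);
CREATE TABLE child  (parentid INT, age INT);
INSERT INTO parent VALUES (1, 0, 0), (2, 0, 0);
INSERT INTO child VALUES (1, 7), (1, 9), (2, 4);
""")

# Aggregate once per parent and upsert the result into the parent row,
# replacing the per-parent SELECT + UPDATE loop with a single statement.
cur.execute("""
INSERT INTO parent (id, maxChildAge, childCount)
SELECT parentid, MAX(age), COUNT(*) FROM child
WHERE parentid IN (1, 2) GROUP BY parentid
ON CONFLICT(id) DO UPDATE SET
    maxChildAge = excluded.maxChildAge,
    childCount  = excluded.childCount
""")
result = cur.execute("SELECT * FROM parent ORDER BY id").fetchall()
print(result)  # [(1, 9, 2), (2, 4, 1)]
```

The GROUP BY is essential: without it the SELECT collapses all matched children into a single aggregate row instead of one row per parent.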

On 9/26/06, Robert DiFalco [EMAIL PROTECTED] wrote:

I have two tables that are related:

Parent
   LONG id
   LONG childCount
   LONG maxChildAge
   ...

Child
   LONG parentId
   LONG age
   ...

There can be thousands of parents and millions of children, that is why
I have denormalized childCount and maxChildAge. The values are too
expensive to calculate each time the data is viewed so I update these
values each time a Child is added, removed, or modified.

I currently have to update the Parent table with two queries like so:

   SELECT MAX( Child.age ), COUNT(*)
   FROM Child
   WHERE parentID = x;

   UPDATE Parent
   SET maxChildAge = MAX, childCount = COUNT
   WHERE id = x;

Worse yet I might be updating the stats for several hundred Parents at a
time, so I have to loop through the above where x is the current
Parent.id in the batch. What I would like to do is something like the
following (made up syntax):

   UPDATE Parent
   SET maxChildAge AND childCount = COUNT
   WITH ( SELECT MAX( Child.age ), COUNT(*)
  FROM Child
WHERE parentID = Parent.id )
   WHERE id IN ( set_of_parents_to_update );

Any suggestions?

TIA

R.





RE: Help for query

2006-09-26 Thread Xiaobo Chen
Yes, you're right, Jerry.

The thing is that I can NOT use 'min_t' in query 2); I have to plug in
the actual value from query 1). This isn't what I want.

If I tried this:

1) create temporary table tbl_min as (select min(abs(Ta - timestamp)) as
min_t from B);

so tbl_min is like:

min_t   |

0.00012 |
-

2) then, I say:

  select min_t, * from tbl_min, B where (timestamp + min_t) = Ta or
(timestamp - min_t) = Ta;

then I will get the record as:

--
min_t | other fields
---

you see, the field 'min_t' isn't necessary there.

Wait, if I say:

select * from tbl_min, B where (timestamp + min_t) = Ta or (timestamp -
min_t) = Ta;

Yes, I got the result without 'min_t' in it.

But this solution still uses more than one query and a temporary table
to hold the intermediate value.

Any better solution?

Thanks.
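Assuming numeric timestamps, the two steps can usually be collapsed into one query by ordering on the absolute difference and taking the first row. A sketch with Python's sqlite3 as a stand-in (table layout simplified from the thread; the same ORDER BY ... LIMIT 1 form is accepted by MySQL 4.0):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE B (location TEXT, timestamp REAL, data TEXT)")
cur.executemany("INSERT INTO B VALUES (?, ?, ?)",
                [("loc1", 10.0, "a"), ("loc1", 12.5, "b"), ("loc1", 99.0, "c")])

ta = 12.0  # the timestamp taken from table A

# Sort table B by distance to Ta and keep the closest row: one query,
# no temporary table, no intermediate min_t value.
row = cur.execute("""
SELECT * FROM B WHERE location = ?
ORDER BY ABS(timestamp - ?) LIMIT 1
""", ("loc1", ta)).fetchone()
print(row)  # ('loc1', 12.5, 'b')
```

Note this scans and sorts all rows for the location, so on large tables the two-step MIN() approach may still be cheaper; measure both.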



 Your first query returns a scalar (single value), right? You can put its
 value into a variable, and use that in the second query. It's not exactly
 what you wanted, but it will work without external programming.


 Regards,

 Jerry Schwartz
 Global Information Incorporated
 195 Farmington Ave.
 Farmington, CT 06032

 860.674.8796 / FAX: 860.674.8341

 -Original Message-
 From: Xiaobo Chen [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, September 26, 2006 10:09 AM
 To: [EMAIL PROTECTED]
 Cc: mysql@lists.mysql.com
 Subject: Re: Help for query

 I found if I divided into 2 steps, I will find the record in table B:

 Ta - the given timestamp from table A;

 1) select min(abs(Ta - timestamp)) as min_t from B;

 2) select * from B where (timestamp + min_t = Ta) or (timestamp - min_t =
 Ta);

 But, how can I make these 2 steps into 1 query?

 Thanks.

 Xiaobo

 Hi, all

 The version of Mysql is 4.0.15 on Solaris 9. I am in such a situation.
 There are 2 tables something like these:

 Table A:
 --
 location|timestamp | other fields
 ---


 Table B

 
 location|timestamp | other fields
 -

 (location. timestamp) make the 'primary key' for each table.

 The task is that: the locations are the same, given a timestamp from
 table
 A, I need to find the record in table B which has the closest timestamp
 as
 the given one in table A.

 I checked the book and research the internet, but didn't find a hint or
 solution yet. Could any one give me a hint please?

 Thanks in advance.

 Xiaobo





Updating two fields from an aggregate query

2006-09-26 Thread Robert DiFalco
I have two tables that are related:

Parent
   LONG id
   LONG childCount
   LONG maxChildAge
   ...

Child
   LONG parentId
   LONG age
   ...

There can be thousands of parents and millions of children, that is why
I have denormalized childCount and maxChildAge. The values are too
expensive to calculate each time the data is viewed so I update these
values each time a Child is added, removed, or modified.

I currently have to update the Parent table with two queries like so:

   SELECT MAX( Child.age ), COUNT(*) 
   FROM Child 
   WHERE parentID = x;

   UPDATE Parent
   SET maxChildAge = MAX, childCount = COUNT
   WHERE id = x;

Worse yet I might be updating the stats for several hundred Parents at a
time, so I have to loop through the above where x is the current
Parent.id in the batch. What I would like to do is something like the
following (made up syntax):

   UPDATE Parent
   SET maxChildAge AND childCount = COUNT
   WITH ( SELECT MAX( Child.age ), COUNT(*) 
  FROM Child 
WHERE parentID = Parent.id )
   WHERE id IN ( set_of_parents_to_update );

Any suggestions?

TIA

R.  





Re: Slow INSERT, optimization

2006-09-26 Thread malinux
Hello,
that already helped; until now I was not aware that the differences
are so huge. Are there any other options?
- From the scripting side this is not exactly optimal
- How many rows can it be at most, or how many should it be?

Regards

Quoting Dominik Klein [EMAIL PROTECTED]:

  This time has to be reduced; ~3 seconds would still be acceptable.
  Any ideas or MySQL optimizations?

 You can try inserting all rows in one statement.

 Like this, roughly:
 insert into test values (1),(2),(3),...,(n);







Re: odd behaviour with auto_increment

2006-09-26 Thread Dan Buettner

Jorrit, it's a known behavior, not a bug.

Recent versions of MySQL will, when given a zero (0) as a value for an
auto incrementing identity column, simply fill in the next auto
incrementing value ... unless you flip a switch to specifically tell
it not to do that ... see:
http://dev.mysql.com/doc/refman/4.1/en/server-sql-mode.html and look
for NO_AUTO_VALUE_ON_ZERO

HTH,
Dan



On 9/26/06, Jorrit Kronjee [EMAIL PROTECTED] wrote:

Dear list,

I discovered something that seems to be odd behaviour.

I have a basic table with one column set to auto_increment:

mysql> DESCRIBE basic_table;
+---------------+-------------+------+-----+---------+----------------+
| Field         | Type        | Null | Key | Default | Extra          |
+---------------+-------------+------+-----+---------+----------------+
| Id            | int(11)     |      | PRI | NULL    | auto_increment |
| simple_column | varchar(11) |      |     |         |                |
+---------------+-------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

One of the values of `Id' was later changed from 1 to 0, resulting in
the following output.

mysql> SELECT * FROM basic_table;
+----+---------------+
| Id | simple_column |
+----+---------------+
|  0 | foo           |
|  2 | bar           |
+----+---------------+
2 rows in set (0.00 sec)


Now if I create a dump of the table, I get the following queries (and a
few more uninteresting locking queries, which I won't paste here):

CREATE TABLE `basic_table` (
  `Id` int(11) NOT NULL auto_increment,
  `simple_column` varchar(11) NOT NULL default '',
  PRIMARY KEY  (`Id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

INSERT INTO `basic_table` VALUES (0,'foo'),(2,'bar');

If I run this dump from the MySQL CLI with '\. dump.sql' I get the same
table as with the previous SELECT.

However, if I run these two commands from the dump directly on the mysql
CLI, the table looks as follows:

mysql> SELECT * FROM basic_table;
+----+---------------+
| Id | simple_column |
+----+---------------+
|  1 | foo           |
|  2 | bar           |
+----+---------------+
2 rows in set (0.00 sec)

Apparently MySQL thought I didn't know a Id-value for the row with 'foo'
(because it was zero), so it made one up. This is kind of odd, because a
dump query should always generate the same dataset. Am I doing anything
wrong? Is this a known bug?

- Jorrit

P.S. For this test, I've used MySQL version 4.1.21-log.

--
System Developer

Infopact Network Solutions
Hoogvlietsekerkweg 170
3194 AM  Rotterdam Hoogvliet
tel. +31 (0)88 - 4636700
fax. +31 (0)88 - 4636799
mob. +31 (0)6 - 14105968
[EMAIL PROTECTED]
http://www.infopact.nl/





Re: Help for query

2006-09-26 Thread Xiaobo Chen
I found that if I divide it into 2 steps, I can find the record in table B:

Ta - the given timestamp from table A;

1) select min(abs(Ta - timestamp)) as min_t from B;

2) select * from B where (timestamp + min_t = Ta) or (timestamp - min_t =
Ta);

But, how can I make these 2 steps into 1 query?

Thanks.

Xiaobo

 Hi, all

 The version of Mysql is 4.0.15 on Solaris 9. I am in such a situation.
 There are 2 tables something like these:

 Table A:
 --
 location|timestamp | other fields
 ---


 Table B

 
 location|timestamp | other fields
 -

 (location. timestamp) make the 'primary key' for each table.

 The task is that: the locations are the same, given a timestamp from table
 A, I need to find the record in table B which has the closest timestamp as
 the given one in table A.

 I checked the book and research the internet, but didn't find a hint or
 solution yet. Could any one give me a hint please?

 Thanks in advance.

 Xiaobo





RE: Help for query

2006-09-26 Thread Jerry Schwartz
Your first query returns a scalar (single value), right? You can put its
value into a variable, and use that in the second query. It's not exactly
what you wanted, but it will work without external programming.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

-Original Message-
From: Xiaobo Chen [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 26, 2006 10:09 AM
To: [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Subject: Re: Help for query

I found if I divided into 2 steps, I will find the record in table B:

Ta - the given timestamp from table A;

1) select min(abs(Ta - timestamp)) as min_t from B;

2) select * from B where (timestamp + min_t = Ta) or (timestamp - min_t =
Ta);

But, how can I make these 2 steps into 1 query?

Thanks.

Xiaobo

 Hi, all

 The version of Mysql is 4.0.15 on Solaris 9. I am in such a situation.
 There are 2 tables something like these:

 Table A:
 --
 location|timestamp | other fields
 ---


 Table B

 
 location|timestamp | other fields
 -

 (location. timestamp) make the 'primary key' for each table.

 The task is that: the locations are the same, given a timestamp from table
 A, I need to find the record in table B which has the closest timestamp as
 the given one in table A.

 I checked the book and research the internet, but didn't find a hint or
 solution yet. Could any one give me a hint please?

 Thanks in advance.

 Xiaobo





Re: RE: Updating two fields from an aggregate query

2006-09-26 Thread Dan Buettner

I'd expect it to be quicker, especially in your situation where you
are updating potentially hundreds of records at a time.  If you have
250 records to update, today you're performing 500 queries - first a
select and then an update for each parentid.  This is one query for
all 250 records.

I haven't ever used INSERT - SELECT - ON DUPLICATE KEY UPDATE myself
for anything in production, but I have used its cousin, REPLACE, on a
couple of projects which have been in production for more than 5
years, and it's very fast.

Dan

On 9/26/06, Robert DiFalco [EMAIL PROTECTED] wrote:

Do you think this would perform better than simply using the
two queries? I wonder if the overhead associated with the ON DUPLICATE
KEY exception and handler might not outweigh the benefits of a single
query.

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 26, 2006 7:15 AM
To: Robert DiFalco
Cc: mysql@lists.mysql.com
Subject: Re: Updating two fields from an aggregate query

Robert, you might give insert ... select ... on duplicate key update a
try:
http://dev.mysql.com/doc/refman/5.0/en/insert-select.html

something like this (untested):

INSERT INTO parent (id, maxChildAge, childCount)
SELECT parentid, MAX(age), COUNT(*) FROM child
WHERE parentid IN (1, 2, 3, 4) GROUP BY parentid
ON DUPLICATE KEY UPDATE maxChildAge=VALUES(maxChildAge), childCount=VALUES(childCount)

That seems like it ought to work as long as the id column in the parent
table is a unique key.

One consideration is that if you are writing this query programmatically
and using a long list of parentid values in the IN clause, the SQL could
get pretty long, perhaps too long for the default value of
MAX_ALLOWED_PACKET.  You can adjust that up pretty easily, but bear in
mind you need to adjust it for both client and server.
http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

HTH,
Dan

On 9/26/06, Robert DiFalco [EMAIL PROTECTED] wrote:
 I have two tables that are related:

 Parent
LONG id
LONG childCount
LONG maxChildAge
...

 Child
LONG parentId
LONG age
...

 There can be thousands of parents and millions of children, that is
 why I have denormalized childCount and maxChildAge. The values are

 too expensive to calculate each time the data is viewed so I update
 these values each time a Child is added, removed, or modified.

 I currently have to update the Parent table with two queries like so:

SELECT MAX( Child.age ), COUNT(*)
FROM Child
WHERE parentID = x;

UPDATE Parent
SET maxChildAge = MAX, childCount = COUNT
WHERE id = x;

 Worse yet I might be updating the stats for several hundred Parents at

 a time, so I have to loop through the above where x is the current
 Parent.id in the batch. What I would like to do is something like the
 following (made up syntax):

UPDATE Parent
SET maxChildAge AND childCount = COUNT
WITH ( SELECT MAX( Child.age ), COUNT(*)
   FROM Child
 WHERE parentID = Parent.id )
WHERE id IN ( set_of_parents_to_update );

 Any suggestions?

 TIA

 R.














Strange insert: ERROR 1172 (42000)

2006-09-26 Thread Wai-Sun Chia

This is so weird! I'm getting a:
ERROR 1172 (42000): Result consisted of more than one row

I've setup 2 machines. Same table (dbsetup.sql) , same insert
statement (foo.sql), same version (5.0.24a).

Only difference is:
One is my laptop on FC5-i386, the other is a server on RHEL4u4-i386

Any clues? Help!

p.s. I've attached the table definition and the insert statement.
INSERT INTO logs
	(host, facility, priority, level, tag, datetime, program, msg,
	event_id, username, computer_name)
	VALUES (
		REPLACE(TRIM('SGBSPR01.isddc.men.maxis.com.my'),'  ',''),
		REPLACE(TRIM('user'),'	',''),
		REPLACE(TRIM('notice'),'	',''),
		REPLACE(TRIM('notice'),'	',''),
		REPLACE(TRIM('0d'),'	',''),
		REPLACE(TRIM('2006-09-26 22:06:49'),'	',''),
		REPLACE(TRIM('MSWinEventLog;0;Security;6077;Tue Sep 26 22'),'	',''),
		REPLACE(TRIM('MSWinEventLog;0;Security;6077;Tue Sep 26 22:06:41 2006;593;Security;SYSTEM;User;Success Audit;SGBSPR01;Detailed Tracking;;A process has exited:   Process ID: 5708Image File NameC:\\WINDOWS\\system32\\cmd.exe   User Name:  SGBSPR01$   Domain:ISDDCLogon ID:   (0x0,0x3E7);6070'), ':	',':'),
		SUBSTRING_INDEX(SUBSTRING_INDEX('MSWinEventLog;0;Security;6077;Tue Sep 26 22:06:41 2006;593;Security;SYSTEM;User;Success Audit;SGBSPR01;Detailed Tracking;;A process has exited:Process ID: 5708   Image File Name: C:\\WINDOWS\\system32\\cmd.exe  User Name:  SGBSPR01$   Domain: ISDDC   Logon ID:   (0x0,0x3E7);6070', ';',  6), ';', -1),
		SUBSTRING_INDEX(SUBSTRING_INDEX('MSWinEventLog;0;Security;6077;Tue Sep 26 22:06:41 2006;593;Security;SYSTEM;User;Success Audit;SGBSPR01;Detailed Tracking;;A process has exited:Process ID: 5708   Image File Name: C:\\WINDOWS\\system32\\cmd.exe  User Name:  SGBSPR01$   Domain: ISDDC   Logon ID:   (0x0,0x3E7);6070', ';',  8), ';', -1),
		SUBSTRING_INDEX(SUBSTRING_INDEX('MSWinEventLog;0;Security;6077;Tue Sep 26 22:06:41 2006;593;Security;SYSTEM;User;Success Audit;SGBSPR01;Detailed Tracking;;A process has exited:Process ID: 5708   Image File Name: C:\\WINDOWS\\system32\\cmd.exe  User Name:  SGBSPR01$   Domain: ISDDC   Logon ID:   (0x0,0x3E7);6070', ';', 11), ';', -1)
	);
CREATE TABLE IF NOT EXISTS logs (
seq bigint(12) unsigned NOT NULL primary key auto_increment,
host            char(35),
facility        char(8),
priority        char(7),
level           char(7),
tag             char(2),
datetime        datetime,
program         char(15),
msg             text,
event_id        integer,
username        char(25),
computer_name   char(25),
index   msg_idx (msg(255)),
index   host_idx (host),
index   program_idx (program),
index   datetime_idx (datetime),
index   priority_idx (priority),
index   facility_idx (facility),
index   event_id_idx (event_id),
index   username_idx (username),
index   computer_name_idx (computer_name)
) TYPE=MyISAM row_format=fixed;



RE: Updating two fields from an aggregate query

2006-09-26 Thread Robert DiFalco
Hmm, do you think this would perform better than simply using the
two queries? I wonder if the overhead associated with the ON DUPLICATE
KEY exception and handler might not outweigh the benefits of a single
query. 

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 26, 2006 7:15 AM
To: Robert DiFalco
Cc: mysql@lists.mysql.com
Subject: Re: Updating two fields from an aggregate query

Robert, you might give insert ... select ... on duplicate key update a
try:
http://dev.mysql.com/doc/refman/5.0/en/insert-select.html

something like this (untested):

INSERT INTO parent (id, maxChildAge, childCount)
SELECT parentid, MAX(age), COUNT(*) FROM child
WHERE parentid IN (1, 2, 3, 4) GROUP BY parentid
ON DUPLICATE KEY UPDATE maxChildAge=VALUES(maxChildAge), childCount=VALUES(childCount)

That seems like it ought to work as long as the id column in the parent
table is a unique key.

One consideration is that if you are writing this query programmatically
and using a long list of parentid values in the IN clause, the SQL could
get pretty long, perhaps too long for the default value of
MAX_ALLOWED_PACKET.  You can adjust that up pretty easily, but bear in
mind you need to adjust it for both client and server.
http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

HTH,
Dan

On 9/26/06, Robert DiFalco [EMAIL PROTECTED] wrote:
 I have two tables that are related:

 Parent
LONG id
LONG childCount
LONG maxChildAge
...

 Child
LONG parentId
LONG age
...

 There can be thousands of parents and millions of children, that is 
 why I have denormalized childCount and maxChildAge. The values are

 too expensive to calculate each time the data is viewed so I update 
 these values each time a Child is added, removed, or modified.

 I currently have to update the Parent table with two queries like so:

SELECT MAX( Child.age ), COUNT(*)
FROM Child
WHERE parentID = x;

UPDATE Parent
SET maxChildAge = MAX, childCount = COUNT
WHERE id = x;

 Worse yet I might be updating the stats for several hundred Parents at

 a time, so I have to loop through the above where x is the current 
 Parent.id in the batch. What I would like to do is something like the 
 following (made up syntax):

UPDATE Parent
SET maxChildAge AND childCount = COUNT
WITH ( SELECT MAX( Child.age ), COUNT(*)
   FROM Child
 WHERE parentID = Parent.id )
WHERE id IN ( set_of_parents_to_update );

 Any suggestions?

 TIA

 R.










Re: Requesting help with subquery

2006-09-26 Thread Chris Sansom

At 11:40 -0400 26/9/06, Zembower, Kevin wrote:

   IF(ISNULL(SELECT lv.langversionid
  FROM langversion AS lv
  JOIN baseitem AS b3 ON lv.baseitemid =
b3.baseitemid
  WHERE lv.langid = 1
  AND b.baseitemid = lv.baseitemid
 )), 'Y', 'N') AS Lang Avail


Looks to me as if your parentheses don't balance 
here - you have an extra ')' in that last line.


--
Cheers... Chris
Highway 57 Web Development -- http://highway57.co.uk/

I once preached peaceful coexistence with Windows.
You may laugh at my expense - I deserve it.
   -- Jean-Louis Gassé, former Apple executive (& Be CEO)




RE: Requesting help with subquery

2006-09-26 Thread Zembower, Kevin
Thank you, Johan and Chris, for finding my obvious mistakes. Unfortunately, 
even after fixing them, I still get an SQL error:

[EMAIL PROTECTED]:~$ cat OrderDB-requested.sql
SELECT 
   m.title AS Main Category, 
   s.title AS Sub Category, 
   b.partno AS Part Number,
   (SELECT lv.title
  FROM langversion AS lv 
  JOIN baseitem AS b2 ON lv.baseitemid = b2.baseitemid
  WHERE lv.langid = 1 # English = 1
  AND b.baseitemid=lv.baseitemid
   ) as English Title,
   IF((IFNULL(SELECT lv.langversionid 
  FROM langversion AS lv
  JOIN baseitem AS b3 ON lv.baseitemid = b3.baseitemid
  WHERE lv.langid = 1 
  AND b.baseitemid = lv.baseitemid
   ), 0), 'Y', 'N') AS Lang Avail
FROM maincategory AS m 
JOIN subcategory AS s ON m.maincatid=s.maincatid 
JOIN baseitem AS b ON s.subcatid=b.subcatid 
WHERE 
   b.available = Y 
ORDER BY m.title, s.title;
[EMAIL PROTECTED]:~$ mysql orderDB < OrderDB-requested.sql
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the 
manual that corresponds to your MySQL server version for the right syntax to 
use near 'SELECT lv.langversionid 
  FROM langversion AS lv
  JO' at line 11
[EMAIL PROTECTED]:~$

-Kevin
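[Editor's note] The remaining error is that a scalar subquery used as a function argument needs its own parentheses: IFNULL(SELECT ...) is a syntax error, IFNULL((SELECT ...), 0) is not. A minimal sketch of the corrected pattern using Python's sqlite3 module, which enforces the same parenthesization rule (the table here is a cut-down, hypothetical version of langversion, and CASE stands in for MySQL's IF()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE langversion (langversionid INTEGER, baseitemid INTEGER, langid INTEGER);
INSERT INTO langversion VALUES (10, 1, 1);
""")

# A scalar subquery passed to a function needs its own parentheses:
# IFNULL(SELECT ...) fails to parse; IFNULL((SELECT ...), 0) is valid.
sql = """
SELECT CASE WHEN IFNULL((SELECT lv.langversionid
                         FROM langversion AS lv
                         WHERE lv.langid = 1 AND lv.baseitemid = ?), 0) != 0
            THEN 'Y' ELSE 'N' END
"""
yes = conn.execute(sql, (1,)).fetchone()[0]  # a language version exists
no = conn.execute(sql, (2,)).fetchone()[0]   # no row -> subquery is NULL -> 0
print(yes, no)  # Y N
```

In the original query, wrapping the subquery as IFNULL((SELECT ... ), 0) should clear the 42000 error at line 11.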

-Original Message-
From: Chris Sansom [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 26, 2006 12:45 PM
To: Zembower, Kevin; mysql@lists.mysql.com
Subject: Re: Requesting help with subquery

At 11:40 -0400 26/9/06, Zembower, Kevin wrote:
IF(ISNULL(SELECT lv.langversionid
   FROM langversion AS lv
   JOIN baseitem AS b3 ON lv.baseitemid =
b3.baseitemid
   WHERE lv.langid = 1
   AND b.baseitemid = lv.baseitemid
  )), 'Y', 'N') AS Lang Avail

Looks to me as if your parentheses don't balance 
here - you have an extra ')' in that last line.

-- 
Cheers... Chris
Highway 57 Web Development -- http://highway57.co.uk/

I once preached peaceful coexistence with Windows.
You may laugh at my expense - I deserve it.
-- Jean-Louis Gassé, former Apple executive (& Be CEO)




Partition Help

2006-09-26 Thread Michael Gargiullo
I'm working on a project in which we'd like to convert from Oracle to
MySQL. We need to partition our data for speed concerns.  Currently in
Oracle I create 8, 3 hour partitions for each day (Currently running
450M -750M rec inserts/day). I was looking for matching functionality in
MySQL, but it seems daily partitions are as close as I'm going to come.

 

We're running 5.1.10 and I'm having a bit of trouble creating partitions
in both new tables and altering old tables.  Below is one example of
what I've tried.

 

Can anyone shed some light on this subject?

 

 

-Mike

 

create table t1 (c1 int default NULL, c2 varchar(30) default NULL, c3
datetime default NULL) engine=myisam PARTITION BY RANGE(to_days(c3))

  PARTITION p0 VALUES LESS THAN (to_days('2006-09-24'))(

SUBPARTITION s0a

  DATA DIRECTORY = '/FW_data1'

  INDEX DIRECTORY = '/FW_indx1'

  ),

 PARTITION p1 VALUES LESS THAN (to_days('2006-09-26'))(

SUBPARTITION s1a

  DATA DIRECTORY = '/FW_data2'

  INDEX DIRECTORY = '/FW_indx2'

  )

  PARTITION p2 VALUES LESS THAN (to_days('2006-09-28'))(

SUBPARTITION s2a

  DATA DIRECTORY = '/FW_data3'

  INDEX DIRECTORY = '/FW_indx3'

  )

);



Re: Partition Help

2006-09-26 Thread mos

At 02:03 PM 9/26/2006, you wrote:

I'm working on a project in which we'd like to convert from Oracle to
MySQL. We need to partition our data for speed concerns.  Currently in
Oracle I create 8, 3 hour partitions for each day (Currently running
450M -750M rec inserts/day). I was looking for matching functionality in
MySQL, but it seems daily partitions are as close as I'm going to come.



We're running 5.1.10 and I'm having a bit of trouble creating partitions
in both new tables and altering old tables.  Below is one example of
what I've tried.



Can anyone shed some light on this subject?



-Mike



Mike,
How is this table being updated?

a) From one source like a batch job?
b) Or from hundreds of users concurrently?

If a), then why not just create 1 table per day (or 3 tables per day) and 
when you want to reference (the entire day or) a week, just create a Merge 
Table?

http://dev.mysql.com/doc/refman/5.1/en/merge-storage-engine.html

If b), then you need to use InnoDb tables because that has row locks 
compared to MyISAM's table locks.


Mike




create table t1 (c1 int default NULL, c2 varchar(30) default NULL, c3
datetime default NULL) engine=myisam PARTITION BY RANGE(to_days(c3))

  PARTITION p0 VALUES LESS THAN (to_days('2006-09-24'))(

SUBPARTITION s0a

  DATA DIRECTORY = '/FW_data1'

  INDEX DIRECTORY = '/FW_indx1'

  ),

 PARTITION p1 VALUES LESS THAN (to_days('2006-09-26'))(

SUBPARTITION s1a

  DATA DIRECTORY = '/FW_data2'

  INDEX DIRECTORY = '/FW_indx2'

  )

  PARTITION p2 VALUES LESS THAN (to_days('2006-09-28'))(

SUBPARTITION s2a

  DATA DIRECTORY = '/FW_data3'

  INDEX DIRECTORY = '/FW_indx3'

  )

);





RE: Partition Help

2006-09-26 Thread Michael Gargiullo


-Original Message-
From: mos [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 26, 2006 3:40 PM
To: mysql@lists.mysql.com
Subject: Re: Partition Help

At 02:03 PM 9/26/2006, you wrote:
I'm working on a project in which we'd like to convert from Oracle to
MySQL. We need to partition our data for speed concerns.  Currently in
Oracle I create 8, 3 hour partitions for each day (Currently running
450M -750M rec inserts/day). I was looking for matching functionality
in
MySQL, but it seems daily partitions are as close as I'm going to come.



We're running 5.1.10 and I'm having a bit of trouble creating
partitions
in both new tables and altering old tables.  Below is one example of
what I've tried.



Can anyone shed some light on this subject?



-Mike


Mike,
 How is this table being updated?

a) From one source like a batch job?
b) Or from hundreds of users concurrently?

If a), then why not just create 1 table per day (or 3 tables per day)
and 
when you want to reference (the entire day or) a week, just create a
Merge 
Table?
http://dev.mysql.com/doc/refman/5.1/en/merge-storage-engine.html

If b), then you need to use InnoDb tables because that has row locks 
compared to MyISAM's table locks.

Mike


We're using the Load infile function to load the data generated by
another process. We do not do updates, but occasionally need to either
walk the table or run a query against it. On Oracle, we currently need 3
hour partitions to keep the 5 indexes timely.

This system handles 450-750 Million inserted rows per day with 5 fields
being indexed. This number will be closer to 2 Billion records / day by
Spring 2007 we've been told.

For example, I diverted the full flow of data to MySQL for 15 minutes
and inserted 9 Million records with a back up of loader files.  I need
to speed this up. Unfortunately, table structure and indexes are static
and cannot be changed. 

-Mike



create table t1 (c1 int default NULL, c2 varchar(30) default NULL, c3
datetime default NULL) engine=myisam PARTITION BY RANGE(to_days(c3))

   PARTITION p0 VALUES LESS THAN (to_days('2006-09-24'))(

 SUBPARTITION s0a

   DATA DIRECTORY = '/FW_data1'

   INDEX DIRECTORY = '/FW_indx1'

   ),

  PARTITION p1 VALUES LESS THAN (to_days('2006-09-26'))(

 SUBPARTITION s1a

   DATA DIRECTORY = '/FW_data2'

   INDEX DIRECTORY = '/FW_indx2'

   )

   PARTITION p2 VALUES LESS THAN (to_days('2006-09-28'))(

 SUBPARTITION s2a

   DATA DIRECTORY = '/FW_data3'

   INDEX DIRECTORY = '/FW_indx3'

   )

);








RE: Partition Help

2006-09-26 Thread mos

At 02:53 PM 9/26/2006, Michael Gargiullo wrote:



-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 26, 2006 3:40 PM
To: mysql@lists.mysql.com
Subject: Re: Partition Help

At 02:03 PM 9/26/2006, you wrote:
I'm working on a project in which we'd like to convert from Oracle to
MySQL. We need to partition our data for speed concerns.  Currently in
Oracle I create 8, 3 hour partitions for each day (Currently running
450M -750M rec inserts/day). I was looking for matching functionality
in
MySQL, but it seems daily partitions are as close as I'm going to come.



We're running 5.1.10 and I'm having a bit of trouble creating
partitions
in both new tables and altering old tables.  Below is one example of
what I've tried.



Can anyone shed some light on this subject?



-Mike


Mike,
 How is this table being updated?

a) From one source like a batch job?
b) Or from hundreds of users concurrently?

If a), then why not just create 1 table per day (or 3 tables per day)
and
when you want to reference (the entire day or) a week, just create a
Merge
Table?
http://dev.mysql.com/doc/refman/5.1/en/merge-storage-engine.html

If b), then you need to use InnoDb tables because that has row locks
compared to MyISAM's table locks.

Mike


We're using the Load infile function to load the data generated by
another process. We do not do updates, but occasionally need to either
walk the table or run a query against it. On Oracle, we currently need 3
hour partitions to keep the 5 indexes timely.

This system handles 450-750 Million inserted rows per day with 5 fields
being indexed. This number will be closer to 2 Billion records / day by
Spring 2007 we've been told.

For example, I diverted the full flow of data to MySQL for 15 minutes
and inserted 9 Million records with a back up of loader files.  I need
to speed this up. Unfortunately, table structure and indexes are static
and cannot be changed.

-Mike



Mike,
I've done a lot of Load Data with large tables and as you no doubt 
discovered, as the number of rows in the table increases, the insert speed 
decreases. This is due to the extra effort involved in maintaining the 
index as the rows are being loaded. As the index grows in size, it takes 
longer to maintain the index. This is true of any database. MyISAM tables 
are going to be faster than InnoDb in this case.


You can speed it up by:
1) Add as much memory as possible in the machine because building the index 
will be much faster if it has lots of ram.
2) Modify your My.Cnf file so key_buffer_size=1500M or more.  (Assuming you 
have 3gb or more installed) This allocates memory for building the index.
3) If the table is empty before you add any rows to it, Load Data will run 
much faster because it will build the index *after* all rows have been 
loaded. But if you have as few as 1 row in the table before running Load 
Data, the index will have to be maintained as the rows are inserted and 
this slows down the Load Data considerably.
4) Try throwing an exclusive lock on the table before loading the data. I'm 
not sure but this might help.
5) If your table already has rows in it before running Load Data, and the 
table has indexes defined, it is much faster if you disable the indexes on 
the table before running Load Data, and then enable the index after Load 
Data has completed. See Alter Table Enable/Disable Indexes for more info.
6) If you are using Alter Table to add indexes after the table has data, 
make sure you are adding all indexes in one Alter Table statement because 
MySQL will copy the table each time the Alter Table is run.
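[Editor's note] The disable-then-rebuild idea in point 5 can be sketched with Python's sqlite3 module. SQLite has no ALTER TABLE ... DISABLE KEYS, so the analogue is dropping the index, bulk-loading, then recreating the index in one pass; the table and index names below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (host TEXT, ts INTEGER)")
conn.execute("CREATE INDEX host_idx ON logs (host)")

# Bulk-load pattern: drop the index, load, then rebuild it once at the end.
# (MyISAM spells this ALTER TABLE ... DISABLE KEYS / ENABLE KEYS; SQLite
# has no equivalent, so we drop and recreate.)
conn.execute("DROP INDEX host_idx")
rows = [("host%d" % (i % 50), i) for i in range(10_000)]
conn.executemany("INSERT INTO logs VALUES (?, ?)", rows)
conn.execute("CREATE INDEX host_idx ON logs (host)")  # one bulk index build

count = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(count)  # 10000
```

The win comes from maintaining the B-tree once, after the load, instead of on every inserted row.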


If you are going to be adding 2 billion rows per day, you might want to try 
1 table per hour which will reduce the number of rows to < 100 million 
which may be more manageable (assuming 24 hour day). You can then create a 
merge table on the 24 rows so you can traverse them. You can of course 
create a merge table just for the morning hours, afternoon hours, evening 
hours etc. Name each table like: 20060925_1400 for 2PM on 9/25/2006. Of 
course you may also want to summarize this data into a table so you don't 
need all of this raw data lying around.


Hope this helps.

Mike







create table t1 (c1 int default NULL, c2 varchar(30) default NULL, c3
datetime default NULL) engine=myisam PARTITION BY RANGE(to_days(c3))

   PARTITION p0 VALUES LESS THAN (to_days('2006-09-24'))(

 SUBPARTITION s0a

   DATA DIRECTORY = '/FW_data1'

   INDEX DIRECTORY = '/FW_indx1'

   ),

  PARTITION p1 VALUES LESS THAN (to_days('2006-09-26'))(

 SUBPARTITION s1a

   DATA DIRECTORY = '/FW_data2'

   INDEX DIRECTORY = '/FW_indx2'

   )

   PARTITION p2 VALUES LESS THAN (to_days('2006-09-28'))(

 SUBPARTITION s2a

   DATA DIRECTORY = '/FW_data3'

   INDEX DIRECTORY = '/FW_indx3'

   )

);


RE: Partition Help

2006-09-26 Thread Michael Gargiullo

Mike


We're using the Load infile function to load the data generated by
another process. We do not do updates, but occasionally need to either
walk the table or run a query against it. On Oracle, we currently need
3
hour partitions to keep the 5 indexes timely.

This system handles 450-750 Million inserted rows per day with 5 fields
being indexed. This number will be closer to 2 Billion records / day by
Spring 2007 we've been told.

For example, I diverted the full flow of data to MySQL for 15 minutes
and inserted 9 Million records with a back up of loader files.  I need
to speed this up. Unfortunately, table structure and indexes are static
and cannot be changed.

-Mike


Mike,
 I've done a lot of Load Data with large tables and as you no
doubt 
discovered, as the number of rows in the table increases, the insert
speed 
decreases. This is due to the extra effort involved in maintaining the 
index as the rows are being loaded. As the index grows in size, it takes

longer to maintain the index. This is true of any database. MyISAM
tables 
are going to be faster than InnoDb in this case.

You can speed it up by:
1) Add as much memory as possible in the machine because building the
index 
will be much faster if it has lots of ram.
2) Modify your My.Cnf file so key_buffer_size=1500M or more.  (Assuming
you 
have 3gb or more installed) This allocates memory for building the
index.
3) If the table is empty before you add any rows to it, Load Data will
run 
much faster because it will build the index *after* all rows have been 
loaded. But if you have as few as 1 row in the table before running Load

Data, the index will have to be maintained as the rows are inserted and 
this slows down the Load Data considerably.
4) Try throwing an exclusive lock on the table before loading the data.
I'm 
not sure but this might help.
5) If your table already has rows in it before running Load Data, and
the 
table has indexes defined, it is much faster if you disable the indexes
to 
the table before running Load Data, and then enable the index after Load

Data has completed. See Alter Table Enable/Disable Indexes for more
info.
6) If you are using Alter Table to add indexes after the table has data,

make sure you are adding all indexes in one Alter Table statement
because 
MySQL will copy the table each time the Alter Table is run.

If you are going to be adding 2 billion rows per day, you might want to
try 
1 table per hour which will reduce the number of rows to < 100 million 
which may be more manageable (assuming 24 hour day). You can then create
a 
merge table on the 24 rows so you can traverse them. You can of course 
create a merge table just for the morning hours, afternoon hours,
evening 
hours etc. Name each table like: 20060925_1400 for 2PM on 9/25/2006. Of

course you may also want to summarize this data into a table so you
don't 
need all of this raw data lying around.

Hope this helps.

Mike


Thanks for the advice.

We've got 12GB of RAM, I'll increase the key_buffer_size.  Unfortunately
I can't turn off indexes, then index after. At these rates, I'd never
catch up.

I had hoped I could use partitions like in Oracle. 1 partition every
hour (or 3).  I don't think the merge tables will work however. We
currently only keep 15 days of data and that fills the array. If a merge
table uses disk space, it won't work for us.

I'll check out the key buffer size though.  Thanks.

-Mike




RE: Partition Help

2006-09-26 Thread mos

At 03:37 PM 9/26/2006, you wrote:


Mike


We're using the Load infile function to load the data generated by
another process. We do not do updates, but occasionally need to either
walk the table or run a query against it. On Oracle, we currently need
3
hour partitions to keep the 5 indexes timely.

This system handles 450-750 Million inserted rows per day with 5 fields
being indexed. This number will be closer to 2 Billion records / day by
Spring 2007 we've been told.

For example, I diverted the full flow of data to MySQL for 15 minutes
and inserted 9 Million records with a back up of loader files.  I need
to speed this up. Unfortunately, table structure and indexes are static
and cannot be changed.

-Mike


Mike,
 I've done a lot of Load Data with large tables and as you no
doubt
discovered, as the number of rows in the table increases, the insert
speed
decreases. This is due to the extra effort involved in maintaining the
index as the rows are being loaded. As the index grows in size, it takes

longer to maintain the index. This is true of any database. MyISAM
tables
are going to be faster than InnoDb in this case.

You can speed it up by:
1) Add as much memory as possible in the machine because building the
index
will be much faster if it has lots of ram.
2) Modify your My.Cnf file so key_buffer_size=1500M or more.  (Assuming
you
have 3gb or more installed) This allocates memory for building the
index.
3) If the table is empty before you add any rows to it, Load Data will
run
much faster because it will build the index *after* all rows have been
loaded. But if you have as few as 1 row in the table before running Load

Data, the index will have to be maintained as the rows are inserted and
this slows down the Load Data considerably.
4) Try throwing an exclusive lock on the table before loading the data.
I'm
not sure but this might help.
5) If your table already has rows in it before running Load Data, and
the
table has indexes defined, it is much faster if you disable the indexes
to
the table before running Load Data, and then enable the index after Load

Data has completed. See Alter Table Enable/Disable Indexes for more
info.
6) If you are using Alter Table to add indexes after the table has data,

make sure you are adding all indexes in one Alter Table statement
because
MySQL will copy the table each time the Alter Table is run.

If you are going to be adding 2 billion rows per day, you might want to
try
1 table per hour which will reduce the number of rows to < 100 million
which may be more manageable (assuming 24 hour day). You can then create
a
merge table on the 24 rows so you can traverse them. You can of course
create a merge table just for the morning hours, afternoon hours,
evening
hours etc. Name each table like: 20060925_1400 for 2PM on 9/25/2006. Of

course you may also want to summarize this data into a table so you
don't
need all of this raw data lying around.

Hope this helps.

Mike


Thanks for the advice.

We've got 12GB of RAM, I'll increase the key_buffer_size.  Unfortunately
I can't turn off indexes, then index after. At these rates, I'd never
catch up.


I don't agree. It takes longer to build the index than to load the data if 
you have indexes active when loading the data. But if you disable the 
index, or not have any indexes on the table during the Load Data, then 
re-enable the index later, MySQL will build the index at least 10x faster 
if you have a large key_buffer_size because it does it all in memory. I've 
had Load Data go from 24 hours to 40 minutes just by adding more memory to 
key_buffer_size and disabling the index and re-enabling it later.


I'd recommend using at least 6000M for key_buffer_size as a start. You want 
to try and get as much of the index in memory as possible.




I had hoped I could use partitions like in Oracle. 1 partition every
hour (or 3).  I don't think the merge tables will work however. We
currently only keep 15 days of data and that fills the array. If a merge
table uses disk space, it won't work for us.


A Merge Table can be built in just ms. It is a logical join between the 
tables and does *not* occupy more disk space. Think of it as a view that 
joins tables of similar schema together vertically so it looks like 1 large 
table.
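[Editor's note] The idea can be approximated outside MySQL with a UNION ALL view, which is roughly what a MyISAM Merge table is: one logical table over identically structured physical tables, with no data copied. A sketch with Python's sqlite3 module and invented hourly table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for name in ("t_20060925_1300", "t_20060925_1400"):
    conn.execute("CREATE TABLE %s (c1 INTEGER, c3 TEXT)" % name)
conn.execute("INSERT INTO t_20060925_1300 VALUES (1, '2006-09-25 13:05:00')")
conn.execute("INSERT INTO t_20060925_1400 VALUES (2, '2006-09-25 14:10:00')")

# A view over UNION ALL plays the role of a MyISAM MERGE table:
# one logical table spanning the hourly tables, no rows duplicated on disk.
conn.execute("""
CREATE VIEW day_view AS
SELECT * FROM t_20060925_1300
UNION ALL
SELECT * FROM t_20060925_1400
""")
total = conn.execute("SELECT COUNT(*) FROM day_view").fetchone()[0]
print(total)  # 2
```

In MySQL the real thing would be CREATE TABLE ... ENGINE=MERGE UNION=(t_20060925_1300, t_20060925_1400), which likewise only references the underlying tables.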


Mike



I'll check out the key buffer size though.  Thanks.

-Mike






RE: Partition Help

2006-09-26 Thread Michael Gargiullo


-Original Message-
From: mos [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 26, 2006 5:27 PM
To: mysql@lists.mysql.com
Subject: RE: Partition Help

At 03:37 PM 9/26/2006, you wrote:
 
 Mike
 
 
 We're using the Load infile function to load the data generated by
 another process. We do not do updates, but occasionally need to
either
 walk the table or run a query against it. On Oracle, we currently
need
3
 hour partitions to keep the 5 indexes timely.
 
 This system handles 450-750 Million inserted rows per day with 5
fields
 being indexed. This number will be closer to 2 Billion records / day
by
 Spring 2007 we've been told.
 
 For example, I diverted the full flow of data to MySQL for 15 minutes
 and inserted 9 Million records with a back up of loader files.  I
need
 to speed this up. Unfortunately, table structure and indexes are
static
 and cannot be changed.
 
 -Mike


Mike,
  I've done a lot of Load Data with large tables and as you no
doubt
discovered, as the number of rows in the table increases, the insert
speed
decreases. This is due to the extra effort involved in maintaining the
index as the rows are being loaded. As the index grows in size, it
takes

longer to maintain the index. This is true of any database. MyISAM
tables
are going to be faster than InnoDb in this case.

You can speed it up by:
1) Add as much memory as possible in the machine because building the
index
will be much faster if it has lots of ram.
2) Modify your My.Cnf file so key_buffer_size=1500M or more.  (Assuming
you
have 3gb or more installed) This allocates memory for building the
index.
3) If the table is empty before you add any rows to it, Load Data will
run
much faster because it will build the index *after* all rows have been
loaded. But if you have as few as 1 row in the table before running
Load

Data, the index will have to be maintained as the rows are inserted and
this slows down the Load Data considerably.
4) Try throwing an exclusive lock on the table before loading the data.
I'm
not sure but this might help.
5) If your table already has rows in it before running Load Data, and
the
table has indexes defined, it is much faster if your disable the
indexes
to
the table before running Load Data, and then enable the index after
Load

Data has completed. See Alter Table Enable/Disable Indexes for more
info.
6) If you are using Alter Table to add indexes after the table has
data,

make sure you are adding all indexes in one Alter Table statement
because
MySQL will copy the table each time the Alter Table is run.
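Tips 4 and 5 together look roughly like this (a sketch only; the table name `logdata` and file path `/tmp/batch.txt` are placeholders, not from the thread):

```sql
-- Lock out writers and pause index maintenance while bulk-loading,
-- then rebuild the MyISAM indexes in one pass afterwards.
LOCK TABLES logdata WRITE;
ALTER TABLE logdata DISABLE KEYS;

LOAD DATA INFILE '/tmp/batch.txt'
INTO TABLE logdata
FIELDS TERMINATED BY '\t';

ALTER TABLE logdata ENABLE KEYS;  -- index rebuild happens here, in the key buffer
UNLOCK TABLES;
```

ENABLE KEYS is where the big key_buffer_size pays off, since the whole rebuild can run in memory.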

If you are going to be adding 2 billion rows per day, you might want to
try
1 table per hour which will reduce the number of rows to under 100 million
which may be more manageable (assuming 24 hour day). You can then
create
a
merge table on the 24 rows so you can traverse them. You can of course
create a merge table just for the morning hours, afternoon hours,
evening
hours etc. Name each table like: 20060925_1400 for 2 PM on 9/25/2006.
Of

course you may also want to summarize this data into a table so you
don't
need all of this raw data lying around.
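The hourly-table plus merge-table idea might be sketched like this (table names follow the naming convention above; the column list is invented for illustration):

```sql
-- One MyISAM table per hour, e.g. 2 PM and 3 PM on 9/25/2006
CREATE TABLE `20060925_1400` (
    ts  DATETIME,
    src INT,
    msg VARCHAR(255),
    INDEX (ts)
) ENGINE=MyISAM;
CREATE TABLE `20060925_1500` LIKE `20060925_1400`;

-- The merge table is only a logical union of the hourly tables;
-- it stores no rows of its own, so it costs almost no extra disk.
CREATE TABLE afternoon (
    ts  DATETIME,
    src INT,
    msg VARCHAR(255),
    INDEX (ts)
) ENGINE=MERGE UNION=(`20060925_1400`, `20060925_1500`);
```

Queries against `afternoon` then traverse both hourly tables as if they were one.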

Hope this helps.

Mike


Thanks for the advice.

We've got 12GB of RAM, I'll increase the key_buffer_size.
Unfortunately
I can't turn off indexes, then index after. At these rates, I'd never
catch up.

I don't agree. It takes longer to build the index than to load the data
if 
you have indexes active when loading the data. But if you disable the 
index, or not have any indexes on the table during the Load Data, then 
re-enable the index later, MySQL will build the index at least 10x
faster 
if you have a large key_buffer_size because it does it all in memory.
I've 
had Load Data go from 24 hours to 40 minutes just by adding more memory
to 
key_buffer_size and disabling the index and re-enabling it later.

I'd recommend using at least 6000M for key_buffer_size as a start. You
want 
to try and get as much of the index in memory as possible.
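In my.cnf that would be a fragment like the following (the exact figure is a judgment call for a 12GB box shared with nothing else):

```ini
[mysqld]
# Large MyISAM key cache so ENABLE KEYS can rebuild indexes mostly in memory
key_buffer_size=6000M
```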


I had hoped I could use partitions like in Oracle. 1 partition every
hour (or 3).  I don't think the merge tables will work however. We
currently only keep 15 days of data and that fills the array. If a
merge
table uses disk space, it won't work for us.

A Merge Table can be built in just ms. It is a logical join between the 
tables and does *not* occupy more disk space. Think of it as a view that

joins tables of similar schema together vertically so it looks like 1
large 
table.

Mike


Ah, very cool.

Thanks again.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Error 1064 when importing 4.0 dump into 4.1 via command line

2006-09-26 Thread Curious George

I dumped a database from a 4.0 mysql and am attempting to move it to a
server running 4.1 - using the command line:
$ mysql -u root -pmypassword empty4.1db < 4.0dump.sql

The result:
ERROR 1064 (42000) at line 2: You have an error in your SQL syntax; check
the manual that corresponds to your MySQL server version for the right
syntax to use near '77, 137289, 4)' at line 1

When I look at the file using vi, I see lots of ^M (all over the place). I
believe that these are DOS line breaks and I know how to get rid of them,
but it is a 35MB file and I don't know if I can do it over my slow
connection from home (via SSH).

My question is: are those ^Ms my problem? Or is that normal and I have
some other issue?

Thanks in advance for any pointers.


Re: Error 1064 when importing 4.0 dump into 4.1 via command line

2006-09-26 Thread Carlos Proal

I don't think that is the problem, but what do you mean by a slow
connection? Can't you run the dos2unix command on the remote server?

The error occurred on line 2. Did you see any special word in that line?
Can you share that line with us? Remember that each version may reserve
different words.
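If dos2unix isn't installed on the remote box, tr strips the carriage returns just as well, with no need to pull the 35MB file home first (file names here are placeholders):

```shell
# Make a tiny dump with DOS (CRLF) line endings for demonstration
printf 'INSERT INTO t VALUES (77, 137289, 4);\r\n' > dump_dos.sql
# Delete every carriage-return byte and write a cleaned copy
tr -d '\r' < dump_dos.sql > dump_unix.sql
```

Then feed dump_unix.sql to mysql instead of the original.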

Carlos


On 9/26/06, Curious George [EMAIL PROTECTED] wrote:

I dumped a database from a 4.0 mysql and am attempting to move it to a
server running 4.1 - using the command line:
$ mysql -u root -pmypassword empty4.1db < 4.0dump.sql

The result:
ERROR 1064 (42000) at line 2: You have an error in your SQL syntax; check
the manual that corresponds to your MySQL server version for the right
syntax to use near '77, 137289, 4)' at line 1

When I look at the file using vi, I see lots of ^M (all over the place). I
believe that these are DOS line breaks and I know how to get rid of them,
but it is a 35MB file and I don't know if I can do it over my slow
connection from home (via SSH).

My question is: are those ^Ms my problem? Or is that normal and I have
some other issue?

Thanks in advance for any pointers.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



transitioning from ms sql

2006-09-26 Thread Arjun Bhandari
Hi,

I have been using MS SQL for the last one year, however would not like to 
transition to mysql. At the first glance it looks very different from ms 
sql and the tools are also different. can someone tell me if there is any 
document which explains the equivalence and how i could port a lot of my 
queries, tables, views and stored procedures to my sql from ms sql.

Best Regards,
Arjun
**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. Any  unauthorized use of the information contained
in this email or its attachments is prohibited.  If this email is received in
error, please contact the sender and delete the material from your
computer systems. Do not use, copy, or disclose the contents of this
email or any attachments.
Abu Dhabi Investment Authority (ADIA) accepts no responsibility for
the content of this email to the extent that the same consists of
statements and opinions made which are the senders own and not
made on behalf of ADIA.  Nor does ADIA accept any liability for any
errors or omissions in the content of this email caused by electronic
and technical failures.
Although ADIA has taken reasonable precautions to ensure that no
viruses are present in this email, ADIA accepts no responsibility for
any loss or damage arising from the use of this email or its
attachments.
**



Re: transitioning from ms sql

2006-09-26 Thread Wai-Sun Chia

On 9/27/06, Arjun Bhandari [EMAIL PROTECTED] wrote:

Hi,

I have been using MS SQL for the last one year, however would not like to


Huh? If you would NOT like to transition to MySQL, then why are you
asking all these stuff?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: transitioning from ms sql

2006-09-26 Thread Quentin Bennett
Hi,

There is a white paper on that very subject available at 
http://www.mysql.com/why-mysql/white-papers/mysql_wp_mssql2mysql.php

-Original Message-
From: Arjun Bhandari [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 27 September 2006 4:11 p.m.
To: mysql@lists.mysql.com
Subject: transitioning from ms sql


Hi,

I have been using MS SQL for the last one year, however would not like to 
transition to mysql. At the first glance it looks very different from ms 
sql and the tools are also different. can someone tell me if there is any 
document which explains the equivalence and how i could port a lot of my 
queries, tables, views and stored procedures to my sql from ms sql.

Best Regards,
Arjun
**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. Any  unauthorized use of the information contained
in this email or its attachments is prohibited.  If this email is received in
error, please contact the sender and delete the material from your
computer systems. Do not use, copy, or disclose the contents of this
email or any attachments.
Abu Dhabi Investment Authority (ADIA) accepts no responsibility for
the content of this email to the extent that the same consists of
statements and opinions made which are the senders own and not
made on behalf of ADIA.  Nor does ADIA accept any liability for any
errors or omissions in the content of this email caused by electronic
and technical failures.
Although ADIA has taken reasonable precautions to ensure that no
viruses are present in this email, ADIA accepts no responsibility for
any loss or damage arising from the use of this email or its
attachments.
**

The information contained in this email is privileged and confidential and
intended for the addressee only. If you are not the intended recipient, you
are asked to respect that confidentiality and not disclose, copy or make use
of its contents. If received in error you are asked to destroy this email
and contact the sender immediately. Your assistance is appreciated.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]