expanding hierarchies

2007-05-23 Thread tbt

> Hi,
>
> I have a table like this:
>
> id   name      parent_id
> 1    europe
> 2    spain     1
> 3    england   1
> 4    france    1
> 5    london    3
> 6    mayfair   5
> 7    madrid    2
>
> In this table each destination is mapped to a parent destination,
> e.g. mayfair is mapped to london, london is mapped to england, etc.
>
> I would like to write a query to find the child destinations when an id is given for
> a parent destination,
> e.g. if 3 is the id, then the result of the query should be 3, 5, 6;
> if 2 is the id, then the result should be 2, 7.
>
> The number of levels in this table is unknown, and the query should work
> for any number of levels.
> Please provide a sample SELECT query in MySQL.





Grants for mysqldump

2007-05-23 Thread Anand kumar

Hi, I am using mysql-4.1.3-beta on my MySQL server. My concern is that I
want to create a user (with minimal grants) that should be able to take a backup using
mysqldump. Can anyone help me out?
I tried giving the following grants, but it does not work:
grant file,SHOW DATABASES,select,lock tables on *.* to
[EMAIL PROTECTED] by 'pass';

Regards, Anand


Re: How to rewrite SQL query for new MySQL LEFT JOINS

2007-05-23 Thread Gmail User

It worked in 4.x but does not work in the new syntax. How should I
rewrite it to get the same result?


OK, that was a lie. It works in 5.x as well. I should learn to
describe my problem more accurately as well as RTFM :-(

The correct description of the query in question would have been:

select * from messages, someothertable  left
join messages as messages_ on
messages.thread = messages_.thread and messages.created <
messages_.created where messages_.id is null

Since there was a comma (inner) join there, the left join applied only
to 'someothertable' and not to the messages table as intended. As I
learned, in 5.0.12+, parentheses matter!

Duh!
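
For the archives, the usual way to keep the pre-5.0.12 behaviour is to parenthesize the comma-joined tables so the LEFT JOIN applies to the whole product (a sketch using the table names above; 'someothertable' is the poster's placeholder):

select * from (messages, someothertable)
left join messages as messages_
  on messages.thread = messages_.thread
 and messages.created < messages_.created
where messages_.id is null;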






Re: A question about how to debug the error "2013, 'Lost connection to MySQL server during query'"

2007-05-23 Thread Sebastian Mendel
[EMAIL PROTECTED] wrote:
> I am running the 64 bit version of mysql server 5.1.18, and on the
> client side I am running python with the mysqldb module.  While
> running a very large insert query I get the following error :
> 
> 2013, 'Lost connection to MySQL server during query'
> 
> I have --log-warnings set to 3, but I get no corresponding errors in
> the error log.
> 
> The question is : how do I go about debugging this problem?

The most common cause of this error is a crash of the server thread that was
processing your query.
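
Another frequent cause with very large INSERT statements (only an assumption here, not a confirmed diagnosis) is a statement that exceeds max_allowed_packet, which also drops the connection mid-query. A quick check:

SHOW VARIABLES LIKE 'max_allowed_packet';
-- if the statement is larger than this value (in bytes), raise it and reconnect:
SET GLOBAL max_allowed_packet = 67108864;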


-- 
Sebastian Mendel

www.sebastianmendel.de






Minimal grants for backup

2007-05-23 Thread Anand kumar

Hi, I am using mysql-4.1.3-beta on my MySQL server.

My concern is that I want to create a user that should be able to take a backup
using mysqldump.
Can anyone help me out?

I tried giving the following grants, but it does not work:

grant file,SHOW DATABASES,select,lock tables on *.* to
[EMAIL PROTECTED] by 'yyy';

regards
Anand
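
For a plain mysqldump of table data, the minimal privileges are usually SELECT plus LOCK TABLES (FILE is only needed for --tab output, and note that GRANT needs the IDENTIFIED BY keywords before the password). A rough sketch with a hypothetical account name:

GRANT SELECT, LOCK TABLES ON *.* TO 'backup'@'localhost' IDENTIFIED BY 'pass';
-- add SHOW VIEW (5.0+) if views are dumped, and RELOAD if --flush-logs or --master-data is used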


How to rewrite SQL query for new MySQL LEFT JOINS

2007-05-23 Thread Gmail User

I hope someone can clue me in on what a query that produces the
same result would look like for MySQL > 5.0.12.

The old query was meant to list the most recent message from each thread, e.g.:

select * from messages left join messages as messages_ on
messages.thread = messages_.thread and messages.created <
messages_.created where messages_.id is null

It worked in 4.x but does not work in the new syntax. How should I
rewrite it to get the same result?

TIA,

Ed




Re: Upgraded to 5.0.x from 4.1.x and ORDER BY sucks!

2007-05-23 Thread Gmail User

mysqlcheck -h$host -u$user -p$pass --analyze $dbname


I wish that was the case!

I tried analyze table ... and optimize table ..., which I presume
would be the same. It did not help. I also ran mysqlcheck just to see
if it would make a difference. Nope!




Re: Replicating an existing table

2007-05-23 Thread Brent Baisley

create table x like y

Creates an exact copy of the table without the data; indexes are included.

- Original Message - 
From: "J Trahair" <[EMAIL PROTECTED]>

To: "MySQL General" 
Sent: Wednesday, May 23, 2007 3:58 PM
Subject: Replicating an existing table


Hi Everyone

Is there a way of replicating or cloning the structure of an existing table and giving it a different name? I have a table of items 
on sale in the shops, I want to create a table of items not yet in the shops but due to be introduced very soon.


MySQL 5.0.37 Windows XP.

Thanks in advance.

Jonathan Trahair 






Re: Replicating an existing table

2007-05-23 Thread Michael Dykman

http://dev.mysql.com/doc/refman/5.0/en/create-table.html

under MySQL 5.0 you can do this:
CREATE TABLE mynewtable LIKE theotherone;
-- this will copy the structure as well as add indexes identical to
-- the plan in the original


under any earlier 4.x version, this will work too
CREATE TABLE mynewtable SELECT * from theotherone WHERE 1 = 0;
-- this will create the structure, but leave the result without
-- indexes, so be careful and double check if you use this latter
-- technique
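
If the data should be copied as well, the two statements are commonly combined (a small sketch reusing the same table names):

CREATE TABLE mynewtable LIKE theotherone;
INSERT INTO mynewtable SELECT * FROM theotherone;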

- michael

On 5/23/07, J Trahair <[EMAIL PROTECTED]> wrote:

Hi Everyone

Is there a way of replicating or cloning the structure of an existing table and 
giving it a different name? I have a table of items on sale in the shops, I 
want to create a table of items not yet in the shops but due to be introduced 
very soon.

MySQL 5.0.37 Windows XP.

Thanks in advance.

Jonathan Trahair



--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong.  Some models are useful.




Replicating an existing table

2007-05-23 Thread J Trahair
Hi Everyone

Is there a way of replicating or cloning the structure of an existing table and 
giving it a different name? I have a table of items on sale in the shops, I 
want to create a table of items not yet in the shops but due to be introduced 
very soon.

MySQL 5.0.37 Windows XP.

Thanks in advance.

Jonathan Trahair

Re: Help with Migration/Foreign Key

2007-05-23 Thread Baron Schwartz

Hi Chris,

Chris Hoover wrote:

Here is the error from the show innodb status:

070523 15:39:39 Error in foreign key constraint of table mysql/#sql-d70_22:

   FOREIGN KEY (tab_id, class) REFERENCES system_tabs(tab_id, class):
Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types
in the table and the referenced table do not match for constraint.
Note that the internal storage type of ENUM and SET changed in
tables created with >= InnoDB-4.1.12, and such columns in old tables
cannot be referenced by such columns in new tables.
See
http://dev.mysql.com/doc/refman/5.0/en/innodb-foreign-key-constraints.html
for correct foreign key definition.


I don't understand this since the primary key for system_tabs is tab_id,
class and clinic_consent_form has the index fk_idx on tab_id, class.  What
is mysql complaining about?


I missed it the first time I looked, but your data types are different:


> CREATE TABLE `system_tabs` (
>  `tab_id` bigint(20) unsigned NOT NULL auto_increment,
>  `class` int(11) NOT NULL,



>  `tab_id` bigint(20) NOT NULL,
>  `class` int(11) NOT NULL,


Unsigned is a different data type.
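
A rough sketch of the kind of fix this implies, assuming the referencing column should become unsigned to match the parent (verify against the real data first):

ALTER TABLE clinic_consent_form MODIFY tab_id BIGINT(20) UNSIGNED NOT NULL;

ALTER TABLE clinic_consent_form
   ADD CONSTRAINT fk_clinic_c_ref_782_system_t
   FOREIGN KEY (tab_id, class) REFERENCES system_tabs(tab_id, class);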

Cheers
Baron




Re: Help with Migration/Foreign Key

2007-05-23 Thread Chris Hoover

Here is the error from the show innodb status:

070523 15:39:39 Error in foreign key constraint of table mysql/#sql-d70_22:

   FOREIGN KEY (tab_id, class) REFERENCES system_tabs(tab_id, class):
Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types
in the table and the referenced table do not match for constraint.
Note that the internal storage type of ENUM and SET changed in
tables created with >= InnoDB-4.1.12, and such columns in old tables
cannot be referenced by such columns in new tables.
See
http://dev.mysql.com/doc/refman/5.0/en/innodb-foreign-key-constraints.html
for correct foreign key definition.


I don't understand this since the primary key for system_tabs is tab_id,
class and clinic_consent_form has the index fk_idx on tab_id, class.  What
is mysql complaining about?

Chris

On 5/23/07, Baron Schwartz <[EMAIL PROTECTED]> wrote:


Hi Chris,

Chris Hoover wrote:
> I need some help.  I am under pressure from management to migrate an
> existing Sybase ASA database to MySQL for a proof of concept.
>
> I have done most of the work, but am having an issue creating a foreign
> key.  I keep getting the "Can't create table './mysqk/#sql-d70_1f.frm'
> (errno: 150)" error.  I have looked in the documentation, and everything
> seems to be fine.  Please advise on what is wrong.
>
> Thanks,
>
> Chris
>
> Here is the offending statement:
>
> ALTER TABLE clinic_consent_form
>ADD CONSTRAINT fk_clinic_c_ref_782_system_t
>FOREIGN KEY (tab_id, class) REFERENCES system_tabs(tab_id, class);
>
>
> Here are the two tables involved:
> CREATE TABLE `system_tabs` (
>  `tab_id` bigint(20) unsigned NOT NULL auto_increment,
>  `object_name` varchar(32) NOT NULL,
>  `description` varchar(64) NOT NULL,
>  `class` int(11) NOT NULL,
>  `audit_flag` smallint(6) NOT NULL default '0',
>  `active_ind` char(1) NOT NULL default 'Y',
>  PRIMARY KEY  (`tab_id`,`class`),
>  UNIQUE KEY `tab_id` (`tab_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
>
> CREATE TABLE `clinic_consent_form` (
>  `clinic_id` bigint(20) NOT NULL,
>  `tab_id` bigint(20) NOT NULL,
>  `class` int(11) NOT NULL,
>  PRIMARY KEY  (`clinic_id`,`tab_id`,`class`),
>  KEY `fk_idx` (`tab_id`,`class`),
>  CONSTRAINT `fk_clinic_c_ref_473_clinic` FOREIGN KEY (`clinic_id`)
> REFERENCES `clinic` (`clinic_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
>

Try running SHOW INNODB STATUS and examining the LAST FOREIGN KEY ERROR
section.

InnoDB is very picky about foreign keys.  Types must match exactly,
indexes have to be
created, etc.  I don't see anything wrong with the table definitions here,
but there
may be some data already in the tables that would violate a foreign key.

Cheers
Baron

--
Baron Schwartz
http://www.xaprb.com/



Re: Help with Migration/Foreign Key

2007-05-23 Thread Baron Schwartz

Hi Chris,

Chris Hoover wrote:

I need some help.  I am under pressure from management to migrate an
existing Sybase ASA database to MySQL for a proof of concept.

I have done most of the work, but am having an issue creating a foreign
key.  I keep getting the "Can't create table './mysqk/#sql-d70_1f.frm'
(errno: 150)" error.  I have looked in the documentation, and everything
seems to be fine.  Please advise on what is wrong.

Thanks,

Chris

Here is the offending statement:

ALTER TABLE clinic_consent_form
   ADD CONSTRAINT fk_clinic_c_ref_782_system_t
   FOREIGN KEY (tab_id, class) REFERENCES system_tabs(tab_id, class);


Here are the two tables involved:
CREATE TABLE `system_tabs` (
 `tab_id` bigint(20) unsigned NOT NULL auto_increment,
 `object_name` varchar(32) NOT NULL,
 `description` varchar(64) NOT NULL,
 `class` int(11) NOT NULL,
 `audit_flag` smallint(6) NOT NULL default '0',
 `active_ind` char(1) NOT NULL default 'Y',
 PRIMARY KEY  (`tab_id`,`class`),
 UNIQUE KEY `tab_id` (`tab_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `clinic_consent_form` (
 `clinic_id` bigint(20) NOT NULL,
 `tab_id` bigint(20) NOT NULL,
 `class` int(11) NOT NULL,
 PRIMARY KEY  (`clinic_id`,`tab_id`,`class`),
 KEY `fk_idx` (`tab_id`,`class`),
 CONSTRAINT `fk_clinic_c_ref_473_clinic` FOREIGN KEY (`clinic_id`)
REFERENCES `clinic` (`clinic_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;



Try running SHOW INNODB STATUS and examining the LAST FOREIGN KEY ERROR section.

InnoDB is very picky about foreign keys.  Types must match exactly, indexes have to be 
created, etc.  I don't see anything wrong with the table definitions here, but there 
may be some data already in the tables that would violate a foreign key.


Cheers
Baron

--
Baron Schwartz
http://www.xaprb.com/




Help with Migration/Foreign Key

2007-05-23 Thread Chris Hoover

I need some help.  I am under pressure from management to migrate an
existing Sybase ASA database to MySQL for a proof of concept.

I have done most of the work, but am having an issue creating a foreign
key.  I keep getting the "Can't create table './mysqk/#sql-d70_1f.frm'
(errno: 150)" error.  I have looked in the documentation, and everything
seems to be fine.  Please advise on what is wrong.

Thanks,

Chris

Here is the offending statement:

ALTER TABLE clinic_consent_form
   ADD CONSTRAINT fk_clinic_c_ref_782_system_t
   FOREIGN KEY (tab_id, class) REFERENCES system_tabs(tab_id, class);


Here are the two tables involved:
CREATE TABLE `system_tabs` (
 `tab_id` bigint(20) unsigned NOT NULL auto_increment,
 `object_name` varchar(32) NOT NULL,
 `description` varchar(64) NOT NULL,
 `class` int(11) NOT NULL,
 `audit_flag` smallint(6) NOT NULL default '0',
 `active_ind` char(1) NOT NULL default 'Y',
 PRIMARY KEY  (`tab_id`,`class`),
 UNIQUE KEY `tab_id` (`tab_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `clinic_consent_form` (
 `clinic_id` bigint(20) NOT NULL,
 `tab_id` bigint(20) NOT NULL,
 `class` int(11) NOT NULL,
 PRIMARY KEY  (`clinic_id`,`tab_id`,`class`),
 KEY `fk_idx` (`tab_id`,`class`),
 CONSTRAINT `fk_clinic_c_ref_473_clinic` FOREIGN KEY (`clinic_id`)
REFERENCES `clinic` (`clinic_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;


A question about how to debug the error "2013, 'Lost connection to MySQL server during query'"

2007-05-23 Thread aiton
I am running the 64 bit version of mysql server 5.1.18, and on the
client side I am running python with the mysqldb module.  While
running a very large insert query I get the following error :

2013, 'Lost connection to MySQL server during query'

I have --log-warnings set to 3, but I get no corresponding errors in
the error log.

The question is : how do I go about debugging this problem?





Re: design choice - quite many tables

2007-05-23 Thread Wm Mussatto
On Tue, May 22, 2007 23:29, Przemysław Klein said:
> Martijn Tonies wrote:
>> Hi,
>>
>>
>>> I'm working on quite big database. It consists of about 200 tables.
>>> Additionally, about 50 tables are added per year (because of annual data). It
>>> means every year new 50 tables will have to appear in application. And
>>> now I have a question. Should I use separate databases for "annual"
>>> data
>>> (i.e. db2006, db2007, etc...) (i don't need constraints on that
>>> (annual)
>>> tables) or put all the tables in one database? Is there any way to
>>> 'catalogue'/organize tables within one database (namespace/schema)?
>>>
>>> Any thoughts?
>>>
>>
>> Yes, in my opinion, you should use the same tables for each year. So no
>> "tables per year" or "databases per year", unless there is a very very
>> specific
>> reason for this.
>>
>> Having tables on a per-year basis also means you cannot do cross-year
>> queries easily and you have to adjust your queries according to the
>> current
>> year.
>>
>> Martijn Tonies
>> Database Workbench - development tool for MySQL, and more!
>> Upscene Productions
>> http://www.upscene.com
>> My thoughts:
>> http://blog.upscene.com/martijn/
>> Database development questions? Check the forum!
>> http://www.databasedevelopmentforum.com
>>
>>
>>
> The reason for distributing annual data into different tables is that they
> are NOT small. They store business documents in my company and can contain
> about 500k rows (and will grow each year). After the performance tests we
> did, it turned out that keeping those data in one table (with an additional
> column 'year') wouldn't meet our response time requirements.
> I realize that this approach is not proper from relational point of
> view, but it seems that we must separate annual data. Now, the question
> is: if we should keep them in one database (and be prepared for database
> with approx 500 tables after 3-4 years) or in multiple databases.
>
> Regards,
>
> --
> _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/
> Przemek Klein ([EMAIL PROTECTED])
>
Assuming you are using MyISAM table types, each table requires at least
three files on the disk.  If they are in one database they will all be in
one directory (how fast is your OS at finding the files in its directory
structure?).
Are they going to be opened at the same time (how many file handles can
you have open at once?)?
If separate databases, how do you intend to connect to them (a single
connection specifying the database, or multiple connections)?

My feeling is that you would not gain anything by having separate databases.
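
If the per-year MyISAM tables stay, one era-appropriate way to keep cross-year queries possible is a MERGE table over identically defined yearly tables. A rough sketch with hypothetical table names (all underlying tables must share the same column and index definitions):

CREATE TABLE documents_2006 (doc_id INT NOT NULL, created DATE, KEY (doc_id)) ENGINE=MyISAM;
CREATE TABLE documents_2007 LIKE documents_2006;

CREATE TABLE documents_all (doc_id INT NOT NULL, created DATE, KEY (doc_id))
  ENGINE=MERGE UNION=(documents_2006, documents_2007) INSERT_METHOD=LAST;

-- cross-year query through the MERGE table:
SELECT COUNT(*) FROM documents_all WHERE created >= '2006-07-01';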

Good luck.
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: corruption in db. myisam bad? innodb good?

2007-05-23 Thread Brent Baisley

You would need to find out the reason for the crash to prevent or minimize it.
The reason may be external to MySQL.
InnoDB can get really, really slow when tables get physically large if you
don't have a similar amount of RAM.
MyISAM doesn't support transactions, so no, that wouldn't help.

If you want to test InnoDB as the table type, set up a replication slave server with the tables as InnoDB instead. See how your
queries run against it. The table structure needs to match on a replication slave server, not the table type. You can also make the
slave the master in the event of a crash.
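
For the immediate cleanup of crashed MyISAM tables, the usual commands are CHECK TABLE and REPAIR TABLE, and a single table can be switched to InnoDB for testing with ALTER TABLE. A rough sketch with a hypothetical table name:

CHECK TABLE orders;                 -- reports corruption on a MyISAM table
REPAIR TABLE orders;                -- rebuilds the MyISAM data and index files
ALTER TABLE orders ENGINE=InnoDB;   -- convert a test/slave copy to InnoDB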


- Original Message - 
From: "tim h" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, May 22, 2007 2:56 PM
Subject: corruption in db. myisam bad? innodb good?



Hi. The database is MyISAM, 5.8 GB, 7 million records.
We recently had some corruption, I think due to a mysqld service failure.
10 tables were crashed.

Questions --
How can I prevent or minimize this?
Will switching to InnoDB help?
Will converting all my queries to transactions help?

Thanks.

--
Tim H
Berkeley, CA







Re: Confusing backslash issues with LIKE

2007-05-23 Thread Yves Goergen
On 23.05.2007 14:49 CE(S)T, Yves Goergen wrote:
> [1]
> http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html#operator_like

Never mind. This very page says why it is like it is. It's definitely
too hot in here today. :(
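
For the archives, the short version from that manual page: the backslash is special both to the string parser and to the LIKE pattern matcher, so matching a literal backslash takes four of them, or a different ESCAPE character. A small sketch:

select 'abc\\def' like '%\\\\%';          -- 1: each parser strips one level of escaping
select 'abc\\def' like '%\\%' escape '|'; -- 1: with '|' as escape char, \ is an ordinary pattern character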

-- 
Yves Goergen "LonelyPixel" <[EMAIL PROTECTED]>
Visit my web laboratory at http://beta.unclassified.de




Re: Database design

2007-05-23 Thread Brent Baisley
Normalization is about using IDs to minimize change, which also eliminates repetition. It's fine to have the color "red" repeated
throughout your table as long as it will never change. But if you suddenly have two shades of red, you'll need to update all the
records that say "red". If you used IDs, you just update the text associated with the ID, a single record. Nobody ever designs to
5th normal form (except as an exercise); you usually reach level 2 or 3.


When designing a database, you want to determine the various "objects" you need to hold and their attributes. One mistake is that
you are putting the garment attributes in a general description field, which is fine if you don't need to search on more than
free-form text. Sleeve, fabric, cuff, colors, etc. are all attributes of the garment. Since a garment can have multiple attributes,
the attributes should be in a separate table.


I would create an attributes table that contains all the attributes of the garment. Then you would be able to search the single 
table to find all garments made of a certain fabric with a certain cuff type. The garment attributes table contains codes that link 
to a description. The description could have multiple fields so you can handle conversions between different markets. For example, 
sizes vary between different geographic areas (my sneaker has sizes on the label for US, UK, EU and CM). Since the size is 
represented by an ID, you can search on any of the sizes the ID represents.
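
A rough sketch of what such an attribute structure might look like (hypothetical table and column names, not a prescription):

CREATE TABLE attributes (
  attr_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  attr_type VARCHAR(20) NOT NULL,   -- 'colour', 'size', 'fabric', 'cuff', ...
  attr_desc VARCHAR(50) NOT NULL
);

CREATE TABLE garment_attributes (
  garment_id INT NOT NULL,
  attr_id    INT UNSIGNED NOT NULL,
  PRIMARY KEY (garment_id, attr_id)
);

-- all garments made of a given fabric:
SELECT g.garment_code
FROM garments g
JOIN garment_attributes ga ON ga.garment_id = g.garment_id
JOIN attributes a ON a.attr_id = ga.attr_id
WHERE a.attr_type = 'fabric' AND a.attr_desc = 'cotton';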


- Original Message - 
From: "Officelink" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, May 23, 2007 8:29 AM
Subject: Database design


Hi everyone,

I'm trying to set up a database with information that will be used in a
garment slideshow in flash.

The information to be included as part of the slideshow would be:
code, optional title, description, colours, sizes, garment image, fabric
swatch image

Each clothing item to be included in the slideshow will belong to one of
eleven or so categories. Each of the categories will belong to one of two
category types.

I also planned to set up a simple CMS that would allow the information to be
added, edited and deleted from the database.

With the above goals in mind, I came up with two tables as follows:

GARMENTS TABLE
garment_id, int(11), not null, auto_increment, primary key
cat_id, int(10), unsigned, not null, default 0
garment_code, varchar(30), not null
garment_title, varchar(40), null
garment_desc, varchar(255), not null
garment_image, varchar(50), not null
garment_colour, varchar(50), not null
garment_swatch, varchar(50), null
garment_sizes, varchar(100), not null

CATEGORIES TABLE
cat_id, int(10), not null, auto_increment, primary key
cat_name, varchar(40), not null
cat_type, tinyint(4), not null, default 1

I was worried about repeating data in some of the columns, for example the
garment_desc column would have information about sleeve length, cuff type,
fabric, fabric composition etc. and I thought that all these areas could
possibly be broken up into separate tables, but I wasn't sure if it was
necessary. Also the colour and size columns would have a lot of repetitive
data.

Someone indicated that normalization is not about eliminating repetition,
it's about ensuring that the non-key attributes are functionally dependent
on the entire primary key, but then I read somewhere that you're supposed to
break down the information as far as possible to avoid redundancy so I'm a
bit confused. Or does it depend on the situation and what's required of the
database. I mean say the CMS needed to have more functionality than what I
indicated above - I mean say the client wanted to be able to generate
reports based on style information such as fabric composition or sleeve
style etc. - would this change the setup?

I wondered if someone could comment on the setup to see if I'm on the right
track here?

Appreciate any help.





TCPWrappers and database unavailibility

2007-05-23 Thread Tuc at T-B-O-H.NET
Hi,

Running 4.1.22, built from FreeBSD ports, on FreeBSD 5.5.

I run under TCP Wrappers. Even if one foreign machine attempts
to contact us on the TCP port, the database becomes incredibly sluggish
or unresponsive.

phpMyAdmin shows "MySQL said: #2003 - The server is not responding".
Is there something I can do to prevent this from causing all these problems?

Thanks, Tuc




geometry columns and srid

2007-05-23 Thread Kralidis,Tom [Burlington]

Hi,

I have tables into which the mysqlgisimport tool is used to upload spatial
data.  The current situation is that these tables have geometry but
without an SRID definition.

How does mysqlgisimport work w.r.t. SRID?  When trying 'select
srid(geo) from foo', the value returned is 0, as opposed to an EPSG
code.

Is there a way to force mysqlgisimport to set an SRID?  Also,
is there a way to set the SRID in the table after the data has been
uploaded?
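
For the second question, one possible after-the-fact approach is to rebuild each geometry with an explicit SRID (a rough sketch using the column and table names from the question; 4326 is only an example EPSG code):

UPDATE foo SET geo = GeomFromText(AsText(geo), 4326);
SELECT SRID(geo) FROM foo LIMIT 1;   -- should now report 4326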

Any advice is much appreciated.

Thanks

..Tom





Re: Help on selecting a View with 3 Billions rows !

2007-05-23 Thread Brent Baisley

You can use any language to automate it; a scripting language would probably
be the easiest (PHP, Perl, Ruby, Python, shell, etc.).

The table type doesn't matter too much. Although if the tables are static, I
would use MyISAM.

Spatial indexes come across as more for use in geometry, rather than distances between locations using long/lat. But all you're
trying to do is find distances on a sphere that happens to be called Earth, the points of which you refer to as long/lat
instead of x/y. It is geometry. You should be able to find a number of examples showing distances between zip codes.
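
A rough sketch of the spatial-index route (hypothetical column name `pt`; spatial indexes in MySQL of that era require MyISAM, and the bounding box only pre-filters, so the exact distance still needs the final check):

ALTER TABLE parks ADD pt POINT NOT NULL;
UPDATE parks SET pt = GeomFromText(CONCAT('POINT(', LNG, ' ', LAT, ')'));
ALTER TABLE parks ADD SPATIAL INDEX (pt);

-- candidate parks inside a placeholder bounding box, refined by the poster's own distance function:
SELECT p.NAME
FROM parks p
WHERE MBRContains(GeomFromText('POLYGON((-74 40, -74 41, -73 41, -73 40, -74 40))'), p.pt)
  AND Calc_Distance(40.5, -73.5, p.LAT, p.LNG) < 50;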


- Original Message - 
From: "Chris Prakoso" <[EMAIL PROTECTED]>

To: "Brent Baisley" <[EMAIL PROTECTED]>
Cc: 
Sent: Wednesday, May 23, 2007 9:41 AM
Subject: Re: Help on selecting a View with 3 Billions rows !



Brent,

Thanks for the reply.  I knew that MySQL has spatial index, although reading it quickly, I wasn't sure that this is for me.  But 
maybe I'll read it again in more details.


On your second suggestion, i.e. to do INSERT a bit at a time, how can I 
automate this ?

Also, does it matter if the tables are using MyISAM or InnoDB ?

Cheers,
Chris

- Original Message 
From: Brent Baisley <[EMAIL PROTECTED]>
To: Chris Prakoso <[EMAIL PROTECTED]>; mysql@lists.mysql.com
Sent: Wednesday, 23 May, 2007 1:30:50 PM
Subject: Re: Help on selecting a View with 3 Billions rows !

I think what you want to do is look into creating a spatial index on the raw data. Then just searching on that index will allow you
to judge distances between things. You won't need to create a table with every
single possible combination. There are quite a few
examples in the manual and online.

But, if you really want to go that route, do a little bit at a time instead of all at once. It will go quicker and you'll be able to
judge the progress. For example, join 1 record from one table with the 65K
records from the other and insert that into the
combination table. Then grab the second record, etc. Do that 54K times. You'll
be surprised at how quickly it goes.

- Original Message - 
From: "Chris Prakoso" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, May 22, 2007 6:43 PM
Subject: Help on selecting a View with 3 Billions rows !



Hi all,

I am having a big performance problem here, and I was wondering if
you could give me some advice.

I have 2 big static tables, one with 65K rows, the other with 54K rows.
I need to do a calculation for each combination of these two tables'
rows.

So what I did was to create a view like so:

 select
   `c`.`TRACT` AS `TRACT`,
   `c`.`LNG` AS `tlng`,
   `c`.`LAT` AS `tlat`,
   `p`.`NAME` AS `name`,
   `p`.`LNG` AS `lng`,
   `p`.`LAT` AS `lat`,
   `Calc_Distance`(`c`.`LAT`,`c`.`LNG`,`p`.`LAT`,`p`.`LNG`) AS `distance`
 from
   (`tracts` `c` join `parks` `p`);

This gives me a view with more than 3,500,000,000 rows!

Now, the second part of this exercise is to grab only the set of rows where the
distance is less than 50.

So, I thought I would create another view like so:

 select
   `t`.`TRACT` AS `TRACT`,
   `t`.`tlng` AS `tlng`,
   `t`.`tlat` AS `tlat`,
   `t`.`name` AS `name`,
   `t`.`lng` AS `lng`,
   `t`.`lat` AS `lat`,
   `t`.`distance` AS `distance`
 from
   `tractparkdistance` `t`
 where
   (`t`.`distance` < 50);

tractparkdistance is the name of the first view.

But opening this view takes 'a lot of time'!  I just couldn't wait for it.

So, I thought I would try to export this to an external file via
SELECT INTO, and re-import the resulting file back into a new table.

So I did like so:

select *
into outfile "park_distances"
from tractparkdistance
where distance < 50;

Running this statement took more than 12 hours, and it was still going
until I killed the process. So far it had produced an 800 MB file.

Moreover, I still need to do a mean calculation from that 'limited' set of
data, and still do more calculations.

Next, I tried INSERT INTO ... SELECT like this:

insert into park_distance_radius50s(tract,tlng,tlat,name,lng,lat,distance)
select tract,tlng,tlat,name,lng,lat,distance
from tractparkdistance
where distance < 50

This was running a very long time as well.

I think I'm out of my depth here.  Anybody have any idea on this?

Thanks very much in advance!

Regards,

Chris














Re: Help on selecting a View with 3 Billions rows !

2007-05-23 Thread Chris Prakoso
Brent,

Thanks for the reply.  I knew that MySQL has spatial indexes, although reading about them
quickly, I wasn't sure they were right for me.  But maybe I'll read it again in
more detail.

On your second suggestion, i.e. to do the INSERT a bit at a time, how can I
automate this?

Also, does it matter if the tables are using MyISAM or InnoDB?

Cheers,
Chris

- Original Message 
From: Brent Baisley <[EMAIL PROTECTED]>
To: Chris Prakoso <[EMAIL PROTECTED]>; mysql@lists.mysql.com
Sent: Wednesday, 23 May, 2007 1:30:50 PM
Subject: Re: Help on selecting a View with 3 Billions rows !

I think what you want to do is look into creating a spatial index on the raw
data. Then just searching on that index will allow you
to judge distances between things. You won't need to create a table with every
single possible combination. There are quite a few
examples in the manual and online.

But, if you really want to go that route, do a little bit at a time instead of
all at once. It will go quicker and you'll be able to
judge the progress. For example, join 1 record from one table with the 65K
records from the other and insert that into the
combination table. Then grab the second record, etc. Do that 54K times. You'll
be surprised at how quickly it goes.

- Original Message - 
From: "Chris Prakoso" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, May 22, 2007 6:43 PM
Subject: Help on selecting a View with 3 Billions rows !


> Hi all,
>
>
>
>
> I am having a big performance performance problem here, and I was wondering 
> if you could give me some advise here.
>
>
> I have 2 big static tables, one with 65K rows, the other with 54K rows.
> I need to do a calculation for each combination of these two table
> rows.
>
>
> So what I did was to create a view like so:
>
>
>
>
>  select
>
>
>`c`.`TRACT` AS `TRACT`,
>
>
>`c`.`LNG` AS `tlng`,
>
>
>`c`.`LAT` AS `tlat`,
>
>
>`p`.`NAME` AS `name`,
>
>
>`p`.`LNG` AS `lng`,
>
>
>`p`.`LAT` AS `lat`,
>
>
>`Calc_Distance`(`c`.`LAT`,`c`.`LNG`,`p`.`LAT`,`p`.`LNG`) AS `distance`
>
>
>  from
>
>
>(`tracts` `c` join `parks` `p`);
>
>
>
>
> This give me a view with more than 3,500,000,000  rows !
>
>
>
> Now, the second part of this exercise is to grab only a set or rows, where 
> the distance is less than 50.
>
>
> So, I thought I would create another view like so:
>
>
>
>
>  select
>
>
>`t`.`TRACT` AS `TRACT`,
>
>
>`t`.`tlng` AS `tlng`,
>
>
>`t`.`tlat` AS `tlat`,
>
>
>`t`.`name` AS `name`,
>
>
>`t`.`lng` AS `lng`,
>
>
>`t`.`lat` AS `lat`,
>
>
>`t`.`distance` AS `distance`
>
>
>  from
>
>
>`tractparkdistance` `t`
>
>
>  where
>
>
>(`t`.`distance` < 50);
>
>
>
>
> tractparkdisctance is the name of the view.
>
>
>
>
> But opening this view takes 'a lot of time' !  I just couldn't wait for it.
>
>
>
> So, I though I would try to export this to an external file via
> SELECT INTO, and re-import the resulting file back to a new table.
>
>
>
>
> So I did like so:
>
>
>
>
> select *
>
>
> into outfile "park_distances"
>
>
> from tractparkdistance
>
>
> where distance < 50;
>
>
>
> Running this statement took more than 12 hours, and still counting
> until I killed the process. So far it has produced an 800 MB file.
>
>
> Moreover, I still need to do a Mean calculation from that 'limited' set of 
> data, and still do more calculations.
>
>
> Next try, I using INSERT INTO SELECT like this:
>
> insert into park_distance_radius50s(tract,tlng,tlat,name,lng,lat,distance)
> select tract,tlng,tlat,name,lng,lat,distance
> from tractparkdistance
> where distance < 50
>
> This was running a very long time as well.
>
>
>
> I think I'm out of my depth here.  Anybody has any idea on this ?
>
>
>
>
> Thanks very much in advance !
>
>
>
>
> Regards,
>
>
> Chris
>
>
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 








Re: Database design

2007-05-23 Thread John Meyer
Officelink wrote:
> Hi everyone,
>
> I'm trying to set up a database with information that will be used in a
> garment slideshow in flash.
>
> The information to be included as part of the slideshow would be:
> code, optional title, description, colours, sizes, garment image, fabric
> swatch image
>
> Each clothing item to be included in the slideshow will belong to one of
> eleven or so categories. Each of the categories will belong to one of two
> category types.
>
> I also planned to set up a simple CMS that would allow the information to be
> added, edited and deleted from the database.
>
> With the above goals in mind, I came up with two tables as follows:
>
> GARMENTS TABLE
> garment_id, int(11), not null, auto_increment, primary key
> cat_id, int(10), unsigned, not null, default 0
> garment_code, varchar(30), not null
> garment_title, varchar(40), null
> garment_desc, varchar(255), not null
> garment_image, varchar(50), not null
> garment_colour, varchar(50), not null
> garment_swatch, varchar(50), null
> garment_sizes, varchar(100), not null
>  
> CATEGORIES TABLE
> cat_id, int(10), not null, auto_increment, primary key
> cat_name, varchar(40), not null
> cat_type, tinyint(4), not null, default 1
>
> I was worried about repeating data in some of the columns, for example the
> garment_desc column would have information about sleeve length, cuff type,
> fabric, fabric composition etc. and I thought that all these areas could
> possibly be broken up into separate tables, but I wasn't sure if it was
> necessary. Also the colour and size columns would have a lot of repetitive
> data.
>   

While normalization does have the goal of eliminating repetition, there
are other reasons.  Most notably, you don't want to introduce errors or
even differences into your database, such as
a person accidentally typing "eRd" instead of "Red".  You might, and I
emphasize the word "might", consider breaking color and size into two
different tables based upon the following:
1.  The possible set of "valid" answers.
2.  Whether that element will be used in any sort of grouping or
searching (are you able to search by color, for instance).

-- 
The NCP Revue -- http://www.ncprevue.com/blog





mysqldump - IF NOT EXISTS

2007-05-23 Thread Johannes Skov Frandsen
Hi,

I'm constructing a build script and would like to dump my dev db with
mysqldump so I can run a deploy script on the production server with the
updated db.

I would like to construct the script so that it updates existing
tables/fields/posts and adds new tables/fields/posts if they do not exist.

My buildscript constructs this  command (written in php):

$command = "mysqldump".
" --password=$pass".
" --user=$user".
" --skip-opt".
" --add-locks".
" --create-options".
" --disable-keys".
" --extended-insert".
" --lock-tables".
" --quick".
"
--result-file=".self::BUILD_PATH.'/'.self::PROJECT_NAME."/".$dbName."_dump.sql".
" --databases ".$dbName;
shell_exec($command);

and mysqldump generates the file without any problems... but:

I have noticed that when I dump from phpMyAdmin, 'IF NOT EXISTS' is
added to all create table statements. This does not
happen with the above command. It is added to the create database statement
though... is that good enough?

In the phpMyAdmin dump you can use update statements instead of insert
to populate the tables. Can this be done, and if so, how is it
accomplished with mysqldump?

I have read a lot about mysqldump at this address:
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html

but can't seem to find the answers for the above questions.

Any help or pointer greatly appreciated.

-- 

Regards / Venlig hilsen

Johannes Skov Frandsen

*Address:* Egelundsvej 18, DK-5260 Odense S
*Web:* www.omesc.com | *Email:* [EMAIL PROTECTED]





Confusing backslash issues with LIKE

2007-05-23 Thread Yves Goergen
Hello,

I'm having some trouble with the LIKE operator on MySQL 5.0. Here's my
transcript:

mysql> select 'abc\\def';
+---------+
| abc\def |
+---------+
| abc\def |
+---------+
1 row in set (0.00 sec)

mysql> select 'abc\\def' like '%\\%';
+------------------------+
| 'abc\\def' like '%\\%' |
+------------------------+
|                      0 |
+------------------------+
1 row in set (0.00 sec)

mysql> select 'abc\\def' like '%\\\\%';
+--------------------------+
| 'abc\\def' like '%\\\\%' |
+--------------------------+
|                        1 |
+--------------------------+
1 row in set (0.00 sec)

The last two show my problem. When I search for a backslash, I need to
escape it *twice*. Why is that? I can't see that from the manual [1].

[1]
http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html#operator_like

-- 
Yves Goergen "LonelyPixel" <[EMAIL PROTECTED]>
Visit my web laboratory at http://beta.unclassified.de




Database design

2007-05-23 Thread Officelink
Hi everyone,

I'm trying to set up a database with information that will be used in a
garment slideshow in flash.

The information to be included as part of the slideshow would be:
code, optional title, description, colours, sizes, garment image, fabric
swatch image

Each clothing item to be included in the slideshow will belong to one of
eleven or so categories. Each of the categories will belong to one of two
category types.

I also planned to set up a simple CMS that would allow the information to be
added, edited and deleted from the database.

With the above goals in mind, I came up with two tables as follows:

GARMENTS TABLE
garment_id, int(11), not null, auto_increment, primary key
cat_id, int(10), unsigned, not null, default 0
garment_code, varchar(30), not null
garment_title, varchar(40), null
garment_desc, varchar(255), not null
garment_image, varchar(50), not null
garment_colour, varchar(50), not null
garment_swatch, varchar(50), null
garment_sizes, varchar(100), not null
 
CATEGORIES TABLE
cat_id, int(10), not null, auto_increment, primary key
cat_name, varchar(40), not null
cat_type, tinyint(4), not null, default 1

I was worried about repeating data in some of the columns, for example the
garment_desc column would have information about sleeve length, cuff type,
fabric, fabric composition etc. and I thought that all these areas could
possibly be broken up into separate tables, but I wasn't sure if it was
necessary. Also the colour and size columns would have a lot of repetitive
data.

Someone indicated that normalization is not about eliminating repetition,
it's about ensuring that the non-key attributes are functionally dependent
on the entire primary key, but then I read somewhere that you're supposed to
break down the information as far as possible to avoid redundancy so I'm a
bit confused. Or does it depend on the situation and what's required of the
database. I mean say the CMS needed to have more functionality than what I
indicated above - I mean say the client wanted to be able to generate
reports based on style information such as fabric composition or sleeve
style etc. - would this change the setup?

I wondered if someone could comment on the setup to see if I'm on the right
track here?

Appreciate any help.


Re: Help on selecting a View with 3 Billions rows !

2007-05-23 Thread Brent Baisley
I think what you want to do is look into creating a spatial index on the raw data. Then just searching on that index will allow you
to judge distances between things. You won't need to create a table with every single possible combination. There are quite a few
examples in the manual and online.


But, if you really want to go that route, do a little bit at a time instead of all at once. It will go quicker and you'll be able to
judge the progress. For example, join 1 record from one table with the 65K records from the other and insert that into the
combination table. Then grab the second record, etc. Do that 54K times. You'll be surprised at how quickly it goes.


- Original Message - 
From: "Chris Prakoso" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, May 22, 2007 6:43 PM
Subject: Help on selecting a View with 3 Billions rows !



Hi all,

I am having a big performance problem here, and I was wondering if
you could give me some advice.

I have 2 big static tables, one with 65K rows, the other with 54K rows.
I need to do a calculation for each combination of these two tables'
rows.

So what I did was to create a view like so:

 select
   `c`.`TRACT` AS `TRACT`,
   `c`.`LNG` AS `tlng`,
   `c`.`LAT` AS `tlat`,
   `p`.`NAME` AS `name`,
   `p`.`LNG` AS `lng`,
   `p`.`LAT` AS `lat`,
   `Calc_Distance`(`c`.`LAT`,`c`.`LNG`,`p`.`LAT`,`p`.`LNG`) AS `distance`
 from
   (`tracts` `c` join `parks` `p`);

This gives me a view with more than 3,500,000,000 rows!

Now, the second part of this exercise is to grab only the set of rows where the
distance is less than 50.

So, I thought I would create another view like so:

 select
   `t`.`TRACT` AS `TRACT`,
   `t`.`tlng` AS `tlng`,
   `t`.`tlat` AS `tlat`,
   `t`.`name` AS `name`,
   `t`.`lng` AS `lng`,
   `t`.`lat` AS `lat`,
   `t`.`distance` AS `distance`
 from
   `tractparkdistance` `t`
 where
   (`t`.`distance` < 50);

tractparkdistance is the name of the first view.

But opening this view takes 'a lot of time'!  I just couldn't wait for it.

So, I thought I would try to export this to an external file via
SELECT INTO, and re-import the resulting file back into a new table.

So I did like so:

select *
into outfile "park_distances"
from tractparkdistance
where distance < 50;

Running this statement took more than 12 hours, and it was still going
until I killed the process. So far it had produced an 800 MB file.

Moreover, I still need to do a mean calculation from that 'limited' set of
data, and still do more calculations.

Next, I tried INSERT INTO ... SELECT like this:

insert into park_distance_radius50s(tract,tlng,tlat,name,lng,lat,distance)
select tract,tlng,tlat,name,lng,lat,distance
from tractparkdistance
where distance < 50

This was running a very long time as well.

I think I'm out of my depth here.  Anybody have any idea on this?

Thanks very much in advance!

Regards,

Chris









Re: mysqldump running out of memory

2007-05-23 Thread fredrik
> Fredrik Carlsson:
>>
>> Hi,
>>
>> Yes I'm using the -q option with mysqldump, but the result is the same.
>>
>> This is a replicated environment and the master is running FreeBSD and
>> the slave NetBSD and on the master which only has InnoDB tables there
>> is no problems to run a dump but the machine is to loaded so we can
>> not afford to run the dump there. The tables on the slave is mostly
>> Myisam, maybe there is some kind of memory buffer that I'm missing to
>> tune on NetBSD but i can't figure out what it can be, I've already
>> increased the ulimit values for the session running the dump.
>>
>> // Fredrik
>>
>>
>> Atle Veka wrote:
>>> Have you tried this flag?
>>> -q, --quick Don't buffer query, dump directly to stdout.
>>>
>>>
>>>
>>> On Sun, 29 Apr 2007, Fredrik Carlsson wrote:
>>>
 Hi,

 I have a problem with mysqldump, its exiting with the message

 mysqldump: Error 5: Out of memory (Needed 22042208 bytes) when dumping
 table `theTable` at row: 2990911

 I have searched the archives and tried what people suggested but
 nothing
 seems to work, I'm dumping using the following command:
 /usr/pkg/bin/mysqldump -h localhost -B theDatabase --skip-opt
 --max_allowed_packet=1024M -q

 Any tips on how to get the dump running? the dump should be about
 15-20GB in size the fully dumped, but I never seems to get there.

 // Fredrik Carlsson




>>>
>>>
>>
>>
> how much free memory?
>


The machine has 4 GB of memory and about 1.5 GB is free, so it is not
running out of memory.

// Fredrik



