Fulltext Index / Index .. both?

2007-09-07 Thread Justin
I want to add a new field to my table that I will be grouping by and 
searching with both full-text hits and plain LIKEs.. 

I know I'll need a FULLTEXT index, but will that full-text index also index 
like a normal index will? Or should I also add a regular index too..  

thanks.
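For reference, a FULLTEXT index only serves MATCH ... AGAINST queries; GROUP BY and LIKE lookups still want a regular index, so a column often ends up with both. A sketch with made-up table and column names:

```sql
-- Hypothetical table/column; both index types can coexist on one column.
ALTER TABLE posts ADD FULLTEXT INDEX ft_tag (tag);  -- used by MATCH ... AGAINST
ALTER TABLE posts ADD INDEX idx_tag (tag);          -- used by GROUP BY / LIKE 'abc%'

SELECT * FROM posts WHERE MATCH(tag) AGAINST('mysql');  -- FULLTEXT index
SELECT tag, COUNT(*) FROM posts GROUP BY tag;           -- regular index
SELECT * FROM posts WHERE tag LIKE 'my%';               -- regular (prefix) index
```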

Re: Delete query question

2007-09-05 Thread Justin

try

SELECT * from geno_260k WHERE ident IN (SELECT ident FROM geno_260k WHERE a1=0
GROUP BY ident HAVING count(a1) > 25);

This will show you what you're deleting first.. then if that looks good, do


DELETE FROM geno_260k WHERE ident IN (SELECT ident FROM geno_260k WHERE a1=0
GROUP BY ident HAVING count(a1) > 25);

(note the change in case is just my way of seeing things.. it's not 
necessary that I know of)
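One caveat: on MySQL 5.0 the DELETE ... IN (SELECT ... FROM geno_260k ...) form can still hit the same "can't select and delete from the same table" error (ERROR 1093). The usual workaround, sketched here, wraps the subquery in a derived table so the ident list is materialized first:

```sql
DELETE FROM geno_260k
WHERE ident IN (
    -- The extra derived-table layer (AS t) forces MySQL to materialize
    -- the ident list before the DELETE touches geno_260k.
    SELECT ident FROM (
        SELECT ident
        FROM geno_260k
        WHERE a1 = 0
        GROUP BY ident
        HAVING COUNT(a1) > 25
    ) AS t
);
```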



- Original Message - 
From: Olaf Stein [EMAIL PROTECTED]

To: MySql mysql@lists.mysql.com
Sent: Wednesday, September 05, 2007 9:35 AM
Subject: Delete query question



Hey all

I am stuck here (thinking wise) and need some ideas:

I have this table:

CREATE TABLE `geno_260k` (
 `genotype_id` int(10) unsigned NOT NULL auto_increment,
 `ident` int(10) unsigned NOT NULL,
 `marker_id` int(10) unsigned NOT NULL,
 `a1` tinyint(3) unsigned NOT NULL,
 `a2` tinyint(3) unsigned NOT NULL default '0',
 PRIMARY KEY  (`genotype_id`),
 KEY `ident` (`ident`),
 KEY `marker_id` (`marker_id`),
 CONSTRAINT `geno_260k_ibfk_2` FOREIGN KEY (`marker_id`) REFERENCES
`markers` (`marker_id`),
 CONSTRAINT `geno_260k_ibfk_1` FOREIGN KEY (`ident`) REFERENCES
`individual` (`ident`)
) ENGINE=InnoDB AUTO_INCREMENT=91318273 DEFAULT CHARSET=utf8


And with the following query I get 159 idents back:

select ident from geno_260k where a1=0 group by ident having
count(a1) > 25;

I want to delete all records containing those idents (about 26 per ident,
so 159*26).
So I thought

delete from geno_260k where ident=(select ident from geno_260k where a1=0
group by ident having count(a1) > 25);

But MySQL cannot select and delete from the same table.

Any ideas?

Thanks
Olaf



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








Re: servers full potential / FT searches locking tables

2007-09-04 Thread Justin
Figured it out at 3am this morning: while I was sleeping I remembered there 
was an rsync on the databases every 10 minutes to replicate the files across 
the network.


I killed the rsync.. and all has been flawless!  =)

gotta love the dream world. answers are always there!



- Original Message - 
From: Baron Schwartz [EMAIL PROTECTED]

To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, September 03, 2007 4:42 PM
Subject: Re: servers full potential / FT searches locking tables



Justin wrote:

lockup just happened again.. here's an innodb status.


InnoDB status will be basically useless, as full-text is only applicable 
to MyISAM, and indeed your status output shows only one transaction is 
running (the one running 'show innodb status') and InnoDB has done zero 
work since the server was started (look in the ROW OPERATIONS section).


So your server isn't using InnoDB for any tables (at least none you've 
queried since starting the server).


But you're still allocating some memory and other resources to it.  If I 
were you I'd put the following in your /etc/my.cnf just to ensure InnoDB 
isn't interfering or complicating anything:


skip_innodb

Cheers
Baron









Memory Issue would someone confirm

2007-09-04 Thread Justin
alright.. after some testing this afternoon I'm beginning to wonder if this 
is a memory issue.. here's what the test was..


I did an INSERT ... SELECT from a table that has approx 500,000 rows, where the 
table data is about 1GB in size.. the query was in the "Repair by sorting" 
phase when all of a sudden all my queries started to show up with a 
state of Locked.


As soon as I killed the INSERT ... SELECT query, all queries ran fine.

Any insight on how I can prevent this would be much appreciated.

thanks

Ju.






Re: Memory Issue would someone confirm

2007-09-04 Thread Justin

I just changed to these values


[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout=30
default-character-set=utf8
max_allowed_packet = 14M   # lowered from 3000M
max_connections = 600      # lowered from 3000
ft_min_word_len = 3

key_buffer_size=2500M

now looking into table_cache.
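If table_cache turns out to be too small, the usual symptom is Opened_tables climbing steadily. A hedged fragment for the same [mysqld] section (the value is illustrative, not a recommendation):

```ini
# Compare SHOW GLOBAL STATUS LIKE 'Opened_tables' before and after:
# if it keeps growing under load, the table cache is being thrashed.
table_cache = 512
```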


- Original Message - 
From: Justin [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, September 04, 2007 1:46 PM
Subject: Memory Issue would someone confirm


alright.. after some testing this afternoon I'm beginning to wonder if 
this is a memory issue.. here's what the test was..


I did an INSERT ... SELECT from a table that has approx 500,000 rows, where the 
table data is about 1GB in size.. the query was in the "Repair by sorting" 
phase when all of a sudden all my queries started to show up with 
a state of Locked.


As soon as I killed the Insert select query all queries ran fine.

Any insight on how I can prevent this would be much appreciated.

thanks

Ju.











Re: servers full potential / FT searches locking tables

2007-09-03 Thread Justin

|
++
1 row in set, 1 warning (0.06 sec)




- Original Message - 
From: Justin [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Friday, August 31, 2007 4:28 PM
Subject: Re: servers full potential / FT searches locking tables



Alright.. I think I see what is happening after this latest lockup..

here's what I think is happening..

When a REPLACE INTO query locks a table for a few seconds there are a boatload 
of connections to the db, and then when the table is unlocked the 
connections start to filter through, and usually they all finish and 
de-queue nicely, but this last time it seemed there were 400-500 constant 
connections never actually going away.. the query itself finished, but 
there was one right behind it to take its place..


Almost like it's giving itself a DoS..  Are there any settings I can 
adjust on the server to help with this? Or would it be more on the code 
side?


As always, when I restart the MySQL instance everything goes back to running 
smoothly, which makes me wonder if something in the MySQL config is lagging 
for some reason.


thanks.


- Original Message - 
From: Michael Dykman [EMAIL PROTECTED]

To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 1:31 PM
Subject: Re: servers full potential / FT searches locking tables


No, I'm afraid not.  32 bit architectures have a theoretical limit of
4G of memory space for the entire application: in actual practice, for
a variety of reasons too complex to go into here (and are well
documented elsewhere) your key buffer should be limited to around 2.5G
max, and this is assuming a pure MyISAM implementation.  There simply
is no way a 32 bit build can make use of all that RAM, regardless of
OS.

- michael dykman


On 8/28/07, Justin [EMAIL PROTECTED] wrote:

32bit, but I have all available memory..

MemTotal:  8179612 kB
MemFree: 43684 kB

on the box.   I think the 4gb is only windows.

All my tables are in myisam

so if I was to set
key_buffer_size=5500M

That'd be acceptable?

- Original Message -
From: Mathieu Bruneau [EMAIL PROTECTED]
To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 12:51 AM
Subject: Re: servers full potential / FT searches locking tables


 Your settings don't seem very optimized.

 So here first question, do you use 32bits or 64 bits platform? If you 
 have
 64 bits platform with 64 bits mysql and os you can boost most the 
 settings

 to use almost the 8G of ram you have on the server. If you are using
 32bits you will have to do some calculation so you don't go over ~2.6G
 (why not 4Gb?, go read on that on the net)

 So the 2 most important settings are:
 key_buffer_size (mainly myisam table)
 and/or
 innodb_buffer_pool_size (innodb table)

 Depending if you're using more innodb or myisam (or a mix) you'll tweak
 those parameters differently, it's usually however not recommended to 
 go

 over 4Gb for the key_buffer_size. MyIsam only stores the key into that
 buffer, so you don't have much index, not worth taking it too big for 
 no
 reason. Innodb however can cache data as well, and will benefit from 
 the

 biggest value possible.

 The server generate statistic that you can look to know the effect of
 that. If you are using phpmyadmin in the variables and status part you 
 can

 see the index usage to guide you.


 You can have a look at the different my.cnf that comes with mysql
 distribution they put comment in there with interesting value for 
 thumbs

 rule. Here's the excerpt for key_buffer_size and innodb_buffer_pool_size:
 # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
 # Do not set it larger than 30% of your available memory, as some 
 memory

 # is also required by the OS to cache rows. Even if you're not using
 # MyISAM tables, you should still set it to 8-64M as it will also be
 # used for internal temporary disk tables.
 key_buffer_size=2G

 # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
 # row data. The bigger you set this the less disk I/O is needed to
 # access data in tables. On a dedicated database server you may set 
 this
 # parameter up to 80% of the machine physical memory size. Do not set 
 it

 # too large, though, because competition of the physical memory may
 # cause paging in the operating system.  Note that on 32bit systems you
 # might be limited to 2-3.5G of user level memory per process, so do 
 not

 # set it too high.
 innodb_buffer_pool_size=2G

 Regards,
 --
 Mathieu Bruneau
 aka ROunofF

 ===
 GPG keys available @ http://rounoff.darktech.org

 Justin wrote:
 Ok.. Straight to the point.. Here is what I currently have.

 MySQL Ver 14.12 Distrib 5.0.27
 RHEL vs 5
 584GB Raid 5 storage
 8GB of RAM
 and Dual 5130

Re: servers full potential / FT searches locking tables

2007-08-31 Thread Justin

Alright.. I think I see what is happening after this latest lockup..

here's what I think is happening..

When a REPLACE INTO query locks a table for a few seconds there are a boatload 
of connections to the db, and then when the table is unlocked the 
connections start to filter through, and usually they all finish and de-queue 
nicely, but this last time it seemed there were 400-500 constant connections 
never actually going away.. the query itself finished, but there was one 
right behind it to take its place..


Almost like it's giving itself a DoS..  Are there any settings I can 
adjust on the server to help with this? Or would it be more on the code 
side?


As always, when I restart the MySQL instance everything goes back to running 
smoothly, which makes me wonder if something in the MySQL config is lagging for 
some reason.


thanks.


- Original Message - 
From: Michael Dykman [EMAIL PROTECTED]

To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 1:31 PM
Subject: Re: servers full potential / FT searches locking tables


No, I'm afraid not.  32 bit architectures have a theoretical limit of
4G of memory space for the entire application: in actual practice, for
a variety of reasons too complex to go into here (and are well
documented elsewhere) your key buffer should be limited to around 2.5G
max, and this is assuming a pure MyISAM implementation.  There simply
is no way a 32 bit build can make use of all that RAM, regardless of
OS.

- michael dykman


On 8/28/07, Justin [EMAIL PROTECTED] wrote:

32bit, but I have all available memory..

MemTotal:  8179612 kB
MemFree: 43684 kB

on the box.   I think the 4gb is only windows.

All my tables are in myisam

so if I was to set
key_buffer_size=5500M

That'd be acceptable?

- Original Message -
From: Mathieu Bruneau [EMAIL PROTECTED]
To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 12:51 AM
Subject: Re: servers full potential / FT searches locking tables


 Your settings don't seem very optimized.

 So here first question, do you use 32bits or 64 bits platform? If you 
 have
 64 bits platform with 64 bits mysql and os you can boost most the 
 settings

 to use almost the 8G of ram you have on the server. If you are using
 32bits you will have to do some calculation so you don't go over ~2.6G
 (why not 4Gb?, go read on that on the net)

 So the 2 most important settings are:
 key_buffer_size (mainly myisam table)
 and/or
 innodb_buffer_pool_size (innodb table)

 Depending if you're using more innodb or myisam (or a mix) you'll tweak
 those parameters differently, it's usually however not recommended to go
 over 4Gb for the key_buffer_size. MyIsam only stores the key into that
 buffer, so you don't have much index, not worth taking it too big for no
 reason. Innodb however can cache data as well, and will benefit from the
 biggest value possible.

 The server generate statistic that you can look to know the effect of
 that. If you are using phpmyadmin in the variables and status part you 
 can

 see the index usage to guide you.


 You can have a look at the different my.cnf that comes with mysql
 distribution they put comment in there with interesting value for thumbs
 rule. Here's the excerpt for key_buffer_size and innodb_buffer_pool_size:
 # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
 # Do not set it larger than 30% of your available memory, as some memory
 # is also required by the OS to cache rows. Even if you're not using
 # MyISAM tables, you should still set it to 8-64M as it will also be
 # used for internal temporary disk tables.
 key_buffer_size=2G

 # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
 # row data. The bigger you set this the less disk I/O is needed to
 # access data in tables. On a dedicated database server you may set this
 # parameter up to 80% of the machine physical memory size. Do not set it
 # too large, though, because competition of the physical memory may
 # cause paging in the operating system.  Note that on 32bit systems you
 # might be limited to 2-3.5G of user level memory per process, so do not
 # set it too high.
 innodb_buffer_pool_size=2G

 Regards,
 --
 Mathieu Bruneau
 aka ROunofF

 ===
 GPG keys available @ http://rounoff.darktech.org

 Justin wrote:
 Ok.. Straight to the point.. Here is what I currently have.

 MySQL Ver 14.12 Distrib 5.0.27
 RHEL vs 5
 584GB Raid 5 storage
 8GB of RAM
 and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

 what my question is.. is am I utilizing the servers potential with the
 following as my settings.  The server is a dedicated MySQL server so I
 want all power to go to the server. It just seems to be laggy at times.
 And I want to be sure I've optimized to the fullest potential

 My biggest issue is with FT searches. Tables get locked during larger
 queries and I can't select anything when that happens. Is there any way

Re: servers full potential / FT searches locking tables

2007-08-28 Thread Justin

32bit, but I have all available memory..

MemTotal:  8179612 kB
MemFree: 43684 kB

on the box.   I think the 4GB limit is only on Windows.

All my tables are in myisam

so if I was to set
key_buffer_size=5500M

That'd be acceptable?

- Original Message - 
From: Mathieu Bruneau [EMAIL PROTECTED]

To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 12:51 AM
Subject: Re: servers full potential / FT searches locking tables



Your settings don't seem very optimized.

So, first question: are you using a 32-bit or a 64-bit platform? If you have a 
64-bit platform with 64-bit MySQL and OS, you can raise most of the settings 
to use almost all of the 8G of RAM you have on the server. If you are using 
32-bit you will have to do some calculation so you don't go over ~2.6G 
(why not 4GB? go read up on that on the net)


So the 2 most important settings are:
key_buffer_size (mainly MyISAM tables)
and/or
innodb_buffer_pool_size (InnoDB tables)

Depending on whether you're using more InnoDB or MyISAM (or a mix) you'll tweak 
those parameters differently; it's usually not recommended to go over 4GB for 
key_buffer_size. MyISAM only stores keys in that buffer, so if you don't have 
many indexes it's not worth making it too big for no reason. InnoDB, however, 
can cache data as well, and will benefit from the biggest value possible.


The server generates statistics that you can look at to see the effect of 
that. If you are using phpMyAdmin, the variables and status pages show the 
index usage to guide you.



You can have a look at the different my.cnf files that come with the MySQL 
distribution; they put comments in there with interesting rule-of-thumb 
values. Here's the excerpt for key_buffer_size and innodb_buffer_pool_size:

# Size of the Key Buffer, used to cache index blocks for MyISAM tables.
# Do not set it larger than 30% of your available memory, as some memory
# is also required by the OS to cache rows. Even if you're not using
# MyISAM tables, you should still set it to 8-64M as it will also be
# used for internal temporary disk tables.
key_buffer_size=2G

# InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
# row data. The bigger you set this the less disk I/O is needed to
# access data in tables. On a dedicated database server you may set this
# parameter up to 80% of the machine physical memory size. Do not set it
# too large, though, because competition of the physical memory may
# cause paging in the operating system.  Note that on 32bit systems you
# might be limited to 2-3.5G of user level memory per process, so do not
# set it too high.
innodb_buffer_pool_size=2G

Regards,
--
Mathieu Bruneau
aka ROunofF

===
GPG keys available @ http://rounoff.darktech.org

Justin wrote:

Ok.. Straight to the point.. Here is what I currently have.

MySQL Ver 14.12 Distrib 5.0.27
RHEL vs 5
584GB Raid 5 storage
8GB of RAM
and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

what my question is.. is am I utilizing the servers potential with the 
following as my settings.  The server is a dedicated MySQL server so I 
want all power to go to the server. It just seems to be laggy at times. 
And I want to be sure I've optimized to the fullest potential


My biggest issue is with FT searches. Tables get locked during larger 
queries and I can't select anything when that happens. Is there any way 
not to lock the tables on a Full Text search? (does that make sense?)


thanks again for any insight

Justin.

Here's a dump of the my.cnf and the phpmyadmin dump of vars.

/etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout=60
default-character-set=utf8
max_allowed_packet = 3000M
max_connections = 5000
ft_min_word_len=3

server-id=1
log-error = /var/log/mysql/error.log
expire_logs_days = 3


# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=0

[mysql.server]
user=mysql

[mysqld_safe]
err-log=/var/log/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


auto increment increment   1
auto increment offset  1
automatic sp privileges  ON
back log  50
basedir  /
binlog cache size  32,768
bulk insert buffer size  8,388,608
character set client  utf8
character set connection  utf8
character set database  utf8
character set filesystem  binary
character set results  utf8
character set server  utf8
character set system  utf8
character sets dir  /usr/share/mysql/charsets/
collation connection  utf8_general_ci
collation database  utf8_general_ci
collation server  utf8_general_ci
completion type  0
concurrent insert  1
connect timeout  5
datadir  /var/lib/mysql/
date format  %Y-%m-%d
datetime format  %Y-%m-%d %H:%i:%s
default week format  0
delay key write  ON
delayed insert limit  100
delayed insert timeout  300
delayed queue size  1,000
div precision increment  4
engine condition pushdown  OFF
expire logs days  3
flush  OFF
flush

Re: servers full potential / FT searches locking tables

2007-08-28 Thread Justin
heh.. ok, I'll throw that at my system tech and see what it's going to 
take to get a 64-bit version of the OS.

for now I'll limit it to 2.5G

thanks michael.


- Original Message - 
From: Michael Dykman [EMAIL PROTECTED]

To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 1:31 PM
Subject: Re: servers full potential / FT searches locking tables


No, I'm afraid not.  32 bit architectures have a theoretical limit of
4G of memory space for the entire application: in actual practice, for
a variety of reasons too complex to go into here (and are well
documented elsewhere) your key buffer should be limited to around 2.5G
max, and this is assuming a pure MyISAM implementation.  There simply
is no way a 32 bit build can make use of all that RAM, regardless of
OS.

- michael dykman


On 8/28/07, Justin [EMAIL PROTECTED] wrote:

32bit, but I have all available memory..

MemTotal:  8179612 kB
MemFree: 43684 kB

on the box.   I think the 4gb is only windows.

All my tables are in myisam

so if I was to set
key_buffer_size=5500M

That'd be acceptable?

- Original Message -
From: Mathieu Bruneau [EMAIL PROTECTED]
To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 12:51 AM
Subject: Re: servers full potential / FT searches locking tables


 Your settings don't seem very optimized.

 So here first question, do you use 32bits or 64 bits platform? If you 
 have
 64 bits platform with 64 bits mysql and os you can boost most the 
 settings

 to use almost the 8G of ram you have on the server. If you are using
 32bits you will have to do some calculation so you don't go over ~2.6G
 (why not 4Gb?, go read on that on the net)

 So the 2 most important settings are:
 key_buffer_size (mainly myisam table)
 and/or
 innodb_buffer_pool_size (innodb table)

 Depending if you're using more innodb or myisam (or a mix) you'll tweak
 those parameters differently, it's usually however not recommended to go
 over 4Gb for the key_buffer_size. MyIsam only stores the key into that
 buffer, so you don't have much index, not worth taking it too big for no
 reason. Innodb however can cache data as well, and will benefit from the
 biggest value possible.

 The server generate statistic that you can look to know the effect of
 that. If you are using phpmyadmin in the variables and status part you 
 can

 see the index usage to guide you.


 You can have a look at the different my.cnf that comes with mysql
 distribution they put comment in there with interesting value for thumbs
 rule. Here's the excerpt for key_buffer_size and innodb_buffer_pool_size:
 # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
 # Do not set it larger than 30% of your available memory, as some memory
 # is also required by the OS to cache rows. Even if you're not using
 # MyISAM tables, you should still set it to 8-64M as it will also be
 # used for internal temporary disk tables.
 key_buffer_size=2G

 # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
 # row data. The bigger you set this the less disk I/O is needed to
 # access data in tables. On a dedicated database server you may set this
 # parameter up to 80% of the machine physical memory size. Do not set it
 # too large, though, because competition of the physical memory may
 # cause paging in the operating system.  Note that on 32bit systems you
 # might be limited to 2-3.5G of user level memory per process, so do not
 # set it too high.
 innodb_buffer_pool_size=2G

 Regards,
 --
 Mathieu Bruneau
 aka ROunofF

 ===
 GPG keys available @ http://rounoff.darktech.org

 Justin wrote:
 Ok.. Straight to the point.. Here is what I currently have.

 MySQL Ver 14.12 Distrib 5.0.27
 RHEL vs 5
 584GB Raid 5 storage
 8GB of RAM
 and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

 what my question is.. is am I utilizing the servers potential with the
 following as my settings.  The server is a dedicated MySQL server so I
 want all power to go to the server. It just seems to be laggy at times.
 And I want to be sure I've optimized to the fullest potential

 My biggest issue is with FT searches. Tables get locked during larger
 queries and I can't select anything when that happens. Is there any way
 not to lock the tables on a Full Text search? (does that make sense?)

 thanks again for any insight

 Justin.

 Here's a dump of the my.cnf and the phpmyadmin dump of vars.
 
 /etc/my.cnf

 [mysqld]
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 wait_timeout=60
 default-character-set=utf8
 max_allowed_packet = 3000M
 max_connections = 5000
 ft_min_word_len=3

 server-id=1
 log-error = /var/log/mysql/error.log
 expire_logs_days = 3


 # Default to using old password format for compatibility with mysql 3.x
 # clients (those using the mysqlclient10 compatibility package).
 old_passwords=0

 [mysql.server]
 user=mysql

 [mysqld_safe]
 err-log=/var/log/mysql/mysqld.log
 pid

servers full potential / FT searches locking tables

2007-08-27 Thread Justin

Ok.. Straight to the point.. Here is what I currently have.

MySQL Ver 14.12 Distrib 5.0.27
RHEL vs 5
584GB Raid 5 storage
8GB of RAM
and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

My question is: am I utilizing the server's potential with the following 
settings?  The server is a dedicated MySQL server, so I want all its power 
to go to MySQL. It just seems to be laggy at times, and I want to be sure 
I've optimized to the fullest potential.


My biggest issue is with FT searches. Tables get locked during larger 
queries and I can't select anything when that happens. Is there any way not 
to lock the tables on a Full Text search? (does that make sense?)


thanks again for any insight

Justin.

Here's a dump of the my.cnf and the phpmyadmin dump of vars.

/etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout=60
default-character-set=utf8
max_allowed_packet = 3000M
max_connections = 5000
ft_min_word_len=3

server-id=1
log-error = /var/log/mysql/error.log
expire_logs_days = 3


# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=0

[mysql.server]
user=mysql

[mysqld_safe]
err-log=/var/log/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


auto increment increment   1
auto increment offset  1
automatic sp privileges  ON
back log  50
basedir  /
binlog cache size  32,768
bulk insert buffer size  8,388,608
character set client  utf8
character set connection  utf8
character set database  utf8
character set filesystem  binary
character set results  utf8
character set server  utf8
character set system  utf8
character sets dir  /usr/share/mysql/charsets/
collation connection  utf8_general_ci
collation database  utf8_general_ci
collation server  utf8_general_ci
completion type  0
concurrent insert  1
connect timeout  5
datadir  /var/lib/mysql/
date format  %Y-%m-%d
datetime format  %Y-%m-%d %H:%i:%s
default week format  0
delay key write  ON
delayed insert limit  100
delayed insert timeout  300
delayed queue size  1,000
div precision increment  4
engine condition pushdown  OFF
expire logs days  3
flush  OFF
flush time  0
ft boolean syntax  + -><()~*:""&|
ft max word len  84
ft min word len  3
ft query expansion limit  20
ft stopword file  (built-in)
group concat max len  1,024
have archive  YES
have bdb  NO
have blackhole engine  NO
have compress  YES
have crypt  YES
have csv  NO
have dynamic loading  YES
have example engine  NO
have federated engine  NO
have geometry  YES
have innodb  YES
have isam  NO
have merge engine  YES
have ndbcluster  NO
have openssl  DISABLED
have query cache  YES
have raid  NO
have rtree keys  YES
have symlink  YES
init connect
init file
init slave
innodb additional mem pool size  1,048,576
innodb autoextend increment  8
innodb buffer pool awe mem mb  0
innodb buffer pool size  8,388,608
innodb checksums  ON
innodb commit concurrency  0
innodb concurrency tickets  500
innodb data file path  ibdata1:10M:autoextend
innodb data home dir
innodb doublewrite  ON
innodb fast shutdown  1
innodb file io threads  4
innodb file per table  OFF
innodb flush log at trx commit  1
innodb flush method
innodb force recovery  0
innodb lock wait timeout  50
innodb locks unsafe for binlog  OFF
innodb log arch dir
innodb log archive  OFF
innodb log buffer size  1,048,576
innodb log file size  5,242,880
innodb log files in group  2
innodb log group home dir  ./
innodb max dirty pages pct  90
innodb max purge lag  0
innodb mirrored log groups  1
innodb open files  300
innodb support xa  ON
innodb sync spin loops  20
innodb table locks  ON
innodb thread concurrency  8
innodb thread sleep delay  10,000
interactive timeout  28,800
join buffer size  131,072
key buffer size  8,388,600
key cache age threshold  300
key cache block size  1,024
key cache division limit  100
language  /usr/share/mysql/english/
large files support  ON
large page size  0
large pages  OFF
lc time names  en_US
license  GPL
local infile  ON
locked in memory  OFF
log  OFF
log bin  OFF
log bin trust function creators  OFF
log error  /var/log/mysql/error.log
log queries not using indexes  OFF
log slave updates  OFF
log slow queries  OFF
log warnings  1
long query time  10
low priority updates  OFF
lower case file system  OFF
lower case table names  0
max allowed packet  1,073,740,800
max binlog cache size  4,294,967,295
max binlog size  1,073,741,824
max connect errors  10
max connections  5,000
max delayed threads  20
max error count  64
max heap table size  16,777,216
max insert delayed threads  20
max join size  18446744073709551615
max length for sort data  1,024
max prepared stmt count  16,382
max relay log size  0
max seeks for key  4,294,967,295
max sort length  1,024
max sp recursion depth  0
max tmp tables  32
max user connections  0
max write lock count  4,294,967,295
multi range count  256
myisam data pointer size  6
myisam max sort file size  2,147,483,647

Re: servers full potential / FT searches locking tables

2007-08-27 Thread Justin
sorry..  you're right.. that came out wrong.. I actually shouldn't even have 
put that in there like that..


The locking is some other issue; I can't for the life of me remember what the 
query was.. I know it wasn't an UPDATE, but it was copying to a tmp table and 
then sorting the result, which was locking the table, from the process I was 
looking at.   I'll keep an eye on it to see if I can replicate it again..  I 
just want to make sure I've got the server configured right to utilize 
everything I have.


thanks for the response




- Original Message - 
From: Jay Pipes [EMAIL PROTECTED]

To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, August 27, 2007 2:26 PM
Subject: Re: servers full potential / FT searches locking tables


SELECTs don't lock the table.  Are you having frequent UPDATEs while 
selecting?  That would be the reason for locks.


-jay

Justin wrote:

Ok.. Straight to the point.. Here is what I currently have.

MySQL Ver 14.12 Distrib 5.0.27
RHEL vs 5
584GB Raid 5 storage
8GB of RAM
and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

what my question is.. is am I utilizing the server's potential with the 
following as my settings.  The server is a dedicated MySQL server, so I 
want all power to go to the server. It just seems to be laggy at times, 
and I want to be sure I've optimized to the fullest potential.


My biggest issue is with FT searches. Tables get locked during larger 
queries and I can't select anything when that happens. Is there any way 
not to lock the tables on a Full Text search? (does that make sense?)


thanks again for any insight

Justin.

Here's a dump of the my.cnf and the phpmyadmin dump of vars.

/etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout=60
default-character-set=utf8
max_allowed_packet = 3000M
max_connections = 5000
ft_min_word_len=3

server-id=1
log-error = /var/log/mysql/error.log
expire_logs_days = 3


# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=0

[mysql.server]
user=mysql

[mysqld_safe]
err-log=/var/log/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


auto increment increment   1
auto increment offset  1
automatic sp privileges  ON
back log  50
basedir  /
binlog cache size  32,768
bulk insert buffer size  8,388,608
character set client  utf8
character set connection  utf8
character set database  utf8
character set filesystem  binary
character set results  utf8
character set server  utf8
character set system  utf8
character sets dir  /usr/share/mysql/charsets/
collation connection  utf8_general_ci
collation database  utf8_general_ci
collation server  utf8_general_ci
completion type  0
concurrent insert  1
connect timeout  5
datadir  /var/lib/mysql/
date format  %Y-%m-%d
datetime format  %Y-%m-%d %H:%i:%s
default week format  0
delay key write  ON
delayed insert limit  100
delayed insert timeout  300
delayed queue size  1,000
div precision increment  4
engine condition pushdown  OFF
expire logs days  3
flush  OFF
flush time  0
ft boolean syntax  + -><()~*:""&|
ft max word len  84
ft min word len  3
ft query expansion limit  20
ft stopword file  (built-in)
group concat max len  1,024
have archive  YES
have bdb  NO
have blackhole engine  NO
have compress  YES
have crypt  YES
have csv  NO
have dynamic loading  YES
have example engine  NO
have federated engine  NO
have geometry  YES
have innodb  YES
have isam  NO
have merge engine  YES
have ndbcluster  NO
have openssl  DISABLED
have query cache  YES
have raid  NO
have rtree keys  YES
have symlink  YES
init connect
init file
init slave
innodb additional mem pool size  1,048,576
innodb autoextend increment  8
innodb buffer pool awe mem mb  0
innodb buffer pool size  8,388,608
innodb checksums  ON
innodb commit concurrency  0
innodb concurrency tickets  500
innodb data file path  ibdata1:10M:autoextend
innodb data home dir
innodb doublewrite  ON
innodb fast shutdown  1
innodb file io threads  4
innodb file per table  OFF
innodb flush log at trx commit  1
innodb flush method
innodb force recovery  0
innodb lock wait timeout  50
innodb locks unsafe for binlog  OFF
innodb log arch dir
innodb log archive  OFF
innodb log buffer size  1,048,576
innodb log file size  5,242,880
innodb log files in group  2
innodb log group home dir  ./
innodb max dirty pages pct  90
innodb max purge lag  0
innodb mirrored log groups  1
innodb open files  300
innodb support xa  ON
innodb sync spin loops  20
innodb table locks  ON
innodb thread concurrency  8
innodb thread sleep delay  10,000
interactive timeout  28,800
join buffer size  131,072
key buffer size  8,388,600
key cache age threshold  300
key cache block size  1,024
key cache division limit  100
language  /usr/share/mysql/english/
large files support  ON
large page size  0
large pages  OFF
lc time names  en_US
license  GPL
local infile  ON
locked in memory  OFF
log  OFF
log bin

Re: servers full potential / FT searches locking tables

2007-08-27 Thread Justin
Sometimes I get about 300 connections to the server, all are selects and all 
select and get the data returned but the connection doesn't go away and the 
website doesn't load up.. usually if there is a lock, the selects wait 2-3 
secs and build up, but once unlocked the queries all are performed and then 
go away.. and the sites load fine.. 3 times today this has happened and I'm 
trying to figure out what could be the cause of it.  if I restart MySQL 
everything is good.


Anyone have any ideas or any ideas on how I can trace where the culprit 
would be..


it's a LAMP backend..




- Original Message - 
From: Jay Pipes [EMAIL PROTECTED]

To: Rolando Edwards [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com; Justin [EMAIL PROTECTED]
Sent: Monday, August 27, 2007 3:03 PM
Subject: Re: servers full potential / FT searches locking tables



A read lock does not prevent other reads.

Rolando Edwards wrote:

SELECTs do lock the tables implicitly.

According to Page 400 (Section 28.1 : Locking Concepts) of MySQL 5.0 
Certification Study Guide (ISBN 0-672-32812-7), here is what the first 
bulletpoint says under the heading "A lock on data can be acquired 
implicitly or explicitly":


For a client that does nothing special to acquire locks, the MySQL 
server implicitly acquires locks as necessary to process the client's 
statements safely. For example, the server acquires a read lock when the 
client issues a SELECT statement and a write lock when the client issues 
an INSERT statement. Implicit locks are acquired only for the duration of 
a single statement.


- Original Message -
From: Jay Pipes [EMAIL PROTECTED]
To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, August 27, 2007 2:26:29 PM (GMT-0500) America/New_York
Subject: Re: servers full potential / FT searches locking tables

SELECTs don't lock the table.  Are you having frequent UPDATEs while 
selecting?  That would be the reason for locks.


-jay

Justin wrote:

Ok.. Straight to the point.. Here is what I currently have.

MySQL Ver 14.12 Distrib 5.0.27
RHEL vs 5
584GB Raid 5 storage
8GB of RAM
and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

what my question is.. is am I utilizing the server's potential with the 
following as my settings.  The server is a dedicated MySQL server, so I 
want all power to go to the server. It just seems to be laggy at times, 
and I want to be sure I've optimized to the fullest potential.


My biggest issue is with FT searches. Tables get locked during larger 
queries and I can't select anything when that happens. Is there any way 
not to lock the tables on a Full Text search? (does that make sense?)


thanks again for any insight

Justin.

Here's a dump of the my.cnf and the phpmyadmin dump of vars.

/etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout=60
default-character-set=utf8
max_allowed_packet = 3000M
max_connections = 5000
ft_min_word_len=3

server-id=1
log-error = /var/log/mysql/error.log
expire_logs_days = 3


# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=0

[mysql.server]
user=mysql

[mysqld_safe]
err-log=/var/log/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


auto increment increment   1
auto increment offset  1
automatic sp privileges  ON
back log  50
basedir  /
binlog cache size  32,768
bulk insert buffer size  8,388,608
character set client  utf8
character set connection  utf8
character set database  utf8
character set filesystem  binary
character set results  utf8
character set server  utf8
character set system  utf8
character sets dir  /usr/share/mysql/charsets/
collation connection  utf8_general_ci
collation database  utf8_general_ci
collation server  utf8_general_ci
completion type  0
concurrent insert  1
connect timeout  5
datadir  /var/lib/mysql/
date format  %Y-%m-%d
datetime format  %Y-%m-%d %H:%i:%s
default week format  0
delay key write  ON
delayed insert limit  100
delayed insert timeout  300
delayed queue size  1,000
div precision increment  4
engine condition pushdown  OFF
expire logs days  3
flush  OFF
flush time  0
ft boolean syntax  + -><()~*:""&|
ft max word len  84
ft min word len  3
ft query expansion limit  20
ft stopword file  (built-in)
group concat max len  1,024
have archive  YES
have bdb  NO
have blackhole engine  NO
have compress  YES
have crypt  YES
have csv  NO
have dynamic loading  YES
have example engine  NO
have federated engine  NO
have geometry  YES
have innodb  YES
have isam  NO
have merge engine  YES
have ndbcluster  NO
have openssl  DISABLED
have query cache  YES
have raid  NO
have rtree keys  YES
have symlink  YES
init connect
init file
init slave
innodb additional mem pool size  1,048,576
innodb autoextend increment  8
innodb buffer pool awe mem mb  0
innodb buffer pool size  8,388,608
innodb checksums  ON
innodb commit concurrency  0
innodb concurrency tickets  500
innodb

Group By / Order BY

2007-03-17 Thread Justin

I've got an issue with group / order by.. here's what I'm wondering..

simple table..
date | rev
--
20070315  1
20070315  2
20070316  1
20070316  2

Query I'm running
SELECT * FROM `table` GROUP BY `date` order by `rev` DESC

I would think this would return
20070315 - 2
20070316 - 2

but it doesn't it returns

20070315 - 1
20070316 - 1

What am I missing?  I'm trying to do a group by the date and return the 
higher rev..


thanks again!

Justin. 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]



Re: Group By / Order BY

2007-03-17 Thread Justin
hmm.. that worked.. but I guess I forgot to mention.. I don't need the value 
of the rev.. I wanted to return that row.. like.. let me put some more info



date      rev   text               desc
---------------------------------------------------------------------------
20070315  1     this is version 1  This was the first one today
20070315  2     this is version 2  This was the first one today, but edited
20070316  1     this is version 1  This was the first one today
20070316  2     this is version 2  This was the first one today but again we needed to update.


I want to pull distinct rows (there could be as many as 30 rev's per date) 
and the data on that row. so.. I'd like to get the following


20070315  2     this is version 2  This was the first one today, but edited
20070316  2     this is version 2  This was the first one today but again we needed to update.


does that make sense?
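[Editor's note: a common way to get the whole winning row, rather than just MAX(rev), is to join the table against a per-date MAX(rev) subquery. The sketch below uses Python's sqlite3 as a stand-in for MySQL (the same derived-table SQL should also work on MySQL 4.1+); table and column names follow the post, and the rows are the example data above.]

```python
import sqlite3

# Greatest-row-per-group sketch: join the table to a subquery that picks
# the highest rev per date, so the whole row comes back, not just the rev.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revs (date TEXT, rev INTEGER, text TEXT)")
conn.executemany("INSERT INTO revs VALUES (?, ?, ?)", [
    ("20070315", 1, "this is version 1"),
    ("20070315", 2, "this is version 2"),
    ("20070316", 1, "this is version 1"),
    ("20070316", 2, "this is version 2"),
])

rows = conn.execute("""
    SELECT t.date, t.rev, t.text
      FROM revs AS t
      JOIN (SELECT date, MAX(rev) AS max_rev
              FROM revs GROUP BY date) AS m
        ON t.date = m.date AND t.rev = m.max_rev
     ORDER BY t.date
""").fetchall()

for row in rows:
    print(row)   # one row per date, carrying the highest rev's columns
```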

- Original Message - 
From: Olexandr Melnyk [EMAIL PROTECTED]

To: Justin [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Saturday, March 17, 2007 3:56 PM
Subject: Re: Group By / Order BY



1) ORDER BY is executed after GROUP BY;
2) In ORDER BY don't use columns that aren't in GROUP BY, unless it's an
aggregated value;

Your query can be rewritten as:

select date
, max(rev) as max_rev
 from table
group
   by date
order
   by max_rev desc

2007/3/17, Justin [EMAIL PROTECTED]:


I've got an issue with group / order by.. here's what I'm wondering..

simple table..
date | rev
--
20070315  1
20070315  2
20070316  1
20070316  2

Query I'm running
SELECT * FROM `table` GROUP BY `date` order by `rev` DESC

I would think this would return
20070315 - 2
20070316 - 2

but it doesn't it returns

20070315 - 1
20070316 - 1

What am I missing?  I'm trying to do a group by the date and return the
higher rev..

thanks again!

Justin.





-
Olexandr Melnyk,
http://omelnyk.net/







OR in Query String

2005-08-10 Thread Justin Palmer
Hi List,

I have the following query where I am trying to locate a single student's
record.  I only know that the student's record has an id of 3690 and an
employer_id that is one of 3 possible values.  So I thought that OR would
work great.  The problem is that it returns all students with employer_id's
of 3, 78, 79.  How do I construct the query correctly to accomplish what I
am after, without only using the student_id?  I need to verify and return
the employer_id.

SELECT student_id, employer_id
FROM wat_student_job
WHERE student_id = 3690
AND employer_id = 3
OR employer_id = 78
OR employer_id = 79

Thanks in advance.

Regards,

Justin Palmer
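[Editor's note: AND binds more tightly than OR, so the query above is parsed as (student_id = 3690 AND employer_id = 3) OR employer_id = 78 OR employer_id = 79, which is why unrelated students match. Parentheses or IN fix it. A quick sqlite3 sketch with made-up rows to show the difference:]

```python
import sqlite3

# Demonstrates why the query above matches too many rows: AND binds more
# tightly than OR, so only employer_id = 3 is constrained by student_id.
# Table name follows the post; the rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wat_student_job (student_id INTEGER, employer_id INTEGER)")
conn.executemany("INSERT INTO wat_student_job VALUES (?, ?)",
                 [(3690, 3), (3690, 78), (1111, 78), (2222, 79)])

# As written: parsed as (student_id = 3690 AND employer_id = 3) OR ...
broad = conn.execute(
    "SELECT student_id, employer_id FROM wat_student_job "
    "WHERE student_id = 3690 AND employer_id = 3 "
    "OR employer_id = 78 OR employer_id = 79").fetchall()

# Intended: constrain the student first, then the employer list.
narrow = conn.execute(
    "SELECT student_id, employer_id FROM wat_student_job "
    "WHERE student_id = 3690 AND employer_id IN (3, 78, 79)").fetchall()

print(len(broad), len(narrow))  # the broad form picks up other students' rows
```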








semicolon commands causing problems

2005-03-23 Thread Justin Moore
Hi, 

Anyone know why I can't run multiple commands from the query browser using 
semicolons to separate the commands?

ex:
The following command works fine:
select now();


However the following commands do not
select now();
select version();


Your help is appreciated,

-JM
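[Editor's note: many GUI and API interfaces execute only one statement per call (the command-line client splits on `;` itself, which is why it works there). A rough workaround is to split the script and send the statements one at a time, sketched below with Python's sqlite3; a real splitter would also have to handle `;` inside quoted strings.]

```python
import sqlite3

# Workaround sketch for one-statement-per-call interfaces: split the
# script on ';' and execute each non-empty piece individually.
script = """
CREATE TABLE t (n INTEGER);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
"""

conn = sqlite3.connect(":memory:")
for statement in script.split(";"):
    if statement.strip():           # skip the empty trailing piece
        conn.execute(statement)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 2
```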




RE: semicolon commands causing problems

2005-03-23 Thread Justin Moore
I forgot to mention that it is happening from my ant scripts too.  So the 
problem is occurring outside of the query browser as well.  However, from the 
command line I can execute the two statements.  

Any ideas what could be causing this?

-Original Message-
From: Justin Moore 
Sent: Wednesday, March 23, 2005 10:39 AM
To: mysql@lists.mysql.com
Subject: semicolon commands causing problems


Hi, 

Anyone know why I can't run multiple commands from the query browser using 
semicolons to separate the commands?

ex:
The following command works fine:
select now();


However the following commands do not
select now();
select version();


Your help is appreciated,

-JM






RETURNING Keyword?

2005-02-09 Thread Justin Burger
Good Afternoon,
In Oracle there is a keyword called RETURNING, where I can do an insert 
and specify which values to return; this helps with auto-increment fields.

How can I achieve this in MySQL?
Thanks a bunch!!
-- Justin W. Burger
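[Editor's note: MySQL has no RETURNING clause; the usual substitute is the LAST_INSERT_ID() function, or the client API's wrapper for it (e.g. mysql_insert_id() in the C API), which returns the last auto-increment value generated on the current connection. Sketched here with Python's sqlite3, whose cursor.lastrowid plays the same role:]

```python
import sqlite3

# After an INSERT, read the generated key back from the cursor instead of
# asking for a RETURNING clause.  lastrowid is per-connection, like
# MySQL's LAST_INSERT_ID(), so concurrent inserters don't interfere.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

cur = conn.execute("INSERT INTO users (name) VALUES (?)", ("justin",))
new_id = cur.lastrowid

print(new_id)  # 1 for the first row inserted
```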


vi.recover in show databases output?

2005-01-15 Thread Justin Lundy
When I open over nine connections to mysql I see vi.recover in my show 
databases output.  The existing connections stay open and continue to 
operate. No new database connections can be made at this point. I should 
be seeing the mysql, phpdig and test databases when performing the query 
but I only see vi.recover? Any ideas why this happens or how to prevent it?

[EMAIL PROTECTED] jbl]$ uname -a
NetBSD subterrain.net 2.0 NetBSD 2.0 (SUBTERRAIN) #1: Tue Dec 28 
22:34:54 EST 2004 
[EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/SUBTERRAIN i386
[EMAIL PROTECTED] jbl]$ ls /var/mysql/
#sql_267_1.MYI  ib_logfile0 phpdig
#sql_267_2.MYI  ib_logfile1 subterrain.net.err
#sql_267_3.MYI  ibdata1 subterrain.net.pid
#sql_2819_2.MYI mysql   test
[EMAIL PROTECTED] jbl]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1180 to server version: 4.1.7

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> show databases;
+------------+
| Database   |
+------------+
| vi.recover |
+------------+
1 row in set (0.00 sec)
mysql> exit;
[EMAIL PROTECTED] jbl]$ find / -name vi.recover
/var/tmp/vi.recover
[EMAIL PROTECTED] jbl]$ su -
Password:
[EMAIL PROTECTED] root]# /usr/pkg/etc/rc.d/mysqld restart
Stopping mysqld.
Waiting for PIDS: 10265, 10265, 10265.
Starting mysqld.
[EMAIL PROTECTED] root]# exit
[EMAIL PROTECTED] jbl]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10 to server version: 4.1.7
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> show databases;
+----------+
| Database |
+----------+
| mysql    |
| phpdig   |
| test     |
+----------+
3 rows in set (0.04 sec)
mysql> exit;
[EMAIL PROTECTED] jbl]$
Thanks,
-JBL

Re: help with table structure

2004-10-12 Thread Justin Smith
What you have so far looks good, but what I learned from doing my ecomm 
project was that it is beneficial to make a separate table for anything 
and everything that you might have more than one of...  Addresses, phone 
numbers, and email addresses are all great candidates for breaking out 
into another table.  That way, if you want to have a billing address and 
a shipping address for each customer, for example, all you have to do is 
have an address table with all of the address information, and a 
relation table that describes the relationship between an address and a 
user:

address_id int not null,  (foreign key corresponding to an address ID in 
your address table)
user_id int not null,  (foreign key corresponding to a user ID in your 
user table)
address_type int  (1=billing address, 2=shipping address, 3=secondary 
shipping, etc.)

I hope this helps.
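[Editor's note: the layout described above can be sketched concretely. The tables and names below are illustrative (sqlite3 standing in for MySQL), not an existing schema; the point is that any number of addresses attach to a user through the relation table without widening the user table.]

```python
import sqlite3

# Sketch of the address + relation-table layout described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user    (user_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE address (address_id INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE user_address (
    address_id   INTEGER NOT NULL REFERENCES address(address_id),
    user_id      INTEGER NOT NULL REFERENCES user(user_id),
    address_type INTEGER  -- 1=billing, 2=shipping, 3=secondary shipping
);
""")
conn.execute("INSERT INTO user VALUES (1, 'chris')")
conn.execute("INSERT INTO address VALUES (10, 'Sacramento')")
conn.execute("INSERT INTO address VALUES (11, 'Portland')")
conn.execute("INSERT INTO user_address VALUES (10, 1, 1)")  # billing
conn.execute("INSERT INTO user_address VALUES (11, 1, 2)")  # shipping

shipping = conn.execute("""
    SELECT a.city FROM address a
      JOIN user_address ua ON ua.address_id = a.address_id
     WHERE ua.user_id = 1 AND ua.address_type = 2
""").fetchone()[0]
print(shipping)  # Portland
```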
Chris W. Parker wrote:
hello,
i'm just looking for some examples of a customer table that some of you
are using for your ecomm sites (or any site that would need a customer
table).
here is mine so far:
(horrible wrapping to follow...)
mysql> describe customers;
+----------------+---------------------+------+-----+---------------------+----------------+
| Field          | Type                | Null | Key | Default             | Extra          |
+----------------+---------------------+------+-----+---------------------+----------------+
| id             | int(10) unsigned    |      | PRI | NULL                | auto_increment |
| fname          | varchar(20)         |      | PRI |                     |                |
| lname          | varchar(20)         |      | PRI |                     |                |
| address1       | varchar(40)         |      |     |                     |                |
| address2       | varchar(40)         | YES  |     |                     |                |
| city           | varchar(20)         |      |     |                     |                |
| state          | char(2)             |      |     |                     |                |
| zip            | varchar(10)         |      |     |                     |                |
| phone          | varchar(20)         | YES  |     |                     |                |
| fax            | varchar(20)         | YES  |     |                     |                |
| email          | varchar(64)         |      | PRI |                     |                |
| newsletter     | tinyint(1)          |      |     | 0                   |                |
| password       | varchar(32)         |      |     |                     |                |
| signupdate     | datetime            |      |     | 0000-00-00 00:00:00 |                |
| lastvisit      | datetime            |      |     | 0000-00-00 00:00:00 |                |
| type           | tinyint(3) unsigned |      |     | 0                   |                |
| company        | varchar(64)         | YES  |     |                     |                |
| is_active      | tinyint(4)          |      |     | 0                   |                |
| activationdate | datetime            |      |     | 0000-00-00 00:00:00 |                |
| activationtype | tinyint(3) unsigned |      |     | 0                   |                |
+----------------+---------------------+------+-----+---------------------+----------------+
i would appreciate not only table descriptions (like mine above)(if
you're willing) but comments on what i have so far as well.
thank you,
chris.
 




Re: help with table structure

2004-10-12 Thread Justin Smith
Chris W. Parker wrote:
interesting you say that because i was going to do this same thing
except not as completely as i probably should (which i think is what you
are suggesting). what i mean is, my extra table of addresses was going
to be merely shipping addresses for the customer and nothing else. but i
guess i should change it from being just shipping addresses to include
any kind of address relating to the customer?
but what about phone numbers? i'm a bit unsure on that one. in my table
as it is now, i have three: business, home, and fax. in what case would
having a phone (number) table be beneficial?
 

The phone number table is probably a little less important since there 
isn't much data to be stored, but if you for some reason wanted to add 
another phone number (pager, cell phone, etc.) it's a lot easier to just 
add another row to your phone number table than it is to add another 
column to your User table.  Also keep in mind that not every user is 
going to have every type of phone number (I don't have a fax number, for 
instance).

This is only what I've learned from my experience, though, which is 
pretty limited.  Do any of the uber data modelling DBA guru types out 
there have any input as to what is easiest/most efficient?



Re: MySQL Databases in Subdirectories?

2004-10-08 Thread Justin Smith
We're trying to avoid the following scenarios:
Using one database for every site, and having 100,000 subdirectories of 
the MySQL data directory;
-or-
Using one (or more) tables in a single database for each site, and 
having x00,000 table files.

Either of the above would make maintenance (backups, etc.) a chore.
Ruben Safir Secretary NYLXS wrote:
How does that help?  The database itself should be allowed to organize 
everything.

Ruben
On Thu, Oct 07, 2004 at 04:57:39PM -0700, Justin Smith wrote:
 

Is it possible to create a database in a lower-level subdirectory of 
MySQL's data directory?  We have almost 100,000 sites, and we would like 
to have a separate database for each site.  However, it's very 
impractical from a filesystem maintenance standpoint to have 100,000 
subdirectories of MySQL's data directory.  What we would like to do is 
break up the directories into something like this:

for site #12345:
[mysql datadir]/01/23/45/[databasename]
This would greatly improve the manageability of the table space.
Is this possible?
   

 




MySQL Databases in Subdirectories?

2004-10-07 Thread Justin Smith
Is it possible to create a database in a lower-level subdirectory of 
MySQL's data directory?  We have almost 100,000 sites, and we would like 
to have a separate database for each site.  However, it's very 
impractical from a filesystem maintenance standpoint to have 100,000 
subdirectories of MySQL's data directory.  What we would like to do is 
break up the directories into something like this:

for site #12345:
[mysql datadir]/01/23/45/[databasename]
This would greatly improve the manageability of the table space.
Is this possible?
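[Editor's note: one partial answer on Unix builds is the manual's "Using Symbolic Links" feature, which lets a database directory live elsewhere and be symlinked into the data directory, giving roughly the nested layout asked for while MySQL still sees flat database names. A filesystem-only sketch with hypothetical paths, assuming the server build has symlink support enabled:]

```python
import os
import tempfile

# Filesystem sketch only: the real database directory lives in a nested
# tree, and a symlink with the flat database name sits in the datadir.
root = tempfile.mkdtemp()
datadir = os.path.join(root, "mysql-data")               # stands in for /var/lib/mysql
real_home = os.path.join(root, "01", "23", "45", "site_12345")
os.makedirs(datadir)
os.makedirs(real_home)

link = os.path.join(datadir, "site_12345")
os.symlink(real_home, link)

print(os.path.islink(link))  # MySQL would see database "site_12345"
```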


hosted application data design

2004-09-01 Thread Justin French
Hi all,
I'm designing a hosted application that many clients will access.  I've 
got all sites running off one PHP code base quite nicely, and now I'm 
hoping to reduce the multiple databases down to just one shared DB.

(Note: at this stage, all accounts are fictional, so there's no legacy 
data to worry about).

So instead of each account having its own database and autonomous 
tables, I'd like all accounts to share one database.  This will:

- make updates easy
- allow each account to be group aware
- allow a parent website easy access to all account data
I've added client_id columns to all tables, and things are working 
fine, with the following exception...

I'm worried about the primary keys (IDs) for each table growing to an 
unmanageable size pretty quick... 50 accounts * 500 forum posts = 25000 
IDs pretty quickly.  50 sites * 5000 members = 250,000 members.

I'm sure this isn't much of a technical/performance issue (with proper 
indexing), but it's just strikes me as a little strange that the first 
news post for an account could have an ID of '502846', rather than '1'.

So, I was hoping I might be missing something in regards to primary 
keys...

Can a primary key (or unique key) be established on two columns (eg 
primary key = client_id + post id)?
Can an auto-increment column only increment in relation to the client 
id?

Or am I expecting *way* too much out of MySQL and relational data 
design?

---
Justin French
http://indent.com.au
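[Editor's note: to the two direct questions above: yes, a PRIMARY KEY can span multiple columns, and with the MyISAM engine an AUTO_INCREMENT column that is the second column of a multi-column key numbers independently within each prefix group, which is exactly the per-client counter described. Below is a rough simulation of that behavior (sqlite3, illustrative names); note the MAX()+1 trick is not safe under concurrent writers without locking.]

```python
import sqlite3

# Two-column primary key plus per-client numbering, simulated with
# MAX(post_id)+1 inside the client's rows.  (MyISAM does this natively
# with AUTO_INCREMENT as the second column of a multi-column key.)
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE post (
        client_id INTEGER NOT NULL,
        post_id   INTEGER NOT NULL,
        title     TEXT,
        PRIMARY KEY (client_id, post_id)   -- composite primary key
    )
""")

def add_post(client_id, title):
    # Next id *within this client*, so every client's first post is 1.
    conn.execute(
        "INSERT INTO post SELECT ?, COALESCE(MAX(post_id), 0) + 1, ? "
        "FROM post WHERE client_id = ?",
        (client_id, title, client_id))

add_post(1, "first for client 1")
add_post(1, "second for client 1")
add_post(2, "first for client 2")

ids = conn.execute(
    "SELECT client_id, post_id FROM post ORDER BY client_id, post_id").fetchall()
print(ids)
```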


How to admin a server that currently has too many connections?

2004-08-31 Thread Justin Swanhart
I am creating an index on a very large innodb table that is taking a
very long time to complete which I understand is a limitation of
innodb.

The problem is that another application has been issuing queries
against the table and those queries have never timed out.  So now I
can't kill the connections, because I can't make a database connection
to mysql to even see their thread ids:

[EMAIL PROTECTED] mysql]$ mysqladmin -u root -p processlist
Enter password:
mysqladmin: connect to server at 'localhost' failed
error: 'Too many connections'

I don't want to kill the mysqld process, because that would stop my
index creation.

Is there anything you can do in this situation?  I don't think
connections from mysqladmin should ever be denied.  Is this a feature
request that I should make?




multiple table query

2004-08-30 Thread Justin French
Can anyone help me optimise the way I do this?
I have two tables, 'article' and 'user'.  Article has a user_id which 
relates to the user table.

When selecting all articles from the article table, I'd like to be 
able to get the username of the user_id.  Currently I'm doing this as a 
separate query whilst looping through the articles, but this is 
obviously causing way too many DB calls.

some pseudo code:
SELECT * FROM article
foreach article
{
SELECT username FROM user WHERE id=$user_id
}
Perhaps I need a join, or maybe just a more complex query -- can any 
one lend a hand?

My guess is maybe something like
SELECT article.title, article.user_id, user.username
FROM article, user
WHERE user.id = article.user_id
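[Editor's note: that last guess is essentially the answer — one join replaces the per-article lookup loop. A runnable sketch of it with sqlite3 and made-up rows:]

```python
import sqlite3

# The single-join query from the post: one round trip replaces one
# SELECT per article.  Rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("CREATE TABLE article (title TEXT, user_id INTEGER)")
conn.execute("INSERT INTO user VALUES (1, 'justin'), (2, 'olaf')")
conn.executemany("INSERT INTO article VALUES (?, ?)",
                 [("joins", 1), ("indexes", 2)])

rows = conn.execute("""
    SELECT article.title, article.user_id, user.username
      FROM article
      JOIN user ON user.id = article.user_id
     ORDER BY article.title
""").fetchall()
print(rows)  # every article row already carries its username
```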
---
Justin French
http://indent.com.au


Re: Problem with management server communication.

2004-08-17 Thread Justin Swanhart
I have the same problem.  I have an open bug report here:
http://bugs.mysql.com/?id=4761

I can't figure out why my management server can not see my ndb
servers.  Perhaps I should try to down the secondary interface on all
the servers?

On Tue, 17 Aug 2004 15:54:29 -0400, Wheeler, Alex [EMAIL PROTECTED] wrote:
 Yes, now I remember.  This happens when I have either 2 interfaces or a
 disagreement on hostname.  If the clients don't see the server or
 themselves by the same hostname as the server, this type of error can be
 seen.
 I've been setup with a node that did the initial communication via eth1
 but then tried the remainder via eth0 which corresponded with the
 hostname, and which obviously didn't work.
 I ended up temporarily changing the hostname to reference the eth1 ip.
 
 --
 Alex Wheeler
 
 -Original Message-
 From: IHLING, CHRIS G (CHRIS) [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, August 17, 2004 3:43 PM
 To: IHLING, CHRIS G (CHRIS)
 Cc: [EMAIL PROTECTED]
 Subject: RE: Problem with management server communication.
 
 I have no problem pinging either box from both. There is one other
 strange thing. If I try to use a fully qualified hostname or ip address
 in the config file I get an error when I try to start the ndbd.
 
 -Original Message-
 From: Jonas Oreland [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, August 17, 2004 11:42 AM
 To: IHLING, CHRIS G (CHRIS)
 Cc: [EMAIL PROTECTED]
 Subject: Re: Problem with management server communication.
 
 Hi,
 
 Also make sure that both hostnames are accessible from both computers.
 I.e. that you can ping both lpsdev3-n1 & lpsdev3-n2 from both lpsdev3-n1
  & lpsdev3-n2
 
 /Jonas
 
 IHLING, CHRIS G (CHRIS) wrote:
  I have removed the [TCP] section from my config file and I am still
  seeing the same no contact message. Here is the config file I am
  using:
 
  [COMPUTER]
  Id:1
  ByteOrder: Little
  HostName: lpsdev3-n2
 
  [COMPUTER]
  Id:2
  ByteOrder: Little
  HostName: lpsdev3-n1
 
  [MGM]
  Id:1
  ExecuteOnComputer: 1
  PortNumber: 2200
  ArbitrationRank: 1
 
  [DB DEFAULT]
  NoOfReplicas:2
  FileSystemPath: /usr/lps/mysql/data1
 
  [DB]
  Id:2
  ExecuteOnComputer:1
 
  [DB]
  Id:3
  ExecuteOnComputer:2
 
  [API]
  Id:4
  ExecuteOnComputer:1
 
  [API]
  Id:5
  ExecuteOnComputer:2
 
  Is there any way to turn on tracing or check logs to find out more
  info?
 
  -Original Message-
  From: Wheeler, Alex [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, August 17, 2004 10:08 AM
  To: [EMAIL PROTECTED]
  Subject: RE: Problem with management server communication.
 
 
  I've seen this kind of error before, and if I recall correctly it's
  due to a misconfiguration of the TCP information.  So, if you have any
 
  TCP entries in the config.ini file such as [TCP]NodeId1:..., remove
  them as they are automatically configured anyway. Then stop the nodes,
 
  and start again.  Make sure you use the -i when starting the clients.
 
  --
  Alex Wheeler
 
  -Original Message-
  From: IHLING, CHRIS G (CHRIS) [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, August 17, 2004 12:27 PM
  To: '[EMAIL PROTECTED]'
  Subject: Problem with management server communication.
 
  I am having a problem with the management server getting status from a
 
  ndbd started on another machine.
 
 
 
  The ndbd server will start and report status fine if it is running on
  the same machine as the management server.
 
  The ndbd server will start on a different machine but no status is
  reported.
 
  If the management server is not running ndbd will not start so there
  must be some kind of communication just no status.
 
 
 
  Any suggestions on what to try?
 
 
 
 --
 Jonas Oreland, Software Engineer
 MySQL AB, www.mysql.com
 
 --
 MySQL Cluster Mailing List
 For list archives: http://lists.mysql.com/cluster
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 
 





Re: MySQL eats 100% CPU on a quad CPU system...

2004-08-16 Thread Justin Swanhart
Are you swapping?  Check vmstat and iostat

Are your queries optimized, are they doing full table scans?  Enable
the slow query log.
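[Editor's note: for a 4.0-era server like the one in the original post, the slow query log is enabled in my.cnf roughly as below; the threshold is an example value, not a recommendation, and log-long-format additionally records queries that use no indexes.]

```
[mysqld]
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2
log-long-format
```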


On Mon, 16 Aug 2004 14:48:35 +0200, Fahr o Mavastradioz
[EMAIL PROTECTED] wrote:
 Hello people,
 
 I'm currently configuring a quad CPU system as a standalone MySQL server.
 The machine runs nothing except for MySQL and some trivial things like SSH
 and iptables.
 A second server acts as a webserver and is connected to this one though an
 internal LAN connection.
 Using top or ps, it shows that MySQL eats over 99% of all the available CPU.
 Using slmon, it shows that indeed all 4 CPUs are fully in use. Still, it
 appears to be terribly slow and I don't think MySQL should eat over 99% CPU
 on a quad CPU system...
 The machine runs mysqld 4.0.18 on Debian Sarge. A stripped down my.cnf is
 included below (I merely removed the comments).
 Could anyone tell me what can cause this problem?
 
 Thanks,
 - Fahr
 
 ===my.cnf===
 [client]
 port= 3306
 socket  = /var/run/mysqld/mysqld.sock
 
 [mysqld_safe]
 err-log = /var/log/mysql/mysql.err
 socket  = /var/run/mysqld/mysqld.sock
 open_files_limit = 8192
 
 [mysqld]
 user= mysql
 pid-file= /var/run/mysqld/mysqld.pid
 socket  = /var/run/mysqld/mysqld.sock
 port= 3306
 basedir = /usr
 datadir = /var/lib/mysql
 tmpdir  = /tmp
 language= /usr/share/mysql/english
 skip-locking
 thread_stack= 128K
 skip-innodb
 max_connections = 500
 key_buffer  = 150M
 myisam_sort_buffer_size = 64M
 join_buffer_size  = 1M
 read_buffer_size  = 1M
 sort_buffer_size  = 1M
 table_cache   = 1500
 thread_cache_size = 128
 wait_timeout  = 14400
 connect_timeout   = 10
 max_allowed_packet = 16M
 max_connect_errors = 10
 query_cache_limit  = 1M
 query_cache_size   = 32M
 query_cache_type   = 1
 
 [mysqldump]
 quick
 max_allowed_packet  = 16M
 
 [mysql]
 
 [isamchk]
 key_buffer = 64M
 sort_buffer = 64M
 read_buffer = 16M
 write_buffer = 16M
 
 
 





want input on method to avoid replication conflicts

2004-08-10 Thread Justin Swanhart
I have four machines A,B,C,D..

A and B are dual masters while C and D are read only slaves

A --- B
|     |
C     D


I have a load balancer that connects all clients to A, and when it
fails, connects them to B.  In practice it is a little more
complicated, because if a server becomes backlogged, it recuses itself
from rotation (but there is logic to prevent a cascade where all
servers recuse themselves).

The problem is that once a failover has occurred from A to B, then B
has to pretty much stay the master forever until it fails, and A takes
over.  My load balancer doesn't support this kind of logic and manual
intervention would be required to implement it.

I have thought about making the top few bits of the auto increment key
reflect the server id (thanks for the idea Jeremy), but I don't really
want partitioned auto-increment values.

What I am thinking about doing, is modifying the autoincrement values
so that server A always produces an EVEN value, while server B always
produces an ODD value.

That seems like it would nearly eliminate the possibility of conflicts,
and my writes could be load balanced without any problems.  It won't
fix problems with conflicts of other unique columns, but that seems
like a good thing for the most part at least for my applications.
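Worth noting: later MySQL releases (5.0 and up, i.e. after this thread) added server variables that implement exactly this odd/even scheme; a hypothetical sketch:

```ini
# On server A (generates even ids):
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2

# On server B (generates odd ids):
# auto_increment_increment = 2
# auto_increment_offset    = 1
```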

My healthcheck logic will make sure that server A doesn't appear to be
back up to the load balancer until it has caught up with server B, or
vice versa.

Does this sound like a good idea, or should I be thinking about some
other way to do this?

Justin Swanhart




Re: What would happen in these two cases?

2004-08-10 Thread Justin Swanhart
You can put multiple renames in one statement, and the entire rename
will be atomic..

I create summary tables from some of my data, and I periodically
refresh them.  When refreshing them I create new tables to replace the
old tables with..

Then I do:
rename current_table to old_table, new_table to current_table
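Written out as SQL, the swap above is a single statement, so the two renames are atomic with respect to other sessions (table names as in the example):

```sql
RENAME TABLE current_table TO old_table,
             new_table     TO current_table;
```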

On Tue, 10 Aug 2004 13:50:34 -0700, Haitao Jiang [EMAIL PROTECTED] wrote:
 It would be great if there is a swap table command that is atomic.
 
 Thanks a lot
 
 Haitao
 
 On Tue, 10 Aug 2004 13:23:30 +0300, Egor Egorov [EMAIL PROTECTED] wrote:
  Haitao Jiang [EMAIL PROTECTED] wrote:
 
   Thanks so much for the reply! So, if I understand correctly, to swap
   the table like I described, I should do:
  
   LOCK TABLES A write,A2 write;
   FLUSH TABLES;
   rename table A to A1;
   rename table A2 to A;
   UNLOCK TABLES;
  
   Right? If there is no write to either A or A2, then there is no need
   to lock the table, right?
 
  You can't rename locked tables. RENAME is atomic anyway so you can safely use
  it without lock. But your software should be aware of a possible race condition
  that happens between two RENAME TABLEs.
 
 
 
 
   Thanks!
  
   On Mon, 09 Aug 2004 19:21:39 +0300, Egor Egorov [EMAIL PROTECTED] wrote:
   Haitao Jiang [EMAIL PROTECTED] wrote:
  
Case 1: I have a table A under a running MySQL 4.1.3b server. If I
replace A.MYD and A.MYI files without stop and start the server, will
data gets corrupted?
  
   It depends on many factors. Honestly, there are lots of cases where you will
   get correct data and nothing wrong will happen. But you have to be an
   experienced Unix developer who understands Unix internals in order to dance like
   that. :)
  
   So the general answer is: don't, it's too dangerous.
  
Case 2: I rename A to A1, and A2 to A, assume A, A2 have the same
schema but different data/index. Is it bad? Will data gets corrupted?
I tied this one, it seems ok.
  
   Yes, you're right, it just *SEEMS* ok. :)
  
   If you really need to replace table files, use FLUSH TABLES, LOCK TABLES:
  
   http://dev.mysql.com/doc/mysql/en/FLUSH.html
   http://dev.mysql.com/doc/mysql/en/LOCK_TABLES.html
  
   --
   For technical support contracts, goto https://order.mysql.com/?ref=ensita
   This email is sponsored by Ensita.net http://www.ensita.net/
 __  ___ ___   __
/  |/  /_ __/ __/ __ \/ /Egor Egorov
   / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
   /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
 ___/   www.mysql.com
  
  
  
  
  
 
 
 
 
 
 
 
 





Re: What would happen in these two cases?

2004-08-10 Thread Justin Swanhart
FYI, the atomicity of rename and using it to swap tables is discussed
in the manual here:

http://dev.mysql.com/doc/mysql/en/RENAME_TABLE.html

Justin

On Tue, 10 Aug 2004 13:50:34 -0700, Haitao Jiang [EMAIL PROTECTED] wrote:
 It would be great if there is a swap table command that is atomic.
 
 Thanks a lot
 
 Haitao





Re: want input on method to avoid replication conflicts

2004-08-10 Thread Justin Swanhart
Each server inserts a heartbeat value into a replicated table.  The
server can see when the last heartbeat it received from its master
was.  If a heartbeat has not been received from the master in 3
minutes, and the master is available, then the replication status is
checked with show slave status to see what the status is.  If
replication is running and there are no errors, then I assume a long
operation is taking place (such as create index) and allow both
servers to remain up.  If either of the replication threads have
stopped because of an error, then the server recuses itself and a page
is automatically sent to me.
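A minimal sketch of the heartbeat table described above (table and column names are invented for illustration):

```sql
CREATE TABLE heartbeat (
  server_id INT UNSIGNED NOT NULL PRIMARY KEY,
  last_beat DATETIME NOT NULL
);

-- each master refreshes its own row periodically;
-- the row replicates to its peer
REPLACE INTO heartbeat VALUES (1, NOW());

-- on the peer: has the master's beat gone stale?
SELECT server_id
  FROM heartbeat
 WHERE last_beat < DATE_SUB(NOW(), INTERVAL 3 MINUTE);
```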

The biggest problem here is that the servers can't both recuse
themselves.  I have a number of healthchecks and each is assigned a
severity level.  The more severe the problems the less healthy the
server is said to be.  My script basically says: if I am more healthy
than my peer, I report up; if I am equally healthy as my peer and my
server_id is > 1, I report down; if I am less healthy than my peer, I
report down.

That is the logic for writes.  For reads basically it is if I am not
backlogged, then I am up


On Tue, 10 Aug 2004 17:50:27 -0400, Mayuran Yogarajah
[EMAIL PROTECTED] wrote:
 Justin Swanhart wrote:
 
 Im curious about this part.
 
 My healthcheck logic will make sure that server A doesn't appear to be
 back up to the load balancer until it has caught up with server B, or
 vice versa.
 
 
 
 How do you accomplish this ? We have a similar situation here where a check
 is needed to be done to see if a slave has caught up to a master.  Are
 you checking
 the binary log names/positions or something more ?
 
 thanks,
 Mayuran





Re: Setting custom information in processlist

2004-08-10 Thread Justin Swanhart
Oracle has a procedure called DBMS_APPLICATION_INFO.SET_APPLICATION_INFO
that allows you to specify up to 64k of addtional information about
the current connection.  It doesn't have any way to specify this
information at connect time though.

The data can be accessed in Oracle through the V$SESSION system view,
or through
userenv('CLIENT_INFO')

Something in mysql that would be similar and just as easy to implement would be:
create table process_info(ThreadID int, Information text, primary key
(ThreadID));

then in each connection do:
replace into process_info values (CONNECTION_ID(), 'Connection details here');

Add in a little cron job that removes old values from the process_info
table nightly and that should do the trick.
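Put together, the workaround sketched above looks like this (the details string is whatever the application wants to expose; the thread id is illustrative):

```sql
CREATE TABLE process_info (
  ThreadID    INT NOT NULL,
  Information TEXT,
  PRIMARY KEY (ThreadID)
);

-- each connection registers itself once after connecting
REPLACE INTO process_info
VALUES (CONNECTION_ID(), 'webapp: user=alice, page=/checkout');

-- to investigate a thread seen in SHOW PROCESSLIST:
SELECT Information FROM process_info WHERE ThreadID = 1234;
```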


On Tue, 10 Aug 2004 15:24:34 -0700, Jeremy Zawodny [EMAIL PROTECTED] wrote:
 On Thu, Aug 05, 2004 at 12:36:55PM +0100, Naran Hirani wrote:
  Hi,
 
  I'm using a single shared user-login for a web-based application to
  my mysql database - is there a way of including some information at
  connect time or during processing that would show up when issuing
  `show processlist'?
 
 Only if you prefixed each query with a comment:
 
   /* foo #3 */ SELECT * FROM world ORDER BY...
 
 But not at connect time.
 
  Basically, I need to able to distinguish potentially multiple
  connections with the same login and process information some how.
  This sort of thing is possible in other SQL engines so probably
  should be in MySQL too.
 
 Interesting.  How do other systems handle this?
 
 Jeremy
 --
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/
 
 [book] High Performance MySQL -- http://highperformancemysql.com/
 
 
 
 





Re: SELECT WHERE problem

2004-08-09 Thread Justin Swanhart
Because not doing so violates the SQL standard.  Allowing you to
include non-aggregated columns in the SELECT list is a non-standard
MySQL extension to the SQL language.  You will get an error in other
products, such as Oracle, where you will get an 'xxx is not a GROUP BY
expression' error.

Information on this feature is here:
http://dev.mysql.com/doc/mysql/en/GROUP-BY-hidden-fields.html
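For comparison, a standards-compliant version of such a query lists every non-aggregated SELECT column in the GROUP BY (the column names here are illustrative, not taken from the poster's schema):

```sql
SELECT t.id, t.name, SUM(h.time_sec) AS total_seconds
  FROM trucks t
  LEFT JOIN history h ON h.truck_id = t.id
 GROUP BY t.id, t.name
 ORDER BY total_seconds DESC;
```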

On Mon, 9 Aug 2004 17:22:17 -0600, Ren Fournier [EMAIL PROTECTED] wrote:
 Thanks, the LEFT JOIN worked.
 
 I do have a question though, why is it considered best practice to list
 all non-aggregated columns ( I assume you mean columns from trucks.*)
 in the GROUP BY statement? I ask because I am interested in fast,
 secure, standards-compliant code, I'm just not always sure what that
 is. :-)
 
 Ren
 
 ---
 Ren Fournier,
 www.renefournier.com
 
 
 
 On Aug 9, 2004, at 2:14 PM, [EMAIL PROTECTED] wrote:
 
  You need to change your INNER JOIN to a LEFT JOIN
 
  SELECT
  trucks.id, sum(history.time_sec) as total_seconds
  FROM
  trucks
  LEFT JOIN
  history
  ON trucks.id = history_truckid
  GROUP BY
  trucks.id
  ORDER BY
  total_seconds desc
 
  One other issue ---IMHO, the SQL engine is being too kind when it
  allows
   you to execute a query like SELECT trucks.* ... GROUP BY ... In
  practically EVERY OTHER SQL-based product you will use, you will be
  required to list _all_ non-aggregated columns in your GROUP BY
  statement
  or you will get an error.  Listing every column you want to group on is
  considered proper SQL format and I highly recommend the practice.
 
  If you still want to see everything from your trucks table (like in
  your
  original query) you can do this:
 
  CREATE TEMPORARY TABLE tmpTruckIDs
  SELECT
  trucks.id, sum(history.time_sec) as total_seconds
  FROM
  trucks
  LEFT JOIN
  history
  ON trucks.id = history_truckid
  GROUP BY
  trucks.id
  ORDER BY
  total_seconds desc;
 
   SELECT trucks.*, tmpTruckIDs.total_seconds
  FROM trucks
  INNER JOIN tmpTruckIDs
  ON tmpTruckIDs.id = trucks.id;
 
  DROP TABLE tmpTruckIDs;
 
  HTH,
  Shawn Green
  Database Administrator
  Unimin Corporation - Spruce Pine
 
  Ren Fournier [EMAIL PROTECTED] wrote on 08/09/2004 03:56:58 PM:
 
  I am having a problem building a SELECT statement that joins two
  tables
  with a WHERE condition.
 
 
  SELECT
 trucks.*
 
  FROM
 trucks, history
 
  WHERE
trucks.account_id = '100'
 AND trucks.status = 'Active'
 AND history.truck_id = trucks.id   This is the
  tricky bit
 
  GROUP BY
 trucks.id
 
  ORDER BY
 history.time_sec DESC
 
 
  Simply put (or as simply as I can put it :-) , this SELECT should
  return all trucks in order of their activity (history.time_sec). The
  problem is when a truck is new to the system and does not have a
  single
  record in the history table (and therefore no value for
  history.time_sec). In that case, the truck is excluded from the
  SELECTed rowsbut I want it returned, just at the bottom of the list
  (least active).
 
  Any ideas how this can be done?
 
  ...Rene
 
 
 
 
 





Re: Nodes crash on table creation

2004-08-09 Thread Justin Swanhart
There must be an even number of replicas because the cluster mirrors
data between two machines.  It doesn't do three way mirroring.  An
even number of nodes is required because each pair of data nodes forms
a node group.

If you have three machines, you could create two NDB processes on each
machine, each using some of the memory you want to dedicate to NDB,
though this adds some additional overhead in the allocation of
operation records and the like.

Just make sure you don't mirror data to the same machine.  I (think)
they are mirrored in the order they occur so do something like the
following (collapsed from standard INI format for brevity)

datamemory 33M, indexmemory 17M
MGM id 1, machine 1
DB id 2, machine 1
DB id 3, machine 2
DB id 4, machine 3
DB id 5, machine 1
DB id 6, machine 2
DB id 7, machine 3
API id 8, machine 1
API id 9, machine 2
API id 10, machine 3

I haven't actually tested that configuration, so your mileage may vary.
If you do test it, then let me know how well it works for you.
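Expanded into the 4.1-era config.ini syntax used elsewhere in this thread, the collapsed list might look roughly like this (untested, as the author notes; host names are placeholders):

```ini
[DB DEFAULT]
NoOfReplicas: 2
DataMemory: 33M
IndexMemory: 17M

[MGM]
Id: 1
HostName: machine1

[DB]
Id: 2
HostName: machine1

[DB]
Id: 3
HostName: machine2

[DB]
Id: 4
HostName: machine3

# ...DB nodes 5-7 and API nodes 8-10 follow the same pattern...
```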


On Mon, 9 Aug 2004 12:07:11 -0400, Wheeler, Alex [EMAIL PROTECTED] wrote:
 After doing further testing, it would appear that the number of nodes must be a 
 power of 2:
 A test of 2 nodes works fine, but 3 fails
 4 works, but 6 fails
 8 works...
 Is this documented and I just missed it?
 
 --
 Alex Wheeler
 
 
 
 
 -Original Message-
 From: Mikael Ronström [mailto:[EMAIL PROTECTED]
 Sent: Thursday, August 05, 2004 1:15 PM
 To: Wheeler, Alex
 Cc: [EMAIL PROTECTED]
 Subject: Re: Nodes crash on table creation
 
 Hi Alex,
 I tried a similar configuration with all nodes on the same computer and
 that worked fine.
 Please provide the trace file in some manner and I'll check it once
 more.
 The other nodes fail since there is only one replica and thus when one
 node fails then
 the cluster fails.
 
 Rgrds Mikael
 
 2004-08-05 kl. 18.34 skrev Alex Wheeler:
 
  I am using the mysql-4.1.4-beta-nightly-20040804 snapshot, compiled
  with
  shared memory support, though I'm not using that feature yet.
 
  I'm using a 3 data node, 4 computer configuration, though I've tried
  several options and the only one that seems to work with this build is
  just 1 computer, here's my config file:
 
  [COMPUTER]
  Id:1
  HostName: 192.168.208.101
 
  [COMPUTER]
  Id:2
  HostName: 192.168.208.100
 
  [COMPUTER]
  Id:3
  HostName: 192.168.208.102
 
  [COMPUTER]
  Id:4
  HostName: 192.168.208.103
 
  [MGM]
  Id:1
  ExecuteOnComputer: 1
  ArbitrationRank: 1
 
  [DB DEFAULT]
  NoOfReplicas: 1
  IndexMemory: 50M
  DataMemory: 100M
  LockPagesInMainMemory: Y
 
  [DB]
  Id:6
  ExecuteOnComputer: 2
  FileSystemPath: /home/awheeler/cluster3/ndb_data6/
 
  [DB]
  Id:7
  ExecuteOnComputer: 3
  FileSystemPath: /home/awheeler/cluster3/ndb_data7/
 
  [DB]
  Id:9
  ExecuteOnComputer: 4
  FileSystemPath: /home/awheeler/cluster3/ndb_data9/
 
  [API]
  Id: 20
  ExecuteOnComputer: 1
  ArbitrationRank: 2
 
 
 
  The nodes come up fine, but when I try to create a table, at least one
  of the nodes will bail out and leave an error log:
  Current byte-offset of file-pointer is: 468
 
 
  Date/Time: Thursday 5 August 2004 - 03:51:33
  Type of error: error
  Message: Internal program error (failed ndbrequire)
  Fault ID: 2341
  Problem data: DbtuxMeta.cpp
  Object of reference: DBTUX (Line: 128) 0x0006
  ProgramName: NDB Kernel
  ProcessID: 3244
  TraceFile: ndb_7_trace.log.1
  ***EOM***
 
  The others exit with Arbitrator decided to shutdown this node
 
  The computers all have different amounts of memory, the smallest being
  512MB.
 
  Let me know if you need more information, and where to send the trace
  file.
 
  Any ideas on what is causing ndbrequire to fail?
 
 
 
 Mikael Ronström, Senior Software Architect
 MySQL AB, www.mysql.com
 
 Clustering:
 http://www.infoworld.com/article/04/04/14/HNmysqlcluster_1.html
 
 http://www.eweek.com/article2/0,1759,1567546,00.asp
 
 





NOT NULL

2004-08-08 Thread Justin French
Can someone give me a quick/clear explanation about why you would 
create a table with columns of NULL / NOT NULL?

I clicked around the MySQL manual last night I *think* I know what's 
what, but it'd be great to read one clear paragraph that summarises it 
all.

---
Justin French
http://indent.com.au


Re: a question/issue...

2004-08-01 Thread Justin Swanhart
[note: This discussion went on off-list, but I thought the info might
be useful in list context, so I am reposting it to the list.   Sorry
about the lack of individual messages.  I wish gmail had an option to
automatically add an address to a reply when you are replying to a
message that has been assigned a label.  I've requested the feature in
fact. ]


I didn't say it can't be done.  I said you have to be _very_ careful.
You can't use the connection in both the parent and the client.  If
you connect in the parent, then don't do anything with that connection
handle.  Let the client do all the work with that connection.  If you
don't do that you have an absolute recipe for disaster.  You need to
have one connection per child.  It simply won't work otherwise.  That
isn't to say you can't create one connection for each child in the
parent process and pass it through fork (more on that below).  Just
don't cross the streams, it will be _very bad_.  Close the connection
after the child ends.  Don't close the connection in the child.

The same logic applies to other file descriptors.  If you pass a
socket to a child process, then you close the socket in the parent and
you don't try to do anything else with it.  If you don't close it, you
will eventually run out of file descriptors.  The difference is that
closing a database involves more than just closing a file descriptor.
Database connections, well actually the database access layers, such
as PHP and PERL/DBI, do a lot of background work that is obfuscated
from the user.

This includes automatically closing file handles, database
connections, statement handles, etc.  If you close your handle in the
client, and the client library automatically issues a COMMIT and
closes the connection, then you kill the child and the parent also
tries to close the connection (probably because the connection has
gone out of scope) then you are going to have problems.

This is why I said, when you use Perl/DBI you need to make sure
InactiveDestroy is set correctly in the parent and the child.  This
prevents that background voodoo from going on in DBI and ensures
that you don't run into problems.  You will need to do similar
synchronization in other languages to ensure that your children and
your parent get along.

My philosophy is that you should destroy the connection in the same
thread/process/whatever that created the connection.  This ensures
that resources are released properly.

Now, all that said, I truly believe that the best way to do things is
probably the easiest way.  In general it is much easier, and you will
have fewer bugs, if you just connect in the child.  If you are
connecting in the parent because the child doesn't normally know what
database to connect to, then just set a variable with the connection
details (dsn/connect string/etc.) and let the child connect on its own

On Sun, 1 Aug 2004 12:02:30 -0700, bruce [EMAIL PROTECTED] wrote:

 since you're saying you can't share the connection, could you please tell me
 where in the docs it states you can't, and the reason(s) for not being able
 to 'share'.

 i would believe that once you make the connection, you should be able to
 'use' it between any process that can get 'access' to it.

 so my initial question still stands, how can it be accomplished, or why it
 can't be done.

 i'm inclined to believe that if you can create a db connection handle, then
 any other process should be able to use it, as long as the creating
 process/parent process is still alive/running

 -bruce




 -Original Message-
 From: Justin Swanhart [mailto:[EMAIL PROTECTED]
 Sent: Sunday, August 01, 2004 11:53 AM
 To: [EMAIL PROTECTED]
 Subject: Re: a question/issue...

 In general, it is probably a bad idea to inherit database connections
 from a parent in a fork()'ed child process.

 What is your reasoning behind not permitting the children to make their
 own connection?

 If you can not connect from the child and you must inherit a database
 connection from the parent, make sure you create a new connection for
 each child.  You can't share the same connection between the parent
 and the child.   If you use the connection in the parent do not use it
 in the child and vice versa.

 If you are using perl DBI then you need to set InactiveDestroy where
 it makes sense to do so.  See the DBI manual for details.  You will
 need to take similar measures in other environments.

 ..

 On Sun, 1 Aug 2004 09:22:21 -0700, bruce [EMAIL PROTECTED] wrote:
  hi...
 
  i'm testing an app where i want a parent app to create the mysql db
  connection link/handle, and basically pass this handle off to child
  procesess. is there any reason why this can't be done? are there any
  examples of this already being accomplished that you might provide???
 
  as far as i can tell/see, it should be possible. as long as the parent is
  still running, the connection link/handle should still be valid.
 
  i'm going to ultimately

Re: TOP

2004-07-28 Thread Justin Swanhart
TOP is a Microsoft SQL Server extension.

MySQL uses the LIMIT clause.

for instance, the following is a rather typical "top 10" SQL query:

select some_column, sum(another_column) total
  from some_table
group by some_column
order by total desc
LIMIT 10

On Wed, 28 Jul 2004 14:39:11 -0400, Kamal Ahmed
[EMAIL PROTECTED] wrote:
 Hi,
 Does anyone know how to do a TOP function in MySQL ?
 
 Thanks,
 
 -Kamal.
 
 





Re: why use MySql instead of Firebird or SQL Server express?

2004-07-26 Thread justin
 FirebirdSQL is an excellent project, but as far as I know there is not a
 single organization that stands behind it providing support, training,
 etc. as MySQL AB does with the MySQL product line.

Might want to take a look at http://www.ibphoenix.com/

Training is pretty pricey, but it's there.

Cheers,

Justin

Quoting Mark Matthews [EMAIL PROTECTED]:

 
 Steve Richter wrote:
 
 | In light of the licensing restrictions on using MySql in a commercial
 | package, why would MySql be prefered over Firebird or SQL Server 2005
 | express edition. Both appear to be no charge to redistribute compared
 to the
 | $250 for MySql.
 |
 | http://firebird.sourceforge.net/
 |
 | http://lab.msdn.microsoft.com/express/sql/default.aspx
 |
 | thanks,
 |
 | Steve Richter
 
 
 Steve, you're quoting a price for buying servers 'onesy-twosey' above,
 which doesn't appear to be your situation.
 
 I believe that what you're looking for is what's called an 'OEM' deal,
 if you want to commercially-license MySQL as an 'OEM', then you need to
 contact [EMAIL PROTECTED], as that pricing is always negotiated to meet
 your product's pricing and business model.
 
 MSDE has volume and concurrency limitations that are imposed by MS,
 compared to MySQL (IIRC, 25 concurrent users, and 2GB per database). It
 also is only available on the Windows platform.
 
 FirebirdSQL is an excellent project, but as far as I know there is not a
 single organization that stands behind it providing support, training,
 etc. as MySQL AB does with the MySQL product line.
 
 You are also much more likely to find people and third-party products
 that know, understand and work with MySQL than those that can work with
 FirebirdSQL out there in the marketplace.
 
 Regards,
 
   -Mark
 
 
 
 - --
 Mr. Mark Matthews
 MySQL AB, Software Development Manager, J2EE and Windows Platforms
 Office: +1 708 332 0507
 www.mysql.com
 
 MySQL Guide to Lower TCO
 http://www.mysql.com/it-resources/white-papers/tco.php
 
 
 
 




-
This mail sent through IMP: http://horde.org/imp/





re: List of associated records

2004-07-25 Thread Justin Swanhart
Create a separate table called member_interests or something similar.

Store one member_id and one interest_id (or whatever you have your PKs
named) in each row.

This is similar to an order entry system, which typically has one
table for order_headers and one for order_detail.  The order_header
table contains things like an order_id, the order_number, the
customer, the selected address, etc..  The order_detail table contains
the items that are on the order.


On Sun, 25 Jul 2004 12:40:09 -0500, Robb Kerr
[EMAIL PROTECTED] wrote:
 I have come across this problem a few times and wondered how other people
 solved the problem.

 Let's say I have a table containing Members. Each Member can choose several
 items in which they are interested. Each of these items represent records
 in a separate table - Interests. How do you store which records from
 Interests the member has checked in their record of the Members table?

 Do you create a TEXT field in the Members table and save a comma-delimited
 string of InterestsIDs?

 Thanx.
 --
 Robb Kerr
 Digital IGUANA
 Helping Digital Artists Achieve their Dreams




Re: Display field of selected record from full table recordset

2004-07-25 Thread Justin Swanhart
You probably want to pick up a good SQL book.  MySQL by Paul DuBois is
a really good one.

http://www.amazon.com/exec/obidos/tg/detail/-/0735712123/qid=1090786499/sr=8-2/ref=pd_ka_2/102-0741496-3072118?v=glances=booksn=507846

You want to use the WHERE clause of the select statement.

SELECT table.some_column, table.another_column, ...
   FROM table
 WHERE some_column = 'some_value'

see the manual:
http://dev.mysql.com/doc/mysql/en/SELECT.html

On Sat, 24 Jul 2004 13:52:53 -0500, Robb Kerr
[EMAIL PROTECTED] wrote:
 I have a recordset that retrieves the full content of the table - all
 fields, all records. Depending upon the content of different fields in
 different tables, I need to display certain fields of certain records
 within the full recordset. What's the syntax for selecting a particular
 record for display relative to the field contents of another recordset?
 
 Thanx,
 --
 Robb Kerr
 Digital IGUANA
 Helping Digital Artists Achieve their Dreams
 
 http://www.digitaliguana.com
 http://www.cancerreallysucks.org
 
 





Re: subquery problem.

2004-07-22 Thread Justin Swanhart
Version 4.0 doesn't support subqueries.

In any event, your query would return all rows from
channels as long as there are any rows in users, which
I doubt is what you intended.


--- nambi c [EMAIL PROTECTED] wrote:
 Hi,
 
 My server version :  4.0.18-max-nt
 
 I have created 2 tables 'channels' and 'users' in my
 database. I can query these tables individually. I
 am
 trying to execute a query with 'exists' clause. 
 
 mysql select * from channels where exists (select *
 from users);
 
 This give the following error.
 
 ERROR 1064: You have an error in your SQL syntax. 
 Check the manual that corresp
 onds to your MySQL server version for the right
 syntax
 to use near 'exists (sele
 ct * from users)' at line 1
 
 
 I am getting this syntax error message. The query
 seems perfect to me. Any clue what is happening?
 Help!
 
 -Nambi
 
 
 
   
 
 
 





Re: Transactions and mysql insert id

2004-07-22 Thread Justin Swanhart
MySQL doesn't guarantee that there will be no gaps in sequence values.
Assignment of the id is always atomic because InnoDB uses an AUTO_INC
lock that lasts for the duration of the insert, not the life of the
transaction.

lets say your highest order number is 10

transaction begins for client 1
  insert into orders (...) values (...)
  mysql_insert_id = 11 
  ... user adds stuff to order ...
  aborts order, transaction rolls back

at the same time
transaction begins for client 2
  insert into orders (...) values (...)
  mysql_insert_id = 12
  ... adds some stuff ...
  commits order

there will be a row with an id of 10 and a row with an id of 12 in
your database..  id 11 was rolled back and is gone

this is all documented here:
http://dev.mysql.com/doc/mysql/en/InnoDB_auto-increment_column.html
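The two-client scenario above can be sketched as a toy model (a hypothetical Python stand-in for InnoDB's AUTO_INC lock, not actual server code):

```python
class AutoIncCounter:
    """Toy model: the id is handed out atomically at INSERT time
    (short-lived AUTO_INC lock) and is never given back on rollback."""

    def __init__(self, start):
        self.next_id = start
        self.committed = set()

    def insert(self):
        rid = self.next_id   # grab the next id and bump the counter
        self.next_id += 1
        return rid

    def commit(self, rid):
        self.committed.add(rid)

    def rollback(self, rid):
        pass                 # the id is simply lost, leaving a gap

orders = AutoIncCounter(start=11)
id1 = orders.insert()        # client 1 gets 11
id2 = orders.insert()        # client 2 gets 12
orders.rollback(id1)         # client 1 aborts the order
orders.commit(id2)           # client 2 commits
# only id 12 survives; 11 is a permanent gap
```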

--- Scott Haneda [EMAIL PROTECTED] wrote:
 I have been pulling my hair out trying to get a
 solution to something,
 assuming idiotically that in a transaction scenario
 I would not be able to
 get the insert it back out.  It seems to work, I am
 wondering how and if it
 is reliable.
 
 Give the scenario where I have 2 inserts I want to
 make, since I can not
 seem to figure out how to make 2 inserts in one
 statement, I will make 2,
 with the condition that the second one needs to know
 the insert id.
 
 I just don't see how mysql can know the insert id in
 a transaction
 situation. I am assuming that mysql does not
 actually insert anything at
 all until it sees the COMMIT, so how does it know
 the insert id at all, when the
 records have not been inserted yet?  Is this
 reliable?
 
 Here is some pseudo code that shows what I am doing
 and comments where I am
 confused:
 
 mysqlQuery(tConn, "START TRANSACTION");
 
 repeat 1000 times  
 mysqlQuery(tConn, "INSERT INTO zaddress
SET user_id = '123',
address = '[EMAIL PROTECTED]'");
 
 // How can mysql know about this  
 tID = mysqlInsertID(tConn);
 
 mysqlQuery(tConn, "INSERT INTO zblacklist
SET user_id = '123',
id = " & tID & ", address = " & tID);
 end repeat;
 
 mysqlQuery(tConn, "COMMIT");
 -- 
 -
 Scott HanedaTel:
 415.898.2602
 http://www.newgeo.com   Fax:
 313.557.5052
 [EMAIL PROTECTED]Novato,
 CA U.S.A.
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   
 http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How do I import a .dmp file?

2004-07-22 Thread Justin Swanhart
While the extension on the file isn't set in stone, many people add
the .dmp extension to files created with the Oracle exp utility.

You can use a third-party utility called nxtract
(http://www.elmbronze.co.uk/nxtract/index.htm) to convert exp files
into tab-delimited files.  The eval version only handles 5k rows of
data, though.

I've never used, nor do I endorse nxtract.  I just found it while
googling around looking for a tool to process oracle exp files.

It might be easier to just connect to the Oracle instance (if it still
exists) using Perl and extract the data into INSERT statements that
way.




On Thu, 22 Jul 2004 15:53:39 -0700 (PDT), David Blomstrom
[EMAIL PROTECTED] wrote:
 --- Victor Pendleton [EMAIL PROTECTED] wrote:
  Are you referring to an Oracle .dmp export file? If
  so you will also need to
  do some parsing.
 
 * * * * * * * * * *
 
 I don't know; this is the first time I've tangled with
 a .dmp file. Someone told me it must have come from an
 Oracle system, because .dmp files are associated with
 Oracle. But I may have been misinformed.
 
 __
 Do you Yahoo!?
 Vote for the stars of Yahoo!'s next ad campaign!
 http://advision.webevents.yahoo.com/yahoo/votelifeengine/
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread Justin Swanhart
I don't see how using a multi-value insert would be
any faster than the insert between the tables.  It
would certainly be faster than one insert statement
per row, but I don't think it would be faster than
INSERT ... SELECT ...

The only reason I suggested an extended syntax insert
earlier was because I wasn't aware that a temporary
table was being loaded first.

Do you ever delete from this table?

Can you post the output of
SHOW VARIABLES for us?

Have you removed the unnecessary duplicate key on the
first column of your primary key?

Thanks,

Justin



--- gerald_clark [EMAIL PROTECTED]
wrote:
 mysql -i < filename.sql
 
 matt ryan wrote:
 
  Lopez David E-r9374c wrote:
 
  Since you have a temp table created (no keys I
 assume), use the command
 
   mysqldump -v -e -n -t dbname tablename > filename.sql
 
 
   
 
  This creates a file that inserts the records back
 into the same table
 
  it also does not do an insert ignore
 
  I need the records to go into the historical
 table, with an insert ignore
 
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread Justin Swanhart
--- matt ryan [EMAIL PROTECTED] wrote:
 Do you ever delete from this table?
 
  Temp table is truncated before the EBCDIC file is
  loaded
I meant the history table :)

 Have you removed the unecessary duplicate key on
 the first column of your primary key?
 
  Have not touched the DIC index yet; I need a backup
  server to change
  indexes, as it would take the main server down for too
  long, and it won't be
  that big an increase -- it's only a 3-character index.
  I also do joins on
  that field to other tables, so I was hesitant to
  remove that index.

Even if it is only a 3 character index, you still need
to do disk reads/writes to update the values in the
index.  With a huge table like yours one index can
make a big difference.

It will still work fine with joins.  Since it is the
leading column of another index, it will function just
like a normal index.  Having indexes on a and (a,b) is
redundant for searching/joining only a.  If you need
to join/search on b, then a separate index is required
for b if you have indexed (a,b).
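A quick way to convince yourself that the leading column of a composite index is usable on its own is to watch a query planner pick it; SQLite behaves the same way here (an illustration only; table and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT, c TEXT)")
conn.execute("CREATE INDEX idx_ab ON t (a, b)")  # composite index only

# A lookup on the leading column alone still uses the composite index,
# so an extra single-column index on `a` would be redundant.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 'x'"
).fetchall()
plan_text = " ".join(str(row) for row in plan)
# the plan mentions idx_ab even though the query filters only on `a`
```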


The last option I can think of requires a lot more
work on your part as far as inserting data, but it may
be your only option at this point.  This is why I
asked if you delete data from the history table, as it
makes deletes/updates more problematic as well.

Split the myisam table into separate tables.  We will
call each table a bucket.

Create a MERGE table over all of them for selecting the
data.

When inserting, use a hash function on your primary
key values to determine which bucket to insert into. 
If you almost always select by primary key, then you
can optimize your select queries to only look at the
correct bucket as well, and to only use the merge
table when you aren't selecting by primary key.

This will speed your inserts because instead of 258M
rows to search through for each insert there are only
8M if you use 32 buckets.  The biggest benefit is that
you could also insert in parallel using multiple mysql
connections, because you could calculate the bucket as
an additional column in your temporary table, then do
the inserts for all the buckets the same time.
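The bucket choice can be computed in the load script; a minimal sketch, assuming 32 buckets and a hypothetical `history_NN` naming scheme:

```python
import hashlib

NUM_BUCKETS = 32  # assumption: ~258M rows / 32 buckets ~= 8M rows each

def bucket_for(primary_key: str) -> int:
    """Deterministically map a primary-key string to a bucket number."""
    digest = hashlib.md5(primary_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

def bucket_table(primary_key: str) -> str:
    """Name of the bucket table an INSERT should target."""
    return "history_%02d" % bucket_for(primary_key)
```

Because the mapping is deterministic, selects by primary key can be routed to the same single bucket the row was inserted into.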




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cluster on Solaris 9/x86 works

2004-07-20 Thread Justin Swanhart
You do have ByteOrder: Big in the .ini file for the
sparc database servers, right?



--- Alexander Haubold [EMAIL PROTECTED] wrote:
 Hi everyone,
 
 Just to follow up on my previous post regarding
 Cluster on Sparc/Solaris 9: 
 On an x86 Solaris 9 machine that was set up similar
 to the Sparc one, MySQL 
 Cluster (4.1.4) does not produce a Bus Error. Ndbd
 starts up just fine.
 
 I hope that the source will be corrected for the
 Sparc platform, or a note 
 is published on what needs to be done differently to
 compile for Sparc 
 versus x86.
 
 - Alex
 
 
 -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
 Alexander Haubold
 Columbia University - SEAS 2001, 2003
 
 362 Riverside Dr. #5B3
 New York, NY 10025
 
 Phone: +212-853-8239
 Email: [EMAIL PROTECTED]
 WWW: http://www.aquaphoenix.com
 
 
 
 -- 
 MySQL Cluster Mailing List
 For list archives: http://lists.mysql.com/cluster
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



data design question

2004-07-19 Thread Justin French
Hi all,
I'm in the planning phase of a hosted web application where all 
instances of the app (a sort-of website CMS) will be running off a 
single code source.  I've got a clear picture about everything except 
for the database design.

a) I could have a separate database  table structure for each website 
in the application

b) I could have a single database for all instances, but individual 
table structures for each (eg client_tablename)

c) I could have all data from all instances in one table structure, 
with a website_ID for each record, signifying which site the record 
relates to.

I'm leaning towards (c) on the basis that updates to the database and 
table structure will be a breeze (only have to update one instance), 
but I'm concerned about performance.

Let's say I had 20,000 articles belonging to 100 websites.  Would there 
be a performance loss by having all 20,000 articles in one table, with 
a indexed `siteID` column identifying which site each article belongs 
to, as opposed to 100 tables (one for each site) holding only their own 
data??

Should I worry at 40,000? 100,000?  Or will the indexing of the siteID 
keep everything extensible?

---
Justin French
http://indent.com.au
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
a few suggestions...

Your slow inserts could be a concurrency issue.  If
lots of users are doing selects, that could be
interfering with your inserts, especially if they use
an ODBC/JDBC app that locks the table for the entire
read.  JDBC reads do that when you tell them to stream
the contents of the query, because the results
wouldn't all fit in memory.

--

Does your import script do an INSERT for each line, or
does it combine lines into multi-value inserts?

doing an insert into ... values (...),(...),(...) 
will be much faster than doing one insert for each
row.

since your max packet size is pretty big, you should
be able to consolidate a fairly large number of rows
into one insert.
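A load script can consolidate rows into extended-syntax INSERTs along these lines (a sketch; values are assumed numeric or pre-escaped, and real code should use the client library's escaping or parameter binding):

```python
def batch_insert_statements(table, columns, rows, batch_size=500):
    """Yield multi-value INSERT statements, batch_size rows apiece."""
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        values = ",".join(
            "(" + ",".join(str(v) for v in row) + ")" for row in chunk
        )
        yield "INSERT INTO %s (%s) VALUES %s" % (
            table, ",".join(columns), values)

stmts = list(batch_insert_statements("t", ["a", "b"],
                                     [(1, 2), (3, 4), (5, 6)], batch_size=2))
# yields two statements: one carrying 2 rows, one carrying 1
```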

--

What settings are you using on the Percs?  What stripe
size?  What write cache mode are you using (write-back
or write-through)?  Are you using read-ahead caching
(that can hurt index performance)?

--

Defragging the filesystem probably won't have any
major impact on the speed of your application.  

---

Is there another way you can approach the duplicate
problem?  For instance, if duplicate data can only be
generated in the last few days worth of imports, you
might not have to rely on your primary key on the main
table for importing.  This only works if you can drop
the primary key because it is only used for duplicate
checking and isn't used to speed queries.  You could
instead create another table that you do all your
importing to, taking care of  dupes with a primary key
on that table, then insert from that table into the
main one.  Keep a timestamp in that table and purge
the older records periodically.


The last thing I could think of would be a dupe
checker table.  Create an InnoDB table that consists
of only the key columns from your big table and make all
the columns the primary key.  Essentially you have
just created an index-only table.  Insert your new
data into a temporary heap table, then delete from the
heap table where the key is in your dupe table.
Then insert everything from the heap table into the
big table.  Once again, this only works if you don't
need the primary key on the big table.  This will use
more CPU/memory but it may get around your read
problems.




Justin

--- matt ryan [EMAIL PROTECTED] wrote:
 
 
  You might be out of luck with MySQL ... sorry.
 
  You may need to switch to a database that has a
 parallel query 
  facility. Then - every query becomes a massive
 table scan but gets 
  divided into multiple concurrent subqueries - and
 overall the job 
  finishes in a reasonable amount of time. The
  epitome of brute force. 
  It's hard to rationalize initially but after a
 while you see it's the 
  only way to go. Remember -  indexes are no longer
 required.
 
  We have a billion row 100GB table the users search
 any and every way. 
  Response time is less than a minute.
 
  We are anxiously waiting to see this technology
 added to MySQL. Maybe 
  one day we'll have some money to contribute to the
 effort. Parallel 
  query is not trivial. That's why these databases
 are expensive.
 
  I can send you more details privately if you are
 interested.
 
 
 I've used it, with Oracle, but Oracle's index
 searches are better: hit 
 the best one first, then 2nd best, then 3rd.  But I
 really don't want to 
 go to Oracle; it's too complicated for my techs
 
 vs mysql, hit the best one first, and use no other
 
 Query time is a non issue at this point, it's load
 time, load daily file 
 into temp table, then insert ignore into main table,
 on key violation 
 the violating record is ignored
 
 load time is the issue, the server loads files 15
 hours a day, that big 
 primary key makes loading any table over 2-3 gig
 VERY slow
 
 I thought it was a bug; everything was great until
 you get up to about 
 3 or 4 gig, then it gets SLOW
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
INSERT IGNORE doesn't insert the record if there is a
duplicate key; it simply skips the row.  Without
the IGNORE clause, the query would generate an error
instead of silently ignoring the insert.
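SQLite's INSERT OR IGNORE behaves analogously, which makes the semantics easy to demonstrate (an illustration via sqlite3, not MySQL itself; table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'first')")

# Duplicate primary key: silently skipped instead of raising an error.
conn.execute("INSERT OR IGNORE INTO orders VALUES (1, 'duplicate')")

rows = conn.execute("SELECT * FROM orders").fetchall()
# rows == [(1, 'first')] -- the duplicate never made it in
```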

--- [EMAIL PROTECTED] wrote:
 That's the whole point. Eliminate your indexes and
 your load problems are 
 solved. Especially given the fact that you insert
 ignore and don't use the 
 primary key to validate uniqueness.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
 I've used it, with oracle, but oracles index
 searches are better, hit 
 the best one first, then 2nd best, then 3rd, but I
 really dont want to 
 go to oracle, it's too complicated for my tech's
Oracle rarely performs index merges, but it does have
the ability to do so, which MySQL lacks.

 Query time is a non issue at this point, it's load
 time, load daily file 
 into temp table, then insert ignore into main table,
 on key violation 
 the violating record is ignored
I know you don't want to go with Oracle, but I will
just add that it could help you here too, because it
can do parallel DML.  This is especially useful if you
have access to the partitioning option, because you
could then partition your data by hash and get a
number of bonuses.  #1 your index updates will be much
faster because there are a lot less rows to look
through.  #2 parallel DML can insert into multiple
partitions at once.  

I am unsure if MaxDB supports any of those features,
though it may.  You may want to look into whether it
does.

I don't want to sound like I'm pushing Oracle.  I'm
not an Oracle sales rep, or anything like that.  I am
a professional Oracle DBA that happens to also use
mySQL a lot and I like both databases.  Sometimes one
is better than the other for solving a problem.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Search one table, then display another table where keys match ?

2004-07-14 Thread Justin Swanhart
Does access_no contain actual textual data, or is it
simply a key like a category or an integer?  If you
don't need to do a fulltext search against access_no
then there is no reason to include it in your fulltext
index.  You should most likely have a separate index
for access_no in that case.

The following query assumes access_no is exactly the
same in both balloon_txt and balloon_rec for the rows
you are matching. 

SELECT br.*
  FROM balloon_txt bt,
       balloon_rec br
       /* find the matching rows from balloon_txt */
 WHERE MATCH(bt.access_no, bt.recs_txt)
       AGAINST ('robin')
       /* and join them to rows in balloon_rec using
          the access_no column */
   AND bt.access_no = br.access_no;

if access_no doesn't need to be full text indexed, you
could drop the fulltext key and add a new one just for
recs_txt and remove bt.access_no from the MATCH()

--- leegold [EMAIL PROTECTED] wrote:
 If you would entertain a MYSQL/PHP, hope not too
 off-topicIt's 
 probably not difficult to solve - but you would be
 helping me
 with some SQL logic.
 
 The only way I can think of to explain what I want
 to do
 is to give you my working newbie MSQL/PHP code that
 I'm learning
 MYSQL/PHP with, and at a certain point in the code
 below I'll state
 exactly as I can what I want to try to do. It's
 probably
 quite simple but I can't get it- Thanks:
 
 ...
 <pre>
 <?php
 $dblink = mysql_connect( 'localhost', 'guest', 'password' );
 mysql_select_db( 'balloon', $dblink );
 // Doing a FULLTEXT search
 // Re the SELECT: I indexed both fields together, so
 seemed like
 // I should put them both in the MATCH...OK, it
 works.
 $query = "SELECT * FROM balloon_txt WHERE
 MATCH(access_no, recs_txt)
 AGAINST ('robin')";
 $result = MySQL_query($query);
 
 /
  OK, right here - next below I'm gonna display/loop
 $result from table
  balloon_txt. But, what I really want to do is take
 the result set
  access_no fields from the search above and
 (access_no is a Key in all
  my tables) and use it to generate results (ie.
 matching records) from
 another table called balloon_rec and display/loop
 the results from
  balloon_rec. So I'm searching balloon_txt, getting
 results, but I want
  to display matching records from another table -
 balloon_rec. Is there
  a way to do a join or something in the SELECT
 above? Or do I process
  $result? Seems a join in the SELECT above or some
 SQL above is cleaner
  - but not sure how(?) Thanks, Lee G.
 ///
 
 while ( $row = mysql_fetch_row( $result ) ) {
  for ( $i=0; $i < mysql_num_fields( $result ); $i++ )
   {echo $row[$i] . " ";}
  echo "\n\n\n";
 }
 // Close the db connection
 mysql_close ( $dblink );
 ?>
 </pre>
 ...
 
 
 
   
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: do i need an index for this?

2004-07-14 Thread Justin Swanhart
Creating a key will make that query execute very fast,
but if that is the only reason for the key you are
going to be trading quite a lot of space for the speed
of one query.  

How often are you going to run this query?  If you
have 324 million rows, then that index is going to
consume somewhere in the order of 2G or more of disk
space.  Is it worth using all that space to make one
query faster?

--- J S [EMAIL PROTECTED] wrote:
 Hi,
 
 I want to find the earliest and latest times in the
 time column of my table 
 internet_usage:
 

 +--------------+----------------------+------+-----+---------+-------+
 | Field        | Type                 | Null | Key | Default | Extra |
 +--------------+----------------------+------+-----+---------+-------+
 | uid          | int(10) unsigned     |      | MUL | 0       |       |
 | time         | timestamp(14)        | YES  |     | NULL    |       |
 | ip           | int(10) unsigned     |      |     | 0       |       |
 | urlid        | int(10) unsigned     |      |     | 0       |       |
 | timetaken    | smallint(5) unsigned | YES  |     | 0       |       |
 | cs_size      | int(10) unsigned     | YES  |     | 0       |       |
 | sc_size      | int(10) unsigned     | YES  |     | 0       |       |
 | method_ID    | tinyint(3) unsigned  |      |     | 0       |       |
 | action_ID    | tinyint(3) unsigned  |      |     | 0       |       |
 | virus_ID     | tinyint(3) unsigned  |      |     | 0       |       |
 | useragent_ID | smallint(5) unsigned |      |     | 0       |       |
 +--------------+----------------------+------+-----+---------+-------+
 
 So far there are 324936160 rows. If I do :
 
 SELECT MIN(time) as earliest, MAX(time) as latest
 from internet_usage;
 
 I can see the query is going to run for a long time.
 Do I have to create an 
 index on time to speed this up or is there another
 way of doing it?
 
 Thanks for any help.
 
 JS.
 

_
 It's fast, it's easy and it's free. Get MSN
 Messenger today! 
 http://www.msn.co.uk/messenger
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Hold System

2004-07-14 Thread Justin Palmer
Hi,

I am currently working on a hold system.  The system will allow
Coordinators to hold international students to place them with American
Host Families.  I am having a hard time coming up with a good MySQL
table design for this process.  

I have a table now that looks something like:

CREATE TABLE `hold_tracker` (
  `id` int(11) NOT NULL auto_increment,
  `STUDENT_ID` int(11) NOT NULL default '0',
  `USER_ID` int(11) NOT NULL default '0',
  `valid` char(1) NOT NULL default '1',
  `date_held` bigint(20) NOT NULL default '0',
  `date_created` bigint(20) NOT NULL default '0',
  PRIMARY KEY  (`id`)
) TYPE=MyISAM AUTO_INCREMENT=12 ;

This works okay for just keeping track of who is holding the student.
There are some restrictions:

1. A Coordinator may only hold 3 students at a time.  (Program logic,
this is done already.) 
2. A Coordinator can only hold a student while in the #1 position for a
certain amount of time. (I don't have a problem with calculating the
time. The problems are:
A. What do I do with the record once their hold has expired?  
B. Also what do I do with the other records that are holding so
they get adequate hold times in the number one  position?)
3. There can only be 3 holds per student. (Program logic, this is done
already)

I can come up with some solutions for Restriction #2, but I feel that
they are kind of sloppy with the current table schema.  I also think
that a better table schema would be in order here.

<sloppy_way>
After time has expired or the Coordinator has canceled the hold, I can
turn the 'valid' field of the record to zero, and update the next
Coordinator's 'date_held' to the current date, giving them adequate time
to have their hold. </sloppy_way>

Has anyone else built a hold system before that might be able to help me
out with some pointers?

I hope this makes sense. If not please ask for clarification.  As always
I appreciate any assistance.

Thank you,

Justin Palmer



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread Justin Swanhart
Indexes can generate vast amounts of random i/o. 
Because of rotational latency, random i/o can really
slow you down, especially if you are using IDE or SATA
disks because they can't do tagged queueing like SCSI
disks can.

If you have the budget for it, I would consider
getting some solid state disks.  Because they have
extremely low latency you will be able to get full i/o
bandwidth on your reads.  If you can't afford those,
consider adding more disks to your RAID array so that
you can spread the reads over more spindles, which
will help performance.


--- matt ryan [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 
 You may want more indexes but you might be getting
 killed because you already have too many.
 
 To test - try loading into a table without indexes
 and see if it makes a difference.
 
 At the very least - check to see if the primary
 index which starts with 'dic' can make your special
 'dic' index superfluous.
 
 If write speed is a bottleneck you might consider
 Raid-1 instead of Raid-5.
 
 Reading lots of rows via index is a killer.
 Depending on your hardware it may be cheaper to
 table scan 50 rows than to read 1 via index.
 However, this requires partitioning of the data
 based on some column which appears in every query
 and acts as an initial filter. If you are lucky
 enough to be in that situation - consider a MERGE
 table.
 
 
   
 
 
 These tables are merged; the total table size is
 huge.  This 
 particular table is 45,449,534 rows; however,
 all the merge tables 
 combined are 258,840,305 records
 
 perhaps I should reorder the primary key, putting the
 longest, most unique 
 field up front and the least unique at the end.
 Would that speed up 
 the key check?  I can tell that almost everything
 is read IO, very 
 little write IO
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replication - multiple masters

2004-07-13 Thread Justin Swanhart
Having that many instances on one box is going to be a
management nightmare.  I can only imagine the recovery
scenarios should you have a hardware problem.  Perhaps
you may want to think about writing your metric data
to a local mysql instance then pulling the data from
each instance into the depot database with a script
that runs every few minutes.  Using this pull
methodology you don't have to worry about losing
metrics if your depot is down but you don't have the
burden of up to tweleve databases to manage on one
machine (twenty four databases total).  

Another option would be to write the metrics to a flat
file on the web server, then simply serve that file up
via HTTP.  You then have only one database to manage,
you can pull the metrics into it very easily and you
still don't have to worry about a down depot server.

In your depot you then have a couple options.  You
could have one master table that contains metrics from
all the machines and contains a column for the machine
name, or you could have a separate table for each
machine, and if you want to examine metrics for all
machines you could use a merge table.
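The pull side of that design could be as small as this (a sketch with a made-up flat-file format of tab-separated timestamp, metric, value lines; sqlite3 stands in for the depot database):

```python
import sqlite3

def load_metrics(conn, machine, flat_file_text):
    """Insert one machine's flat-file metrics into a single shared
    table, tagging each row with the machine name."""
    rows = [
        (machine,) + tuple(line.split("\t"))
        for line in flat_file_text.splitlines() if line
    ]
    conn.executemany(
        "INSERT INTO metrics (machine, ts, metric, value) VALUES (?,?,?,?)",
        rows,
    )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metrics (machine TEXT, ts TEXT, metric TEXT, value TEXT)"
)
load_metrics(conn, "www1",
             "2004-07-13 10:00\thits\t42\n2004-07-13 10:05\thits\t57")
count = conn.execute(
    "SELECT COUNT(*) FROM metrics WHERE machine = 'www1'"
).fetchone()[0]
```

A script like this, run every few minutes against each web server's exported file, keeps the depot current without the depot being a single point of failure for collection.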

--- Marc Knoop [EMAIL PROTECTED] wrote:
 Jeremy Zawodny writes: 
 
  The web servers record web metrics to local mysql
 databases.  I would 
  like those local databases to be consolidated
 onto the DEPOT [as three 
  separate DBs]. 
 
  You cannot do that.
  snip 
  You'd need to run 3 instances of MySQL on DEPOT,
 one for each WWW
  server you'd like to mirror.
 
 Can you, or anyone, comment on the practicality of
 doing so?  I estimate 
 10,000 to 30,000 records per web server, per day
 using 3 remote web servers. 
 The number of web servers would not likely grow to
 more than 12. 
 
 My planned DEPOT server is a Dell PowerEdge - dual
 Xeon, 2GB memory and 
 oodles of disk space. 
 
 Could mysql, or Linux ES 3.0 for that matter, handle
 it?  Is there anyone on 
 this list running several instances of mysql on the
 same box?  Any 
 experiences to share? 
 
  --
 ../mk 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why this query doesn't group the email addresses?

2004-07-13 Thread Justin Swanhart
You are doing an implicit group by of first, last as
well as your explicit group by of email.  

So you could have two records with the same e-mail
address generate two records with your group by

   Justin Time [EMAIL PROTECTED]
   Justin Credible [EMAIL PROTECTED]  --DUPE--

Case differences between the records could also cause
dupes.  If case differences are causing it then do

select lower(first), lower(last), lower(email)
...
group by lower(first), lower(last), lower(email)

--- Wesley Furgiuele [EMAIL PROTECTED] wrote:
 What type of field is the email field?
 
 Wes
 
 On Jul 13, 2004, at 11:04 AM, Aaron Wolski wrote:
 
  Hey all,
 
  Got this query:
 
  SELECT first,last,email FROM CustomerTable AS t1,
  OrderTable AS t2, CartTable AS t3 WHERE
 t2.cart_id=t3.cart_id
  AND t1.id=t2.customer_index AND t3.submitted='1'
 AND
  t3.product_index='1' AND t3.quantity > 0
  GROUP BY t1.email ORDER BY t1.first,t1.last
 
  For some strange reason it doesn't seem to group
 the email addresses.
  I'd be hard pressed to find every occurrence out
 of 1000 records, but I
  DID quickly spot two exact same records which
 means the email address
  was not grouped.
 
  What can I do or where did I go wrong?
 
  Thanks!
 
  Aaron
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why this query doesn't group the email addresses?

2004-07-13 Thread Justin Swanhart
 Where is the implicit group?  The 'order by'
 shouldn't affect how things
 are grouped.  On MySQL 4.0.17:

Wow.  I wasn't aware.  Is that expected behavior? 
Other databases (Oracle) generate an error when you
include columns in the select list that aren't in a
group by, or they do an implicit group by (postgres)
on the columns.  I [wrongly] assumed an implicit group
by was going on because the query hadn't generated an
error.

I guess mysql just takes the values from the first
record that matches the group expression.

That creates confusion when you do:
select first,last,email, count(*)
from foobar
group by email

Because the query will report a count of two (given
your data) when there really is only one row with that
first,last,email combination.

Oracle would require you to do:
select first,last,email, count(*)
from foobar
group by first,last,email
otherwise you would get an error that first is not a
GROUP BY expression.

That query would return four rows on your data, each
with a count of 1.

My apologies,

Justin

 
 create table bar ( email varchar(64), first
 varchar(64), last
 varchar(64) );
 
 insert into bar values ('[EMAIL PROTECTED]', 'a', 'smith'),
 ('[EMAIL PROTECTED]', 'a',
 'williams'), ('[EMAIL PROTECTED]', 'b', 'webb'), ('[EMAIL PROTECTED]',
 'c', 'oconner');
 
 mysql select * from bar;
 +-+---+--+
 | email   | first | last |
 +-+---+--+
 | [EMAIL PROTECTED] | a | smith|
 | [EMAIL PROTECTED] | a | williams |
 | [EMAIL PROTECTED] | b | webb |
 | [EMAIL PROTECTED] | c | oconner  |
 +-+---+--+
 
 mysql select first,last,email from bar b group by
 b.email order by
 b.first, b.last;
 +---+---+-+
 | first | last  | email   |
 +---+---+-+
 | a | smith | [EMAIL PROTECTED] |
 | b | webb  | [EMAIL PROTECTED] |
 +---+---+-+
 
 
  Case differences between the records could also
 cause
  dupes.  If case differences are causing it then do
  
  select lower(first), lower(last), lower(email)
  ...
  group by lower(first), lower(last), lower(email)
 
 Case or extra whitespace is a definite possibility. 
 Aaron, try to find
 at least one occurrence of duplicate email addresses
 and then post a
 small (5 row) dataset that exhibits the problem you
 are having.
 
 Garth
 
  --- Wesley Furgiuele [EMAIL PROTECTED] wrote:
   What type of field is the email field?
   
   Wes
   
   On Jul 13, 2004, at 11:04 AM, Aaron Wolski
 wrote:
   
Hey all,
   
Got this query:
   
SELECT first,last,email FROM CustomerTable AS
 t1,
OrderTable AS t2, CartTable AS t3 WHERE
   t2.cart_id=t3.cart_id
AND t1.id=t2.customer_index AND
 t3.submitted='1'
   AND
t3.product_index='1' AND t3.quantity > 0
GROUP BY t1.email ORDER BY t1.first,t1.last
   
For some strange reason it doesn't seem to
 group
   the email addresses.
I'd be hard pressed to find every occurrence
 out
   of 1000 records, but I
DID quickly spot two exact same records which
   means the email address
was not grouped.
   
What can I do or where did I go wrong?
   
Thanks!
   
Aaron
   
   
   -- 
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:   
  
 

http://lists.mysql.com/[EMAIL PROTECTED]
   
   
 -- 
 . Garth Webb
 . [EMAIL PROTECTED]
 .
 . shoes * 鞋子 * schoenen * 단화 * chaussures *
 zapatos
 . Schuhe * παπούτσια * pattini * 靴 *
 sapatas * ботинки
 
 
 





Re: Oracle 2 MySQL updates/replication?

2004-07-13 Thread Justin Swanhart
An option would be a log reader program that uses
Oracle log miner to only show commited transactions
from the redo logs.  You could then replay the SQL
that is being executed on the oracle box on the mysql
server as long as the tables are defined the same.

9i has an enhanced log miner that can be used to read
8i redo logs as well, so you might want to use the
newer 9i client if you go this way.

Updates to the oracle database could be processed the
same way using the mysql binary log as long as no
mysql extensions were used like inserting multiple
rows with a single insert statement.

--- Jeremy Zawodny [EMAIL PROTECTED] wrote:
 On Tue, Jul 13, 2004 at 06:11:22PM -0700, Carl
 Edwards wrote:
  Hello,
  
  I found a question about Oracle 2 MySQL
 replication in the
  archive on Sep. 2001 but no mention since?
  
  We have a  department using Oracle 8.1.7 and I'm
 running MySQL
  4.0 and neither of us wants to change :-)
  
  I could call a Perl, C++ or Java program from cron
 to periodically
  update the MySQL instance from Oracle but was
 hoping to use a
  trigger/stored procedure to initiate the update so
 it seems more
  real time.  Does this seem possible?
  
  Of course it may turn out non-trivial to write the
 synchronization
  code so I'll take suggestions on that front also.
 
 Golden Gate Software makes a product that does this.
  I'd have a look
 at what they offer.
 
 Jeremy
 -- 
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux
 Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/
 
 [book] High Performance MySQL --
 http://highperformancemysql.com/
 
 
 





Re: Upgrade to mysql 4.0 in Fedora Core 2

2004-07-12 Thread Justin Swanhart
rpm -qa | grep mysql will show you what mysql
packages you have installed.

You probably have both 3.x and 4.x packages installed
and, assuming you don't have a 3.x database you want to
preserve, I would suggest uninstalling the 3.x package
with rpm --erase name_of_3.x_package

To determine where the files in a package are
installed you can query the package via rpm.  To do so,
simply run rpm -ql name_of_package

You can use rpm -ql to find where the 4.0 rpm
installed its binaries and execute them from there.  


--- A. Lanza [EMAIL PROTECTED] wrote:
 After installing FC2 i realized that the mysql
 server version installed
 was 3.x. I wanted to install version 4.0, so i got
 the rpm
 package from mysql website and i installed it on my
 system.
 
 Now it seems like mysql server is not running. I get
 no output for the
 command ps -A | grep mysql. Also, when i try to
 connect to mysql
 server using mysql client, i get the following error
 message:
 
 ERROR 2002: Can't connect to local MySQL server
 through socket
 '/var/lib/mysql/mysql.sock' (2)
 
 It seems like mysql server does not automatically
 start when i boot up
 my computer... how can i start it manually? Where in
 the filesystem does
 mysql server get installed after installing the rpm?
 
 Though, when i run the Gnome Service config.
 utility, i can see mysql
 service on the list, with a checkmark, and it
 appears to be running...
 i'm so confused about this.
 
 Please help.
 
 Thanks in advance.
 
 
 
 
 





Re: Implementing full text searching on an existing, production database.

2004-07-12 Thread Justin Swanhart
Keep in mind that if you create an index on multiple
fields, then all of those fields must be searched at
once.  You can't index product_name, product_desc and
product_category for instance, then only search on the
product_name field using MATCHES.

If you want to bypass this (and many other
limitations, including stoplists, short words, etc.)
then I would suggest indexing your data with a
separate text indexing system like Jakarta Lucene
(http://jakarta.apache.org/lucene/docs/index.html).
Using a product like Lucene will also allow you to
implement your parametric searching MUCH more easily.  You
can either define additional parametric fields in your
Lucene index, or you could create a second index and, with
the API, very easily merge the searches between the
indexes.  

Keep in mind that a major limitation of the mysql
fulltext engine is that it can't index more than 500
characters which could be a major drawback for your
parametric data.

The following assumes you will stick with mysql
fulltext indexes...

In order to index 3 letter words, you will need to set
ft_min_word_len in your my.cnf file.  You probably
also want to create your own list of stopwords and use
ft_stopword_file.
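A minimal my.cnf fragment for this might look as follows (the stopword path is illustrative, and the full-text index must be rebuilt after changing either setting):

```ini
[mysqld]
# index words of 3 letters and up (the default minimum is 4)
ft_min_word_len = 3
# custom stopword list, one word per line (path is illustrative)
ft_stopword_file = /etc/mysql/ft_stopwords.txt
```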

If your table is large then the biggest problem you
are going to have when creating  the index is that the
table will be locked while the index is being created.

If that is a problem then I would suggest that you
create a separate table with CREATE TABLE ... AS SELECT. You
will need some way of keeping track of any additions
to the base table at this point, perhaps using the
highest product_id, or if you have a modification
timestamp on your table, use that.
Create the text index on the new table and test it
out.   When you are satisfied that everything is
working ok then rename the new table to the old table,
and insert/update the records that have been
added/modified since you created the copy.

If you can suffer the downtime then simply create the
index on the table and wait it out.

There is very little danger in adding the text index
to a production table.  Just back up your database
before you make the modifications and you should have
no problems whatsoever.

--- Stuart Grimshaw [EMAIL PROTECTED] wrote:
 Hi All,
 
 I'm currently engaged in a project to implement 
 full text searching
 of our product database. The current implementation
 was written for an
 older version of MySQL and doesn't implement BOOLEAN
 MODE.
 
 Currently, the full text search is against a
 de-normalised table from
 fields including the product's ID, title &
 description, in a simple
 table :
 
 CREATE TABLE text_search
 (
 product_uid int(10) NOT NULL,
 rank int(10) NOT NULL DEFAULT 0,
 txt TEXT
 )
 
 with the full text index set up against txt.
 
 There are several problems with this implementation,
 firstly the
 de-normalised txt field doesn't include all
 information on a
 product, so I would like to implement the search
 against the full
 product table. There are approx 65,000+ products in
 the table, and the
 index would be on 2 or 3 fields in that table. Has
 anyone retro-fitted
 a full text index to a production database? Is there
 an established
 strategy for doing this?
 
 Because of the nature of our business we sell a lot
 of products where
 the keyword is 3 letters, DVD, USB, DDR etc etc. The
 manual mentions
 that while you can reduce the minimum number of
 letters, it's not a
 good idea: "Modifying the default behavior will, in
 most cases, make
 the search results worse," it says. Is there a
 better way to ensure
 these search terms are included in the index?
 
 Finally, we also carry parametric data on our
 products; it would be
 good to include this information in the full text
 search. The only way
 I can think of is to create a separate full text
 index on the table
 storing the parametric data, and then run the query
 against both
 tables, merging the results in code.
 
 -- 
 -S
 
 
 





Re: When is mysql 4.1.x , production?

2004-07-12 Thread Justin Swanhart
A beta takes as long as a beta takes.  That is really
the nature of beta testing.  As for an approximate
timeline, I've heard various quotes, but most people
seem to think somewhere late third quarter that the
release will be marked stable.  

4.1.3 is really quite stable and you should have very
few problems with it.  If you are developing a new
product and you need features that are available only
in the 4.1 release, then you are highly encouraged to
test the release.  By doing so you help to move the
beta forward because in the unlikely event that you do
find any problems you can report them and they will
get resolved.  


--- Ben David, Tomer [EMAIL PROTECTED] wrote:
 5-7 more beta releases
 
 How much time is 1 beta release taking
 (approximately)?
 
 Thanks :)
 
 Original Message:
 From: Josh Trutwin [EMAIL PROTECTED]
 To: 
 CC: [EMAIL PROTECTED]
 Subject: Re: When is mysql 4.1.x , production?
 Date: Mon Jul 12 16:41:32 GMT 2004
 
 On Mon, 12 Jul 2004 12:05:53 +
 Ben David, Tomer [EMAIL PROTECTED] wrote:
 
  when is mysql 4.1.x going to be released for
 production?
 
 When it is ready I'd guess.  :)
 
 If history is a predictor, though, I would expect
 4.1.x to go through 5-7 more beta releases. 
 Help the developers out by testing it in your
 environment.
 
 Josh
 
 
 
 
 
 





Re: Implementing full text searching - more questions

2004-07-12 Thread Justin Swanhart
 Does that mean the max. string that can be indexed
 and therefore searched on is 500 chars? What exactly
 is this limitation?
I may have been wrong on this limit.  I know I read
about it somewhere, but I can't seem to find out where
at the moment.  Since the fulltext index is maintained
as a separate b-tree with each word from the record
and its local weight, I am nearly certain I was wrong
in making the 500 char limit assertion (though a
single word is limited to that length).  

 Can I just add words or append words to the existing
 default stopword file? After I add words do I have
 to
 reindex the fields or restart anything? (I'm using
 win32)

You can't modify the default stoplist, but you can
create your own using ft_stopword_file (just include
the words on the default stoplist).  If you change the
stopwords you must rebuild your index.

 
 If I insert/add a record to a fulltext table do I
 have
 to redo(reindex) anything? Assuming I'm using a
 recent ver.
 of MYSQL.

Fulltext indexes are maintained just as normal indexes
when you insert/update/delete rows.  They will
(sometimes greatly) affect the speed of your DML
operations, but no special synching is necessary to
keep them up to date with your table data.

 Thanks, just genral questions...thanks.
You are quite welcome.






Re: innodb filesystem on software raid

2004-07-10 Thread Justin Swanhart
I highly recommend simply using ext3 for your Linux
setup.  The 1 or 2 percent performance benefit that
you may get from raw partitions is far outweighed by
the complexity of backing up the raw data.

either way:

First I would suggest you read the Linux RAID howto:
http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html

Here are the basic steps:
create a /etc/raidtab file for your array (probably
md0) using a 32k or 64k chunk size
(hint: man raidtab)

run mkraid to initialize the new raid array (md0)
(hint: man mkraid)

if you want to use raw partitions:
-
#this is redhat/fedora specific
add /dev/md0 to /etc/sysconfig/rawdevices
(hint: man raw)

add chown mysql:mysql /dev/raw/raw0 to
/etc/init.d/rc.local 

if you want to use ext3:
-
mke2fs -j -T largefile4 /dev/md0
(hint: man mke2fs)



--- Scott Mueller [EMAIL PROTECTED] wrote:
 I bought a supermicro 6013p-t for the 4 sata raid
 hard drives support.
 Unfortunately, it doesn't really have raid at all. 
 So I'm forced to use
 software raid.  What I'd like to use is fedora core
 2 with an innodb
 filesystem on a software raid partition according to
 these instructions:
 

http://dev.mysql.com/doc/mysql/en/InnoDB_Raw_Devices.html
 
  
 
 Has anybody done this?  I'm not a linux expert and
 so I'm not sure exactly
 how to set this up.  When I try to setup software
 raid in linux, I'm forced
 to pick a filesystem and its mount point as part of
 the process of creating
 a software raid partition.  So this is the part
 that's stumping me.  How do
 I create a software raid raw device only to use as
 an innodb filesystem?  Is
 this possible?  Or maybe this can't be done without
 hardware raid and I need
 to buy a new server?
 
  
 
 GNU's Parted software
 (http://www.gnu.org/software/parted/) has this to
 say
 regarding features, Supported disk labels: raw
 access (useful for RAID and
 LVM), ms-dos, etc.  But I've been unable to create
 a raw software raid
 device with that as well.
 
  
 
 Scott Mueller
 
 AccelerateBiz Managed Hosting
 
 http://www.acceleratebiz.com
 http://www.acceleratebiz.com/ 
 
 Phone: (800) 360-7360
 
 Fax: (270) 778-3081 
 
  
 
 





RE: SELECT DISTINCT + ORDER BY confusion

2004-07-09 Thread Justin Swanhart
If you are using 4.1 you could try:

SELECT DISTINCT id, title
FROM 
(select p.id, p.title 
from product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and
s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
) AS dt

otherwise:

select p.id, p.title
  from product p
  join e_prod ep on ep.product = p.id
  join story s on s.id = ep.story and s.status = 9
   and s.type = 14
 where p.platform_id = 5 
   and p.genre_id = 23282
group by p.id, p.title
order by p.title
limit 10
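The group-by dedupe can be sanity-checked on toy data; SQLite stands in for MySQL here, and ordering by the aggregate MAX(post_date) rather than the title would preserve the original most-recent-first intent (ids, titles and dates below are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stories (id INTEGER, title TEXT, post_date TEXT)")
con.executemany(
    "INSERT INTO stories VALUES (?, ?, ?)",
    [(917958, "Port Royale 2", "2004-07-09"),
     (917958, "Port Royale 2", "2004-07-08"),   # duplicate product
     (919355, "Dawn of War",   "2004-07-07")],
)

# GROUP BY collapses the duplicate product rows; ordering by the
# aggregate MAX(post_date) keeps the newest products first.
rows = con.execute("""
    SELECT id, title FROM stories
     GROUP BY id, title
     ORDER BY MAX(post_date) DESC
""").fetchall()
print(rows)  # [(917958, 'Port Royale 2'), (919355, 'Dawn of War')]
```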


--- Victor Pendleton [EMAIL PROTECTED] wrote:
 Have you tried using a group by clause? Group by
 title
 
 -Original Message-
 From: news
 To: [EMAIL PROTECTED]
 Sent: 7/9/04 3:08 PM
 Subject: SELECT DISTINCT + ORDER BY confusion
 
 I've got a product & story setup where there can be
 multiple stories of 
 a given type for any product.  I want to find the
 names of the products 
 with the most-recently-posted stories of a certain
 type.  This query 
 works well:
 
 SELECT p.id,p.title
 FROM product p
 join e_prod ep on ep.product=p.id
 join story s on s.id = ep.story and s.status = 9 and
 s.type = 14
 where p.platform_id = 5 and p.genre_id = 23282
 order by s.post_date desc
 limit 10
 

+++
 | id | title
  |

+++
 | 917958 | Port Royale 2
  |
 | 917958 | Port Royale 2
  |
 | 917958 | Port Royale 2
  |
 | 919355 | Warhammer 40,000: Dawn of War
  |
 | 918989 | The Lord of the Rings, The Battle for
 Middle-earth |
 | 914811 | The Sims 2   
  |
 | 919973 | RollerCoaster Tycoon 3   
  |
 | 915040 | Soldiers: Heroes of World War II 
  |
 | 915040 | Soldiers: Heroes of World War II 
  |
 | 915040 | Soldiers: Heroes of World War II 
  |

+++
 
 
 however since there are multiple stories of the
 correct type for some of
 
 those products, i would like to dedupe the results
 and just get a unique
 
 list of products.  however, if i use SELECT DISTINCT
 it applies that 
 BEFORE it does the sort, so i don't get only the
 most recent products. 
 what i actually get seems to be pretty random.
 
 SELECT DISTINCT p.id,p.title
 FROM product p
 join e_prod ep on ep.product=p.id
 join story s on s.id = ep.story and s.status = 9 and
 s.type = 14
 where p.platform_id = 5 and p.genre_id = 23282
 order by s.post_date desc
 limit 10
 
 ++---+
 | id | title |
 ++---+
 | 917958 | Port Royale 2 |
 | 920457 | Cuban Missile Crisis  |
 | 915000 | Axis & Allies |
 | 919602 | Blitzkrieg: Burning Horizon   |
 | 914594 | SuperPower 2  |
 | 914911 | Kohan II: Kings of War|
 | 915017 | Sid Meier's Pirates!  |
 | 918842 | Warlords Battlecry III|
 | 919973 | RollerCoaster Tycoon 3|
 | 920314 | Immortal Cities: Children of the Nile |
 ++---+
 
 that's pretty messed up.  really what i'd like is:
 
 

+++
 | id | title
  |

+++
 | 917958 | Port Royale 2
  |
 | 919355 | Warhammer 40,000: Dawn of War
  |
 | 918989 | The Lord of the Rings, The Battle for
 Middle-earth |
 | 914811 | The Sims 2   
  |
 | 919973 | RollerCoaster Tycoon 3   
  |
 | 915040 | Soldiers: Heroes of World War II 
  |
 | 914468 | Perimeter
  |
 | 915000 | Axis & Allies
  |
 | 914811 | The Sims 2   
  |
 | 918989 | The Lord of the Rings, The Battle for
 Middle-earth |

+++
 (i built this by hand just to demonstrate the
 desired outcome.)
 
 
 is there any way to do a post-order distinct?
 
 -jsd-
 
 
 
 
 



Re: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread Justin Swanhart
Do you have a hosts.MYD, or a hosts.frm file?

If you do, and there is no .MYI file, perhaps the
older version is just ignoring the table and not
making it available while the newer version errors
out.

If those files exist, try removing them from the data
directory (move them somewhere else) then starting the
new version.  

Hope that helps,

swany



--- John Fink [EMAIL PROTECTED] wrote:
 
 Hey folks,
 
 My mysql-fu is minimal to the point of nonexistent,
 so please forgive any
 vagaries that come across:
 
 I've recently compiled 4.1.3 to replace 4.1.0 on a
 machine here where I
 work.  The compile and install went fine (as far as
 I can tell, anyway),
 but when I try to start mysqld via the init script
 it dies almost
 immediately and I get the following lines in my .err
 file:
 
 040709 13:41:04  mysqld started
 040709 13:41:04  InnoDB: Started; log sequence
 number 0 43912
 040709 13:41:04  Fatal error: Can't open privilege
 tables: Can't find file: 'host.MYI' (errno: 2)
 040709 13:41:04  mysqld ended
 
 Indeed, I have no host.MYI -- I thought perhaps it
 might be a directory
 permissions error or something, but it's not.  4.1.0
 trundled happily
 along without a host.MYI file in my db's mysql
 directory.  Furthermore,
 none of our other machines with MySQL (running
 various 3.23 and 4.0) have
 them either.
 
 Thanks for any help you can provide,
 
 jf.
 
 
 
 
 





RE: How to query an oracle table from a mysql database

2004-07-07 Thread Justin Swanhart
No, that isn't possible using mySQL.

Try linking PHP with older client libraries (9.0.1,
8.1.5, etc) instead of the newer 9.2 libraries and see
if that fixes your problem with PHP.  You can download
them from otn.oracle.com

swany


--- Alonso, Claudio Fabian
[EMAIL PROTECTED] wrote:
 Hello Steve, hello Victor,
 Thanks for your answers.
 My PHP application needs to see this Oracle table,
 but as far as I could see
 PHP has problems with Oracle 9.2.0. I got a
 connection problem and found in
 the PHP's bug database that it isn't currently
 working.
 As I'm familiar with PHP/MySQL, I'm trying to see
 the Oracle table through
 MySQL.
 That's why I'm thinking on a way to create in MySQL
 a view (or something
 similar) that refers to an external database table
 (in this case, Oracle).
 I don't know if this kind of solution is possible,
 using only MySQL to see a
 remote Oracle table, not including any programming
 language.
 
 --Claudio
 ..
 
 
 





Re: moving records between tables?

2004-07-07 Thread Justin Swanhart
LOCK TABLE active_table WRITE, history_table WRITE;

#assuming the columns in the tables are exactly 
#the same
insert into history_table
  select * from active_table;

delete from active_table;

UNLOCK TABLES;


if the columns aren't the same between the tables then
you need to do something like

insert into history_table (colA, colB, colC, ...)
  select col1, col2, col3, ... from active_table;
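As a sketch of the same copy-then-delete pattern, here it is against an in-memory SQLite database, using a transaction where the MySQL answer uses LOCK TABLES (table names and data are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE active_table  (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE history_table (id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO active_table VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# Copy then delete inside one transaction, so no reader can observe
# the rows present in both tables (or in neither) halfway through.
with con:
    con.execute("INSERT INTO history_table SELECT * FROM active_table")
    con.execute("DELETE FROM active_table")

moved = con.execute("SELECT COUNT(*) FROM history_table").fetchone()[0]
left = con.execute("SELECT COUNT(*) FROM active_table").fetchone()[0]
print(moved, left)  # 3 0
```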


Hope that helps,

Swany
--- darren [EMAIL PROTECTED] wrote:
 Hi all,
 
 I have 2 tables...one for keeping active items while
 the other is for older
 records.
 
 A housekeep program will come in every night to move
 some records (matching
 several criteria from the active table to the
 history one.
 
 I am currently doing SELECT, INSERT and then DELETE.
 i.e. a select * from
 active where key=key_value limit 1 and then a
 insert into history... and
 then a delete * from active where
 pri_key='pri_key_value'...
 
 I am sure there's a better way right??
   
 
 
 





Re: Space is filling up

2004-07-07 Thread Justin Swanhart
--- Asif Iqbal [EMAIL PROTECTED] wrote:
 gerald_clark wrote:
  What about getting a bigger drive?
 
 I guess that would be my only option eh?
 

If any of your data can be considered history data
that is never modified, you could create compressed
myISAM tables for that data, removing it from the inno
tablespaces.  

This could give you a little breathing room until you
can get a bigger drive.




Re: INSERT DISTINCT?

2004-07-07 Thread Justin Swanhart
Create a unique index on each column that you don't
want to be duplicated.

create UNIQUE index table_u1 on table(some_column)
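The principle can be demonstrated with SQLite's equivalent machinery; MySQL additionally offers INSERT IGNORE to skip the duplicate silently instead of raising an error (names and data are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (some_column TEXT)")
con.execute("CREATE UNIQUE INDEX table_u1 ON t (some_column)")

con.execute("INSERT INTO t VALUES ('first')")      # succeeds
try:
    con.execute("INSERT INTO t VALUES ('first')")  # duplicate: rejected
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# SQLite's INSERT OR IGNORE behaves like MySQL's INSERT IGNORE:
# the duplicate row is skipped without raising an error.
con.execute("INSERT OR IGNORE INTO t VALUES ('first')")
rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(duplicate_rejected, rows)  # True 1
```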

--- John Mistler [EMAIL PROTECTED] wrote:
 Is there a way to do an INSERT on a table only if no
 row already exists with
 the same info for one or more of the columns as the
 row to be inserted?
 That is, without using a method outside SQL?
 
 Thanks,
 
 John
 
 
 
 





Re: selecting rows that match two criteria

2004-07-07 Thread Justin Swanhart
You can do it one of two ways..  Either you can do a
self join like the following:

select t1.userid 
 from answers t1,
  answers t2
where t1.qid = 5
  and lower(t1.answer)='student'
  and t2.qid = 6
  and lower(t2.answer) like 'edu%'
  and t1.userid = t2.userid

or, without a self join, you can OR both conditions
together and keep only the users that matched both
(this works in 4.0, which has neither subqueries nor
INTERSECT):

select userid
  from answers
 where (qid = 5 and lower(answer) = 'student')
    or (qid = 6 and lower(answer) like 'edu%')
 group by userid
having count(*) = 2

Note that a plain UNION of the two single-criterion
queries would not work here: UNION returns the users
that matched either criterion, not both.
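A quick check of the two-criteria logic on toy data (SQLite stands in for MySQL here, and this assumes each user answers a given question at most once): OR the two conditions together, group by userid, and keep only the groups of size 2, i.e. the users who matched both questions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE answers (qid INTEGER, userid INTEGER, answer TEXT)")
con.executemany(
    "INSERT INTO answers VALUES (?, ?, ?)",
    [(5, 3000, "Student"),  (6, 3000, "Education"),    # matches both
     (5, 3001, "Student"),  (6, 3001, "Engineering"),  # matches q5 only
     (6, 3002, "Education")],                          # matches q6 only
)

# Users satisfying BOTH criteria appear twice in the OR'd result,
# so HAVING COUNT(*) = 2 keeps exactly those users.
both = con.execute("""
    SELECT userid FROM answers
     WHERE (qid = 5 AND lower(answer) = 'student')
        OR (qid = 6 AND lower(answer) LIKE 'edu%')
     GROUP BY userid
    HAVING COUNT(*) = 2
""").fetchall()
print(both)  # [(3000,)]
```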


--- Jonathan Duncan [EMAIL PROTECTED] wrote:
 I am trying to figure out what my select statement
 should be to combine
 both of these into one:
 
 SELECT userid FROM Answers WHERE answer like
 'Education%';
 SELECT userid FROM Answers WHERE answer='Student';
 
 Table Answers looks like:
 -id int
 -userid int
 -answer text
 
 for each row there would be something like:
 +-++--+
 | qid | userid | answer   |
 +-++--+
 |  5  |   3000 | Student  |
 +-++--+
 
 There are often multiple asnwers for each question.
 
 I want to select the userid's from the table where
 people answered
 Student in question 5 and Education in question
 6.
 
 I am using MySQL 4.08 so I can't do the subquery
 thing.
 
 Thanks,
 Jonathan Duncan
 
 
 





Some BLOB help please.

2004-06-22 Thread Justin Crone
Hello All,
I have been reading hundreds of posts and sites for information 
regarding BLOBs in MySQL. So far I have not found an answer to my 
question, so I pose it to you fine people. :)

A little background first. I have a web server that uses PHP to retrieve 
documents in PDF format for viewing and printing. When I first developed 
the application I was storing the files on the file system. However 
about 7 months into the project, my file system was starting to get all 
but unmanageable. At 175,000 documents, I decided I needed a new 
mechanism for storing the files. So I began to use BLOBS.

I have started the project again, with BLOBs not yet importing the 
documents from the old project. Currently I have 10,780 files in the 
database, and all is working excellently. I have 3 tables: one for the 
BLOBs with unique keys, a 2nd table with the information regarding the BLOB, 
and a 3rd table with the actual relevant information for the document.

So as I said, I am rather pleased with the performance and the ease at 
which I was able to get this up and running. However, the problem is 
I do have limits, and one of those being disk space. Those 10,000 files 
are taking up 21 GB of space in the database. However the actual space 
required by the files is around 5GB on the file system. The average file 
size is about 1.9MB, so it would seem that each row inserted into the 
database is conforming to that 1.9MB average, giving me this 21GB table.

I would like to know if there is something that I can change to get 
these numbers in line with each other, or if this is just the way of 
things. Current projections for the total documents needed to complete 
the rotation of these files is 720,000 documents. Which if the 1.9MB 
average keeps, that puts me in the neighborhood of  1.4TB of storage.

Any thoughts?
Thanx,
Justin


Re: SELECT almost every column

2004-05-14 Thread Justin Swanhart
--- John Mistler [EMAIL PROTECTED] wrote:
 Is there a SELECT statement, if I have a table with
 50 columns, to select
 every column EXCEPT the last one?  Can I do this
 without typing the name of
 all 49 columns?
 
 If so, then what if I want to exclude the last TWO
 columns?
 
 Thanks,
 
 John

There is no construct in SQL to select X number of
columns from a table.  

The traditional answer to this question would normally
be "use views", but since MySQL doesn't support them
that doesn't help you very much.

Unless the extra columns are long text columns or
contain BLOBS, then I see no harm in just selecting
them along with the rest of the other columns by using
select * from

If you are accessing the database from a programming
environment then you could do the following:

[pseudo code]
$sql = "DESC $NAME_OF_TABLE"
$result = exec($sql)
$rows = fetch_result_into_array($result)
destroy($result)
$cnt = count($rows) - $NUMBER_OF_COLUMNS_TO_OMIT
if ($cnt <= 0) 
{  error("too few columns");
   return;
}
$sql = "select "
for ($i = 0; $i < $cnt - 1; $i++)
{ $sql = $sql + $rows[$i]["Field"] + ", "
}
$sql = $sql + $rows[$cnt - 1]["Field"]

$sql = $sql + " FROM $NAME_OF_TABLE"
$sql = $sql + " WHERE $WHERE_CLAUSE"
$sql = $sql + " GROUP BY $GROUP_BY_CLAUSE"
$sql = $sql + " HAVING $HAVING_CLAUSE"
$sql = $sql + " ORDER BY $ORDER_BY_CLAUSE"
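The pseudocode above can be condensed into a small Python helper (names are hypothetical; in practice the column list would come from running DESC against the table):

```python
def select_all_but_last(table, columns, omit=1, where=None):
    """Build a SELECT listing every column except the last `omit`
    columns, since SQL itself has no "all columns but N" construct."""
    keep = columns[:-omit] if omit else list(columns)
    if not keep:
        raise ValueError("too few columns")
    sql = "SELECT " + ", ".join(keep) + " FROM " + table
    if where:
        sql += " WHERE " + where
    return sql

# Hypothetical 4-column table, dropping the last column:
print(select_all_but_last("foobar", ["a", "b", "c", "d"]))
# SELECT a, b, c FROM foobar
```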





RE: Multi-User Issues

2004-04-16 Thread Justin Palmer
Hey,

Maybe just whip something up and let us see.  If there is more interest
after that then maybe you could do the latter.

Regards,

Justin Palmer



-Original Message-
From: Joshua J. Kugler [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 15, 2004 4:54 PM
To: [EMAIL PROTECTED]
Subject: Re: Multi-User Issues


Just to get a general feel for interest:

Should I just whip up something quick and dirty and post to the mailing
list, 
or should I work up a nice page or two and put it on a web site?

Anyone else interested?

Warning: to work up something, it might be a week or two as school is
getting 
really busy right now, but I'd love to do it, as I've used MySQL in 
multi-user environments.

j- k-

On Thursday 15 April 2004 03:05 pm, Justin Palmer said something like:
 Hi Joshua,

 I would love to hear more about multi-user issues (like record 
 locking). I searched the archives by the title and by your name with 
 no luck. If you don't feel like going into detail, could you point out

 some good links to learn more about the subject.

 Regards,

 Justin Palmer



 -Original Message-
 From: Joshua J. Kugler [mailto:[EMAIL PROTECTED]
 Sent: Thursday, April 15, 2004 3:27 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Learning curve


 Mike -

 You didn't indicate your department, so I'm not sure what your 
 background is. Your message, overall, is a bit scary, as any 
 university that far behind

 right now would be worrisome.  I'm not exactly sure what you're asking

 for (as you didn't ouline your requirements), but I would first take a

 look on
 sites like sourceforge or freshmeat for systems that already do what
you

 want.  I'm sure the kind of record keeping you do has been done 
 before.

 But as to your main question, it is very doable.  You just need to 
 keep in mind multi-user issues like record locking.  Search the 
 archives for messages
 by me about record locking for an elegant way to do it via a flag
field.
 If
 you can't find it, let me know, and I'll type it up again.

 j- k-

 On Thursday 15 April 2004 02:06 pm, Mike T. Caskey said something 
 like:
  Hi all!
 
  I'm wondering if anyone can help me find out how much time/training 
  is
 
  needed to accomplish my task using MySQL.
 
  My background: I'm fresh to the world of MySQL and databases in 
  general.  I do have some fundamental knowledge in the area of 
  programming and databases, but nothing too in-depth.
 
  My story:  I work for a University that is seemingly falling behind 
  the technical times.  My department is using MS Access as the 
  primary software for handling data, but we're still mainly hard-copy

  for our records-management.  Obviously, there are problems with 
  keeping hard-copy for everything.  I was buried in paperwork for a 
  short while
 
  before I decided to create simple databases/forms using 
  OpenOffice.org, since it was so easy.  Someone in management noticed

  the consistency emerging from my office and inquired. When I told 
  them
 
  about my databases, they decided everyone in the department could 
  benefit from them and assigned the project of making this available 
  to
 
  all.
 
  My problem:  My databases are single-user systems for use in 
  OpenOffice.org and would be difficult to roll them out to my entire 
  team.  I don't want to install OO.o on everyones computer and I 
  don't want to learn MS Access as it is known for being a temporary 
  solution.
 
  So I need something that can keep up with the times and can be 
  rolled out easily (web interface?).  I also need to be able to 
  append scanned
 
  images to records (PDF or JPEG?).  This is all pretty complex and 
  I'm definitely not technically equipped to create this just yet.
 
  MySQL?:  I believe a good question would be whether or not MySQL 
  would
 
  be a good solution for this.  What do you think?  Also, how long 
  would
 
  it take me to learn the necessary information?  Lastly, how long 
  would
 
  it take to develop such a system?
 
  I appreciate your time and information!
  Thanks,
  Mike T. Caskey

 --
 Joshua J. Kugler
 Fairbanks, Alaska
 Computer Consultant--Systems Designer
 .--- --- ...  ..- .--.- ..- --. .-.. . .-.
 [EMAIL PROTECTED]
 ICQ#:13706295
 Every knee shall bow, and every tongue confess, in heaven, on earth, 
 and under the earth, that Jesus Christ is LORD -- Count on it!

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
Joshua J. Kugler
Fairbanks, Alaska
Computer Consultant--Systems Designer
.--- --- ...  ..- .--.- ..- --. .-.. . .-.
[EMAIL PROTECTED]
ICQ#:13706295
Every knee shall bow, and every tongue confess, in heaven, on earth, and
under 
the earth, that Jesus Christ is LORD -- Count on it!






-- 
MySQL General

SELECT ?

2004-04-15 Thread Justin Palmer
Hi,

Is it possible to select all records from one table that don't have an
entry in another table.  Something like:

SELECT t1.* 
FROM  'TABLE1' AS t1,
'TABLE2' AS t2
WHERE t1.cv = 1
AND t1.id != t2.id

So in this mock example there is an id in one table, but the id is not
in another table (The second id will be in the second table someday just
not today).

Not sure if this is possible at all or not, but thought I would ask.

Regards,

Justin Palmer







RE: SELECT ?

2004-04-15 Thread Justin Palmer
Hi,

Thanks.  So what you are saying is if a record is not there it is
considered NULL?

Regards,

Justin Palmer


-Original Message-
From: Brent Baisley [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 15, 2004 12:54 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: SELECT ?


Absolutely, do a left join and check for NULL values from the other 
table.

SELECT t1.*
FROM t1
LEFT JOIN t2 ON t1.id=t2.id
WHERE
t2.id IS NULL

The left join retains all records in the first table regardless of 
matches in the second. You then filter out those records with a null 
value in a table 2 field.


On Apr 15, 2004, at 3:46 PM, Justin Palmer wrote:

 Hi,

 Is it possible to select all records from one table that don't have an

 entry in another table.  Something like:

 SELECT t1.*
 FROM  'TABLE1' AS t1,
   'TABLE2' AS t2
 WHERE t1.cv = 1
 AND   t1.id != t2.id

-- 
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577








Error ?

2004-04-15 Thread Justin Palmer
Hi,

I get this error from a query:

#1066 - Not unique table/alias: 't2' 

What does it mean?  Can anyone point me to a place where I can look up
what these codes actually mean?

Any help would be great.

Regards,

Justin Palmer







FW: Error ?

2004-04-15 Thread Justin Palmer
The query that generated this error was:

SELECT t1. * 
FROM `wat_student_profile` AS t1, `wat_student_job` AS t2
LEFT JOIN t2 ON t1.student_id = t2.student_id
WHERE t1.cv != '' AND t1.sevis_id = '' AND t2.student_id = NULL 
LIMIT 0 , 30 
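
(As a guess at the fix: the #1066 error arises because t2 appears twice --
once in the comma-join list and again in the LEFT JOIN -- and note that NULL
must be tested with IS NULL, not =. Assuming the intent is to find profiles
with no matching job row, a corrected sketch would be:

SELECT t1.*
FROM `wat_student_profile` AS t1
LEFT JOIN `wat_student_job` AS t2 ON t1.student_id = t2.student_id
WHERE t1.cv != '' AND t1.sevis_id = '' AND t2.student_id IS NULL
LIMIT 0, 30;

The table and column names here are taken from the query above; the WHERE
conditions are unchanged.)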

Regards,

Justin Palmer


-Original Message-
From: Justin Palmer [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 15, 2004 1:28 PM
To: [EMAIL PROTECTED]
Subject: Error ?


Hi,

I get this error from a query:

#1066 - Not unique table/alias: 't2' 

What does it mean?  Can anyone point me to a place where I can look up
what these codes actually mean?

Any help would be great.

Regards,

Justin Palmer












RE: Error ?

2004-04-15 Thread Justin Palmer
Hi,

Can anyone give me a hint to what this error message is trying to say.

Thanks,

Justin Palmer


-Original Message-
From: Justin Palmer [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 15, 2004 1:38 PM
To: [EMAIL PROTECTED]
Subject: FW: Error ?


The query that generated this error was:

SELECT t1. * 
FROM `wat_student_profile` AS t1, `wat_student_job` AS t2
LEFT JOIN t2 ON t1.student_id = t2.student_id
WHERE t1.cv != '' AND t1.sevis_id = '' AND t2.student_id = NULL 
LIMIT 0 , 30 

Regards,

Justin Palmer


-Original Message-
From: Justin Palmer [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 15, 2004 1:28 PM
To: [EMAIL PROTECTED]
Subject: Error ?


Hi,

I get this error from a query:

#1066 - Not unique table/alias: 't2' 

What does it mean?  Can anyone point me to a place where I can look up
what these codes actually mean?

Any help would be great.

Regards,

Justin Palmer

















Multi-User Issues

2004-04-15 Thread Justin Palmer
Hi Joshua,

I would love to hear more about multi-user issues (like record locking).
I searched the archives by the title and by your name with no luck. If
you don't feel like going into detail, could you point out some good
links to learn more about the subject.  

Regards,

Justin Palmer



-Original Message-
From: Joshua J. Kugler [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 15, 2004 3:27 PM
To: [EMAIL PROTECTED]
Subject: Re: Learning curve


Mike -

You didn't indicate your department, so I'm not sure what your
background is.  
Your message, overall, is a bit scary, as any university that far behind

right now would be worrisome.  I'm not exactly sure what you're asking
for 
(as you didn't outline your requirements), but I would first take a look
on 
sites like sourceforge or freshmeat for systems that already do what you

want.  I'm sure the kind of record keeping you do has been done before.

But as to your main question, it is very doable.  You just need to keep
in 
mind multi-user issues like record locking.  Search the archives for
messages 
by me about record locking for an elegant way to do it via a flag field.
If 
you can't find it, let me know, and I'll type it up again.

j- k-

On Thursday 15 April 2004 02:06 pm, Mike T. Caskey said something like:
 Hi all!

 I'm wondering if anyone can help me find out how much time/training is

 needed to accomplish my task using MySQL.

 My background: I'm fresh to the world of MySQL and databases in 
 general.  I do have some fundamental knowledge in the area of 
 programming and databases, but nothing too in-depth.

 My story:  I work for a University that is seemingly falling behind 
 the technical times.  My department is using MS Access as the primary 
 software for handling data, but we're still mainly hard-copy for our 
 records-management.  Obviously, there are problems with keeping 
 hard-copy for everything.  I was buried in paperwork for a short while

 before I decided to create simple databases/forms using 
 OpenOffice.org, since it was so easy.  Someone in management noticed 
 the consistency emerging from my office and inquired. When I told them

 about my databases, they decided everyone in the department could 
 benefit from them and assigned the project of making this available to

 all.

 My problem:  My databases are single-user systems for use in 
 OpenOffice.org and would be difficult to roll them out to my entire 
 team.  I don't want to install OO.o on everyone's computer and I don't 
 want to learn MS Access as it is known for being a temporary solution.

 So I need something that can keep up with the times and can be rolled 
 out easily (web interface?).  I also need to be able to append scanned

 images to records (PDF or JPEG?).  This is all pretty complex and I'm 
 definitely not technically equipped to create this just yet.

 MySQL?:  I believe a good question would be whether or not MySQL would

 be a good solution for this.  What do you think?  Also, how long would

 it take me to learn the necessary information?  Lastly, how long would

 it take to develop such a system?

 I appreciate your time and information!
 Thanks,
 Mike T. Caskey

-- 
Joshua J. Kugler
Fairbanks, Alaska
Computer Consultant--Systems Designer
.--- --- ...  ..- .--.- ..- --. .-.. . .-.
[EMAIL PROTECTED]
ICQ#:13706295
Every knee shall bow, and every tongue confess, in heaven, on earth, and
under 
the earth, that Jesus Christ is LORD -- Count on it!









Re: FileMaker Pro compared to MySQL

2004-04-02 Thread Justin Tulloss
I use FileMaker in a server-client configuration. Currently there are 5
clients connecting to one server instance of FileMaker. We have multiple
databases being served, from contacts to truckloads to mileages to costs. We
cover about 40 loads a day and every load has a plethora of information
listed about it. There is, of course, supporting information as well. As for
web factors, there currently aren't any. We are looking into allowing
clients to connect from outside the network, but currently we have 5
employees with copies of FileMaker Pro 6 running on their machines and using
a database being served off one central machine that also runs the active
directory and some accounting functions. I would be more than happy to
provide more information.

Justin Tulloss

 From: Peter Jönsson [EMAIL PROTECTED]
 Date: Fri, 2 Apr 2004 13:21:33 +0200
 To: Justin Tulloss [EMAIL PROTECTED]
 Subject: Re: FileMaker Pro compared to MySQL
 
 Do you work with FM as a sole client, is it a server-client FM setup or
 is it  connected to some web applications (using FM:s internal
 webpublishing, cgi:s etc)?
 
 Speed depends on a number of factors, could you describe your setup and
 how you use the db today?
 /Peter
 
 On 2004-04-02, at 01.26, Justin Tulloss wrote:
 
 I am currently using FileMaker Pro for a relatively large database and
 I am
 wondering how to speed it up. Would MySQL speed things up? Is
 FileMaker in
 some way inferior to MySQL? If so, how? Would it be easy to use a
 FileMaker
 plugin to access the MySQL database, or should I write a PHP front?
 
 Thanks for your help,
 
 Justin Tulloss
 Network Administrator
 InterChez Logistics Systems, Inc.
 





FileMaker Pro compared to MySQL

2004-04-01 Thread Justin Tulloss
I am currently using FileMaker Pro for a relatively large database and I am
wondering how to speed it up. Would MySQL speed things up? Is FileMaker in
some way inferior to MySQL? If so, how? Would it be easy to use a FileMaker
plugin to access the MySQL database, or should I write a PHP front?

Thanks for your help,

Justin Tulloss
Network Administrator
InterChez Logistics Systems, Inc.


Hierarchical data design

2004-03-16 Thread Justin French
Hi all,

I've been playing around with the concepts mentioned in this article:
http://www.sitepoint.com/article/hierarchical-data-database/
(Short summary: Using Modified Preorder Tree Traversal, resulting in 
left and right values for each tree node to describe the tree 
structure)

With all this in mind, I'm hoping to emulate a folders and pages 
hierarchical structure for a CMS, without relying on the file system at 
all.

Here's where I get stuck:

In a simple tree, one can easily see that using the title of a node as 
its primary key is not smart... names can easily collide:

Root
  Products
    ProductOne
      About
      FAQ
      Support
    ProductTwo
      About
      FAQ
      Support
  Services
    About

As the writer of the article suggests, numeric IDs are the way to go.  
However, I want to call the tree via the URL with name-based ID's (eg 
/products/product-one/about/) rather than numeric IDs (eg /2/17/44/).

A further complication is that this data design would allow two nodes 
in the same parent node to have the same title, since the numeric key 
is the ID, rather than the title.

When we look at a traditional file system, it's based on unique keys AT 
EACH TREE LEVEL, not unique keys for the entire tree.  As such, I don't 
think the above data model is right for this application.

The only catch is I have no idea where to look next.  Hours of Googling 
has returned very little.

Any hints on where to look next would be great.
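
(One possible direction, offered as an assumption rather than anything from
the article: keep the MPTT left/right values for tree queries, but also store
an adjacency-list parent_id and enforce uniqueness per level with a composite
unique key, which mirrors how a file system behaves. All names below are
illustrative:

CREATE TABLE node (
  node_id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  parent_id INT UNSIGNED NOT NULL DEFAULT 0,   -- 0 = root level
  title     VARCHAR(100) NOT NULL,
  lft       INT UNSIGNED NOT NULL,             -- MPTT left value
  rgt       INT UNSIGNED NOT NULL,             -- MPTT right value
  PRIMARY KEY (node_id),
  UNIQUE KEY uniq_per_level (parent_id, title)
) TYPE=MyISAM;

A URL like /products/product-one/about/ can then be resolved one segment at a
time by looking up title under the previous segment's parent_id.)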

---
Justin French
http://indent.com.au


semaphore.h: incomplete type

2004-03-08 Thread Justin Camp
Greetings!

  I'm having a bit of a problem compiling mysql that I've been chewing
on for more than a week now..

  I'd preferably like to use the pre-built binaries, but the tared
binary packages don't come with the shared libraries, and the RPM's
can't be moved to different prefixes, so I usually just build the
packages to my obnoxiously customized liking :)

  Anyway, I have this problem both on my development box at home as well
as on the main server at work, both which are set up very similarly. The
machines are practically mirrored, both running RedHat 9, glibc 2.3.2,
kernel 2.4.20 . If any other info is needed, please feel free to contact
me.

 Anyway, the problem lies in trying to compile the my_semaphore.c file
in the mysys directory of the source.
  This is the actual error I get:

gcc -DDEFAULT_BASEDIR=\"/usr/local/mysql-4.0.888\"
-DDATADIR=\"/usr/local/mysql-4.0.888/var\"
-DDEFAULT_CHARSET_HOME=\"/usr/local/mysql-4.0.888\"
-DSHAREDIR=\"/usr/local/mysql-4.0.888/share/mysql\" -DHAVE_CONFIG_H
-I. -I. -I.. -I./../include -I../include -I.. -I. -O3 -DDBUG_OFF -c
`test -f my_semaphore.c || echo './'`my_semaphore.c
In file included from ../include/my_semaphore.h:38,
 from my_semaphore.c:23:
/usr/include/semaphore.h:35: field `__sem_lock' has incomplete type
make[2]: *** [my_semaphore.o] Error 1
make[2]: Leaving directory `/usr/local/src/mysql-4.0.16/mysys'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/local/src/mysql-4.0.16'
make: *** [all] Error 2


  I've looked high and low for a solution, or even someone who has a
similar problem, obviously to no avail. I've tried upgrading and
downgrading certain packages with no luck...

  If anyone has any ideas at all I would be most grateful!
  Thank you so much for any help you could offer!

Justin





How to construct this SQL statement?

2004-02-15 Thread Mr. Justin G. Kozuch
Hi All,

I need help with an SQL statement.

I have two tables in my database (mySQL): portfolio and categories.

In the portfolio table, I have a column called categories, and 
there is a number value in there depending on where the item is 
placed in my portfolio. So value 1 is web design, value 2 is 
flash, etc etc etc.

In the categories table, I have a column called categoryvalue and 
categorydescription. Categoryvalue has a number value and 
categorydescription has, well, a description of that category.

On my detail page, can I set up the SQL statement to show the 
description of the category and not the numerical value of the category?

Does this make sense?

Thanks,

Justin





Re: Newbie question about stats

2004-02-10 Thread Justin French
On Wednesday, February 11, 2004, at 03:29  PM, Keith Warren wrote:

I want to be able to track how many times any particular record is 
returned from a search, and I have two ideas about how to do this. But 
because I have little experience with MySQL, I may be totally off  base.

This database is served on the web via Lasso.

Idea 1. Create an integer field in the table that contains the records 
I want to track, and increment this field each time the data is 
displayed.
not very extensible


Idea 2. Create a new table and create a new record in this table each 
time the record that I want to track is accessed.
not a bad start, but on a high traffic site, you could get a huge 
amount of data very quickly.

Idea 3.  Create a table with two columns, the id of the record, and an 
integer column for the hits... update hits=(hits+1) if found, else 
create a new entry with the id and 1 hit.

Idea 4.  You may want to extend this further to include years, months 
and even days, to generate more meaningful counters.



VARCHAR 255 versus TINYTEXT

2004-01-12 Thread Justin French
Hi,

Can someone please clarify when I would use VARCHAR 255, and when I 
would use TINYTEXT (which also has a length of 255)?

Thanks,

Justin French



advice on change of data design

2003-09-04 Thread Justin French
Hi all,

I'm trying to merge an existing list of 900-odd email-list subscribers 
into an existing membership system.

The existing system uses the userid (eg Justin) as the primary key.  
Obviously, I don't have the leisure of asking 900-odd people what their 
preferred userid is, so I've decided that I need to change the way that 
users login from userid|pass to email|pass.

This way I can just send out a random password to each existing 
subscriber.

I have some concerns about members needing to type such a long email 
address in to login, but putting that aside, my main concern is that 
the email address should be something that can be changed, so it can't 
be the primary key.

I've come up with the following data design, which I'd appreciate 
comments on:

userid (INT 5, primary key)
email (varchar 255, unique)
password (varchar 32, md5hash)
firstname (varchar 50)
lastname (varchar 50)
So, when a user logs in, I check for a match on email and password, 
then assign the userid to the session, probably also assigning their 
first and last names as another session variable, for a human-readable 
name on message boards etc etc.

When the user changes his/her email address, it won't affect their 
primary keys, scattered across many tables... it will just affect how 
they log in.

Any comments?

Justin French



Re: Lowering the ft_min_word_len

2003-08-09 Thread Justin Hopper
On Tue, 2003-08-05 at 10:57, Paul DuBois wrote:
 At 10:30 -0700 8/5/03, Justin Hopper wrote:
 Hello,
 
 I have a table with a FULLTEXT index on a column of type 'text'.
 Searches on this table using MATCH() AGAINST() work fine for most
 words.  However, I needed to match against a 3 letter word.  So I
 lowered the ft_min_word_len to 3 in /etc/my.cnf.  I then restarted
 MySQL.  I checked that the variable was set to 3 in the running mysqld.
 
 I don't see that you rebuilt your FULLTEXT indexes after restarting
 the server.  Did you?

Yes, I did rebuild the indexes.  Sorry I didn't mention that before.  I
assume the word 'key' would not be picked up if I had not rebuilt the
indexes after lowering the ft_min_word_len.
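
(For reference, the rebuild step for a MyISAM table with a FULLTEXT index is
documented as a quick repair, which rebuilds only the index file and not the
data file; the table name here is the one from this thread:

REPAIR TABLE support_doc_articles QUICK;

This must be run after changing ft_min_word_len and restarting the server, or
the index still reflects the old minimum word length.)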

 
 But for some reason, I cannot fetch any results:
 
  mysql> select title_id from support_doc_articles where match(article)
 against ('dns');
 Empty set (0.00 sec)
 
 It does not work IN BOOLEAN MODE either:
 
  mysql> select title_id from support_doc_articles where match(article)
 against ('dns' IN BOOLEAN MODE);
 Empty set (0.00 sec)
 
 Actually, I just tried it again, searching for the 3 letter word 'key',
 and it brought back results.  Is 'dns' in the stopwords list?  Is there
 any way I can see what words are in there?  Can I exclude words from the
 stopword list without recompiling MySQL?
 
 They're in the file myisam/ft_static.c in the source distribution. dns
 is not one of them.
 
 I don't believe you can exclude words from the list without recompiling.

Hmmm, any ideas why the word 'dns' would not be picked up then?

 
 Thanks for any help.
 --
 Justin Hopper
 UNIX Systems Engineer
 Spry Hosting
 http://www.spry.com
 
 
 -- 
 Paul DuBois, Senior Technical Writer
 Madison, Wisconsin, USA
 MySQL AB, www.mysql.com
 
 Are you MySQL certified?  http://www.mysql.com/certification/
-- 
Justin Hopper
UNIX Systems Engineer
Spry Hosting
http://www.spry.com





Lowering the ft_min_word_len

2003-08-06 Thread Justin Hopper
Hello,

I have a table with a FULLTEXT index on a column of type 'text'. 
Searches on this table using MATCH() AGAINST() work fine for most
words.  However, I needed to match against a 3 letter word.  So I
lowered the ft_min_word_len to 3 in /etc/my.cnf.  I then restarted
MySQL.  I checked that the variable was set to 3 in the running mysqld. 
But for some reason, I cannot fetch any results:

mysql> select title_id from support_doc_articles where match(article)
against ('dns');
Empty set (0.00 sec)

It does not work IN BOOLEAN MODE either:

mysql> select title_id from support_doc_articles where match(article)
against ('dns' IN BOOLEAN MODE);
Empty set (0.00 sec)

Actually, I just tried it again, searching for the 3 letter word 'key',
and it brought back results.  Is 'dns' in the stopwords list?  Is there
any way I can see what words are in there?  Can I exclude words from the
stopword list without recompiling MySQL?

Thanks for any help.
-- 
Justin Hopper
UNIX Systems Engineer
Spry Hosting
http://www.spry.com





Re: simple group by taking way too long..

2003-07-13 Thread Justin Spies
Henry,
The information you've posted is a good start, can you post the results of 
an EXPLAIN command?  That would go a long way towards finding a solution.


--Justin


On Sun, 13 Jul 2003, Henry Hank wrote:

 
 I have the following SQL. The source table has 8.1 million rows, and the
 resulting table will have about 7.9 million rows.  I know that's not much of a
 decrease, but the logic is correct as I've tested it on smaller sets.  The
 problem is that when I run with the full set of 8 million rows, it takes about
 2 hours to complete.  The source and target tables are all char or tinyint
 fields (i.e. fixed length records).  
 
 insert into extract2
   select field1,field2,field3,field4,
 if(right(field1,1)='N',mid(field3,2,1),mid(field3,1,1)) as flag,
 count(*) as count,
 val-min(val_cnt) as cnt1,
  if(max(val)>val_cnt,1,0) as cnt2,
 if(max(val)=min(val) and max(val)=val_cnt,1,0) as last,
 if(min(val)=1,1,0) as initial
   from extract1
   group by field1,field2,field3,field4;
 
 While this code is running, the temp table that is created to do the summary
 grows to 730,662,620 bytes, which is 22 million records at 33 bytes per record.
  Why is mysql creating a temp table of 22 million records, when the SOURCE
 table is only 8.1 million records?  Even if no summary was taking place at all,
 I wouldn't expect the temp table to be almost three times the size.  
 
 This is running on a dual 1GHZ Dell poweredge, with RH 7.2 and mysql 3.23.41,
 and there is nothing else running on the box at the same time.
 
 Is there any way to optimize this group by so it runs faster, and doesn't
 create such a large temp table? 
 
 Thanks,
 
 -Henry
 
 __
 Do you Yahoo!?
 SBC Yahoo! DSL - Now only $29.95 per month!
 http://sbc.yahoo.com
 
 

-- 
Sincerely,
Pantek, Inc.
Justin L. Spies

--
[EMAIL PROTECTED]

Pantek, Inc. - http://www.pantek.com/ - IT Services [EMAIL PROTECTED]
440-519-1802 or 1-877-LINUX-FIX





Re: mySQL GUIs

2003-06-12 Thread Justin
Anybody have any recommendations for one that runs on Mac OS X?

Thanks!
Justin

On 6/12/03 9:21 AM, Adam Nelson [EMAIL PROTECTED] wrote:

 I like MySQL Manager - it costs a bit of money, but I find it
 indispensible.
 
 www.ems-hitech.com
 
 It runs on Windows AND Linux.  I'll be switching to the linux version in
 about two weeks, so I'll tell you how well it works.
 
 -Original Message-
 From: Knepley, Jim [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, June 11, 2003 3:00 PM
 To: Rodolphe Toots; [EMAIL PROTECTED]
 Subject: RE: mySQL GUIs
 
 
 I'm a big fan of Scibit's Mascon
 
 
 
 -Original Message-
 From: Rodolphe Toots [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, June 11, 2003 5:17 AM
 To: [EMAIL PROTECTED]
 Subject: mySQL GUIs
 
 
 hi!
 
 i am looking for a good mySQL gui for windows
 i have used mySQL front, which was an excellent free program,
 but i did
 not handle relations and diagrams. also the program is no longer being
 developed
 
 i have now found the prog mySQL tools (http://www.mysqltools.com/) and
 mySQL explorer that works almost as enterprise manager for MS SQL
 server. it even creates database diagrams with relations as in
  enterprise manager! only drawback is that this program is not
 free, but
 it is the best i have ever seen so far
 
 is there anyone out there that knows of a program that is
 freeware/shareware and is good (like mySQL tools)?
 
 
 
 /rewdboy
 
 





Re: newbie question

2003-06-12 Thread Justin Scheiber
Well, for the image type at least, the blob type would be appropriate.

-justin

v7rg8 wrote:

Hi all,

my table is like this:

professor (name, gender, bodyImage)

Could anyone guide me how to deal with this image type data?

Another question is how to implement weak entity in mysql.

Thanks,

Alex



 





help creating foreign keys

2003-06-03 Thread Justin Scheiber
Hello, I want to create the following tables - where a foreign key 
references an auto_incremented primary key of another table.  In my 
simple logic, it seems like such a thing should be possible -- after all, 
I just need the value of the referenced primary key.  I know you can't 
have 2 auto_increment columns in a table, and I have read up on the 
errno: 150, but it still seems like this should be possible.  Do I need 
to rethink the table structure?  Or do I just not understand something here?

create table foo (
 foo_idint unsigned auto_increment,
 foo_value int,
 primary key(foo_id)
) type=innodb;
create table bar (
 bar_idint unsigned auto_increment,
 foo_idint unsigned,
 bar_value int,
 primary key (bar_id),
 foreign key(foo_id) references foo(foo_id),
) type=innodb;
ERROR 1005: Can't create table './test/bar.frm' (errno: 150)
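
(Two likely culprits, offered as a guess: the trailing comma after the
FOREIGN KEY line shouldn't parse at all, and InnoDB in this era requires an
explicit index on the referencing column of the child table. A variant that
should create cleanly:

create table bar (
 bar_id    int unsigned auto_increment,
 foo_id    int unsigned,
 bar_value int,
 primary key (bar_id),
 key (foo_id),
 foreign key (foo_id) references foo(foo_id)
) type=innodb;

The only changes from the original are the added key (foo_id) line and the
removed trailing comma.)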

-justin



help deciding on data types

2003-04-04 Thread Justin French
Hi all,

I'm currently logging page requests with the following table:

CREATE TABLE counters_hits (
  pid tinyint(3) unsigned NOT NULL default '0',
  stamp varchar(30) NOT NULL default ''
) TYPE=MyISAM;

In other words, for ever page hit on my site, I'm recording the unix
timestamp  page ID.

This equates to around 20bytes of data per row.  On 50,000+ hits, this is
turning into 2.4-ish meg of data.


Now, I REALLY don't need to know the exact second of every hit on the
site... no way!!  I've decided counting hits on each pageID by the hour is
detailed enough.


CREATE TABLE test (
  year tinyint(4) NOT NULL default '127',
  month tinyint(2) NOT NULL default '12',
  day tinyint(2) NOT NULL default '31',
  hour tinyint(2) NOT NULL default '24',
  pid tinyint(2) NOT NULL default '99',
  hits int(4) NOT NULL default ''
) TYPE=MyISAM;


I've entered in a few dummy rows, and they're 10 bytes each.  By my sums,
there'll be a maximum of:

24 rows a day x 10 bytes a row = 240 bytes a day
240 bytes a day x 365 days a year = 87600 bytes a year (87.6k)


So, by moving from timestamps to the above format, and throwing away the
accuracy of seconds and minutes, I'm going to have a maximum of 87k a year
of data, compared to my current situation of 2.4 meg of data in a few
months.


Can someone confirm if my maths are all good, and possibly advise any areas
where I might further optimise this?


The only other thing I'm worried about is the fact that I'll have to do a
couple of queries PER HIT using this system (check if the row exists, then
either add one hit or insert a new row), rather than just a simple INSERT...
Any ideas on this too???
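
(One way to collapse the check-then-update into a single statement, assuming
a unique key over the hour columns and MySQL 4.1 or later, which added the
ON DUPLICATE KEY UPDATE clause:

ALTER TABLE test ADD UNIQUE KEY uniq_hour (year, month, day, hour, pid);

INSERT INTO test (year, month, day, hour, pid, hits)
VALUES (103, 4, 4, 13, 1, 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;

The literal values are illustrative only -- 103 assumes the year is stored as
an offset from 1900 to fit the tinyint column defined above.)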


TIA

Justin





loading data from a back up file

2003-04-02 Thread Justin French
I have a 2.7meg dump of SQL created by phpMyAdmin [1], however when I try to
import this data back into mysql through phpMyAdmin, it times out
(understandably), so I guess I need to do this from the command line, but
have no idea how to go about this AT ALL... can anyone point me in the right
direction?
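
(For the record, the usual command-line restore looks like this -- substitute
your own user name, database name, and dump file:

mysql -u username -p database_name < dump.sql

The -p flag prompts for the password, and the database must already exist
unless the dump file creates it.)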

Justin





Questions about HEAP tables

2003-03-06 Thread Justin Cook
Hi All,

After googling around and checking out the archives, I still haven't found
the exact answer to the following question:

First off the goal is to give a family multiple quotes from different
companies for insurance based on several factors: age, length of coverage
and coverage limits. All the different rates for different companies will be
in a rate table that I will have to query to get the correct premium per
person and then group the resulting recordset by company and sum the
premium. So I figure I have to do something like:


for each member in $number_of_family_members

CREATE TABLE IF NOT EXISTS quote_for_family TYPE=HEAP
SELECT * FROM rates WHERE
age >= minimum_age AND
age <= maximum_age AND
limit >= minimum_limit AND
limit <= maximum_limit AND
length_days < maximum_days

next member

SELECT company_name, sum(rate), plan_name FROM quote_for_family GROUP BY
company_name

DROP TABLE quote_for_family

And if all is well I have a list of all the different companies' plans with
the premium totaled for all family members.


What makes this a little more interesting is that I want to do it from a
Perl script - from what I've seen this looks possible. What I'm wondering
is:

Does this look like the right way to tackle the problem?

Should I use a HEAP table or TEMPORARY table to do this? It's going to be on
a webserver with hopefully some steady traffic, so performance is a concern.
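For what it's worth, the two options can be combined: a TEMPORARY table created with the HEAP (in-memory) type is private to each connection and is dropped automatically when the connection closes, which suits per-request scratch data on a web server. A sketch, with made-up bound values standing in for the min/max placeholders (note that limit is a reserved word and needs backquotes; later MySQL versions spell the engine ENGINE=MEMORY):

```sql
-- Per-connection scratch table, held in memory and auto-dropped on
-- disconnect, so no explicit DROP TABLE is strictly required.
CREATE TEMPORARY TABLE quote_for_family TYPE=HEAP
SELECT * FROM rates
WHERE age >= 30 AND age <= 39
  AND `limit` >= 100000 AND `limit` <= 500000
  AND length_days < 365;
```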

Any help is greatly appreciated,

Justin



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



RAND() isn't :)

2003-02-27 Thread Justin French
Hi all,

On my LAN server (FreeBSD, MySQL 3.23) the following query works fine, and
seems to return 2 random rows from the table:

SELECT * FROM disc ORDER BY RAND() LIMIT 2

However, when I upload the scripts to the live server (Linux, MySQL 3.23),
the results are VERY un-random -- only very occasionally do the results
differ between each query, usually returning the same two rows.

The data in the two tables isn't identical, but both tables DO contain the
same number of rows, with the same IDs.


Where should I be looking for reasons why the RAND() isn't very random???
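Not an answer to the why, but one workaround sometimes suggested when an old server keeps returning the same ordering (for instance, a per-connection seed being reused by persistent connections) is to pass RAND() an explicit seed that changes between queries:

```sql
-- The same seed gives the same ordering, so vary the seed: queries
-- issued in different seconds get different shuffles.
SELECT * FROM disc ORDER BY RAND(UNIX_TIMESTAMP()) LIMIT 2;
```

Whether this helps depends on why RAND() is repeating on that particular server, so treat it as something to try rather than a fix.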


TIA
Justin



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



What is faster?..

2003-01-31 Thread Justin Cook
Hello-

I am in the early stages of setting up a website running mysql. The idea is
for the user to be able to compare insurance policies and choose the best
one. I want to give them the option of choosing several insurance plans and
compare them side by side. My question is would it be faster/more efficient
to:
a) Store policy wording in html format in a mysql table; or
b) generate them from a perl script

Example: the user chooses plan A and plan B, then hits the compare button to
see the policy wording compared side by side - would it be better to pull it
from the mysql table or generate it from a perl script?

I thank you in advance.

Justin



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




nested queries

2003-01-23 Thread Justin French
Hi,

I'm trying to dig a bit deeper into mysql queries... I've come across an
application that I'm building which requires (it would seem on the surface)
many queries, due to the relational nature of the data.

The actual app is way too complex to explain, but I've come up with a
reasonably simple example.

Let's say I've got the cliche hierarchical menu system:

parent 1
    child 1
        grandchild 1
        grandchild 2
    child 2
        grandchild 1
        grandchild 2
    child 3
    child 4
parent 2
    child 1
    child 2
    child 3
    child 4
parent 3
    child 1
    child 2
    child 3
    child 4
        grandchild 1
        grandchild 2


From a table point of view, lets say I have 3 tables:

parent (id,name)
child (id, parentID, name)
grandchild (id, parentID, name)


Now, it's easy to query to get all the parents:

select * from parents


And it only takes 2 queries to get all the children of parent 2 (for
expanding one section of the menu)

select * from parents and
select * from children where parentID='2'


But when I decide I want to expand all the parents and children to show the
entire menu structure, it's a HEAP of queries.  Sure, I don't have to WRITE
them all, because PHP can do that for me in loops, but it occurred to me that
I might be missing a HUGE piece of the puzzle in terms of what MySQL can
do...
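In case it helps as a starting point: a couple of LEFT JOINs can pull the whole tree back in one query as a flattened result set, which PHP can then regroup into the menu. This sketch assumes the table names exactly as listed above, with child.parentID pointing at parent.id and grandchild.parentID pointing at child.id:

```sql
-- One round trip for the full menu; NULLs mark parents or children
-- with no rows at the next level down.
SELECT p.id AS parent_id, p.name AS parent,
       c.id AS child_id, c.name AS child,
       g.id AS grandchild_id, g.name AS grandchild
FROM parent p
LEFT JOIN child c ON c.parentID = p.id
LEFT JOIN grandchild g ON g.parentID = c.id
ORDER BY p.id, c.id, g.id;
```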

Can anyone help with a link to somewhere in the manual, or a tutorial that I
should be reading? :)

thanks,

justin



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



