Pushing mysql to the limits

2006-11-01 Thread Cabbar Duzayak

Hi,

We have a huge amount of data, and we are planning to use logical
partitioning to divide it over multiple machines/mysql instances. We
are planning to use Intel-based machines and would prefer ISAM, since
there are not many updates but mostly selects. The main table that
constitutes most of this data has about 5 columns, rows are about
50 bytes in size, and 3 columns in this table need to be indexed.

So, what I wanted to learn is how far we can push it to the limits on
a single machine with about 2 GB of RAM. Do you think MySQL can handle
~700-800 gigabytes on a single machine? And is it OK to put this much
data in a single table, or should we divide it over multiple tables?
If that is the case, what would be the limit for a single table?

Any help/input on this is greatly appreciated.

Thanks.




Re: MyISAM vs InnoDB

2006-11-01 Thread Leandro Guimarães Faria Corcete DUTRA
Em Tue, 31 Oct 2006 15:24:44 -0500, Francis escreveu:

 MyISAM vs InnoDB ? What is the best to use

Always use a DBMS, and MySQL is no (proper) DBMS without a transactional
backend.  There is InnoDB, which is not completely free (it needs a
proprietary backup tool); BDB, which is deprecated until further notice;
and SolidDB, which is still β.

Choose your evil.

-- 
Leandro Guimarães Faria Corcete DUTRA    +55 (11) 9406 7191 (cel)
Administrador de (Bases de) Dados        +55 (11) 2122 0302 (com)
http://br.geocities.com/lgcdutra/        +55 (11) 5685 2219 (res)
BRASIL                                   mailto:[EMAIL PROTECTED]





Re: MyISAM vs InnoDB

2006-11-01 Thread Miles Thompson

At 07:56 AM 11/1/2006, Leandro Guimarães Faria Corcete DUTRA wrote:


snip .. further notice; and SolidDB, which
is still β.

Choose your evil.

--
Leandro Guimarães Faria Corcete DUTRA    +55 (11) 9406 7191 (cel)
Administrador de (Bases de) Dados        +55 (11) 2122 0302 (com)
http://br.geocities.com/lgcdutra/        +55 (11) 5685 2219 (res)
BRASIL                                   mailto:[EMAIL PROTECTED]



Leandro,

Help this poor English-speaker - what's the symbol you use to describe SolidDB?

Cheers - Miles Thompson






Re: MyISAM vs InnoDB

2006-11-01 Thread Jon Ribbens
Miles Thompson [EMAIL PROTECTED] wrote:
 At 07:56 AM 11/1/2006, Leandro Guimarães Faria Corcete DUTRA wrote:
 snip .. further notice; and SolidDB, which
 is still β.
 
 Help this poor English-speaker - what's the symbol you use to describe 
 SolidDB?

I assume it is a beta character, since solidDB for MySQL is indeed
in beta. See http://dev.soliddb.com/




Re: MyISAM vs InnoDB

2006-11-01 Thread Paul McCullagh
On Nov 1, 2006, at 12:56 PM, Leandro Guimarães Faria Corcete DUTRA wrote:

 Em Tue, 31 Oct 2006 15:24:44 -0500, Francis escreveu:

  MyISAM vs InnoDB ? What is the best to use

 Always use a DBMS, and MySQL is no (proper) DBMS without a transactional
 backend.  There is InnoDB, which is not completely free (it needs a
 proprietary backup tool); BDB, which is deprecated until further notice;
 and SolidDB, which is still β.


<plug>

Excuse me, but I have to do some advertising in my own interest :)

There is also PrimeBase XT (PBXT), which is also beta, but is already
available as a pluggable storage engine for 5.1 (besides a merged-code
version for MySQL 4.1.21).

More information at: http://www.primebase.com/xt

For the latest 5.1 version please check out:
http://sourceforge.net/projects/pbxt


Best regards,

Paul

</plug>





Re: Pushing mysql to the limits

2006-11-01 Thread Dan Buettner

Cabbar -

That much data is possible in MySQL; I've seen references to terabyte
databases in the past.  700-800 GB in a single table appears to be
possible but I wouldn't really recommend it.  A couple of suggestions,
based on my own reading (I've not worked with such large dbs in MySQL
myself, only in Sybase):

- The MyISAM format can be used with MERGE tables, which would allow
you to break up your data across multiple smaller tables but still
treat it as one logical table for SELECT, UPDATE, and INSERT purposes.
The advantage would be potentially easier backups, faster
checks/repairs if you experienced corruption, possibly faster purges
if you purge data and have your tables arranged by date order (or
however you might split things up), and the ability to compress static
tables to reduce disk consumption and possibly speed up load time off
disk.
http://dev.mysql.com/doc/refman/5.0/en/merge-storage-engine.html
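To make the idea concrete, here is a minimal sketch (the table and
column names are made up for illustration, not taken from Cabbar's
schema):

CREATE TABLE data_2006_10 (
  id  INT NOT NULL,
  val CHAR(30) NOT NULL,
  KEY (id)
) ENGINE=MyISAM;

CREATE TABLE data_2006_11 LIKE data_2006_10;

-- one logical table over both underlying MyISAM tables;
-- new rows go into the last table in the UNION list
CREATE TABLE data_all (
  id  INT NOT NULL,
  val CHAR(30) NOT NULL,
  KEY (id)
) ENGINE=MERGE UNION=(data_2006_10, data_2006_11) INSERT_METHOD=LAST;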

- 2 GB RAM may well not allow you to keep the indices in memory, which
could slow things down significantly.  With as much data as you have
and three columns being indexed, you may not be able to put enough RAM
in the machine to keep it in memory, however.  You might consider
whether indexing a prefix (first few characters) instead of an entire
column would be worthwhile - you'd have to find a balance between
index size and search speed.
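For example, a prefix index could look like this (the column name is
hypothetical, and 10 is just an illustrative prefix length):

ALTER TABLE big_table ADD INDEX idx_name_prefix (name(10));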

The limit for a single table is quite large (65536 terabytes now), but
for such large tables you do need to either specify size up front or
run an alter table later to bring it up to spec.  The filesystem is
one potential limiter as well.
http://dev.mysql.com/doc/refman/5.0/en/table-size.html
http://dev.mysql.com/doc/refman/5.0/en/create-table.html
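Per that second page, raising the pointer size on an existing MyISAM
table might look like this (a sketch; the values are illustrative only):

ALTER TABLE big_table MAX_ROWS=20000000000 AVG_ROW_LENGTH=50;

The same MAX_ROWS/AVG_ROW_LENGTH options can be given at CREATE TABLE
time to avoid the later ALTER.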

HTH,
Dan


On 11/1/06, Cabbar Duzayak [EMAIL PROTECTED] wrote:

Hi,

We have a huge amount of data, and we are planning to use logical
partitioning to divide it over multiple machines/mysql instances. We
are planning to use Intel-based machines and would prefer ISAM, since
there are not many updates but mostly selects. The main table that
constitutes most of this data has about 5 columns, rows are about
50 bytes in size, and 3 columns in this table need to be indexed.

So, what I wanted to learn is how far we can push it to the limits on
a single machine with about 2 GB of RAM. Do you think MySQL can handle
~700-800 gigabytes on a single machine? And is it OK to put this much
data in a single table, or should we divide it over multiple tables?
If that is the case, what would be the limit for a single table?

Any help/input on this is greatly appreciated.

Thanks.








Re: Pushing mysql to the limits

2006-11-01 Thread Mathieu Bruneau
Hi,

What's been said below is true; I also think you should probably use
a MERGE table over the underlying tables.

While 2G won't be enough to hold the full index for sure, splitting
the table could give you much smaller indexes and let you hold one of
them in memory. Also, if you use dynamic-row tables, the default size
limit is 4G; if you need to go above that size you need to run an
ALTER TABLE (if you didn't create the table with the right parameters),
and that actually converts the 32-bit pointers to bigger ones, which
would slow things down a bit on a 32-bit architecture :)

Also note that you probably want to use the MyISAM table format and
not the old ISAM one :)

But with that database size and that amount of RAM, you'll have
intensive IO unless it's almost always the same data that is
accessed... (which doesn't make much sense if you keep that much!)

Just my 2 cents...
-- 
Mathieu Bruneau
aka ROunofF

Dan Buettner a écrit :
 Cabbar -
 
 That much data is possible in MySQL; I've seen references to terabyte
 databases in the past.  700-800 GB in a single table appears to be
 possible but I wouldn't really recommend it.  A couple of suggestions,
 based on my own reading (I've not worked with such large dbs in MySQL
 myself, only in Sybase):
 
 - The MyISAM format can be used with MERGE tables, which would allow
 you to break up your data across multiple smaller tables but still
 treat it as one logical table for SELECT, UPDATE, and INSERT purposes.
 The advantage would be potentially easier backups, faster
 checks/repairs if you experienced corruption, possibly faster purges
 if you purge data and have your tables arranged by date order (or
 however you might split things up), and the ability to compress static
 tables to reduce disk consumption and possibly speed up load time off
 disk.
 http://dev.mysql.com/doc/refman/5.0/en/merge-storage-engine.html
 
 - 2 GB RAM may well not allow you to keep the indices in memory, which
 could slow things down significantly.  With as much data as you have
 and three columns being indexed, you may not be able to put enough RAM
 in the machine to keep it in memory, however.  You might consider
 whether indexing a prefix (first few characters) instead of an entire
 column would be worthwhile - you'd have to find a balance between
 index size and search speed.
 
 The limit for a single table is quite large (65536 terabytes now), but
 for such large tables you do need to either specify size up front or
 run an alter table later to bring it up to spec.  The filesystem is
 one potential limiter as well.
 http://dev.mysql.com/doc/refman/5.0/en/table-size.html
 http://dev.mysql.com/doc/refman/5.0/en/create-table.html
 
 HTH,
 Dan
 
 
 On 11/1/06, Cabbar Duzayak [EMAIL PROTECTED] wrote:
 Hi,

 We have a huge amount of data, and we are planning to use logical
 partitioning to divide it over multiple machines/mysql instances. We
 are planning to use Intel-based machines and would prefer ISAM, since
 there are not many updates but mostly selects. The main table that
 constitutes most of this data has about 5 columns, rows are about
 50 bytes in size, and 3 columns in this table need to be indexed.

 So, what I wanted to learn is how far we can push it to the limits on
 a single machine with about 2 GB of RAM. Do you think MySQL can handle
 ~700-800 gigabytes on a single machine? And is it OK to put this much
 data in a single table, or should we divide it over multiple tables?
 If that is the case, what would be the limit for a single table?

 Any help/input on this is greatly appreciated.

 Thanks.



 


===
GPG keys available @ http://rounoff.darktech.org




MyISAM to InnoDB conversion help

2006-11-01 Thread Mikhail Berman
Hi everyone,
 
I am hoping to get help with the extremely slow performance of a MyISAM
to InnoDB conversion, or to find out if this type of performance is usual.
 
I have a MyISAM table that contains 3,299,509 rows, and I am trying to
convert it to InnoDB for the row-level locking. I am getting an
insertion speed of 0.243 records per second while doing
INSERT INTO InnoDB_table SELECT * FROM MyISAM_Table.
 
Your help is appreciated.
 
Here is what my environment looks like.
 
Hardware:
 
SunBlade 2000 with 2GB processor connected to StorEdge A5200 with RAID5
on it.
 
OS:
 
[EMAIL PROTECTED]/uname -a
SunOS * 5.9 Generic_118558-19 sun4u sparc SUNW,Sun-Blade-1000

MySQL:
 
mysql> status;
--------------
mysql  Ver 14.12 Distrib 5.0.18, for sun-solaris2.9 (sparc) using
readline 5.0
 
InnoDB table structure:
 
CREATE TABLE `DAILY_EDGAR_INNODB` (
  `ftp_file_name_key` char(80) NOT NULL default '',
  `history_record` int(1) NOT NULL default '0',
  `description` char(100) NOT NULL default '',
  `company_fkey` char(10) NOT NULL default '',
  `company_name` char(100) NOT NULL default '',
  `subject_company_fkey` char(10) NOT NULL default '',
  `filer_description` char(10) NOT NULL default '',
  `form_fkey` char(20) NOT NULL default '',
  `file_accepted` char(20) NOT NULL default '',
  `been_evaluated` char(20) NOT NULL default '',
  `uport_evaluated` int(1) NOT NULL default '0',
  `file_date` char(10) NOT NULL default '',
  `file_size` char(10) NOT NULL default '50 KB',
  `accession_number` char(24) NOT NULL default '',
  `http_file_name_html` char(100) NOT NULL default '',
  `http_file_name_text` char(100) NOT NULL default '',
  `create_date` date NOT NULL default '0000-00-00',
  `change_date` date NOT NULL default '0000-00-00',
  PRIMARY KEY  (`ftp_file_name_key`),
  KEY `company_idx` (`company_fkey`),
  KEY `filaccdx` (`file_accepted`),
  KEY `beendx` (`been_evaluated`),
  KEY `fidadx` (`file_date`),
  KEY `upevdx` (`uport_evaluated`),
  KEY `crdadx` (`create_date`),
  KEY `hiredx` (`history_record`),
  KEY `accession_number` (`accession_number`),
  KEY `fofkdx` (`form_fkey`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Procedure used to execute conversion:
 
root 27686  0.0  0.2 5840 3224 ?  S 14:08:23  0:00 mysql
-pxx xxx -e "insert into DAILY_EDGAR_INNODB select * from
DAILY_EDGAR"

my.cnf InnoDB section:
 
# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /export/home/mysqldata/ibdata
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
innodb_log_group_home_dir = /export/home/mysqldata/ibdata
innodb_log_arch_dir = /export/home/mysqldata/ibdata
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 1G
innodb_additional_mem_pool_size = 50M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 100M
#innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
 
Best,
 
Mikhail Berman
 


Re: MyISAM vs InnoDB

2006-11-01 Thread mos

At 05:56 AM 11/1/2006, Leandro Guimarães Faria Corcete DUTRA wrote:

Em Tue, 31 Oct 2006 15:24:44 -0500, Francis escreveu:

 MyISAM vs InnoDB ? What is the best to use

Always use a DBMS, and MySQL is no (proper) DBMS without a transactional
backend.  There is InnoDB, which is not completely free (it needs a
proprietary backup tool); BDB, which is deprecated until further notice;
and SolidDB, which is still β.

Choose your evil.


Ok, so your solution is to use something else? Is there a better open 
source database out there for that amount of data?


Mike




Re: MyISAM vs InnoDB

2006-11-01 Thread Martijn Tonies
  MyISAM vs InnoDB ? What is the best to use

 Always use a DBMS, and MySQL is no (proper) DBMS without a transactional
 backend.  There is InnoDB, which is not completely free (it needs a
 proprietary backup tool); BDB, which is deprecated until further notice;
 and SolidDB, which is still β.

 Choose your evil.

Ok, so your solution is to use something else? Is there a better open
source database out there for that amount of data?

Firebird? PostgreSQL?

Both are open source and ALWAYS free for whatever usage, no dual
licensing whatsoever.

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, NexusDB, Oracle &
MS SQL Server
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





Re: MyISAM to InnoDB conversion help

2006-11-01 Thread Rolando Edwards
If you are doing this in MySQL 5, try this:

ALTER TABLE table-name ENGINE = InnoDB;

That's all.

Let MySQL worry about conversion.
You may also want to tweak the InnoDB system variables
(SHOW VARIABLES LIKE 'innodb%') for better InnoDB performance
prior to trying this.
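For example, a minimal sketch (relaxing log flushing during the copy is
my illustration, not something Rolando prescribed; it assumes MySQL 5.0,
where this variable is dynamic):

-- flush the log roughly once per second instead of at every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
ALTER TABLE DAILY_EDGAR ENGINE = InnoDB;
-- restore full durability afterwards
SET GLOBAL innodb_flush_log_at_trx_commit = 1;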

- Original Message -
From: Mikhail Berman [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 10:31:13 AM GMT-0500 US/Eastern
Subject: MyISAM to InnoDB conversion help






RE: MyISAM to InnoDB conversion help

2006-11-01 Thread Mikhail Berman
Hi Rolando,

Thank you for your help.  

I am on MySQL 5, and I have tried to do the conversion using the ALTER
TABLE command, with the same very slow result.

Do you by any chance have specific suggestions on how to tweak the
variables related to this?

Here is what I got:

+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| innodb_additional_mem_pool_size  | 52428800                             |
| innodb_autoextend_increment      | 8                                    |
| innodb_buffer_pool_awe_mem_mb    | 0                                    |
| innodb_buffer_pool_size          | 1073741824                           |
| innodb_checksums                 | ON                                   |
| innodb_commit_concurrency        | 0                                    |
| innodb_concurrency_tickets       | 500                                  |
| innodb_data_file_path            | ibdata1:2000M;ibdata2:10M:autoextend |
| innodb_data_home_dir             | /export/home/mysqldata/ibdata        |
| innodb_doublewrite               | ON                                   |
| innodb_fast_shutdown             | 1                                    |
| innodb_file_io_threads           | 4                                    |
| innodb_file_per_table            | OFF                                  |
| innodb_flush_log_at_trx_commit   | 1                                    |
| innodb_flush_method              |                                      |
| innodb_force_recovery            | 0                                    |
| innodb_lock_wait_timeout         | 50                                   |
| innodb_locks_unsafe_for_binlog   | OFF                                  |
| innodb_log_arch_dir              | /export/home/mysqldata/ibdata        |
| innodb_log_archive               | OFF                                  |
| innodb_log_buffer_size           | 1048576                              |
| innodb_log_file_size             | 5242880                              |
| innodb_log_files_in_group        | 2                                    |
| innodb_log_group_home_dir        | /export/home/mysqldata/ibdata        |
| innodb_max_dirty_pages_pct       | 90                                   |
| innodb_max_purge_lag             | 0                                    |
| innodb_mirrored_log_groups       | 1                                    |
| innodb_open_files                | 300                                  |
| innodb_support_xa                | ON                                   |
| innodb_sync_spin_loops           | 20                                   |
| innodb_table_locks               | ON                                   |
| innodb_thread_concurrency        | 20                                   |
| innodb_thread_sleep_delay        | 1                                    |
+----------------------------------+--------------------------------------+

Best,

Mikhail Berman

-Original Message-
From: Rolando Edwards [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 01, 2006 11:05 AM
To: Mikhail Berman
Cc: mysql@lists.mysql.com
Subject: Re: MyISAM to InnoDB conversion help


Re: MyISAM vs InnoDB

2006-11-01 Thread Edward Macnaghten

Francis wrote:

Question about MyISAM vs InnoDB ? What is the best to use? I have
a large table containing around 10 million records. What is the best
for me? Use MyISAM or InnoDB?
 



Depends VERY much on your application.  If any concurrency and/or
durability is required then I would forget about MyISAM, as it is not
ACID and the integrity of the data is at risk.  In fact, if the
application is suitable for MyISAM and the database could be embedded
(runs on the same machine as the application) then I would probably
consider SQLite, as that is even faster.


If concurrency and scalability are required then I would go with
PostgreSQL rather than MySQL, especially if a large number of heavy
users are on at the same time.


For a web-based solution on a machine with a single processor/core,
InnoDB is a strong contender.


Eddy







Re: MyISAM to InnoDB conversion help

2006-11-01 Thread Rolando Edwards
Check these variables:
bulk_insert_buffer_size (default usually 8M)
innodb_buffer_pool_size (default usually 8M)
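For example (my illustration, not from Rolando's post; 256M is just a
guess at a reasonable bulk-load size):

-- bulk_insert_buffer_size is dynamic in 5.0:
SET GLOBAL bulk_insert_buffer_size = 268435456;  -- 256M
-- innodb_buffer_pool_size is NOT dynamic in 5.0; set it in my.cnf
-- under [mysqld] and restart the server.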


- Original Message -
From: Mikhail Berman [EMAIL PROTECTED]
To: Rolando Edwards [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 11:13:44 AM GMT-0500 US/Eastern
Subject: RE: MyISAM to InnoDB conversion help


Re: MyISAM to InnoDB conversion help

2006-11-01 Thread Rolando Edwards
I just noticed your innodb_data_file_path: you have a shared InnoDB
tablespace. That can be murder on a MySQL server.

You may want to separate each InnoDB table into its own file.

Here are the steps needed to separate InnoDB tables.

1) Do a mysqldump of your database to mydata.sql.
2) Shut down MySQL.
3) Go to my.cnf and add 'innodb_file_per_table' in the [mysqld] section.
4) Change innodb_data_file_path in my.cnf to ibdata1:10M:autoextend.
   Note: you may want to add this too: bulk_insert_buffer_size = 256M
5) Delete ibdata1, ibdata2, and the ib_logfile* files.
6) Restart MySQL (the InnoDB data files and logs will regenerate).
7) Reload your data from the mydata.sql script.

All InnoDB data will be sitting in separate .ibd files
in the database folder. Only the data dictionary info
for all InnoDB tables will be sitting in the ibdata1 file.

Give it a try.
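Putting steps 3 and 4 together, the relevant my.cnf section might look
like this (a sketch; the bulk_insert_buffer_size line is the optional
one from the note above):

[mysqld]
innodb_file_per_table
innodb_data_file_path = ibdata1:10M:autoextend
bulk_insert_buffer_size = 256M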

- Original Message -
From: Rolando Edwards [EMAIL PROTECTED]
To: Mikhail Berman [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 11:24:00 AM GMT-0500 US/Eastern
Subject: Re: MyISAM to InnoDB conversion help


RE: MyISAM to InnoDB conversion help

2006-11-01 Thread Mikhail Berman
 
Great,

Thank you for your help Rolando,

Mikhail Berman


-Original Message-
From: Rolando Edwards [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 01, 2006 11:41 AM
Cc: mysql@lists.mysql.com; Mikhail Berman
Subject: Re: MyISAM to InnoDB conversion help


Re: storing images in the database

2006-11-01 Thread Richard Harris

Hi,

In my honest opinion, the best approach is to upload the image to a
preset directory and have the code store just the image name, its
extension and, if need be, the directory (it shouldn't be needed,
because you can set it in the HTML). Google has a lot of useful
resources on this.
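A minimal sketch of that approach (the table and column names are mine,
purely for illustration):

CREATE TABLE images (
  id        INT NOT NULL AUTO_INCREMENT,
  filename  VARCHAR(100) NOT NULL,  -- e.g. 'photo42'
  extension VARCHAR(10)  NOT NULL,  -- e.g. 'jpg'
  PRIMARY KEY (id)
);

-- the page then builds the path itself, e.g. images/photo42.jpg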


But, if you are really desperate to do it using the BLOB type, 
http://www.wellho.net/solutions/php-example-php-form-image-upload-store-in-mysql-database-retreive.html 
was a help to me when I first started learning.


Hope this helps,


r

#
Richard Harris
PHP, MYSQL, XHTML, CSS
[EMAIL PROTECTED]
[EMAIL PROTECTED]
#
Rodney Courtis wrote:

Hi

I want to store jpeg images in a mysql database from php in the form of 'blob' types. 
Is this possible and can you detail for me the steps to achieve this?


Regards and thanks
Rodney Courtis










Automatic Removal

2006-11-01 Thread Richard Harris

Hi,

got a really simple question, which is probably really easy, but it's
just not staring me in the face.


Is there any query in MySQL (or function in PHP that works with MySQL)
to remove a value from a field when a new week comes around, or to
remove something from a field on a defined date and time, automatically?


Apologies if this is the wrong group

Thanks in Advance!

r




Re: Automatic Removal

2006-11-01 Thread Martijn Tonies

 got a really simple question, which is probably really easy, but it's
 just not staring me in the face.

 Is there any query in MySQL (or function in PHP that works with MySQL)
 to remove a value from a field when a new week comes around, or to
 remove something from a field on a defined date and time, automatically?

Depends on your definition of automatically :-)

MySQL 5.1 has Events: a server side process checks periodically to see
if anything needs to be done, for example, your delete/clear SQL statement.
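A sketch of what that can look like in 5.1 (the event, table and column
names here are made up; the event scheduler must be switched on first):

-- once, as an administrator:
SET GLOBAL event_scheduler = ON;

CREATE EVENT weekly_cleanup
  ON SCHEDULE EVERY 1 WEEK
  DO
    UPDATE mytable SET myfield = NULL WHERE expires_at <= NOW();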

Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com




Re: MyISAM vs InnoDB

2006-11-01 Thread mos

At 09:35 AM 11/1/2006, Martijn Tonies wrote:

  MyISAM vs InnoDB ? What is the best to use

 Always use a DBMS, and MySQL is no (proper) DBMS without a transactional
 backend.  There is InnoDB, which is not completely free (it needs a
 proprietary backup tool); BDB, which is deprecated until further notice;
 and SolidDB, which is still β.

 Choose your evil.

Ok, so your solution is to use something else? Is there a better open
source database out there for that amount of data?

Firebird? PostgreSQL?

Both are open source and ALWAYS free for whatever usage, no dual
licensing whatsoever.


Martijn,
Sure, I've thought of those too. But has anyone gotten Firebird to
store 700-800 GB tables? Can you split Firebird's .gdb file across
drives? The main problem with tables of that size is maintaining the
index. My upper limit for MySQL is 100 million rows. After that, any
new rows that are added will take much longer to add, because the
index tree has to be maintained. I definitely recommend cramming as
much memory into the box as humanly possible, because indexes of that
size will need it. Probably the simplest solution for MySQL is to use
MERGE tables.  I know some people with MySQL, Oracle and MS SQL have
terabyte tables, but I haven't heard of other databases storing tables
that large. So if you or anyone else has used Firebird or PostgreSQL
to store terabyte tables, I'd certainly be interested in hearing about
it. :)


Mike



Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, NexusDB, Oracle &
MS SQL Server
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





jdbc question regarding enum

2006-11-01 Thread Jon Drukman
disclaimer: i know nothing about java or jdbc.  a co-worker is trying to 
access a database i've set up using jdbc.  he says that my enum column 
is always returning an integer value instead of the string.  obviously 
this is less than desirable.


does anybody have any advice i could give him on where to look or 
something to change?


-jsd-





returning unique value

2006-11-01 Thread Ross Hulford


I have two tables: 'galleries', which contains the number and name of
the photo galleries, and 'thumbnails', the images that are connected
to the galleries.

I am trying to create a 'pick a gallery' screen which selects all the
galleries and then outputs the first thumbnail image associated with
each gallery. The problem is returning only one unique image.





-- 
-- Table structure for table `galleries`
-- 

CREATE TABLE `galleries` (
  `id` int(11) NOT NULL auto_increment,
  `display` tinyint(4) NOT NULL default '0',
  `galleryorder` int(11) NOT NULL default '0',
  `title` mediumtext NOT NULL,
  `description` text NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `id` (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=9 DEFAULT CHARSET=latin1 AUTO_INCREMENT=9 ;

-- 
-- Dumping data for table `galleries`
-- 

INSERT INTO `galleries` VALUES (7, 1, 1, 'my gallery1', 'my gallery1');
INSERT INTO `galleries` VALUES (8, 0, 1, 'gallery2', 'my gallery2');

-- 

-- 
-- Table structure for table `thumbnails`
-- 

CREATE TABLE `thumbnails` (
  `id` int(4) NOT NULL auto_increment,
  `gallery` int(4) NOT NULL default '0',
  `display` tinyint(4) NOT NULL default '0',
  `photoorder` int(4) NOT NULL default '0',
  `caption` varchar(80) NOT NULL default '',
  `description` varchar(200) default NULL,
  `bin_data` longblob,
  `filename` varchar(50) default NULL,
  `filesize` varchar(50) default NULL,
  `filetype` varchar(50) default NULL,
  PRIMARY KEY  (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=513 DEFAULT CHARSET=latin1 AUTO_INCREMENT=513 ;



This is what I have so far



<?
$dbquery = "SELECT id FROM galleries";
$result = mysql_query($dbquery);
while ($row = mysql_fetch_array($result))
{
    echo $id = $row['id'];
    $dbquery2 = "SELECT * FROM thumbnails WHERE gallery=$id";
    $result2 = mysql_query($dbquery2);
    while ($myimage = mysql_fetch_array($result2)) {
        echo $myimage['caption'];
    }
}


Re: returning unique value

2006-11-01 Thread Dan Buettner

Ross -

In your query you can add a LIMIT clause to get just one row in the
result, a la:
SELECT * FROM thumbnails where gallery=$id LIMIT 1

If you want the first image, say the one with lowest ID number, you
could do this:
SELECT * FROM thumbnails where gallery=$id ORDER BY id LIMIT 1
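(A related sketch, not from Dan's reply: to get the first thumbnail id
for every gallery in a single query, a grouped join works; fetching the
row itself would then take a second lookup:)

SELECT g.id AS gallery_id, MIN(t.id) AS first_thumbnail_id
FROM galleries g
LEFT JOIN thumbnails t ON t.gallery = g.id
GROUP BY g.id;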

Dan


On 11/1/06, Ross Hulford [EMAIL PROTECTED] wrote:









RE: Max of Count

2006-11-01 Thread Jerry Schwartz
Thanks, Dan, that does indeed work; but as you said it only gives one
record. Removing the limit gives them all to me, which for visual inspection
works fine.

This started out as a "Gee, I wonder..." task, so as a learning
experience I'd like to figure out how to get all of (and only) the
records whose count is the maximum. It seems tantalizingly out of
reach. I know what MySQL is telling me (I can use MAX without a GROUP
BY only if I don't use other columns that aren't aggregate functions),
but not how to fix it.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 31, 2006 5:49 PM
 To: Jerry Schwartz
 Cc: mysql@lists.mysql.com
 Subject: Re: Max of Count

 This might work for you, Jerry - you're on the right track,
 pretty close.

 SELECT prod.prod_id, COUNT(*)
 FROM prod, prod_rel
 WHERE prod.prod_id = prod_rel.prod_id
 GROUP BY prod.prod_id
 ORDER BY 2 DESC
 LIMIT 1

 Be aware that using the LIMIT 1 may be misleading in that if you have
 multiple entries with the same highest number of relations, you'll
 only see one.  That may or may not matter to you.

 HTH,
 Dan

On 10/31/06, Jerry Schwartz [EMAIL PROTECTED] wrote:
  This should be simple, but I haven't used a subquery in quite this
  way, and have little experience with GROUP BY.

  I have 2 tables

  prod:     prod_id /* a unique key */
  prod_rel: prod_id /* values that tie prod_rel to prod */

  A prod_id in prod might have 0 or more matching rows in prod_rel. I
  want to find the prod_id of the entry in prod that has the most
  dependent rows in prod_rel, and how many dependent rows it has.
 
  SELECT MAX(c) FROM
    (SELECT COUNT(prod_rel.prod_id) AS c
     FROM prod, prod_rel
     WHERE prod.prod_id = prod_rel.prod_id GROUP BY prod_rel.prod_id)
  AS t1;
 
  gives me
 
  +--------+
  | MAX(c) |
  +--------+
  |      7 |
  +--------+
 
  That's all fine and dandy, but I haven't figured out how to get the
  actual value of prod_id that goes with that maximum value. I tried
 
  mysql> SELECT MAX(c), id FROM
      ->   (SELECT COUNT(prod_rel.prod_id) as c, prod_rel.prod_id as id
      ->    FROM prod, prod_rel
      ->    WHERE prod.prod_id = prod_rel.prod_id GROUP BY prod_rel.prod_id)
      ->   AS t;
 
  and got
 
  ERROR 1140 (42000): Mixing of GROUP columns (MIN(),MAX(),COUNT(),...)
  with no GROUP columns is illegal if there is no GROUP BY clause
 
  I had a feeling that I couldn't just use id in the outer
 select, but how do
  I get it? I tried adding GROUP BY t.prod_id, but that just
 gave me the same
  error.
 
  Please help me find my way.
 
  Regards,
 
  Jerry Schwartz
  Global Information Incorporated
  195 Farmington Ave.
  Farmington, CT 06032
 
  860.674.8796 / FAX: 860.674.8341
 
 
 
 
 
 








Re: RE: Max of Count

2006-11-01 Thread Dan Buettner

This is how I'd do it, though ISTR there being a problem with LIMITs
in subqueries at one point (this works for me on 5.0.24). (I changed it
to users and user_entitlements because those are tables I have in my
database.)

SELECT u.id, COUNT(*) AS ct
FROM users u, user_entitlements ue
WHERE u.id = ue.user_id
GROUP BY u.id
HAVING ct = (SELECT COUNT(*) AS ct2
             FROM users u2, user_entitlements ue2
             WHERE u2.id = ue2.user_id
             GROUP BY u2.id
             ORDER BY ct2 DESC
             LIMIT 1)

I think you should also be able to do it in two steps with a
server-side variable:

SELECT COUNT(*) AS ct2
FROM users u2, user_entitlements ue2
WHERE u2.id = ue2.user_id
GROUP BY u2.id
ORDER BY ct2 DESC
LIMIT 1
INTO @mycount

SELECT u.id, COUNT(*) AS ct
FROM users u, user_entitlements ue
WHERE u.id = ue.user_id
GROUP BY u.id
HAVING ct = @mycount

BUT that does not work for me on 5.0.24.  The value of @mycount ends
up as 1, which seems like a bug?

Dan




On 11/1/06, Jerry Schwartz [EMAIL PROTECTED] wrote:

Thanks, Dan, that does indeed work; but as you said it only gives one
record. Removing the limit gives them all to me, which for visual inspection
works fine.

This started out as a Gee, I wonder... task, so as a learning experience
I'd like to figure out how to get all of (and only) the records whose count
is the maximum. It seems tantalizingly out of reach. I know what MySQL is
telling me (I can use MAX without a GROUP only if I don't use other columns
that aren't functions), but not how to fix it.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 31, 2006 5:49 PM
 To: Jerry Schwartz
 Cc: mysql@lists.mysql.com
 Subject: Re: Max of Count

 This might work for you, Jerry - you're on the right track,
 pretty close.

 SELECT prod.prod_id, COUNT(*)
 FROM prod, prod_rel
 WHERE prod.prod_id = prod_rel.prod_id
 GROUP BY prod.prod_id
 ORDER BY 2 DESC
 LIMIT 1

 Be aware that using the LIMIT 1 may be misleading in that if you have
 multiple entries with the same highest number of relations, you'll
 only see one.  That may or may not matter to you.

 HTH,
 Dan

 On 10/31/06, Jerry Schwartz [EMAIL PROTECTED] wrote:
  This should be simple, but I haven't used a subquery in
 quite this way, and
  have little experience with GROUP BY.
 
  I have 2 tables
 
  prod:prod_id /* a unique key */
  prod_rel:prod_id /* values that tie prod_rel to prod */
 
  A prod_id in prod might have 0 or more matching rows in
 prod_rel. I want to
  find the prod_id of the entry in prod that has the most
 dependent rows in
  prod_rel, and how many dependent rows it has.
 
  SELECT MAX(c) FROM
  (SELECT COUNT(prod_rel.prod_id) AS c
FROM prod, prod_rel
WHERE prod.prod_id = prod_rel.prod_id GROUP BY
 prod_rel.prod_id)
  AS t1;
 
  gives me
 
   +--------+
   | MAX(c) |
   +--------+
   |      7 |
   +--------+
 
  That's all fine and dandy, but I haven't figured out how to
 get the actual
  value of prod_id that goes with that maximum value. I tried
 
  mysql> SELECT MAX(c), id FROM
  ->   (SELECT COUNT(prod_rel.prod_id) as c,
 prod_rel.prod_id as id
  ->  FROM prod, prod_rel
  ->  WHERE prod.prod_id = prod_rel.prod_id GROUP BY
 prod_rel.prod_id)
  ->   AS t;
 
  and got
 
  ERROR 1140 (42000): Mixing of GROUP columns
 (MIN(),MAX(),COUNT(),...) with
  no
  GROUP columns is illegal if there is no GROUP BY clause
 
  I had a feeling that I couldn't just use id in the outer
 select, but how do
  I get it? I tried adding GROUP BY t.prod_id, but that just
 gave me the same
  error.
 
  Please help me find my way.
 
  Regards,
 
  Jerry Schwartz
  Global Information Incorporated
  195 Farmington Ave.
  Farmington, CT 06032
 
  860.674.8796 / FAX: 860.674.8341
 
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MyISAM vs InnoDB

2006-11-01 Thread Jochem van Dieten

On 11/1/06, mos wrote:


 Sure, I've thought of those too. But has anyone gotten Firebird to
store 700-800gb tables? Can you split Firebird's .gdb file across drives?
The main problem with tables of that size is maintaining the index. My
upper limit for MySQL is 100 million rows. After that any new rows that are
added will take much longer to add because the index tree has to be
maintained. I definitely recommend cramming as much memory in the box as
humanly possible because indexes of that size will need it. Probably the
simplest solution for MySQL is to use Merge tables.  I know some people
with MySQL, Oracle and MS SQL have terabyte tables, but I haven't heard of
other databases storing tables that large. So if you or anyone else has
used FireBird or PostgreSQL to store terabyte tables, I'd certainly
be interested in hearing about it. :)
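
To make the Merge-table suggestion concrete, here is a minimal sketch
(names are illustrative; MERGE requires the underlying tables to be
identically defined MyISAM tables):

CREATE TABLE log_2005 (
  id  INT NOT NULL,
  msg VARCHAR(50),
  KEY (id)
) ENGINE=MyISAM;

CREATE TABLE log_2006 LIKE log_2005;

-- The MERGE table presents the union of the subtables; inserts are
-- routed to the last table in the UNION list.
CREATE TABLE log_all (
  id  INT NOT NULL,
  msg VARCHAR(50),
  KEY (id)
) ENGINE=MERGE UNION=(log_2005, log_2006) INSERT_METHOD=LAST;

Each subtable keeps its own index, so no single index tree has to
cover all of the rows.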


What is the big deal about a TB? Now, if you get past 20 TB you might
want to team up with one of the commercial PostgreSQL supporters
(Fujitsu, EnterpriseDB, Greenplum etc.), but Sun even sells appliances
for 100 TB PostgreSQL databases.

Jochem

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Manually Inserted Data

2006-11-01 Thread Keith Spiller

Hi,

I'm trying to manually insert data into a mysql query so that the
data will be sorted by the query but not inserted into the table.

Here's what I have so far:

( SELECT ID, Title, Label, Location, Start, End, Time,
 Description, Organization, Department, Contact, Phone,
 Email, Global, Board, Committee, Status,
 TBD_Time , TO_DAYS(End) - TO_DAYS(Start) + 1 AS Days
 FROM site_calendar WHERE Global='1' )
UNION
( SELECT '9', 'No events exist for this month...', '', '', '',
  '0000-00-00', '00:00:00', '', '', '', '', '', '', '1', '', '', '', '', as 
z )

  ORDER BY z, Start ASC, Status DESC, Time ASC

I just don't know what I am doing wrong here, or how to
accomplish this task.  Please help.

Thanks,


Keith 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Manually Inserted Data

2006-11-01 Thread Rolando Edwards
The first UNION part has 19 columns
the other UNION part has 18 columns

- Original Message -
From: Keith Spiller [EMAIL PROTECTED]
To: [MySQL] mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 3:31:24 PM GMT-0500 US/Eastern
Subject: Manually Inserted Data

Hi,

I'm trying to manually insert data into a mysql query so that the
data will be sorted by the query but not inserted into the table.

Here's what I have so far:

( SELECT ID, Title, Label, Location, Start, End, Time,
  Description, Organization, Department, Contact, Phone,
  Email, Global, Board, Committee, Status,
  TBD_Time , TO_DAYS(End) - TO_DAYS(Start) + 1 AS Days
  FROM site_calendar WHERE Global='1' )
UNION
( SELECT '9', 'No events exist for this month...', '', '', '',
   '0000-00-00', '00:00:00', '', '', '', '', '', '', '1', '', '', '', '', as 
z )
   ORDER BY z, Start ASC, Status DESC, Time ASC

I just don't know what I am doing wrong here, or how to
accomplish this task.  Please help.

Thanks,


Keith 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Manually Inserted Data

2006-11-01 Thread Keith Spiller

Hi Rolando,

Thanks for your help.

I have reduced the query to the bare essentials to try to test the concept,
but it still fails...

( SELECT ID, Start, End
FROM site_calendar_v2 as c
WHERE Global='1' )
UNION
( SELECT '9', '2006-11-01', '0000-00-00' as z )
ORDER BY z, Start ASC, Status DESC, Time ASC  a

Does anyone see my mistake?

Keith

- Original Message - 
From: Rolando Edwards [EMAIL PROTECTED]

To: Keith Spiller [EMAIL PROTECTED]
Cc: MySQL mysql@lists.mysql.com
Sent: Wednesday, November 01, 2006 2:11 PM
Subject: Re: Manually Inserted Data



The first UNION part has 19 columns
the other UNION part has 18 columns

- Original Message -
From: Keith Spiller [EMAIL PROTECTED]
To: [MySQL] mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 3:31:24 PM GMT-0500 US/Eastern
Subject: Manually Inserted Data

Hi,

I'm trying to manually insert data into a mysql query so that the
data will be sorted by the query but not inserted into the table.

Here's what I have so far:

( SELECT ID, Title, Label, Location, Start, End, Time,
 Description, Organization, Department, Contact, Phone,
 Email, Global, Board, Committee, Status,
 TBD_Time , TO_DAYS(End) - TO_DAYS(Start) + 1 AS Days
 FROM site_calendar WHERE Global='1' )
UNION
( SELECT '9', 'No events exist for this month...', '', '', '',
  '0000-00-00', '00:00:00', '', '', '', '', '', '', '1', '', '', '', '', as
z )
  ORDER BY z, Start ASC, Status DESC, Time ASC

I just don't know what I am doing wrong here, or how to
accomplish this task.  Please help.

Thanks,


Keith


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED] 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MyISAM vs InnoDB

2006-11-01 Thread mos

At 02:27 PM 11/1/2006, Jochem van Dieten wrote:

On 11/1/06, mos wrote:


 Sure, I've thought of those too. But has anyone gotten Firebird to
store 700-800gb tables? Can you split Firebird's .gdb file across drives?
The main problem with tables of that size is maintaining the index. My
upper limit for MySQL is 100 million rows. After that any new rows that are
added will take much longer to add because the index tree has to be
maintained. I definitely recommend cramming as much memory in the box as
humanly possible because indexes of that size will need it. Probably the
simplest solution for MySQL is to use Merge tables.  I know some people
with MySQL, Oracle and MS SQL have terabyte tables, but I haven't heard of
other databases storing tables that large. So if you or anyone else has
used FireBird or PostgreSQL to store terabyte tables, I'd certainly
be interested in hearing about it. :)


What is the big deal about a TB? Now, if you get past 20 TB you might
want to team up with one of the commercial PostgreSQL supporters
(Fujitsu, EnterpriseDB, Greenplum etc.), but Sun even sells appliances
for 100 TB PostgreSQL databases.

Jochem



Jochem,
There is a big difference between a 20 TB database and a 20 TB 
table!  Unless you're storing huge blobs, a table of over 1TB will have 
hundreds of millions of rows (billions?), and that means huge index trees 
that need to be maintained.  If PostgreSQL can put 20 TB into a table and 
still have reasonably fast inserts and queries, then I'll take my hat off 
to them. But first I need to see proof that they can accomplish this. So if 
you have any sites or white papers you'd like to share, go ahead. Keep in 
mind we're talking about TB tables here, not databases.


Mike



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Manually Inserted Data

2006-11-01 Thread Gerald L. Clark

Keith Spiller wrote:

Hi Rolando,

Thanks for your help.

I have reduced the query to the bare essentials to try to test the
concept, but it still fails...

( SELECT ID, Start, End

( SELECT ID, Start, End as z

FROM site_calendar_v2 as c
WHERE Global='1' )
UNION
( SELECT '9', '2006-11-01', '0000-00-00' as z )
ORDER BY z, Start ASC, Status DESC, Time ASC  a

Does anyone see my mistake?

Keith
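
Spelled out, a version that should run: the alias has to live in the
first SELECT (a UNION takes its column names from the first part), the
stray trailing a is dropped, and Status and Time leave the ORDER BY
because the reduced query no longer selects them:

( SELECT ID, Start, End AS z
  FROM site_calendar_v2 AS c
  WHERE Global='1' )
UNION
( SELECT '9', '2006-11-01', '0000-00-00' )
ORDER BY z, Start ASC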




--
Gerald L. Clark
Supplier Systems Corporation

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Update field concatenation

2006-11-01 Thread Rich
Hi folks.  Boy do I have an interesting one for you today.  This list  
is great.


Can I update a record using a typical update query, and have a field  
be updated to:


previous value of the field + (something I want added to the end)

so field 'flavours' starts out as: vanilla chocolate

...then I want to add strawberry to it in a single update...to get...

vanilla chocolate strawberry

Or do I have to grab the value first in a subquery or alternate  
query?  I just have multiple records to work on so I'd like to do it  
in one db call.


Cheers

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Update field concatenation

2006-11-01 Thread Edwin Cruz
Try:

Update myTable set myField = concat(myField,' new data to add') where
myField = 'some restrictions';
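
One caveat, assuming the column can be NULL: CONCAT() returns NULL as
soon as any argument is NULL. CONCAT_WS() skips NULL arguments, so a
NULL-safe variant (table and column names still illustrative) is:

Update myTable set myField = concat_ws(' ', myField, 'strawberry')
where myField is null or myField not like '%strawberry%';

The extra NOT LIKE guard also keeps the word from being appended twice
if the statement is re-run.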


Regards!



 -Original Message-
 From: Rich [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 01, 2006 06:35 p.m.
 To: Submit MySQL
 Subject: Update field concatenation
 
 
 Hi folks.  Boy do I have an interesting one for you today.  
 This list  
 is great.
 
 Can I update a record using a typical update query, and have a field  
 be updated to:
 
 previous value of the field + (something I want added to the end)
 
 so field 'flavours' starts out as: vanilla chocolate
 
 ...then I want to add strawberry to it in a single 
 update...to get...
 
 vanilla chocolate strawberry
 
 Or do I have to grab the value first in a subquery or alternate  
 query?  I just have multiple records to work on so I'd like to do it  
 in one db call.
 
 Cheers
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MyISAM vs InnoDB

2006-11-01 Thread Jochem van Dieten

On 11/1/06, mos wrote:

At 02:27 PM 11/1/2006, Jochem van Dieten wrote:


What is the big deal about a TB? Now, if you get past 20 TB you might
want to team up with one of the commercial PostgreSQL supporters
(Fujitsu, EnterpriseDB, Greenplum etc.), but Sun even sells appliances
for 100 TB PostgreSQL databases.


 There is a big difference between a 20 TB database and a 20 TB
table!  Unless you're storing huge blobs, a table of over 1TB will have
hundreds of millions of rows (billions?), and that means huge index trees
that need to be maintained.


Indexes scale with O(log n): the depth difference between 100 million
rows and 1 billion rows is maybe 10% (log of 10^9 versus 10^8 is 9
versus 8). And if you are worried about your indexes becoming
unbalanced, use a hash index instead of a B-tree. Though realistically
you would partition the data, and then your indexes get partitioned
too.



If PostgreSQL can put 20 TB into a table and
still have reasonably fast inserts and queries, then I'll take my hat off
to them.


It can if you design your queries to make use of the indexes and the
partitioning.
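
To make that concrete: PostgreSQL (as of 8.1) partitions through table
inheritance plus CHECK constraints, which let the planner skip
partitions that cannot match. A minimal sketch, names illustrative:

CREATE TABLE measurements (logdate DATE NOT NULL, reading INT);

-- One child table per year; the CHECK constraint is what enables
-- constraint exclusion.
CREATE TABLE measurements_2006 (
  CHECK (logdate >= DATE '2006-01-01' AND logdate < DATE '2007-01-01')
) INHERITS (measurements);
CREATE INDEX measurements_2006_logdate ON measurements_2006 (logdate);

SET constraint_exclusion = on;
-- Only partitions whose CHECK can satisfy the predicate are scanned:
SELECT count(*) FROM measurements WHERE logdate >= DATE '2006-10-01';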



But first I need to see proof that they can accomplish this. So if
you have any sites or white papers you'd like to share, go ahead. Keep in
mind we're talking about TB tables here, not databases.


Google the PostgreSQL and PostGIS mailing lists.

Jochem

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



help trying to add an autoincrement col to an exisiting table

2006-11-01 Thread Randy Paries

Hello,
currently i have the following table structure

CREATE TABLE users (
 uname varchar(20) NOT NULL default '',
 passwd varchar(15) NOT NULL default '',
 fname varchar(25) NOT NULL default '',
 lname varchar(40) NOT NULL default '',
 dir varchar(28) NOT NULL default '',
 pict varchar(50) NOT NULL default '',
 level int(4) NOT NULL default '0',
 email varchar(40) NOT NULL default '',
 rank int(4) NOT NULL default '0',
 dgroup int(4) NOT NULL default '0',
 parent varchar(20) NOT NULL default '',
 seminar int(11) NOT NULL default '0',
 getnewsletter int(11) default '0',
 PRIMARY KEY  (uname),
 KEY uname_users (uname),
 KEY dir_users (dir),
 KEY seminar_users (seminar),
 KEY user_lvl_idx (level)
) TYPE=MyISAM;

I want to add an autoincrement field

when i first tried i got the error

Incorrect table definition; there can be only one auto column and it
must be defined as a key

so then i tried
ALTER TABLE `users` DROP PRIMARY KEY;
ALTER TABLE `users` ADD PRIMARY KEY (id);
ALTER TABLE `users` CHANGE `id` `keyid` INT(10)  UNSIGNED NOT NULL
AUTO_INCREMENT;

and i get the error
Duplicate entry '0' for key 1

Can some one please tell me what i am doing wrong
Thanks
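
One pattern that should work (a sketch; since uname stays the primary
key, the new column only needs a key of its own, not the primary one)
is to add the column and its key in a single ALTER, so MySQL can
define and backfill the sequence in one pass:

ALTER TABLE users
  ADD COLUMN keyid INT UNSIGNED NOT NULL AUTO_INCREMENT,
  ADD UNIQUE KEY keyid_users (keyid);

Existing rows are numbered 1, 2, 3, ... in their current order.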

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How to view locks in MySQL

2006-11-01 Thread Ow Mun Heng
Under MSSQL there's a stored procedure called sp_lock which can be used.
Is there an equivalent one in MySQL?
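
For reference, MySQL (as of 4.1/5.0) exposes lock information through
SHOW statements rather than a stored procedure; a quick sketch:

SHOW FULL PROCESSLIST;   -- per-thread state, including 'Locked'
SHOW OPEN TABLES;        -- table cache entries with In_use counters
SHOW INNODB STATUS;      -- InnoDB monitor output, including lock waits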



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]