At 10:30 PM, Matthew Stuart wrote:
> Hi all, I have a table of products (ps_4products), and a table of
> up-to-date prices and sizes (_import_products). I am trying to replace old
> content in the table ps4_products with up-to-date content in the
> _import_products table, but I am getting errors.
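A multi-table UPDATE is the usual approach for this. Below is a minimal runnable sketch using SQLite via Python, with hypothetical sku/price/size column names (the original post doesn't show the schema); in MySQL the equivalent would typically be written as a join update, e.g. `UPDATE ps4_products p JOIN _import_products i ON p.sku = i.sku SET p.price = i.price, p.size = i.size;`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE ps4_products (sku TEXT PRIMARY KEY, price REAL, size TEXT);
CREATE TABLE _import_products (sku TEXT PRIMARY KEY, price REAL, size TEXT);
INSERT INTO ps4_products VALUES ('A1', 9.99, 'S'), ('B2', 19.99, 'M');
INSERT INTO _import_products VALUES ('A1', 12.49, 'S'), ('B2', 21.99, 'L');

-- Pull up-to-date values from the import table into the live table.
UPDATE ps4_products
SET price = (SELECT i.price FROM _import_products i
             WHERE i.sku = ps4_products.sku),
    size  = (SELECT i.size FROM _import_products i
             WHERE i.sku = ps4_products.sku)
WHERE sku IN (SELECT sku FROM _import_products);
""")
rows = cur.execute(
    "SELECT sku, price, size FROM ps4_products ORDER BY sku").fetchall()
print(rows)  # [('A1', 12.49, 'S'), ('B2', 21.99, 'L')]
```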
We have had to set max_packet_size higher on the slaves than on
the master, but we shouldn't have to do this (I don't think). Does
replication add some sort of extra header information, etc. that would
increase the packet sizes? If so, is there a formula for calculating what
the max_packet_size should be in a master/slave setup?
If you use an auto_increment column (e.g. id int auto_increment) then
your trigger could do something like this:
DELETE FROM your_table WHERE id < NEW.id - 100;
On Aug 30, 2006, at 3:49 PM, Dan Buettner wrote:
> You could accomplish this with a trigger on the table - on INSERT,
> execute a DELETE statement.
You could accomplish this with a trigger on the table - on INSERT,
execute a DELETE statement.
http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html
You would need to find a way to identify which is the "bottom" record
to be deleted ... I might use an ID column, and consider the
lowest-numbered row to be the "bottom".
Sorry for the double post.
If I want to have a table of about 100 records, and every time I insert a new
record it gets pushed on the top and the bottom one gets pushed out, sort
of like a fixed-size queue - is this possible?
I know I can just delete the record, etc., but I was wondering if there was a
built-in way to do this.
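The trigger-plus-DELETE approach suggested above can be sketched as follows. One caveat worth noting: MySQL does not allow a trigger to modify the table the trigger is defined on, so this runnable sketch uses SQLite (whose triggers can delete from their own table) with hypothetical table/column names; in MySQL the same DELETE would have to run as a separate statement (or scheduled event) after the INSERT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE log (id INTEGER PRIMARY KEY AUTOINCREMENT, msg TEXT);

-- After each insert, drop rows more than 100 ids behind the newest row.
CREATE TRIGGER trim_log AFTER INSERT ON log
BEGIN
    DELETE FROM log WHERE id < NEW.id - 100;
END;
""")
for i in range(500):
    cur.execute("INSERT INTO log (msg) VALUES (?)", (f"row {i}",))

count = cur.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 101 rows retained (ids NEW.id-100 .. NEW.id)
```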
Hi,
we have a table with many (~0.5 billion) records and a geometry field
which was defined as a simple "point". The `show table status` shows that
the row format is dynamic; however, a simple point in the GIS
representation has a fixed format (see WKB: 21 bytes: 1 for MSB/LSB, 4
for type and 2x8 for the X and Y coordinates).
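The 21-byte figure quoted for a WKB point is just the sum of the parts named in the message:

```python
# WKB encoding of a 2-D POINT, per the breakdown in the post:
byte_order = 1        # 1 byte for the MSB/LSB (byte-order) flag
geometry_type = 4     # 4 bytes for the geometry type code
coordinates = 2 * 8   # two IEEE-754 doubles: X and Y
wkb_point = byte_order + geometry_type + coordinates
print(wkb_point)  # 21
```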
InnoDB: Error: tablespace size stored in header is 877184 pages, but
InnoDB: the sum of data file sizes is 953856 pages
And Mr. Heikki tell me to do these steps:
(953856 - 877184) / 64 = 1198 MB
1) Stop the mysqld server.
2) Add a new 1198M ibdata file at the end of innodb_data_file_path.
3) When you st
Dear All,
As the subject says - actually, I have met this case before,
when I see:
InnoDB: Error: tablespace size stored in header is 877184 pages, but
InnoDB: the sum of data file sizes is 953856 pages
And Mr. Heikki told me to do these steps:
(953856 - 877184) / 64 = 1198 MB
1) Stop the mysqld server.
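The arithmetic in step (0) above follows from InnoDB's default 16 KiB page size, so 64 pages make 1 MiB:

```python
# InnoDB pages are 16 KiB by default, so 64 pages = 1 MiB.
header_pages = 877184   # tablespace size recorded in the header
file_pages = 953856     # sum of the actual data file sizes
missing_mib = (file_pages - header_pages) // 64
print(missing_mib)  # 1198
```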
Jon Drukman wrote:
> My master has two databases: channel and hardware. I'm only interested
> in replicating hardware, so I set up replicate-do-db=hardware on the slaves.
>
> However, I am having problems because of giant LOAD DATA operations
> performed nightly on channel.
i probably should have mentioned: both master & slave are running 4.0.20
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
My master has two databases: channel and hardware. I'm only interested
in replicating hardware, so I set up replicate-do-db=hardware on the slaves.
However, I am having problems because of giant LOAD DATA operations
performed nightly on channel. Replication blows up with "max packet
exceeded" errors.
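The relevant server variable here is max_allowed_packet. Note also that replicate-do-db is applied by the slave's SQL thread, so the giant LOAD DATA events for channel are still transferred into the slave's relay log even though they are never executed - which is why the slave needs a packet limit at least as large as the master's. A hedged my.cnf sketch (the 64M figure is an arbitrary example, not a recommendation):

```ini
# Set on BOTH master and slaves: the slave must be able to read the
# relayed LOAD DATA events even for databases it filters out.
[mysqld]
max_allowed_packet = 64M
```

An alternative is to filter earlier with `replicate-wild-ignore-table=channel.%`, which matches on the actual table name rather than the default database.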
> to preload a fulltext index. I've
> checked the block size of the fulltext index using myisamchk (is there
> an easier way to find out block size?) and it is 2048. The block size of
> the primary key on the same table is 1024. Is that what it means by
> "Indexes use different block sizes"?
> "Indexes use different block sizes"?
As you can see from below, I've tried to only load the fulltext index,
and the error persists. I have also tried setting the global
key_cache_block_size to 2048 and that didn't work. I have also tried
creating a separate key cache with its own block size.
Hello,
My HDD is running low and my MyISAM tables keep crashing... I think that
converting to InnoDB will be more stable, but what about the data file
sizes? Will conversion to InnoDB need more or less disk space than MyISAM?
-thanks, Eli
Sent: Tuesday, March 25, 2003 19:02
Subject: sizes
Which one is the biggest datatype:
enum('true', 'false'); OR
tinyint(1);
I mean biggest when it comes to storing them on disk.
Thank you,
Mattias.
Mattias,
> Which one is the biggest datatype:
> enum('true', 'false'); OR
> tinyint(1);
> I mean biggest when it comes to storing them on disk.
ENUM has a storage requirement of 1 or 2 bytes. If you have up to 255
ENUM values to choose from, you need 1 byte; if you have more, it will
take 2 bytes. So enum('true', 'false') and tinyint(1) are the same size
on disk: 1 byte each.
would think be a minimum to handle this DB.
And lastly:
6 - would any other DBMS (than MySQL), say commercial ones, be better
equipped to handle such data sizes?
Thanks. Any relayed experience on the subject of large databases is very
much appreciated.
best
M,
Monday, October 28, 2002, 2:00:28 PM, you wrote:
MS> At 13:36 +0200 28-10-2002, Victoria Reznichenko wrote:
>>MdB> 2) How many entries can one single table have?
>>MdB> (The absolute maximum)
>>
>>There are no limits on number of rows.
MS> What if the values for Primary Key run out?
You get "Duplicate entry" errors once the auto_increment column reaches its maximum value.
At 13:36 +0200 28-10-2002, Victoria Reznichenko wrote:
>MdB> 2) How many entries can one single table have?
>MdB> (The absolute maximum)
>
>There are no limits on number of rows.
What if the values for Primary Key run out? Can you use an unsigned BIGINT as Primary
Key?
I mean, we /have/ to store
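Yes - an unsigned BIGINT is 8 bytes, and its maximum is large enough that "running out" is a theoretical concern. A quick sketch (the 1 million inserts/second rate is a made-up illustration):

```python
# An unsigned BIGINT auto_increment key is 8 bytes wide.
unsigned_bigint_max = 2**64 - 1
print(unsigned_bigint_max)  # 18446744073709551615

# At a hypothetical sustained 1 million inserts per second:
years_to_exhaust = unsigned_bigint_max // (1_000_000 * 86_400 * 365)
print(years_to_exhaust)  # on the order of 585,000 years
```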
Michelle,
Saturday, October 26, 2002, 6:59:28 PM, you wrote:
MdB> I am trying a "worst-case-scenario" of my database.
MdB> I have, so far, come up with some numbers to do the
MdB> calculations with and perhaps a way of splitting one
MdB> big table into smaller, yet structured, tables.
MdB> (These
I am trying a "worst-case-scenario" of my database.
I have, so far, come up with some numbers to do the
calculations with and perhaps a way of splitting one
big table into smaller, yet structured, tables.
(These questions are not limited by hardware, because
if the need for speed is there, the reve
In the last episode (Jul 28), Bill Leonard said:
> I have been searching and searching, and maybe this is a 4.0 thing,
> but is there a way, on a case by case basis, to pre-define a size
> limit for a MySQL database? In other words, make one 50MB and the
> next one make 100MB on the same server?
>
I have been searching and searching, and maybe this is a 4.0 thing, but is
there a way, on a case by case basis, to pre-define a size limit for a MySQL
database? In other words, make one 50MB and the next one make 100MB on the
same server?
I've seen indications you can set the default size for all databases
> 6000 Nov  2 14:16 ibdata1
> -rw-rw----  1 mysql mysql 5242880 Nov  2 14:16 ib_logfile0
> -rw-rw----  1 mysql mysql 5242880 Nov  2 09:30 ib_logfile1
> -rw-rw----  1 mysql mysql 5242880 Nov  1 10:10 ib_logfile2
Which are exactly the sizes you specified in the config file. It's
doing exactly what you told it to.
the InnoDB format total nowhere near
1 Gig. The tables were converted with "ALTER TABLE tbl_name TYPE=INNODB;"
from the MyISAM type. Are the above file sizes normal for this type of
conversion? Is this the price for using the InnoDB format?
Thanks
In the last episode (Oct 30), Bennett Haselton said:
> I created one table with the command:
>
> CREATE TABLE pet (name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), sex
>CHAR(1), birth DATE, death DATE, id INT UNSIGNED NOT NULL);
>
> and another one with the command:
>
> CREATE TABLE pet2
I created one table with the command:
CREATE TABLE pet (name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), sex
CHAR(1), birth DATE, death DATE, id INT UNSIGNED NOT NULL);
and another one with the command:
CREATE TABLE pet2 (name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), sex
Make your auto_increment primary key something bigger than a TINYINT
> I've run into some trouble with a table not taking more than 128
> rows. I just use MySQL from my webpage, with PHP, and it's all
> done by my host, so I'm a novice here. Anything I can do to get
> the table to accept more than 128 rows?
I've run into some trouble with a table not taking more than 128 rows. I just use MySQL
from my webpage, with PHP, and it's all done by my host, so I'm a novice here. Anything
I can do to get the table to accept more than 128 rows?
thanks folks.
DarthZeth
"Men never do evil so completely and cheerfully as when they do it from religious conviction."
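The 128-row ceiling described above matches a signed TINYINT auto_increment primary key, which tops out at 127 - hence the advice to pick a wider type. The ranges follow directly from the byte widths of MySQL's integer types:

```python
# Signed integer ranges for MySQL column types (widths are standard):
# TINYINT = 1 byte, SMALLINT = 2, INT = 4, BIGINT = 8.
def signed_range(n_bytes):
    return (-2**(8 * n_bytes - 1), 2**(8 * n_bytes - 1) - 1)

print(signed_range(1))  # (-128, 127)  <- TINYINT: auto_increment stalls at 127
print(signed_range(2))  # (-32768, 32767)  SMALLINT
print(signed_range(4))  # INT
print(signed_range(8))  # BIGINT
```

Declaring the column UNSIGNED doubles the positive range (0 to 255 for TINYINT, and so on).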
Hello,
I am trying to find some information on MySQL. Could you folks help me
out? I need:
Max Database Size
Max Table Size
Max Size for a row
Max Number of rows
Max Number of Columns
Max number of indexes per table
Thanks!
Joshua Drake