Re: Size limitation of user variable?

2009-02-11 Thread Baron Schwartz
Right, my (implied) point was that you have AT LEAST one limitation --
your max_allowed_packet.  And I showed how the original author can
investigate and figure out the rest himself :)

On Wed, Feb 11, 2009 at 4:28 AM, Johan De Meersman  wrote:
> What you just tested, on the other hand, was the limit of your max_allowed_packet :-)
> Up that to something unlikely and try again :-)
>
> On Tue, Feb 10, 2009 at 9:24 PM, Baron Schwartz  wrote:
>>
>> On Tue, Feb 10, 2009 at 1:29 PM, Cantwell, Bryan
>>  wrote:
>> > I am trying to put the result of a function that returns MEDIUMTEXT into
>> > a user variable in my procedure. I haven't attempted to push the limits
>> > of the MEDIUMTEXT size, but wonder if the user variable can even handle
>> > this?
>>
>>
>> The REPEAT() function helps here:
>>
>> mysql> set @var := repeat('a', 1024 * 1024);
>> Query OK, 0 rows affected (0.05 sec)
>>
>> mysql> select length(@var);
>> +--------------+
>> | length(@var) |
>> +--------------+
>> |      1048576 |
>> +--------------+
>> 1 row in set (0.01 sec)
>>
>> So it accepts a mebibyte, let's see if we can notch that up :)
>>
>> mysql> set @var := repeat('a', 1024 * 1024 * 1024);
>> Query OK, 0 rows affected, 1 warning (0.00 sec)
>>
>> mysql> show warnings;
>>
>> +---------+------+-------------------------------------------------------------------------------+
>> | Level   | Code | Message                                                                       |
>> +---------+------+-------------------------------------------------------------------------------+
>> | Warning | 1301 | Result of repeat() was larger than max_allowed_packet (16777216) - truncated |
>> +---------+------+-------------------------------------------------------------------------------+
>> 1 row in set (0.00 sec)
>>
>> --
>> Baron Schwartz, Director of Consulting, Percona Inc.
>> Our Blog: http://www.mysqlperformanceblog.com/
>> Our Services: http://www.percona.com/services.html
>>
>>
>
>
>
> --
> Celsius is based on water temperature.
> Fahrenheit is based on alcohol temperature.
> Ergo, Fahrenheit is better than Celsius. QED.
>



-- 
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Size limitation of user variable?

2009-02-11 Thread Johan De Meersman
What you just tested, on the other hand, was the limit of your max_allowed_packet :-)
Up that to something unlikely and try again :-)
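For reference, the variable in question is max_allowed_packet, and it has to be raised on the server side (clients carry their own copy of the setting too). A minimal sketch, assuming a my.cnf-style setup; the 1G value is just an example:

```ini
# /etc/my.cnf -- server side
[mysqld]
max_allowed_packet = 1G
```

On a running server, SET GLOBAL max_allowed_packet = 1073741824; achieves the same for connections opened after the change.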

On Tue, Feb 10, 2009 at 9:24 PM, Baron Schwartz  wrote:

> On Tue, Feb 10, 2009 at 1:29 PM, Cantwell, Bryan
>  wrote:
> > I am trying to put the result of a function that returns MEDIUMTEXT into
> > a user variable in my procedure. I haven't attempted to push the limits
> > of the MEDIUMTEXT size, but wonder if the user variable can even handle
> > this?
>
>
> The REPEAT() function helps here:
>
> mysql> set @var := repeat('a', 1024 * 1024);
> Query OK, 0 rows affected (0.05 sec)
>
> mysql> select length(@var);
> +--------------+
> | length(@var) |
> +--------------+
> |      1048576 |
> +--------------+
> 1 row in set (0.01 sec)
>
> So it accepts a mebibyte, let's see if we can notch that up :)
>
> mysql> set @var := repeat('a', 1024 * 1024 * 1024);
> Query OK, 0 rows affected, 1 warning (0.00 sec)
>
> mysql> show warnings;
>
> +---------+------+-------------------------------------------------------------------------------+
> | Level   | Code | Message                                                                       |
> +---------+------+-------------------------------------------------------------------------------+
> | Warning | 1301 | Result of repeat() was larger than max_allowed_packet (16777216) - truncated |
> +---------+------+-------------------------------------------------------------------------------+
> 1 row in set (0.00 sec)
>
> --
> Baron Schwartz, Director of Consulting, Percona Inc.
> Our Blog: http://www.mysqlperformanceblog.com/
> Our Services: http://www.percona.com/services.html
>
>
>


-- 
Celsius is based on water temperature.
Fahrenheit is based on alcohol temperature.
Ergo, Fahrenheit is better than Celsius. QED.


Re: Size limitation of user variable?

2009-02-10 Thread Baron Schwartz
On Tue, Feb 10, 2009 at 1:29 PM, Cantwell, Bryan
 wrote:
> I am trying to put the result of a function that returns MEDIUMTEXT into
> a user variable in my procedure. I haven't attempted to push the limits
> of the MEDIUMTEXT size, but wonder if the user variable can even handle
> this?


The REPEAT() function helps here:

mysql> set @var := repeat('a', 1024 * 1024);
Query OK, 0 rows affected (0.05 sec)

mysql> select length(@var);
+--------------+
| length(@var) |
+--------------+
|      1048576 |
+--------------+
1 row in set (0.01 sec)

So it accepts a mebibyte; let's see if we can notch that up :)

mysql> set @var := repeat('a', 1024 * 1024 * 1024);
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+-------------------------------------------------------------------------------+
| Level   | Code | Message                                                                       |
+---------+------+-------------------------------------------------------------------------------+
| Warning | 1301 | Result of repeat() was larger than max_allowed_packet (16777216) - truncated |
+---------+------+-------------------------------------------------------------------------------+
1 row in set (0.00 sec)

-- 
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html




Size limitation of user variable?

2009-02-10 Thread Cantwell, Bryan
I am trying to put the result of a function that returns MEDIUMTEXT into
a user variable in my procedure. I haven't attempted to push the limits
of the MEDIUMTEXT size, but wonder if the user variable can even handle
this? 






Size limitation on Text field

2003-02-12 Thread Gord Muir

Thanks to all that responded to my questions about Full Text Searching. 
Being caught up in that paradigm I overlooked the simpler answer  :)

I am now experiencing a problem with pulling data from a Text (longvarchar) 
field in MySql. The data is entered into a TEXTAREA field on a web page and 
from there populated into the DB. That works fine and I can see that all the 
data is entered correctly.

On pulling the data out, however, it seems the maximum length won't go past
4095. I open an ODBC connection, do a simple SELECT, and assign the data to a
'$Doc' variable. Querying the length of the $Doc variable returns 4095,
even though the amount of data in the DB is greater.

My first thought was that communication buffers are set too low, but I'm not
sure this is correct, or how to set them higher. Do any of you gurus know how
to deal with this?
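If the page is PHP talking to the database over ODBC, one possible culprit (an assumption on my part, not confirmed in this thread) is PHP's odbc.defaultlrl setting, which caps how many bytes are fetched back from long columns and defaults to 4096:

```ini
; php.ini -- raise the long-column read limit (default is 4096 bytes)
odbc.defaultlrl = 1048576
```

The same limit can also be raised per result set by calling odbc_longreadlen() before fetching.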

TIA

Gord







-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: any size limitation as to the size of a query statement?

2001-06-21 Thread Don Read


On 21-Jun-01 Chris Petersen wrote:
>> here, @ids is the array, delete in chunks of 64:
>> while ( my(@id)= splice(@ids,0,64)) {
>>   $qry="DELETE FROM master WHERE master_id in (".join(',',@id).")";
>>   SQLQuery($qry);
>> }
> 
> I thought about this, too..  But Mark said that he needed to delete from the
> table where the ID's don't match.  If you break it into chunks, then you'd
> end up deleting everything that doesn't match that first chunk, which kills
> pretty much the entire table.
> 

I probably wasn't too clear: @ids is the array of all non-matches,
i.e. the list you want to delete (and I'm assuming id is the primary key).
@ids=GetArray("SELECT a.id
   FROM master as a LEFT JOIN new_tbl as b ON a.id=b.id 
   WHERE b.id IS NULL");

> The other option would be to create a new field in the table, something like
> "dontdeleteme TinyInt Unsigned Not Null Default 0"...  Then use your splice
> technique to do something like:
> 
> $dbh->do('UPDATE master SET dontdeleteme=0');
> while ( my(@id)= splice(@ids,0,64)) {
> $dbh->do("UPDATE master SET dontdeleteme=1 WHERE master_id in
> (".join(',',@id).")");
> }

;>
Which is _exactly_ how I use it:

#  Mark all active webmasters (traffic in the last month)
#  without busting max_allowed_packet

my @ids = GetArray("SELECT DISTINCT id FROM hits
  WHERE hitdate >= DATE_SUB(current_date, INTERVAL 1 MONTH)");

SQLQuery("UPDATE webmaster SET active=0");
while ( my(@id) = splice(@ids, 0, 64) ) {
  $qry = "UPDATE webmaster SET active=1
    WHERE wmid IN (" . join(',', @id) . ")";
  SQLQuery($qry);
}
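For readers more comfortable in Python, the splice-into-chunks pattern translates roughly as follows (illustrative only; the chunk size of 64 is as arbitrary here as in the Perl, and the table/column names are the ones from the snippet above):

```python
def chunks(ids, size=64):
    """Yield successive slices of ids, like Perl's splice(@ids, 0, 64)."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def update_statements(ids, size=64):
    # One UPDATE ... IN (...) per chunk, so no single statement
    # outgrows max_allowed_packet.
    return [
        "UPDATE webmaster SET active=1 WHERE wmid IN (%s)"
        % ",".join(str(i) for i in chunk)
        for chunk in chunks(ids, size)
    ]

stmts = update_statements(list(range(1, 131)))   # 130 ids -> 3 chunks
```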


Regards,
-- 
Don Read   [EMAIL PROTECTED]
-- It's always darkest before the dawn. So if you are going to 
   steal the neighbor's newspaper, that's the time to do it.





Re: any size limitation as to the size of a query statement?

2001-06-21 Thread Chris Petersen

> here, @ids is the array, delete in chunks of 64:
> while ( my(@id)= splice(@ids,0,64)) {
>   $qry="DELETE FROM master WHERE master_id in (".join(',',@id).")";
>   SQLQuery($qry);
> }

I thought about this, too..  But Mark said that he needed to delete from the
table where the ID's don't match.  If you break it into chunks, then you'd
end up deleting everything that doesn't match that first chunk, which kills
pretty much the entire table.

The other option would be to create a new field in the table, something like
"dontdeleteme TinyInt Unsigned Not Null Default 0"...  Then use your splice
technique to do something like:

$dbh->do('UPDATE master SET dontdeleteme=0');
while ( my(@id) = splice(@ids, 0, 64) ) {
    $dbh->do("UPDATE master SET dontdeleteme=1 WHERE master_id IN ("
             . join(',', @id) . ")");
}
$dbh->do('DELETE FROM master WHERE dontdeleteme=0');


I have no idea if this is slower or not, but I do use a similar technique on
a client's database.  Granted, that one is about 6800 records, not 68000.
But it's still another thought.
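The mark-then-sweep idea can be sketched end-to-end; this is an illustrative Python version using the built-in sqlite3 module in place of MySQL, with invented table contents:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (master_id INTEGER PRIMARY KEY,"
             " dontdeleteme INTEGER NOT NULL DEFAULT 0)")
conn.executemany("INSERT INTO master (master_id) VALUES (?)",
                 [(i,) for i in range(1, 11)])

keep_ids = [2, 3, 5, 7]                        # ids that appeared in the upload file

conn.execute("UPDATE master SET dontdeleteme = 0")      # reset all flags
for i in range(0, len(keep_ids), 64):                   # chunked, as in the thread
    chunk = keep_ids[i:i + 64]
    marks = ",".join("?" * len(chunk))
    conn.execute("UPDATE master SET dontdeleteme = 1"
                 " WHERE master_id IN (%s)" % marks, chunk)

conn.execute("DELETE FROM master WHERE dontdeleteme = 0")  # the sweep
survivors = [r[0] for r in
             conn.execute("SELECT master_id FROM master ORDER BY master_id")]
```

Only the rows flagged in the mark pass survive the sweep, however many chunks the mark pass takes.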

-Chris






RE: any size limitation as to the size of a query statement?

2001-06-21 Thread Mark A. Sharkey

Thanks, Don.  However, this is sort of the opposite of what we need.  We
need to delete from master where master_id  NOT IN the array.  If we do that
in chunks, after the first chunk, we would have wiped out all of the records
outside the array.

It seems like with Chris's suggestion, along with Bill's suggestion about
the max_allowed_packet setting, we should be okay.  However, if there are
still more ideas out there, I'd love to hear them!

Mark



> -Original Message-
> From: Don Read [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, June 21, 2001 8:46 AM
> To: Chris Petersen
> Cc: [EMAIL PROTECTED]; Mark A. Sharkey
> Subject: Re: any size limitation as to the size of a query statement?
>
>
>
> On 21-Jun-01 Chris Petersen wrote:
> > Why not do something like:
> >
> > my $q = "DELETE FROM master WHERE master_id NOT IN (";
> > $q .= join(',', @masteridarray);
> > $q .= ')';
> > $dbi_dbh->do($q);
> >
> > Though this will obviously fall into the same size limitations
> that you were
> > doing, but it should execute a lot faster, and be a bit smaller
> than your
> > example.  (I don't have an answer for this, being rather new to
> the mysql
> > community myself).
> >
>
> here, @ids is the array, delete in chunks of 64:
>
>   while ( my(@id)= splice(@ids,0,64)) {
> $qry="DELETE FROM master WHERE master_id in (".join(',',@id).")";
> SQLQuery($qry);
>   }
>
>
> Adjust '64' as needed..
>
> Regards,
> --
> Don Read   [EMAIL PROTECTED]
> -- It's always darkest before the dawn. So if you are going to
>steal the neighbor's newspaper, that's the time to do it.
>






Re: any size limitation as to the size of a query statement?

2001-06-21 Thread Don Read


On 21-Jun-01 Chris Petersen wrote:
> Why not do something like:
> 
> my $q = "DELETE FROM master WHERE master_id NOT IN (";
> $q .= join(',', @masteridarray);
> $q .= ')';
> $dbi_dbh->do($q);
> 
> Though this will obviously fall into the same size limitations that you were
> doing, but it should execute a lot faster, and be a bit smaller than your
> example.  (I don't have an answer for this, being rather new to the mysql
> community myself).
> 

here, @ids is the array, delete in chunks of 64:

  while ( my(@id)= splice(@ids,0,64)) {
$qry="DELETE FROM master WHERE master_id in (".join(',',@id).")";
SQLQuery($qry);
  }


Adjust '64' as needed..

Regards,
-- 
Don Read   [EMAIL PROTECTED]
-- It's always darkest before the dawn. So if you are going to 
   steal the neighbor's newspaper, that's the time to do it.





Re: Re: any size limitation as to the size of a query statement?

2001-06-21 Thread Bill Marrs

My code generates a large query.  I had to increase the size of
"max_allowed_packet" so that this query would work.

I have this in my /etc/my.cnf file:

set-variable = max_allowed_packet=2097152

-bill






Re: any size limitation as to the size of a query statement?

2001-06-21 Thread Chris Petersen

Why not do something like:

my $q = "DELETE FROM master WHERE master_id NOT IN (";
$q .= join(',', @masteridarray);
$q .= ')';
$dbi_dbh->do($q);

This will obviously run into the same size limitation you were hitting,
but it should execute a lot faster and be a bit smaller than your
example.  (I don't have an answer for the limit itself, being rather new
to the mysql community myself.)

-Chris

> Hello,
> 
> I have a table with approximately 68,000 records.  I then get a file
> uploaded to the server with approximately 68,000 master_id's listed in it.
> If the master_id's in our MySQL table are not one of the master_id's listed
> in the file, they need to be deleted from the MySQL table.
> 
> Right now, a Perl script reads the master_id's from the uploaded file, and
> puts them all into an array in memory.  We then loop through all the
> master_id's from the MySQL table.  If the MySQL master_id is not in the
> array of master_id's from the file, then the record is deleted.  This works
> fine if we are using a small amount of records, but there are over 68,000 of
> them.  It seems like there should be a better, more efficient way of doing
> this.
> 
> One thought that I had, was to create one big huge DELETE statement.
> Something like this:
> 
>   $q = "DELETE FROM master";
> 
>   $where = " WHERE ( ";
>   $morethan1 = 0;
>   foreach (@masteridarray) { # this is our 68,000 valid id's
>      if ($morethan1) {
>          $where .= " AND master_id != $_ ";
>      } else {
>          $where .= " master_id != $_ ";
>          $morethan1 = 1;
>      }
>   }
>   $where .= ")";
>   if ($morethan1) {
>   $q .= $where;
>   }
> 
>   my $sth = $dbi_dbh->prepare($q);
>   $sth->execute;
> 
> However, I wasn't sure if MySQL could process something like this.  And if
> it could, how much bigger could the query statement get (the number of
> records in the table will easily exceed 100,000 before the end of the year,
> and continue to grow even bigger next year)?
> 
> Another thought/question was whether MySQL had some mechanism for searching
> through the upload file itself.  Something along the lines of, "DELETE FROM
> master where master_id != [any of the masterid's in file ./id.txt]".
> 
> The number of records in this table is only going to grow over the coming
> months, so, I want to make sure that I have a solution that is scalable.
> Any/all suggestions will be greatly appreciated!
> 
> Mark
> 
> 
> Imagine your own dedicated server, but ...
> without all the hassles of managing it yourself!
> Managed Dedicated Servers
> http://www.ServerPros.com
> 
> --
> Mark A. Sharkey
> PrecisionPros.com Network
> 6543 East Omega Street
> Mesa, Arizona 85215
> 800 844 4434 toll free
> 480 461 9765 local
> 480 461 9312 fax
> 
> 
> 
> 






any size limitation as to the size of a query statement?

2001-06-20 Thread Mark A. Sharkey

Hello,

I have a table with approximately 68,000 records.  I then get a file
uploaded to the server with approximately 68,000 master_id's listed in it.
If the master_id's in our MySQL table are not one of the master_id's listed
in the file, they need to be deleted from the MySQL table.

Right now, a Perl script reads the master_id's from the uploaded file, and
puts them all into an array in memory.  We then loop through all the
master_id's from the MySQL table.  If the MySQL master_id is not in the
array of master_id's from the file, then the record is deleted.  This works
fine if we are using a small amount of records, but there are over 68,000 of
them.  It seems like there should be a better, more efficient way of doing
this.

One thought that I had, was to create one big huge DELETE statement.
Something like this:

$q = "DELETE FROM master";

$where = " WHERE ( ";
$morethan1 = 0;
foreach (@masteridarray) { # this is our 68,000 valid id's
   if ($morethan1) {
       $where .= " AND master_id != $_ ";  # AND: a row is deleted only if it matches none of the ids
   } else {
       $where .= " master_id != $_ ";
       $morethan1 = 1;
   }
}
$where .= ")";
if ($morethan1) {
$q .= $where;
}

my $sth = $dbi_dbh->prepare($q);
$sth->execute;

However, I wasn't sure if MySQL could process something like this.  And if
it could, how much bigger could the query statement get (the number of
records in the table will easily exceed 100,000 before the end of the year,
and continue to grow even bigger next year)?

Another thought/question was whether MySQL had some mechanism for searching
through the upload file itself.  Something along the lines of, "DELETE FROM
master where master_id != [any of the masterid's in file ./id.txt]".
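That idea is essentially workable: load the file's ids into a second table, then delete with an anti-join. A sketch (Python's built-in sqlite3 stands in for MySQL here, and the keep_ids table is invented for the demo; in MySQL you could populate it with LOAD DATA INFILE, and later versions also offer the multi-table DELETE ... LEFT JOIN form):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (master_id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE keep_ids (master_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO master VALUES (?)", [(i,) for i in range(1, 9)])
conn.executemany("INSERT INTO keep_ids VALUES (?)", [(i,) for i in (1, 4, 8)])

# Delete every master row whose id is absent from the uploaded list.
conn.execute("DELETE FROM master"
             " WHERE master_id NOT IN (SELECT master_id FROM keep_ids)")
remaining = [r[0] for r in
             conn.execute("SELECT master_id FROM master ORDER BY master_id")]
```

This keeps the statement a fixed size no matter how many ids the upload file contains, which addresses the scalability worry directly.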

The number of records in this table is only going to grow over the coming
months, so, I want to make sure that I have a solution that is scalable.
Any/all suggestions will be greatly appreciated!

Mark


Imagine your own dedicated server, but ...
without all the hassles of managing it yourself!
Managed Dedicated Servers
http://www.ServerPros.com

--
Mark A. Sharkey
PrecisionPros.com Network
6543 East Omega Street
Mesa, Arizona 85215
800 844 4434 toll free
480 461 9765 local
480 461 9312 fax







RE: size limitation

2001-03-26 Thread Pascal THIVENT

Hi,

For our benchmarks, the MySQL server is a Sun Enterprise 3500 with 4
UltraSPARC processors, 6 GB of RAM, and 3 disks of 40 GB each.
MySQL runs under Solaris 7.


> -----Original Message-----
> From: Jeremy D. Zawodny [mailto:[EMAIL PROTECTED]]
> Sent: Friday, March 23, 2001 19:11
> To: Pascal THIVENT
> Cc: [EMAIL PROTECTED]
> Subject: Re: size limitation
> 
> 
> On Fri, Mar 23, 2001 at 11:33:42AM +0100, Pascal THIVENT wrote:
> >  
> > I've got to manage one database.
> > This database is made of one table.
> > This table will grow very quickly (we evaluate that we'll 
> have to make
> > arround 1.000.000.000 inserts per day).
> 
> What sort of hardware are you planning to use?
> -- 
> Jeremy D. Zawodny, <[EMAIL PROTECTED]>
> Technical Yahoo - Yahoo Finance
> Desk: (408) 328-7878Fax: (408) 530-5454
> Cell: (408) 439-9951
> 





RE: size limitation

2001-03-26 Thread Sinisa Milivojevic

Pascal THIVENT writes:
 > Hi, 
 > 
 > the database I would like to managed is constitued by one table.
 > This table is made of 3 columns, with no primary key
 > 
 > mysql> CREATE TABLE test (
 >  id1 int(8) unsigned,
 >  id2 int(4) unsigned,
 >  id3 int(4) unsigned
 >   );
 > 
 > mysql> CREATE INDEX idx_id1 ON test (id1);
 > mysql> CREATE INDEX idx_id2 ON test (id2);
 > mysql> CREATE INDEX idx_id3 ON test (id3);
 > 
 > We use 3 indexes because our benchmark shown that it's better than an index
 > on the 3 columns for our application.
 > 
 > 
 > mysql> desc test;
 > +-------+-----------------+------+-----+---------+-------+
 > | Field | Type            | Null | Key | Default | Extra |
 > +-------+-----------------+------+-----+---------+-------+
 > | id1   | int(8) unsigned |      | MUL | 0       |       |
 > | id2   | int(4) unsigned |      | MUL | 0       |       |
 > | id3   | int(4) unsigned |      | MUL | 0       |       |
 > +-------+-----------------+------+-----+---------+-------+
 > 
 > 
 > Here are the request we make on this table :
 > 
 > SELECT COUNT(*) FROM test WHERE id1=? AND id2=?;
 > SELECT COUNT(*) FROM test WHERE id1=? AND id2=? AND id3=?;
 > INSERT INTO test (id1, id2, id3) VALUES (?,?,?);
 > 
 > 
 > The row size is 16 bytes. So we'll get 16 gigabytes of datas per day (it
 > will be around 10 gigabyte in reality because only 66% of the request need
 > an insert).
 > 
 > The table is going to grow until we'll drop it (because of storage
 > limitations). 10 days of lifetime for our datas will be enough :)
 > We make no update in order to be very quick.


Hi!

Consider using smallint instead of int for the second and third columns.

This will make the entire row size 8 bytes. That is for data only, not
counting index space, which is more complicated.

You make no updates in order to be quick, but maybe with updates your
table could last longer. Much longer.


Regards,

Sinisa

Sinisa Milivojevic                MySQL AB, Developers Team
mailto:[EMAIL PROTECTED]          Larnaca, Cyprus







RE: size limitation

2001-03-26 Thread Pascal THIVENT

Hi, 

the database I would like to manage consists of one table.
This table is made of 3 columns, with no primary key:

mysql> CREATE TABLE test (
id1 int(8) unsigned,
id2 int(4) unsigned,
id3 int(4) unsigned
 );

mysql> CREATE INDEX idx_id1 ON test (id1);
mysql> CREATE INDEX idx_id2 ON test (id2);
mysql> CREATE INDEX idx_id3 ON test (id3);

We use 3 separate indexes because our benchmarks showed this to be better
than a single index on all 3 columns for our application.


mysql> desc test;
+-------+-----------------+------+-----+---------+-------+
| Field | Type            | Null | Key | Default | Extra |
+-------+-----------------+------+-----+---------+-------+
| id1   | int(8) unsigned |      | MUL | 0       |       |
| id2   | int(4) unsigned |      | MUL | 0       |       |
| id3   | int(4) unsigned |      | MUL | 0       |       |
+-------+-----------------+------+-----+---------+-------+


Here are the requests we make on this table:

SELECT COUNT(*) FROM test WHERE id1=? AND id2=?;
SELECT COUNT(*) FROM test WHERE id1=? AND id2=? AND id3=?;
INSERT INTO test (id1, id2, id3) VALUES (?,?,?);


The row size is 16 bytes, so we'll get 16 gigabytes of data per day (around
10 gigabytes in reality, because only 66% of the requests need an insert).

The table is going to grow until we drop it (because of storage
limitations). Ten days of lifetime for our data will be enough :)
We make no updates in order to be very quick.
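As a sanity check of those figures (plain arithmetic; the 66% ratio is taken from the paragraph above):

```python
ROW_BYTES = 16                         # 8 + 4 + 4 byte columns, data only
INSERTS_PER_DAY = 1_000_000_000

raw_per_day = ROW_BYTES * INSERTS_PER_DAY      # if every request inserted
effective = raw_per_day * 0.66                 # only ~66% of requests insert
raw_gib = raw_per_day / 2**30                  # same figure in GiB
```

So roughly 16 GB/day raw, about 10.5 GB/day effective, and that is before index space.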


> -----Original Message-----
> From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, March 24, 2001 15:31
> To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: size limitation
> 
> 
> Pascal,
> 
> could you describe what indexes you should have on the data, and
> how you will add to it and delete from it?
> 
> >In order to choose a DBMS, i've to check to MySQL is able to 
> support our
> >requirements. I've got to manage one database.
> >This database is made of one table.
> >This table will grow very quickly (we evaluate that we'll 
> have to make
> >arround 1.000.000.000 inserts per day). 
> 
> Do you really mean one billion inserts per day? 1000 million inserts?
> If the row size is 100 bytes, you will get new data 100 gigabytes
> per day. If you have a terabyte of disk, you can only fit 10 days'
> data in it.
> 
> >I want to be sure that MySQL is able to manage table bigger 
> than 1 TeraOctet.
> >Does anyone have already use in similar condition ? 
> >I read that innobase table allows to store a table on 
> several oracle-like
> >"tablespace"  that can be created on several disks. I think 
> we can put the
> >indexes on other disk too (using symbolic links). 
> 
> An Innobase tablespace can span 4 billion database pages, that is,
> 64 terabytes with the default page size of 16 kB.
> 
> A single file of a tablespace can currently hold 2 GB or 4 GB
> depending on your OS. You would need some 500 data files in your
> tablespace.
> 
> You cannot currently instruct Innobase where to put your data
> and index trees (all go into the same tablespace). But, if your table
> is so huge, both the data and the indexes will lie on several disks,
> since a single disk can store max 80 GB of data today, I think.
> 
> If you have 1 billion insertions per day, that is 12 000 inserts per
> second around the clock. The CPU usage of Innobase would allow
> you to do this many insertions. The problem can be the communications
> overhead between your client process and the MySQL server. It might
> use even 100 microseconds per insert. You can reduce this overhead by
> using the multiple INSERT INTO ... VALUES (...), (...), (...)
> insertion syntax of MySQL.
> 
> Another problem is the number of disks you need: if you have to make
> 12 000 random insertions per second, you need a disk farm of
> some 200-500 separate disks. The operating system will use some CPU
> for each disk i/o. If you have to read and write 12 000 16 kB pages
> each second, you need a bandwidth of 400 MB/second in your disk
> channel. These numbers are similar to the largest TPC-C benchmarks
> run today. The hardware will be expensive. You can look at
> www.tcp.org, where you find descriptions of the benchmark systems
> and their prices.
> 
> >I would like to know what are the requirement to store so 
> much records. 
> >Does anyome have reference of site that manage a table 
> bigger than 1 tera ?
> 
> The largest databases in the world are of the order 4 - 40 TB.
> I think many of them use Teradata database machines. Some use
> Oracle.
> 
> >Did someone try that ? I hope it's possible.
> >Otherwise, i'll have to choose Oracle :( and Oracle is not cheap.

Re: size limitation

2001-03-24 Thread Jeremy D . Zawodny

On Sat, Mar 24, 2001 at 05:30:52PM +0200, Heikki Tuuri wrote:
>
> MySQL/Innobase may be able to do what you need, but we should have a
> real-world test of using a big disk farm. I do not know if any MySQL
> user uses a disk farm. Maybe Jeremy Zawodny knows?

I don't know of anyone doing that yet... But I'm sure they're out
there somewhere.

Jeremy
-- 
Jeremy D. Zawodny, <[EMAIL PROTECTED]>
Technical Yahoo - Yahoo Finance
Desk: (408) 328-7878Fax: (408) 530-5454
Cell: (408) 439-9951





Re: size limitation

2001-03-24 Thread Heikki Tuuri

Pascal,

could you describe what indexes you should have on the data, and
how you will add to it and delete from it?

>In order to choose a DBMS, i've to check to MySQL is able to support our
>requirements. I've got to manage one database.
>This database is made of one table.
>This table will grow very quickly (we evaluate that we'll have to make
>arround 1.000.000.000 inserts per day). 

Do you really mean one billion inserts per day? 1000 million inserts?
If the row size is 100 bytes, you will get new data 100 gigabytes
per day. If you have a terabyte of disk, you can only fit 10 days'
data in it.

>I want to be sure that MySQL is able to manage table bigger than 1 TeraOctet.
>Does anyone have already use in similar condition ? 
>I read that innobase table allows to store a table on several oracle-like
>"tablespace"  that can be created on several disks. I think we can put the
>indexes on other disk too (using symbolic links). 

An Innobase tablespace can span 4 billion database pages, that is,
64 terabytes with the default page size of 16 kB.

A single file of a tablespace can currently hold 2 GB or 4 GB
depending on your OS. You would need some 500 data files in your
tablespace.

You cannot currently instruct Innobase where to put your data
and index trees (all go into the same tablespace). But, if your table
is so huge, both the data and the indexes will lie on several disks,
since a single disk can store max 80 GB of data today, I think.

If you have 1 billion insertions per day, that is 12 000 inserts per
second around the clock. The CPU usage of Innobase would allow
you to do this many insertions. The problem can be the communications
overhead between your client process and the MySQL server. It might
use even 100 microseconds per insert. You can reduce this overhead by
using the multiple INSERT INTO ... VALUES (...), (...), (...)
insertion syntax of MySQL.
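The batched-VALUES form Heikki mentions can be sketched like this (Python's built-in sqlite3 stands in for MySQL in the demo; the statement it builds is the multi-row INSERT INTO ... VALUES (...),(...),(...) syntax, and the table is invented):

```python
import sqlite3

rows = [(1, 10, 100), (2, 20, 200), (3, 30, 300)]

# Build one multi-row statement: INSERT INTO test ... VALUES (...),(...),(...)
sql = ("INSERT INTO test (id1, id2, id3) VALUES "
       + ",".join(["(?,?,?)"] * len(rows)))
flat = [v for row in rows for v in row]       # flatten values for the placeholders

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id1, id2, id3)")
conn.execute(sql, flat)                       # one round trip, three rows
count = conn.execute("SELECT COUNT(*) FROM test").fetchone()[0]
```

One round trip per batch instead of one per row is exactly what reduces the per-insert communications overhead described above.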

Another problem is the number of disks you need: if you have to make
12 000 random insertions per second, you need a disk farm of
some 200-500 separate disks. The operating system will use some CPU
for each disk i/o. If you have to read and write 12 000 16 kB pages
each second, you need a bandwidth of 400 MB/second in your disk
channel. These numbers are similar to the largest TPC-C benchmarks
run today. The hardware will be expensive. You can look at
www.tpc.org, where you find descriptions of the benchmark systems
and their prices.

>I would like to know what are the requirement to store so much records. 
>Does anyome have reference of site that manage a table bigger than 1 tera ?

The largest databases in the world are of the order 4 - 40 TB.
I think many of them use Teradata database machines. Some use
Oracle.

>Did someone try that ? I hope it's possible.
>Otherwise, i'll have to choose Oracle :( and Oracle is not cheap.
>Regards, --

MySQL/Innobase may be able to do what you need, but we would need
a real-world test with a big disk farm. I do not know whether any
MySQL user runs such a disk farm. Maybe Jeremy Zawodny knows?

Regards,

Heikki

>Pascal Thivent
>Groupe SQLI
>Web : http://www.sqli.com
>Mail : [EMAIL PROTECTED]
>Tel : +33 (0)1 55 93 25 39


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: size limitation

2001-03-23 Thread Jeremy D . Zawodny

On Fri, Mar 23, 2001 at 11:33:42AM +0100, Pascal THIVENT wrote:
>  
> I've got to manage one database.
> This database is made of one table.
> This table will grow very quickly (we estimate that we'll have to make
> around 1,000,000,000 inserts per day).

What sort of hardware are you planning to use?
-- 
Jeremy D. Zawodny, <[EMAIL PROTECTED]>
Technical Yahoo - Yahoo Finance
Desk: (408) 328-7878Fax: (408) 530-5454
Cell: (408) 439-9951





size limitation

2001-03-23 Thread Pascal THIVENT

In order to choose a DBMS, I have to check whether MySQL is able to support
our requirements.
 
I've got to manage one database.
This database is made of one table.
This table will grow very quickly (we estimate that we'll have to make
around 1,000,000,000 inserts per day).
 
I want to be sure that MySQL is able to manage a table bigger than
1 terabyte.
Has anyone already used it under similar conditions?
 
I read that Innobase tables allow storing a table on several Oracle-like
"tablespaces" that can be created on several disks. I think we can put the
indexes on other disks too (using symbolic links).
 
I would like to know what the requirements are to store so many records.

Does anyone have a reference to a site that manages a table bigger than 1 TB?
Has anyone tried that?
 
I hope it's possible.
Otherwise, I'll have to choose Oracle :( and Oracle is not cheap.
 
Regards,

--
Pascal Thivent

Groupe SQLI
Web :   http://www.sqli.com 
Mail :   [EMAIL PROTECTED]
Tel : +33 (0)1 55 93 25 39








 





database size limitation #3

2001-01-15 Thread marko milovanovic

I'm sorry to have to ask the same question for the third time, but I'm
quite sure there is an answer, whether yes or no!

Is it possible, at installation or configuration time, to set a
limit on the size of the databases created?

I.e., if I give each user one database, I don't want any user to have a
database greater than, say, 50 MB, so that a user who puts too many
records into one of her tables won't be able to fill up the disk, which
isn't large enough!

The limitation on table size is not what I'm looking for, nor is
something like Linux quotas.

Any help will be greatly appreciated!

thanks

marko





Re: databases size limitation #2

2001-01-13 Thread marko milovanovic

David Hodgkinson wrote:
> 
> marko milovanovic <[EMAIL PROTECTED]> writes:

> > How would it be possible to set a limitation on the size of the
> > databases at creation time so that a user couldn't build any
> > database greater than, say, 20 MB?

> Smells like using quotas at the OS level to me...

Actually, I wouldn't like to use this method, which I find unsuitable
for my purpose...
but anyway thanks for responding!

marko

--
> Dave Hodgkinson, http://www.hodgkinson.org
> Editor-in-chief, The Highway Star   http://www.deep-purple.com
>   Apache, mod_perl, MySQL, Sybase hired gun for, well, hire
>   -





Re: databases size limitation #2

2001-01-13 Thread marko milovanovic

Tõnu Samuel wrote:
> 
> marko milovanovic wrote:
> >
> > Uh! I already posted this message a while ago!
> > Could anyone please help me?
> >
> > I could find some information about increasing the size of the
> > databases, but not about limitations on their size.
> >
> > How would it be possible to set a limitation on the size of the
> > databases at creation time so that a user couldn't build any
> > database greater than, say, 20 MB?
> >
> > I just couldn't find this information anywhere!
> 
> Everything is in the manual.

But restricting the size?
I haven't come across it!
Sorry...
 
> I have seen MySQL databases in the terabyte range, and the theoretical
> limit is much further away. But there are some limitations in operating
> systems, so check the manual to see whether it suits your needs.

In the manual I could find information about increasing the size, and
about the possibility of huge databases in the terabyte range, but not
how to restrict their size!

Think about a small HDD with lots of users!
My question is: is there a way, at the CREATE DATABASE level, to limit a
database's size or number of rows without the user having to take care of it?

I'll really appreciate any help regarding this!

Thank you!

marko

> --
> MySQL Development Team
>__  ___ ___   __
>   /  |/  /_ __/ __/ __ \/ /   Tonu Samuel <[EMAIL PROTECTED]>
>  / /|_/ / // /\ \/ /_/ / /__  MySQL AB, http://www.mysql.com/
> /_/  /_/\_, /___/\___\_\___/  Tallinn, Estonia
><___/

-- 
___
 marko milovanovic   - chef de projet
 i (france) - 121 rue du vieux pont de sèvres
 92100 boulogne billancourt -  01 46 94 99 66
 http://www.ifrance.com -  [EMAIL PROTECTED]
___





Re: databases size limitation #2

2001-01-13 Thread Tõnu Samuel

marko milovanovic wrote:
> 
> Uh! I already posted this message a while ago!
> Could anyone please help me?
>
> I could find some information about increasing the size of the
> databases, but not about limitations on their size.
>
> How would it be possible to set a limitation on the size of the
> databases at creation time so that a user couldn't build any
> database greater than, say, 20 MB?
>
> I just couldn't find this information anywhere!

Everything is in the manual.

I have seen MySQL databases in the terabyte range, and the theoretical
limit is much further away. But there are some limitations in operating
systems, so check the manual to see whether it suits your needs.

-- 
MySQL Development Team
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Tonu Samuel <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, http://www.mysql.com/
/_/  /_/\_, /___/\___\_\___/  Tallinn, Estonia
   <___/





Re: databases size limitation #2

2001-01-13 Thread David Hodgkinson

marko milovanovic <[EMAIL PROTECTED]> writes:

> Uh! I already posted this message a while ago!
> Could anyone please help me?
>
> I could find some information about increasing the size of the
> databases, but not about limitations on their size.
>
> How would it be possible to set a limitation on the size of the
> databases at creation time so that a user couldn't build any
> database greater than, say, 20 MB?
>
> I just couldn't find this information anywhere!

Smells like using quotas at the OS level to me...

-- 
Dave Hodgkinson, http://www.hodgkinson.org
Editor-in-chief, The Highway Star   http://www.deep-purple.com
  Apache, mod_perl, MySQL, Sybase hired gun for, well, hire
  -





databases size limitation #2

2001-01-13 Thread marko milovanovic

Uh! I already posted this message a while ago!
Could anyone please help me?

I could find some information about increasing the size of the
databases, but not about limitations on their size.

How would it be possible to set a limitation on the size of the
databases at creation time so that a user couldn't build any
database greater than, say, 20 MB?

I just couldn't find this information anywhere!

thanks in advance!

marko
