> To: mysql@lists.mysql.com
> Subject: Mysql resource limits.

Hi All.
I would like to limit resources available to a given user in MySQL. I know
that there is https://dev.mysql.com/doc/refman/5.5/en/user-resources.html,
and I also know that cgroups can be used at the operating-system level.
What are your experiences with limiting resources in MySQL? I've used Percona
eBay once developed a patch for pooled threads, on top of 5.0, to resolve this
kind of issue so they could support 10k+ sessions (a massive number of
applications need to talk to those MySQL servers).
Not sure whether it was ever merged into the main version, though.
Best regards
Zhuchao
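For reference, the per-account options from the user-resources manual page cited above come down to GRANT clauses like the following (account name and numbers are illustrative):

```sql
-- Illustrative account and limits; see the user-resources manual page.
GRANT USAGE ON *.* TO 'app'@'localhost'
    WITH MAX_QUERIES_PER_HOUR 10000
         MAX_UPDATES_PER_HOUR 1000
         MAX_CONNECTIONS_PER_HOUR 500
         MAX_USER_CONNECTIONS 20;
```

Note these are per-hour counters and a concurrent-connection cap; they do not limit CPU or memory, which is why cgroups come up in this thread.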
On 2011-4-14, at 17:59, Rei
On 14.04.2011 11:50, Johan De Meersman wrote:

----- Original Message -----
> From: "Reindl Harald"
>
> even if you have enough memory, why would you throw it away on an
> unusual connection count instead of using the RAM for the
> innodb-buffer-pool, query-cache and key-buffers?

Maybe the application doesn't have support for connection pooling and can't
net
> To: mysql@lists.mysql.com
> Subject: Re: Practical connection limits MySQL 5.1/5.5

On 13.04.2011 23:50, Jeff Lee wrote:
Hey All,
Can anyone provide some guidance as to what the practical connection limits
to MySQL 5.1/5.5 are under linux?
We're running a ruby on rails application that establishes 50 to 100
connections to our database upon startup resulting in around 1,000
persistent db connections. I
I'm getting the below warning in my event viewer:

Changed limits: max_open_files: 2048 max_connections: 1024 table_cache: 507

How can I solve it?
Thanks in advance.
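That warning means mysqld could not get enough file descriptors for the requested max_connections and table_cache, so it lowered them. A sketch of the relevant my.cnf settings (values illustrative; the OS per-process descriptor limit must also allow this):

```ini
[mysqld]
# Each connection and each open table consumes a file descriptor,
# so open_files_limit must cover both.
open_files_limit = 8192
max_connections  = 1024
table_cache      = 2048
```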
On Tue, 04 Mar 2008 08:18:08 -0500, Phil wrote:
> Just inheritance from an old design that has passed its limits.
Just checking :)
I was talking to someone about redundancy in a table and he was like
"that's good though, because there are multiple (blah, blah, blah)...but
Just inheritance from an old design that has passed its limits.
I actually have a development version which does just that, but there is a
lot of work to convert many php scripts and sql to include the new column.
It's some way away from live though, so the problem I outlined still exi
On Thu, 28 Feb 2008 11:19:40 -0500, Phil wrote:
> I have 50 plus tables lets call them A_USER, B_USER, C_USER etc which I
> daily refresh with updated (and sometimes new) data.
>
> I insert the data into a temporary table using LOAD DATA INFILE. This
> works great and is very fast.
May I ask wh
I'm trying to figure out what limits are being hit
without success.
Would certainly appreciate any pointers to look at..
Phil
On Thu, Feb 28, 2008 at 11:19 AM, Phil <[EMAIL PROTECTED]> wrote:
I'm trying to figure out which limits I'm hitting on some inserts.
I have 50-plus tables, let's call them A_USER, B_USER, C_USER etc., which I
daily refresh with updated (and sometimes new) data.
I insert the data into a temporary table using LOAD DATA INFILE. This works
great and is
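The load-then-swap pattern described above can be sketched like this (table and file names are illustrative):

```sql
-- Build the fresh copy off to the side, then swap it in atomically.
CREATE TABLE a_user_new LIKE a_user;
LOAD DATA INFILE '/tmp/a_user.txt' INTO TABLE a_user_new;
RENAME TABLE a_user TO a_user_old, a_user_new TO a_user;
DROP TABLE a_user_old;
```

RENAME TABLE swaps both names in one atomic step, so readers never see a half-loaded table.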
OK, got another piece of information.
I was running with the same 'default' mysql configuration, but
with a different data set.
when the mysql database table reached 7,285,902 records mysql
dropped the connection.
When I tried to reconnect, the mysql library core dumped.
Interesting note is th
Thanks for the responses.
I am sorry I cannot give more detailed information now. The system is
being run through tests for other issues.
But I can say that we are using a my.cnf that contains only this one
line:
set-variable=max_connections=300
There are a few processes making queries to the d
Mark Kozikowski wrote:
Hello all,
I have been using MySQL for about 5 years now in a company project.
I store a lot of data, very rapidly into the database.
Presently, I am having a problem where the MySQL server appears to
be denying a connection when I reach a database size of about
10
Hi Mark,
What is the error you're seeing in the error log file? Can you please let us know?
regards
anandkl
On Jan 21, 2008 2:27 PM, Mark Kozikowski <[EMAIL PROTECTED]> wrote:
Hello all,
I have been using MySQL for about 5 years now in a company project.
I store a lot of data, very rapidly into the database.
Presently, I am having a problem where the MySQL server appears to
be denying a connection when I reach a database size of about
10 billion bytes.
I am runnin
>
> Search speeds and CPU with MyISAM is quite good. I tried InnoDb and insert
> speeds was far too slow because of its row locking versus MyISAM's table
> locking. Some people have been able to fine tune InnoDb but it requires
> even more RAM because InnoDb works best when the entire table fits i
At 12:18 PM 2/5/2007, kalin mintchev wrote:
> Put as much memory in the machine as possible. Building indexes for a
> table of that size will consume a lot of memory and if you don't have
> enough memory, building the index will be done on the hard disk where it
> is 100x slower. I've had 100M row tables without too much problem. However
At 09:44 PM 2/4/2007, kalin mintchev wrote:
hi all...
i just wanted to ask here if somebody has experience in pushing the mysql
limits... i might have a job that needs to have a table (or a few tables)
holding about a 100 million records. that's a lot of records... is there
any limitation of some kind that wouldn't allow mysql to h
From: "kalin mintchev" <[EMAIL PROTECTED]>
To: "ViSolve DB Team" <[EMAIL PROTECTED]>
Cc:
Sent: Monday, February 05, 2007 4:07 PM
Subject: Re: mysql limits
thanks... my question was more like IF mysql can handle that amount of
records - about 100 million... and if it's jus
value can be used by
a single column itself or depends on the size of the columns.
Thanks
ViSolve DB Team.
- Original Message -
From: "kalin mintchev" <[EMAIL PROTECTED]>
To:
Sent: Monday, February 05, 2007 9:14 AM
Subject: mysql limits
hi all...
i just wanted to ask
is not much updates but mostly selects.
You are asking for trouble. Hear the voice of experience.
> So, what I wanted to learn is how much can we push it to the limits on a
> single machine with about 2 gig rams? Do you think MYSQL can handle ~
> 700-800 gigabyte on a single machi
>> constitutes this much of data has about 5 columns, and rows are about
>> 50 bytes in size, and 3 columns in this table need to be indexed.
>>
>> So, what I wanted to learn is how far we can push the limits on
>> a single machine with about 2 gigs of RAM? Do you think MySQL can handle
>> ~ 700-800 gigabytes on a single machine? And, is
Thanks for the information.
I agree with what you say.
There is just one comment I'd like to make.
You are right that the TIMESTAMP has a specific range. I am comparing
it to a date outside that range. This could cause problems.
But I strongly believe that the SQL user, who in many cases i
Ben Clewett wrote:
> C# has two DateTime constants:
>
> DateTime.MinValue = '0001-01-01 00:00:00.000'
> DateTime.MaxValue = '9999-12-31 23:59:59.999'
>
>
> MySQL really doesn't like these values, it shows warnings:
>
> +-+--+-+
Ben Clewett wrote:
(I know that TIMESTAMP has a far smaller date range than DATETIME.
But all our data has to be time-zone independent. Therefore TIMESTAMP
is the only field appropriate for our use.)
try and see if this works
SELECT * FROM a WHERE cast(t as datetime) > '0001-01-
Hi Barry,
> Well removing 'explicit' warnings for every user having problems with
> 3rd party modules would have mysql without any warnings nowadays ;)
>
> i think that your mono should get more stable.
I completely take this on board. This is a bug outside MySQL.
Warnings are very useful. Wh
Ben Clewett wrote:
Hi Barry,
This is what I get:
mysql> CREATE TABLE a ( t TIMESTAMP );
Query OK, 0 rows affected (0.25 sec)
mysql> SELECT * FROM a WHERE t > '0001-01-01 00:00:00';
Empty set, 1 warning (0.00 sec)
mysql> SHOW WARNINGS;
Ben Clewett wrote:
Hi Barry,
This will happen when comparing against a TIMESTAMP field.
CREATE TABLE a ( t TIMESTAMP );
SELECT * FROM a WHERE t > '0001-01-01 00:00:00';
Well my mysql doesn't give me any errors using that query,
nor a warning.
This "might" be a problem with windows.
Wi
Duncan Hill wrote:
On Tuesday 06 June 2006 15:38, [EMAIL PROTECTED] wrote:
Quoting Barry <[EMAIL PROTECTED]>:
Well my mysql doesn't give me any errors using that query,
nor a warning.
Ditto.
mysql> use test;
Database changed
mysql> CREATE TABLE a ( t TIMESTAMP );
Query OK, 0 rows affect
Hi Barry,
This will happen when comparing against a TIMESTAMP field.
CREATE TABLE a ( t TIMESTAMP );
SELECT * FROM a WHERE t > '0001-01-01 00:00:00';
I understand that TIMESTAMP cannot handle this date. But I would hope
to be able to compare against this date without MySQL giving the
warnin
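Putting the two suggestions from this thread side by side (using the table from the earlier posts):

```sql
CREATE TABLE a ( t TIMESTAMP );

-- '0001-01-01' is outside the TIMESTAMP range, so this warns and matches nothing:
SELECT * FROM a WHERE t > '0001-01-01 00:00:00';

-- Comparing in the DATETIME domain, as suggested earlier, avoids the warning:
SELECT * FROM a WHERE CAST(t AS DATETIME) > '0001-01-01 00:00:00';
```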
Ben Clewett wrote:
To whom it may concern,
I'm involved in lots of C# coding with several coders.
I have a gripe with MySQL which may be easy to solve in future development.
C# has two DateTime constants:
DateTime.MinValue = '0001-01-01 00:00:00.000'
DateTime.MaxValue = '9999-12-31 23:59:59.999'
> That's because ignoring cpu and query complexity isn't generally done.
> Sure, you can run a zillion queries per second if all you're doing is
> "SELECT num from table;". But really threads are limited by memory
I agree with many of your points. We tuned our per-thread buffers
appropriately a
On 5/19/06, Lyle Tagawa <[EMAIL PROTECTED]> wrote:
Given a nptl/linux box (or pthreads/freeBSD) for example, can you tell
what is the theoretical max running thread count (in the context of
paging/process scheduling and not in the context of memory sizing),
assuming that there's no configuration
Hello,
This question is for those with experience sizing their MySQL back-end.
I have one box running 3000 mysqld threads and serving 6000 qps, and it is
operating fine. The run queue is generally empty, but we observe ~20K
context-switches/s. At some point, as usage increases, the run queue
le
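The per-thread-buffer arithmetic behind this kind of sizing can be approximated directly in SQL (5.x variable names; this is a rough upper bound under the assumption that every connection allocates each per-thread buffer once, not an exact figure):

```sql
SELECT ( @@key_buffer_size
       + @@innodb_buffer_pool_size
       + @@max_connections
         * ( @@sort_buffer_size + @@join_buffer_size
           + @@read_buffer_size + @@read_rnd_buffer_size
           + @@thread_stack ) )
       / 1024 / 1024 AS approx_peak_mb;
```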
> From: Jim <[EMAIL PROTECTED]>
> Subject: Any limits on Database Size?
Hi All,
We used to use Interbase which required a new file to be assigned for every
4 gig of data stored in a DB.
Are there any issues like this in MySQL?
Thanks,
Jim
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL
than the theoretical maximum. So I wouldn't
design your database with unlimited tables in mind.
A database can hold multiple terabytes of data, but again you would
run into limits of the OS, like maximum file size. Using InnoDB you
would be able to split the tables into multiple files to
Hi, all
I have 2 questions:
1) Is there any limit on the number of tables I can create in MySQL, and
how large a database can grow?
2) Does MySQL support saving binary data files in the table? I can't just
save paths to the files; the files reside on another machine.
Thanks for reply.
X.C
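For question 2, binary data can go in a BLOB column; a minimal sketch (names illustrative):

```sql
CREATE TABLE file_store (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    data LONGBLOB NOT NULL    -- up to 4 GB per value
);
```

Keep in mind that max_allowed_packet caps how large a value a single INSERT can carry.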
Subject: InnoDB, FreeBSD and Memory: Cannot Allocate memory - raising limits
ip on both the above mentioned
page and the InnoDB OS Error Codes manual page:
http://dev.mysql.com/doc/mysql/en/operating-system-error-codes.html
My summary:
On FreeBSD 5.x, there is a hard limit compiled into the kernel by default
which limits the amount of memory a process can use to 51
SUMMARY
While I am fairly certain the problem is with FreeBSD, since this is
InnoDB/MySQL related, I thought I would post here to see if others have had
this problem. I have googled several different phrases to find this answer
-- how does FreeBSD 5 set resource limits, and how do I override
>
> On Monday 07 March 2005 10:31 am, Kevin Cowley wrote:
> > Unfortunately both limits are getting in our way.
> >
> > We have approximately 32,000 variables scattered across a number of
> > tables that we need to convert to bitmaps. The problem is that about
> >
Unfortunately both limits are getting in our way.
We have approximately 32,000 variables scattered across a number of
tables that we need to convert to bitmaps. The problem is that about
1500 of these variables need to go in a single bitmap hence the problems
with the 1024/64 column/table limit
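One possible workaround for the 1024-column limit, assuming the flags are booleans: pack 64 of them per BIGINT column (24 columns cover 1500 flags) and test bits with shifts. A sketch with illustrative names:

```sql
-- flags0 holds bits 0..63, flags1 holds bits 64..127, and so on.
CREATE TABLE bitmap_row (
    id     INT PRIMARY KEY,
    flags0 BIGINT UNSIGNED NOT NULL DEFAULT 0,
    flags1 BIGINT UNSIGNED NOT NULL DEFAULT 0
);

-- Test flag 70, i.e. bit 6 of flags1:
SELECT id FROM bitmap_row WHERE (flags1 >> 6) & 1;
```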
Does anyone know if there is a method of circumventing or changing the
default join limits of 64 tables or 1024 columns? We're running Mysql
4.1.4 using MyISAM tables
Kevin Cowley
Product Development
Alchemetrics Ltd
SMARTER DATA, FASTER
Tel: 0118 902 9000 (switchboard)
Tel: 0118 902
What does the error log say? Anything?
Donny
> -Original Message-
> From: Frank Denis (Jedi/Sector One) [mailto:[EMAIL PROTECTED]
> Sent: Friday, January 28, 2005 10:42 AM
> To: Mat
> Cc: mysql@lists.mysql.com
> Subject: Re: 2 gigs limits on MyISAM indexes?
>
>
This may be of use to you:
http://dev.mysql.com/doc/mysql/en/table-size.html
It appears that there is no limit in MySQL itself, but maybe in the
underlying operating system.
Frank Denis (Jedi/Sector One) wrote:
On Fri, Jan 28, 2005 at 04:00:24PM +, Mat wrote:
What Operating System are you run
On Fri, Jan 28, 2005 at 04:00:24PM +, Mat wrote:
> What Operating System are you running this on?
Linux 2.6, 64 bits.
MySQL 4.1.9.
> Also, is there anything in the errorlog?
Nothing, but as soon as I restart the server, it enters a strange state
where all slots are full with unauthenti
What Operating System are you running this on? Also, is there anything in the
errorlog?
Is there a limit on the size of .MYI files?
I have a database that worked flawlessly until today. I can't restart it,
it immediately freezes.
I noticed that the .MYI file of a table has reached exactly 2 gigs.
May it be related? Is there anything to do in order to recover the data
an
On Wed, 15 Dec 2004, EP wrote:
> Thomas Spahni <[EMAIL PROTECTED]> wrote:
>
> > the column type will limit the number of characters per row. A column
> > of type TEXT will hold up to 65,535 characters but with LONGTEXT you
> > can put up to 4,294,967,295 characters into one row. I have an
> > application with texts of up to 200 pages
d is probably stuck in an "indexing" paradigm, but I'd like to know
> where the limits (of Full Text search) are, if any.
>
> Can anyone advise?
>
> [Thanks!]
>
> Eric Pederson
very well.
Thomas Spahni
On Tue, 14 Dec 2004, EP wrote:
> I've looked in the documentation but didn't see any indication of the
> limits of Full-Text Search in terms of how many characters/words it can
> process per row.
>
> For example, if I have a column with 4,00
I've looked in the documentation but didn't see any indication of the limits of
Full-Text Search in terms of how many characters/words it can process per row.
For example, if I have a column with 4,000 character strings in it, can I use
it effectively in Full-Text Searching?
W
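For context, a minimal full-text setup of that era looked like the following (names illustrative; FULLTEXT then required MyISAM, and words shorter than ft_min_word_len are ignored by the index):

```sql
CREATE TABLE docs (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    body TEXT,
    FULLTEXT KEY ft_body (body)
) ENGINE=MyISAM;

SELECT id FROM docs WHERE MATCH(body) AGAINST ('search terms');
```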
I apologize for my skepticism of 15 minutes ago. I finally _read_
http://dev.mysql.com/doc/mysql/en/Table_size.html carefully, and indeed
your suggestion is dead on.
thank you again.
On Mon, 2004-07-26 at 14:19, Paul DuBois wrote:
> At 12:48 -0400 7/26/04, Michael Dykman wrote:
> >I am using a d
thank you for the suggestion, I will give that a try. I thought it
suspicious that the table stopped receiving data at 2 bytes under the
natural 4G limit (8 byte int) which was standard under 3.22. As I said,
I am using a development release and I have found 1 or 2 other
regression errors along t
You must be getting an error code when inserting now.
If that is related to index file size (that's what I had),
you can do ALTER TABLE MAX_ROWS=

On Mon, 2004-07-26 at 11:48, Michael Dykman wrote:
I am using a development build of 4.1.3 (the last 4.1.3 release I think;
mysql-4.1.3-beta-nightly-20040628) so I suppose I have this coming, but
here goes:
As I am running on RH Enterprise Server 3 with a Pentium Xeon (32-bit)
According to the documentation, for a 32 bit processor, I should be abl
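The ALTER TABLE suggested above, spelled out (table name and numbers illustrative):

```sql
-- Max_data_length in the output shows the current cap:
SHOW TABLE STATUS LIKE 'mytable';

-- Raising MAX_ROWS/AVG_ROW_LENGTH makes MyISAM use wider row pointers,
-- lifting the ~4 GB default data-file limit:
ALTER TABLE mytable MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 100;
```

Note the ALTER rebuilds the table, which can take a while at this size.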
From: "RV Tec" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 18, 2004 9:28 AM
Subject: MySQL limits.
Let's see if I can give you some ideas.
> -Original Message-
> From: RV Tec [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 18, 2004 8:28 AM
> To: [EMAIL PROTECTED]
> Subject: MySQL limits.
>
> We have a database with approximately 135 tables (MyISAM).
>
Folks, Tim,
Oops! Forgot to mention that... we are running MySQL 4.0.18.
Thanks a lot!
Best regards,
RV Tec
On Tue, 18 May 2004, Tim Cutts wrote:
>
> On 18 May 2004, at 2:28 pm, RV Tec wrote:
>
> >
> > Is MySQL able to handle such load with no problems/turbulences
> > at all? If so, what
On 18 May 2004, at 2:28 pm, RV Tec wrote:
Is MySQL able to handle such load with no problems/turbulences
at all? If so, what would be the best hardware/OS
configuration?
What is the largest DB known to MySQL community?
We regularly run databases with around 200 GB of data per instance,
Folks,
I have a couple of questions that I could not find the answer
at the MySQL docs or list archives. Hope you guys can help me.
We have a database with approximately 135 tables (MyISAM).
Most of them are small, but we have 5 tables, with 8.000.000
records. And that number is to incr
Kris Burford <[EMAIL PROTECTED]> wrote:
hi
wondering whether someone can set me straight on whether it's possible to
request a set of records from a single table with multiple conditions.
for instance, a "story" table, containing id, title, text, section and
published_date. what i would like to retrieve is the 5 most recently
pub
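Assuming the schema given in the question, the query would be along these lines:

```sql
SELECT id, title
FROM story
WHERE section = 'news'            -- any further conditions AND-ed here
ORDER BY published_date DESC
LIMIT 5;
```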
Richard Davey wrote:
Hi all,
I have what is probably a quite standard question and would love to
know how you would all approach this scenario:
I have a table in a database that has approx. 190,000 records in it.
The table is currently 128MB in size and I'm happy that it is well
constructed with no data duplication a
> our database is about 20 Gb and growing daily. so far, I still see
> nearly constant time query performance on tables with ~10M rows. I
> don't think mysql is limited by file size per se.
I guess performance depends a lot on what your tables look like, and
your hardware, obviously. From my own e
On Thu, Nov 06, 2003 at 05:08:54PM -0700, Jeff Mathis wrote:
> our database is about 20 Gb and growing daily. so far, I still see
> nearly constant time query performance on tables with ~10M rows. I don't
> think mysql is limited by file size per se.
It is limited to the extent that your operating
tabase
> to take advantage of the granularity and control we get over file access
> that way.
>
> But we already have 1.5GB, and that could lead to a very large database
> very quickly.
>
> What are people's experiences with large MySQL databases? What are the
> practical limits under Solaris 2.8?
ari
Ari Davidow
[EMAIL PROTECTED]
http://www.ivritype.com/
Sent: Thursday, October 16, 2003 17:49
Subject: LIMITS
Just do
Select ... limit 50, 15;
(if i'm wrong, see MySQL manual, Section LIMIT Syntax)
;)
Alexis
-Original Message-
From: Cummings, Shawn (GNAPs) [mailto:[EMAIL PROTECTED]
Sent: quinta-feira, 16 de Outubro de 2003 16:49
To: [EMAIL PROTECTED]
Subject: LIMITS
when I do a que
On Thu, 16 Oct 2003 11:49:29 -0400
"Cummings, Shawn (GNAPs)" <[EMAIL PROTECTED]> wrote:
> when I do a query is there a way to IGNORE the first X number of returned
> records???
>
> For instance I want to see 15 records after the first 50.
yes, use LIMIT clause:
SELECT * FROM tablename LIMIT 50, 15
when I do a query is there a way to IGNORE the first X number of returned
records???
For instance I want to see 15 records after the first 50.
Shawn Cummings
Engineering Project Manager
Global NAPs
10 Merrymount Rd
Quincy, MA 02169
Desk 617-507-5150
VoIP 617-507-3550
[EMAIL PROTECTED]
At 13:18 +0200 6/29/03, Mark Rowlands wrote:
I have a table... there's a surprise

month  protocol  port  utime
06     tcp       21    12
06     tcp       21    13
05     udp       43    100232
05     udp       21    100245

what I would like to do is select by month and by protocol but within
protocol limit to the top 5 by count of
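On a modern server (MySQL 8.0+) the top-5-per-group part can be written with a window function; versions of the era needed user variables or self-joins. Table name assumed to be t:

```sql
SELECT month, protocol, port, cnt
FROM ( SELECT month, protocol, port, COUNT(*) AS cnt,
              ROW_NUMBER() OVER (PARTITION BY month, protocol
                                 ORDER BY COUNT(*) DESC) AS rn
       FROM t
       GROUP BY month, protocol, port ) ranked
WHERE rn <= 5;
```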
Where can I learn --someplace really for dummies?-- about creating
filters and limits?
Thanks,
Ted
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Hi,
I need to set a variable limit on the MySQL file size (average row
length * number of rows).
When we insert data into the table using JDBC, I should get a
unique JDBC exception (so that I trigger an archive).
Is this possible in MySQL?
I notice that during creation of a table I can give such op
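One possible approach: for MyISAM, MAX_ROWS × AVG_ROW_LENGTH sizes the data file, and inserts past that size fail with a "table is full" error (ER_RECORD_FILE_FULL), which a JDBC driver surfaces as an SQLException that could trigger the archive. Note it is a sizing hint, not an exact row cap. Names and numbers illustrative:

```sql
CREATE TABLE log_data (
    id  INT AUTO_INCREMENT PRIMARY KEY,
    msg VARCHAR(200)
) ENGINE=MyISAM MAX_ROWS = 1000000 AVG_ROW_LENGTH = 200;
```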
From: gerald_clark [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 18, 2003 11:36 PM
To: Rob
Cc: [EMAIL PROTECTED]
Subject: Re: Limits and order bys
Sort them yourself after retrieving them.
Rob wrote:
>I have a question regarding the use of LIMIT with ORDER BY. My problem is
>as follows:
>
Sort them yourself after retrieving them.
Rob wrote:
I have a question regarding the use of LIMIT with ORDER BY. My problem is
as follows:
I want my users to be able to paginate through result sets, so I've written
some code
that will display the results of a query 15 rows at a time in a HTML
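The usual shape of such a pagination query (column names illustrative) adds a deterministic tie-breaker so page boundaries stay stable between requests:

```sql
-- Page 4 at 15 rows per page: skip 45 rows, take 15.
SELECT id, name
FROM results
ORDER BY score DESC, id    -- unique tie-breaker keeps pages from overlapping
LIMIT 45, 15;
```

Large offsets still scan and discard all skipped rows, so keyset pagination (WHERE on the last-seen sort values) scales better for deep pages.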