ms.) With MySQL Cluster, will MySQL finally start using the
memory paging trick Oracle and others have been using for years?
Otherwise, what is the point of having 16 gigs of ram for one MySQL server?
Thanks,
Adam Erickson
> Finally, if anyone has or knows of a good phone directory already,
> please point me in the right direction.
Have you considered LDAP? Run it over SSL/TLS - that should calm any
security concerns. Outlook (and most any other mail agent, for that
matter) can hook into it for address book lookups.
> We have a perl cgi program that refreshes every 45 seconds and connects
> to a mysql database to update records, over time. This could have 300
Cache the page for 45 seconds. Subsequent hits see the output from the first
request and do not require a DB connection. Release the cache after 45
seconds.
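Something along these lines is roughly what I mean (untested sketch; the
cache path and the regenerate_page() routine are just placeholders):

#!/usr/bin/perl
# Rough sketch of 45-second page caching for a CGI script.  The cache
# path and the regenerate_page() routine are placeholders.
use strict;

my $cache_file = "/tmp/status_page.cache";    # made-up location
my $max_age    = 45;                          # seconds

print "Content-type: text/html\n\n";

if (-e $cache_file && (time() - (stat($cache_file))[9]) < $max_age) {
    # Cache is still fresh - serve it, no DB connection needed.
    open(CACHE, "<$cache_file") or die "can't read cache: $!";
    print while <CACHE>;
    close(CACHE);
} else {
    # Rebuild the page (this is where the MySQL work happens),
    # save it for the next 45 seconds, then send it.
    my $html = regenerate_page();
    open(CACHE, ">$cache_file") or die "can't write cache: $!";
    print CACHE $html;
    close(CACHE);
    print $html;
}

sub regenerate_page {
    # Placeholder for the real queries and HTML generation.
    return "<html><body>generated at " . scalar(localtime()) . "</body></html>\n";
}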
> But what if concurrent inserts are happening to the table while the
> users page-view thru the data. The count may change.
True, in my situation that is not a big concern. In yours it may be. You can
either expire the cached value every so often or run two queries on each
hit.
If you're expecting a lot of traffic, run the count(*) once and store
that value in the user's session.
Subsequent requests use that value instead of doing count(*) on every load.
LIMIT offset,number handles pagination very well IMO.
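A rough DBI sketch of the idea (untested; the DSN, credentials and the names
table are just examples, and the session handling is left out):

#!/usr/bin/perl
# Sketch of count-once-then-paginate with DBI.  The DSN, credentials
# and the "names" table are made-up examples; session storage is left out.
use strict;
use DBI;

my $dbh = DBI->connect("DBI:mysql:database=test;host=localhost",
                       "user", "password", { RaiseError => 1 });

# Run the expensive count once and keep the value around (e.g. in the session).
my ($total) = $dbh->selectrow_array("SELECT COUNT(*) FROM names");

# Every page view after that only needs the LIMIT query.
my $per_page = 25;
my $page     = 3;                              # whatever page the user asked for
my $offset   = int(($page - 1) * $per_page);   # int() keeps it safe to interpolate

my $rows = $dbh->selectall_arrayref(
    "SELECT id, first, last FROM names ORDER BY id LIMIT $offset, $per_page");

printf "page %d of %d, %d rows shown\n",
       $page, int(($total + $per_page - 1) / $per_page), scalar(@$rows);
$dbh->disconnect;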
Adam Erickson
If the flag defaults to 0 (false), then new entries will not require a
reset. Once you've exhausted the table (everyone has a skip of 1 (true))
you can set them all to 0 and start over again at the top.
I don't think this would be very elegant, but it would do.
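Roughly, in DBI terms (untested sketch; the names table and skip column are
assumed from the description above):

#!/usr/bin/perl
# Sketch of the skip-flag rotation: pull the next unused entry, mark it,
# and reset everyone once the table is exhausted.  The DSN and the
# names/skip layout are assumed from the discussion above.
use strict;
use DBI;

my $dbh = DBI->connect("DBI:mysql:database=test;host=localhost",
                       "user", "password", { RaiseError => 1 });

# Grab the next entry that hasn't been used yet and mark it.
my $row = $dbh->selectrow_hashref(
    "SELECT id, first, last FROM names WHERE skip = 0 ORDER BY id LIMIT 1");

if ($row) {
    $dbh->do("UPDATE names SET skip = 1 WHERE id = ?", undef, $row->{id});
    print "$row->{first} $row->{last}\n";
} else {
    # Everyone has skip = 1; reset the flags and start over at the top.
    $dbh->do("UPDATE names SET skip = 0");
}
$dbh->disconnect;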
Would:
SELECT id,first,last FROM names ORDER BY id LIMIT 2,1;
Work? That limits the result to one row, starting at an offset of two (the
syntax is LIMIT offset,row_count, so the 2,1 above is in the right order).
We're running a 4x1.6GHz Xeon box; Linux reports 8 procs. We've been
running MySQL on it in production for a couple of months now under a good load.
The database sits around 60GB with anywhere from 200-800 concurrent connections.
Both InnoDB and MyISAM table types are used and we're not having a single problem.
> Here's my QUESTION! Because some of the program information is
> large, I don't
> want to query the database every time, do I?
Define large? Are we talking mixed media types (PDF/Word/PowerPoint) or
plain text/HTML? As someone has already suggested, you might be better off
saving the files on the disk.
> I am curious if there is a better way to restart mysql
> that would kill off any hung/long-running queries but
> not totally bring the server down. Any other ideas for
> managing a big load?
What you can do, assuming the mysql user has proper privs, is list and
kill off any MySQL threads whose queries have been running too long, without
bringing the whole server down.
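Something along these lines, run by hand or from cron, is what I had in mind
(untested; the DSN is made up and the 300-second cutoff is arbitrary):

#!/usr/bin/perl
# Rough sketch: list the running threads and kill any query that has been
# running longer than a cutoff.  Assumes the connecting user is allowed to
# see and kill other users' threads; the 300-second cutoff is arbitrary.
use strict;
use DBI;

my $dbh = DBI->connect("DBI:mysql:database=mysql;host=localhost",
                       "admin", "password", { RaiseError => 1 });

my $threads = $dbh->selectall_arrayref("SHOW FULL PROCESSLIST");
for my $t (@$threads) {
    # Columns: Id, User, Host, db, Command, Time, State, Info
    my ($id, $user, $host, $db, $command, $time, $state, $info) = @$t;
    next unless defined $info;                 # skip idle connections
    if ($command eq 'Query' && $time > 300) {
        print "killing thread $id after ${time}s: $info\n";
        $dbh->do("KILL $id");
    }
}
$dbh->disconnect;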
> -Original Message-
> From: Dicky Wahyu Purnomo [mailto:[EMAIL PROTECTED]]
> Subject: Memory Limit
> And what is the calculation for the memory also
The formula you want (this does not account for InnoDB buffers either) is:
key_buffer_size + (record_buffer + sort_buffer) * max_connections
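For example, with made-up numbers (key_buffer_size = 256M, record_buffer = 1M,
sort_buffer = 2M, max_connections = 200) that works out to:

256M + (1M + 2M) * 200 = 256M + 600M = 856M

of memory MySQL could use before any InnoDB buffers are counted.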
that is very painful to overcome" I wouldn't
even ask. But I've been dealing with this for the last two weeks; at this
point I'll settle for anything. :)
Thank you for your time,
Adam Erickson
[EMAIL PROTECTED]
System specs:
4x700MHz Xeon (1MB)
4GB RAM
Kernel 2.4.9 (SMP, enterprise)