MySQL Cluster

2004-04-14 Thread Adam Erickson
(This is probably not the best place for this post, but here goes...)

The (soon to be released) MySQL Cluster software docs use a sample
cluster node configured with dual Xeons and 16GB of RAM.  MySQL has
never been able to use more than 2GB of system memory on 32-bit
platforms.  With MySQL Cluster, will MySQL finally start using the
memory paging trick Oracle and others have been using for years?
Otherwise, what is the point of having 16GB of RAM for one MySQL server?

Thanks,
Adam Erickson
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


Re: Local copy of database

2003-11-14 Thread Adam Erickson
 Finally, if anyone has or knows of a good phone directory already. 
 Please point me in the right direction.

Have you considered LDAP?  Run it over SSL/TLS; that should calm any
security concerns.  Outlook (and most any other mail agent, for that
matter) can hook into it for address book lookups.  You can also store
non-email information in the LDAP database (address, phone
number, etc.)

The only catch is offline mode; I do not believe LDAP has one.  But if
you're already considering installing MySQL on each machine, you could
set up a replicated LDAP slave on each instead.

Don't get me wrong, writing a MySQL DB w/ GUI frontend would be more
entertaining but this path is already well-traveled.

Adam





RE: Persistent Connection

2003-04-04 Thread Adam Erickson
 We have a perl cgi program that refreshes every 45 seconds and connects
 to a mysql database to update records, overtime. This could have 300

Cache the page for 45 seconds.  Hits within that window are served the
output from the first request and do not require a DB connection.  Expire
the cache after 45 seconds so the next hit regenerates it from fresh DB
data.
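
A minimal file-based sketch of that, assuming a single cache file (the
path, the 45-second window, and build_page_from_db() are placeholders;
untested):

use CGI ();

my $cache = '/tmp/status-page.html';   # hypothetical cache location
my $age   = -M $cache;                 # file age in days, undef if missing

if (defined $age and $age * 86400 < 45) {
    # Cache is newer than 45 seconds: serve it without touching the DB.
    open my $fh, '<', $cache or die $!;
    print CGI::header(), <$fh>;
} else {
    my $html = build_page_from_db();   # stand-in for your existing DB code
    open my $fh, '>', $cache or die $!;
    print {$fh} $html;
    close $fh;
    print CGI::header(), $html;
}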

 people using this same process, because we are queuing user in the
 database and when they reach queue position 1 and the resources are low,
 they are redirected to a web login page.  Is there a way to keep one

Not sure what you mean here, but if this part requires a DB hit on every
refresh then caching is less of an option.  You would still gain
performance by not re-querying the same data over and over, though.

 persistent connection to the database so that it does not have to connect
 every time in this perl program, because I believe this is causing a
 load on our UNIX box.

mod_perl is for you.  A plain CGI process exits after every request, so it
cannot hold a persistent connection.
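
Under mod_perl the usual trick is Apache::DBI, which caches connections
per Apache child.  A rough sketch (the DSN and credentials are
placeholders):

# In startup.pl (or any PerlRequire'd file), before anything loads DBI:
use Apache::DBI;

# In your scripts, connect as usual.  Apache::DBI overrides connect()
# and hands back the child's cached handle instead of opening a new one;
# disconnect() calls are ignored, so the handle survives between requests.
use DBI;
my $dbh = DBI->connect('dbi:mysql:mydb;host=localhost',   # hypothetical DSN
                       'user', 'password', { RaiseError => 1 });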





RE: getting a page of data and total count

2003-02-10 Thread Adam Erickson
 But what if concurrent inserts are happening to the table while the
 users page-view thru the data. The count may change.

True; in my situation that is not a big concern, but in yours it may be.
You can either expire the cached value every so often or run two queries
on each hit.

If you're expecting even nominal use of these pages, the two-query option
isn't as bad as it sounds.  Remember that your OS will cache the disk
access; the first request may be slow, but subsequent ones will be
faster.
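
For reference, the two-query version is just this (table and columns are
made up for the example):

use DBI;
my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'password',
                       { RaiseError => 1 });

# Query 1: total row count, for rendering the pager.
my ($total) = $dbh->selectrow_array('SELECT COUNT(*) FROM items');

# Query 2: one page of 20 rows, starting at offset 40 (i.e. page 3).
my $rows = $dbh->selectall_arrayref(
    'SELECT id, name FROM items ORDER BY id LIMIT 40,20');

Run back to back like this, the second query usually finds its index and
data pages already hot in the cache.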



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: getting a page of data and total count

2003-01-14 Thread Adam Erickson
 What is the best way to get the total count as well as the paged
 content. Do I need to fire two queries or is there a trick which can do
 this in a single query?

I'm doing this on tables with many millions of records.  I take the hit
once to get a total record count, then cache that value in the user's
session.  Subsequent requests use the cached value instead of running
COUNT(*) on every page load.

LIMIT offset,number handles pagination very well, IMO.
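
As a sketch of the pattern (the session API here is hypothetical;
CGI::Session is one real option, and $page is assumed to come from the
request):

use DBI;
my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'password',
                       { RaiseError => 1 });

# COUNT(*) only on the first page view; cached in the session after that.
my $total = $session->param('row_count');
unless (defined $total) {
    ($total) = $dbh->selectrow_array('SELECT COUNT(*) FROM records');
    $session->param(row_count => $total);
}

# Page N of $per_page rows: the offset is just (N - 1) * $per_page.
my $per_page = 25;
my $offset   = ($page - 1) * $per_page;
my $rows     = $dbh->selectall_arrayref(
    "SELECT id, name FROM records ORDER BY id LIMIT $offset,$per_page");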

Adam Erickson






RE: select the next name

2003-01-13 Thread Adam Erickson
Would:

SELECT id,first,last FROM names ORDER BY id LIMIT 2,1;

work?  In MySQL the syntax is LIMIT offset,row_count, so LIMIT 2,1 skips
the first two rows and returns one row (the third).

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
 Sent: Monday, January 13, 2003 10:20 PM
 To: [EMAIL PROTECTED]
 Subject: select the next name


 To begin with, I am only about 3/4 of the way through Paul D's
 great book, so bear with me.  Say I have a table (names) with
 columns (id, first, last) and I want the next name when ordered
 by last, first, given a particular id.  So how would I frame the
 query?  Something like

 select id,last, first from names where next id from index of last,first

 ???

 Thanks

 Sam D








RE: select the next name

2003-01-13 Thread Adam Erickson
 Ok my gut would say that this would not give the result I want
 unless the id's are sorted by last name, and given inserts and
 such I can see that would not be the case.   But I sense the
 glimmer of an approach in this query...

You're right, I don't know why I thought you wanted to sort by ID.  Guess I
need more coffee.

I think I understand what you're trying to do:

 - Run program, fetches a name.  Processes and exits.
 - Run program again, same SQL but gets the NEXT name.  Processes and exits.

LIMIT would work, but you would need to pass the current offset to the
next run of your code.  That counter would also need to be reset whenever
the table data changed, and it would be an ugly hack regardless.  Forget
about it.

What you're really looking for is a method to maintain state between DB
connections.  Unfortunately, no RDBMS that I can think of supports this
internally.

I think the best solution for you is to add another column to the table
and use it as a skip flag.

Your SQL becomes:

SELECT id,first,last FROM names WHERE skip=0 ORDER BY last,first LIMIT 1;
UPDATE names SET skip=1 WHERE id=<the id you just received>;

If the skip flag defaults to 0 (false), new entries will not require a
reset.  Once you've exhausted the table (every row has skip=1), set them
all back to 0 and start over at the top.
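
If two runs of the program could ever overlap, you'd want the SELECT and
the UPDATE to happen atomically.  A rough sketch with table locks
(untested; DSN and credentials are placeholders):

use DBI;
my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'password',
                       { RaiseError => 1 });

$dbh->do('LOCK TABLES names WRITE');    # nobody else reads or flags rows
my ($id, $first, $last) = $dbh->selectrow_array(
    'SELECT id, first, last FROM names WHERE skip=0
     ORDER BY last, first LIMIT 1');
$dbh->do('UPDATE names SET skip=1 WHERE id=?', undef, $id) if defined $id;
$dbh->do('UNLOCK TABLES');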

I don't claim this is elegant, but it would do the job.  A trigger would
be handy here.  *Ducks*

Adam Erickson






RE: MySQL4 on Linux with Intel Hyperthreading CPUs?

2002-12-11 Thread Adam Erickson
We're running a 4x1.6GHz Xeon box; Linux reports 8 processors.  We've
been running MySQL on it in production for a couple of months now under a
good load.  The database sits around 60GB with anywhere from 200 to 800
concurrent connections.  Both InnoDB and MyISAM table types are in use,
and we're not having a single problem with it.

(I don't think I've ever seen the machine go above a load of 2...)

 -Original Message-
 From: John Dell [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, December 11, 2002 5:15 PM
 To: [EMAIL PROTECTED]
 Subject: MySQL4 on Linux with Intel Hyperthreading CPUs?


 Hi,

 I have a new dedicated mysql 4.0.5 server running redhat 8.0 that we are
 testing.  The new server is a dual Intel Xeon 2.4GHZ with hyperthreading
 enabled, so Linux thinks there are 4 cpu's rather than the 2 real CPU's.

 Anybody have any experience with this and whether it can cause
 any problems
 with MySQL or Linux?

 Any performance reasons to enable/disable hyperthreading?

 Thanks!
 John Dell
 [EMAIL PROTECTED]








RE: Data base driven web page idea - need help!

2002-11-27 Thread Adam Erickson
 Here's my QUESTION! Because some of the program information is
 large I don't
 want to query the data base everytime do I?

Define large?  Are we talking mixed media types (PDF/Word/PowerPoint) or
plain text/HTML?  As someone has already suggested, you might be better
off saving the files on disk and storing just the filenames in the
database.

Then again, you said only some of the information is large.  What's the
ratio?  If most of it is small enough, keeping it in the DB might not be
so bad; it depends on the hardware.

Pulling static pages from disk instead of going through SQL will probably
scale better, though.

Always looking for a challenge, I would look at the kind of data you're
dealing with.  If it's of a similar format, you may be able to create a
SQL structure that allows more flexibility with your data: render it in
different views where applicable, or at least enforce a theme across your
system.  Of course, XML/XSLT is more suitable for that, and now I've gone
way overboard...

 This would take up to many resources. So I've decided to write a program
...
 every night and remove the old ones through a cron job?

If you're eager to store it all in the database, I would do so.  It
doesn't sound like this content is going to change often.  Throw a
reverse-proxy Squid in front of Apache; even running it on the same
machine will help a LOT.

Avoid cron jobs and nightly generated static files.  That practice does
NOT scale and can be very difficult to manage.
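
For what it's worth, the Squid 2.x accelerator setup is only a few lines
of squid.conf (this assumes Apache is moved to port 8080 on the same box;
adjust to taste):

http_port 80
httpd_accel_host 127.0.0.1
httpd_accel_port 8080
httpd_accel_single_host on
httpd_accel_with_proxy off

Squid then answers on port 80 and caches whatever Apache hands it.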






Re: resetting mysql server gently

2002-10-10 Thread Adam Erickson

 I am curious if there is a better way to restart mysql
 that would kill off any hung/long-running queries but
 not totally bring the server down. Any other ideas for
 managing a big load.

What you can do, assuming the mysql user has the proper privileges, is
list and kill all running MySQL threads when you shut down Apache.  In
Perl/DBI it's something like this (untested; credentials are
placeholders):

# stop apache first (apachectl stop), then:
use DBI;
my $dbh = DBI->connect('dbi:mysql:mysql', 'root', 'password',
                       { RaiseError => 1 });
my $me  = $dbh->{mysql_thread_id};       # our own connection's thread id
for my $proc (@{ $dbh->selectall_arrayref('SHOW PROCESSLIST') }) {
    $dbh->do("KILL $proc->[0]") unless $proc->[0] == $me;  # col 0 is the id
}
sleep 5;    # give the threads a moment to die
# start apache again (apachectl start)

You'll have killed all running MySQL threads without stopping the server.
You could limit it to threads in a certain state as well; the Command and
State columns of SHOW PROCESSLIST tell you what each thread is doing.

Note the check above so you don't kill the thread that's doing the
killing.  ;)

I'm not a MySQL developer, but I would think doing this could have
negative results.  Half-completed transactions will halt replication, and
I wouldn't be surprised if you experienced some data corruption, either.






RE: Memory Limit

2002-10-04 Thread Adam Erickson

 -Original Message-
 From: Dicky Wahyu Purnomo [mailto:[EMAIL PROTECTED]]
 Subject: Memory Limit

 And what is the calculation for the memory also 

The formula you want (note it does not account for InnoDB buffers) is:

key_buffer_size + (record_buffer + sort_buffer) * max_connections
  = 768M + 16M * 1,200
  = 19,968M

That's way more than 4GB of RAM.  Do you mean 120 connections?  That
would set you right underneath the magic 2GB limit which has been
plaguing me as well on Intel (4x Xeon, 4GB RAM).

 I know that if I set my configuration to lower values I get slow
 query performance.  I found that with a sort buffer of 12M and a
 record buffer of 12M my query performance is good, but I can't
 have more than 1200 concurrent threads :((

I've been able to increase our connections to above 600 by using 1M sort
and record buffers and a key_buffer of only 256M; by the formula above
that's 256M + (1M + 1M) * 600 = 1,456M, comfortably under 2GB.  Of
course, with settings like these it's dog slow at 600 connections, and
useless to connect that many threads anyway.

Under Linux, and maybe Solaris as well, the problem lies in the fact that
MySQL is one process with multiple threads, so all threads share a single
memory 'ceiling'.  A fork-based model wouldn't have this problem, or at
least I don't think it would.

Not much help, I know, but if you do find a way around this let me know.






Memory cap?

2002-09-26 Thread Adam Erickson

Greetings,

I am indeed in a very strange situation, to me at least.  We've got a
good-sized database (about 60GB) spread over 4k tables, averaging about
300 queries per second.  We've recently noticed some strange behavior at
around 430 connections: we get the "unable to create new thread (errno
11); if you are not out of available RAM you may have encountered an OS
bug, see the manual" error.  When this happens we usually still have a
gig or two of RAM free.  I haven't been able to glean more detail from
the manual, so here I am.

Using the formula from the docs, key_buffer_size + (record_buffer +
sort_buffer) * max_connections, it appears that right when the connection
count hits about 430, 2GB of RAM is in use.

After much tweaking of my.cnf, I can get just under 600 connections by
reducing the record and sort buffers.  Conversely, if I set key_buffer to
something large (e.g. 1GB) I cap out at a couple hundred connections.  Is
there some kind of limit imposed on allocated memory by MySQL?  Or should
I look elsewhere (kernel, pthreads)?

This question may sound somewhat absurd, and if my hosting provider
hadn't said "MySQL has a 2GB limit that is very painful to overcome" I
wouldn't even ask.  But I've been dealing with this for the last two
weeks; at this point I'll settle for anything.  :)

Thank you for your time,
Adam Erickson
[EMAIL PROTECTED]

System specs:
4x700MHz Xeon (1MB cache)
4GB RAM
Kernel 2.4.9 (SMP, enterprise)
MySQL-max v3.23.52 (MySQL AB binary release)

my.cnf (hacked to bits at this point):
...
[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
set-variable= key_buffer=256M
set-variable= max_allowed_packet=1M
set-variable= table_cache=4096
set-variable= sort_buffer=1M
set-variable= record_buffer=1M
set-variable= thread_cache=12
# Try number of CPU's*2 for thread_concurrency
set-variable= thread_concurrency=12
set-variable= back_log=10
set-variable= thread_stack=64k
set-variable= max_connect_errors=20
set-variable= record_rnd_buffer=2M

log-bin
server-id   = 1

set-variable= max_connections=600
replicate-ignore-db=mysql
set-variable= wait_timeout=600

# Point the following paths to different dedicated disks
#tmpdir = /tmp/
#log-update = /path-to-dedicated-directory/hostname

# Uncomment the following if you are using BDB tables
set-variable= bdb_cache_size=384M
set-variable= bdb_max_lock=10

# Uncomment the following if you are using Innobase tables
#innodb_data_file_path = ibdata1:2000M:autoextend:max:4000M;ibdata2:4000M
#innodb_data_file_path = ibdata1:2000M;ibdata2:4000M:autoextend:max:6000M
innodb_data_file_path = ibdata1:2000M;ibdata2:6000M;ibdata3:6000M;ibdata4:6000M;ibdata5:6000M;ibdata6:10M:autoextend:max:6000M
innodb_data_home_dir = /opt/mysql/data
innodb_log_group_home_dir = /opt/mysql/data
innodb_log_arch_dir = /opt/mysql/data
set-variable = innodb_mirrored_log_groups=1
set-variable = innodb_log_files_in_group=3
set-variable = innodb_log_file_size=5M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1
innodb_log_archive=0
set-variable = innodb_buffer_pool_size=8M
set-variable = innodb_additional_mem_pool_size=8M
set-variable = innodb_file_io_threads=8
set-variable = innodb_lock_wait_timeout=50

[mysqldump]
...

[safe_mysqld]
open-files-limit=8192

