>-Original Message-
>From: Andrés Tello [mailto:mr.crip...@gmail.com]
>Sent: Monday, April 25, 2011 10:24 AM
>To: Mailing-List mysql
>Subject: Memory Usage.
>
>How can I know how memory is being used by Mysql?
>
>I have 32GB of RAM, but I can't make MySQL use more than 12GB of RAM, and even
>
On 25.04.2011 16:24, Andrés Tello wrote:
> How can I know how memory is being used by Mysql?
>
> I have 32GB of RAM, but I can't make MySQL use more than 12GB of RAM, even
> though I have tables over 40GB...
>
> Thanks! xD
depends on the storage engine (MyISAM or InnoDB), buffer sizes, size
of th
Ravi raj wrote:
> Dear walter Harms,
>
> Thanks for your valuable solution, but the code which
> you provided prints only one row. If I try to print the whole table,
> or 2 or 3 columns fully, it gives a segmentation fault; kindly
> check the below code for further in
}
//free(sel_smt);
mysql_free_result(res);
mysql_close(MySQL);
exit(0);
}
----------code
ends here
Thanks and regards,
Ravi
- Original Message -
From: "walter
hi ravi,
this works for me. it should help
you to get a starting point
re,
wh
/*
simple DB connect test
gcc connect.c -L/usr/lib/mysql -lmysqlclient
*/
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <mysql/mysql.h>
int main()
{
MYSQL *MySQL;
MYSQL_ROW row;
MYSQL_RES *res;
Hi Joerg,
Thanks a lot for the info.
regards
anandkl
On 7/23/08, Joerg Bruehe <[EMAIL PROTECTED]> wrote:
>
> Hi !
>
>
> Ananda Kumar wrote:
>
>> Hi All,
>> I have setup slave db. The machine configuration details of this slave is
>> same as master.
>>
>> OS=redhat
>> 8 cpu
>> 16GB RAM
>>
>> key_
Hi !
Ananda Kumar wrote:
Hi All,
I have setup slave db. The machine configuration details of this slave is
same as master.
OS=redhat
8 cpu
16GB RAM
key_buffer_size=3000M
innodb_buffer_pool_size=1M.
But when i do top, in the master db
Cpu(s): 0.5%us, 0.3%sy, 0.0%ni, 87.2%id, 11.9%wa,
bruce wrote:
Hi..
Fairly new to mysql, in particular tuning.
I have a test mysql db, on a test server. I've got a test app that runs on
multiple servers, with each test app, firing/accessing data from the central
db server.
the central server is on a 2GHz, 1GMem, 100G system. MySQL is the b
I just changed to these values
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout=30
default-character-set=utf8
max_allowed_packet = 14M (lowered from 3000MB)
max_connections = 600 (lowered from 3000)
ft_min_word_len = 3
key_buffer_size=2500M
now looking into table_
On 5/15/07, Micah Stevens <[EMAIL PROTECTED]> wrote:
I think you may be able to get around this by using multiple key
buffers? (MySQL 4.1 or later)
key buffers cache only index data, and they don't help with sorting the
way sort_buffer does. They don't impact the InnoDB engine. Even while using multiple key
I think you may be able to get around this by using multiple key
buffers? (MySQL 4.1 or later)
-Micah
On 05/15/2007 01:24 AM, Christoph Klünter wrote:
Hi List,
We have a mysql-Server with 8G of Ram. But mysql doesn't use this ram.
But we get following error:
May 14 22:56:11 sql mysqld[5875]
On 5/15/07, Mathieu Bruneau <[EMAIL PROTECTED]> wrote:
Hi, yeah, apparently you're running into the 32-bit memory limit. Note
that some memory is allocated for the OS, so you don't even have the full
4GB of RAM you can technically address.
The 64-bit OS would increase this limit to 64GB++ (
On 5/15/07, Christoph Klünter <[EMAIL PROTECTED]> wrote:
I have set the sort_buffer_size to 1G but even this doesn't help.
Any hints ? Should we try a 64Bit-OS ?
setting sort_buffer_size to 1GB is not recommended. It is a thread-specific
configuration parameter, which means each thread will
Christoph Klünter wrote:
> Hi List,
>
> We have a mysql-Server with 8G of Ram. But mysql doesn't use this ram.
> But we get following error:
>
> May 14 22:56:11 sql mysqld[5875]: 070514 22:56:10 [ERROR] /usr/sbin/mysqld:
> Got error 12 from storage engine
> May 14 22:56:11 sql mysqld[5875]:
Ben, the my.cnf file (usually in /etc/my.cnf) usually contains the
settings related to memory usage. You can see info on a lot of the
various settings here:
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
and here for InndoDB-specific:
http://dev.mysql.com/doc/refman/5.0/en/in
a why this keeps increasing?
Thanks,
Rohit
- Original Message - From: "Lars Heidieker" <[EMAIL PROTECTED]>
To: "Rohit Peyyeti" <[EMAIL PROTECTED]>
Sent: Wednesday, February 01, 2006 4:37 PM
Subject: Re: Memory problems?
All these processes share the same
g?
Thanks,
Rohit
- Original Message -
From: "Lars Heidieker" <[EMAIL PROTECTED]>
To: "Rohit Peyyeti" <[EMAIL PROTECTED]>
Sent: Wednesday, February 01, 2006 4:37 PM
Subject: Re: Memory problems?
All these processes share the same address space (linux wa
Oops, I was obviously wrong about your example, please ignore it. :)
--
Alexey Polyakov
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Hi Kevin, I also observed differences between query plans on InnoDB
and MyISAM tables. I had a complex query, which had two possible
plans, first plan included const and ref joins on bigger tables then
joins on smaller tables, and second one was to do range scan of
smaller table and then join all l
Hello.
Please send the output of SHOW CREATE TABLE for each table.
> Note no index is used.
MEMORY tables usually use a HASH index, which might be the source of the
problem. Change the index to B-Tree and check whether the query plan has
changed.
Kevin Burton wrote:
> I was benchmarking a
In the last episode (Sep 22), Blumenkrantz, Steve said:
> We very recently began replicating data from a master to a slave and
> since doing that we've noticed that most of the RAM in the machine 2
> GB is being used with very little (relatively) free (12MB - 50MB).
> I've looked at several forums
Dan,
Have you tried LOAD INDEX INTO CACHE? See
http://dev.mysql.com/doc/mysql/en/load-index.html
Mike
At 09:06 AM 5/17/2005, Dan Salzer wrote:
So I don't think this is a mysql issue, but I wanted to bounce it off the
group anyways and see if anyone had seen similar behavior.
I'm running
Hello.
See:
http://dev.mysql.com/doc/mysql/en/crashing.html
"Chris Knipe" <[EMAIL PROTECTED]> wrote:
> We will try our best to scrape up some info that will hopefully help
> diagnose
> the problem, but since we have already crashed, something is definitely
> wrong
> and this may f
Baba,
- Original Message -
From: "Baba Buehler" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Friday, April 29, 2005 12:54 AM
Subject: memory error & innodb backup
We've got a customer whose system has been experiencing corruption in
their InnoDB tables. They have return
Don't want this to roll too far down the list.
My Memory table hit 16MB and locked up. Is there something in my.cnf that
I don't have correct? I thought I set it to 128MB for memory tables.
max_connections = 3500
max_user_connections = 1500
key_buffer = 750M
myisam_sort_buffer_size = 130
Hello.
You may use 'CREATE TABLE ... SELECT' statement:
create table t3 type=heap select * from t2;
But be careful, as it doesn't create indexes automatically, unless you specify
them manually in your statement. See:
http://dev.mysql.com/doc/mysql/en/create-table.html
You may put
On Thu, 10 Feb 2005 10:19:32 +0900, Batara Kesuma
<[EMAIL PROTECTED]> wrote:
> Hi Tobias,
>
> On Wed, 9 Feb 2005 14:48:16 +0100 (CET)
> Tobias Asplund <[EMAIL PROTECTED]> wrote:
>
> > > I try to install MySQL 4.1.9 (official RPM from mysql.com). My machine
> > > is running linux 2.6.9, and it has
Hi Tobias,
On Wed, 9 Feb 2005 14:48:16 +0100 (CET)
Tobias Asplund <[EMAIL PROTECTED]> wrote:
> > I try to install MySQL 4.1.9 (official RPM from mysql.com). My machine
> > is running linux 2.6.9, and it has 4GB of RAM. The problem is MySQL
> > won't start if I set innodb_buffer_pool_size to >= 2G
On Wed, 9 Feb 2005, Batara Kesuma wrote:
> Hi,
>
> I try to install MySQL 4.1.9 (official RPM from mysql.com). My machine
> is running linux 2.6.9, and it has 4GB of RAM. The problem is MySQL
> won't start if I set innodb_buffer_pool_size to >= 2GB. Here is my
> ulimit.
Are you trying this on a 3
Hello.
> But now this takes forever...
Maybe you have some locks or your system is heavily loaded. Use 4.1.9 instead of
4.1.7.
"Kevin A. Burton" <[EMAIL PROTECTED]> wrote:
> Under 4.0.18 we were loading about 800M of data into a memory table to
> get better performance from some of our
Dathan Pattishall wrote:
Hmm that's a range, that should do a table scan in 4.0.18,
Yes... I believe it did, but since it's a memory table it went by really
quick.
since a
memory table type is just a hash table. In 4.1 I believe it supports
ranges since the table is more of a myISAM type.
Yes.
Hmm that's a range, that should do a table scan in 4.0.18, since a
memory table type is just a hash table. In 4.1 I believe it supports
ranges since the table is more of a myISAM type.
Is there an index on TIMESTAMP?
Does the range cover more than 30% of the table?
> -Original Message-
In the last episode (Oct 05), Doug Wolfgram said:
> At 07:35 PM 10/5/2004, you wrote:
> >In the last episode (Oct 05), Doug Wolfgram said:
> >> When I run top after my server has been running for a few days,
> >> Mysql is using 60 or 70MB of memory. When I restart mysql, it goes
> >> back to 3000.
In the last episode (Oct 05), Doug Wolfgram said:
> When I run top after my server has been running for a few days, Mysql
> is using 60 or 70MB of memory. When I restart mysql, it goes back to
> 3000. Any idea where I should start to look for a problem? What
> causes this?
What are your mysql memo
We have an Opteron server with 6 gig of RAM.
The issue used to be 4 gig - the max amount of memory a 32-bit processor
could access. With 64-bit processors, the amount of accessible memory
has jumped into the terabyte range.
Pick a distribution that is for the AMD-64 (we use SuSE 8 Enterprise)
On Tue, Jun 29, 2004 at 08:46:35PM -0400, Alejandro Heyworth wrote:
> Eric,
>
> I'm looking for a way to eliminate the construction, transmission, and
> parsing of the long multi-row INSERT queries that we are issuing from our
> client app. Since we are inserting 200k rows a shot, we're looking
Eric,
I'm looking for a way to eliminate the construction, transmission, and
parsing of the long multi-row INSERT queries that we are issuing from our
client app. Since we are inserting 200k rows a shot, we're looking for
every boost that we can find.
* Connecting: (3) [want to use a connect
http://dev.mysql.com/doc/mysql/en/Insert_speed.html
-Eric
On Tue, 29 Jun 2004 09:43:04 -0400, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>
> I am proposing this as a hypothetical situation and I would like the full
> feedback of the group:
>
> Could Alejandro re-use the sections of the MyS
I am proposing this as a hypothetical situation and I would like the full
feedback of the group:
Could Alejandro re-use the sections of the MySQL source code that handle
replication and bin-logging to make his data capture application appear as
a "Master" server and have his MySQL database act as
Shawn,
Very Interesting idea. I definitely want to look into this a bit more.
I fear though that the bin-logs might be written first to disk before they
are copied over to the replicas.
Another member of my team mentioned there might be a way to issue direct
MyISAM table INSERTS. She suggested
In case anyone else encounters this particular symptom, it turns out the
problem was gcc using some orphaned headers for mysql 3.23.56 sitting in
/usr/include/mysql rather than the correct mysql 4.0.17 ones residing in
/usr/local/include/mysql, thus yielding all the strange behaviour.
M.
On Sat,
Karthik Viswanathan wrote:
Thanks for the information. Before I try to further look into the query,
I would like to know if there is some memory issue. It's strange since
the speed of executing the same query differs. It's a Mac G5 with just 1GB
of RAM. I could see a lot of pageouts in the top command. Th
Geoffrey,
- Original Message -
From: "Geoffrey" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Saturday, February 07, 2004 10:11 PM
Subject: Re: Memory Leak using InnoDB ?
> Dan, Heikki,
>
>
Dan, Heikki,
- Original Message -
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, February 07, 2004 11:39 AM
Subject: Re: Memory Leak using InnoDB ?
> Geoffrey, Dan,
>
> - Original Message -
> From: "Dan N
Geoffrey, Dan,
- Original Message -
From: "Dan Nelson" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Saturday, February 07, 2004 5:27 AM
Subject: Re: Memory Leak using InnoDB ?
> In the last episode (Feb 07), Geoffrey said:
> > I'm runnin
Geoffrey wrote:
Hi,
I'm running MySQL 4.0.17 with RH Linux 8 on Xeon 3.0/1GB RAM.
One application has to access the database (1 connection to the DB is open
on startup and left open). However this application performs a lot of
queries on the DB.
Main InnoDB table : 50.000 Rows
Other InnoDB table
In the last episode (Feb 07), Geoffrey said:
> I'm running MySQL 4.0.17 with RH Linux 8 on Xeon 3.0/1GB RAM.
>
> One application has to access the database (1 connection to the DB is
> open on startup and left open). However this application performs a
> lot of queries on the DB.
>
> Thanks to "
On Jan 31, 2004, at 1:09 AM, Adam Goldstein wrote:
On Jan 30, 2004, at 10:25 AM, Bruce Dembecki wrote:
On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful
stuff:
So.. My tips for you:
1) Consider a switch to InnoDB, the performance hit was dramatic,
and it's
about SO much more
On Jan 30, 2004, at 10:25 AM, Bruce Dembecki wrote:
On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful
stuff:
So.. My tips for you:
1) Consider a switch to InnoDB, the performance hit was dramatic,
and it's
about SO much more than transactions (which we still don't do)!
Consider
> On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful stuff:
>>
>> So.. My tips for you:
>>
>> 1) Consider a switch to InnoDB, the performance hit was dramatic, and it's
>> about SO much more than transactions (which we still don't do)!
>>
> Consider it switched! as soon as I find
On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful stuff:
I don't think there would be any benefit to using InnoDB, at least not
from a transaction point of view
For the longest time I was reading the books and listening to the
experts
and all I was hearing is InnoDB is great becau
Raid 5 is just as common as any other raid in software, and on my other
boxes it does not present any problem at all... I have seen excellent
tests with raid5 in software, and many contend that software raid 5 on
a high-powered system is faster than hardware raid 5 using the same
disks-- I hav
On 1/28/04 10:29 AM, wrote:
>
>So should we always use InnoDB over BerkeleyBD? I was
>under the impression Berkeley was faster and better at
>handling transactions.
>
>Dan
>
Eermm... That's outside my scope of expertise, my experiences have been
exclusively with InnoDB and before that MyISAM, an
So should we always use InnoDB over BerkeleyBD? I was
under the impression Berkeley was faster and better at
handling transactions.
Dan
-Original Message-
From: Bruce Dembecki [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 28, 2004 11:01 AM
To: [EMAIL PROTECTED]
Subject: Re: Memory
> I don't think there would be any benefit to using InnoDB, at least not
> from a transaction point of view
For the longest time I was reading the books and listening to the experts
and all I was hearing is InnoDB is great because it handles transactions.
Having little interest in transactions per
I have had linux on soft-raid5 (6x18G, 8x9G, 4x18G) systems, and the
load was even higher... The explanation for this could be that at high
IO rates the data is not 100% synced across the spindles, and therefore
smaller files (ie files smaller than the chunk size on each physical
disk) must wai
I have managed to get what looks like >2G for the process, but, it does
not want to do a key_buffer of that size
I gave it a Key_buffer of 768M and a query cache of 1024M, and it seems
happier.. though, not noticeably faster.
[mysqld]
key_buffer = 768M
max_allowed_packet = 8M
table_cach
I don't think there would be any benefit to using InnoDB, at least not
from a transaction support view.
After your nightly optimize/repair are you also doing a flush? That may
help.
I haven't seen any direct comparisons between HFS+ and file systems
supported by Linux. I would believe that Lin
I have added these settings to my newer my.cnf, including replacing the
key_buffer=1600M with this 768M... It was a touch late today to see if
it has a big effect during the heavy load period (~3am to 4pm EST, site
has mostly european users)
I did not have any of these settings explicitly set i
The primary server (Dual Athlon) has several U160 scsi disks, 10K and
15K rpm... Approximately half the full size images are on one 73G U160,
the other half on another (about 120G of large images alone being
stored... I am trying to get him to abandon/archive old/unused images).
The system/lo
Have you tried reworking your queries a bit? I try to avoid using "IN"
as much as possible. What does EXPLAIN say about how the long queries
are executed? If I have to match something against a lot of values, I
select the values into a HEAP table and then do a join. Especially if
YOU are going
Yes, I saw this post before... I am not sure why I cannot allocate more
ram on this box- It is a clean 10.3 install, with 10.3.2 update. I got
this box as I love OSX, and have always loved apple, but, this is not
working out great. Much less powerful (and less expensive) units can do
a better
2GB was the per-process memory limit in Mac OS X 10.2 and earlier. 10.3
increased this to 4GB per-process. I've gotten MySQL running with 3GB
of RAM on the G5 previously.
This is an excerpt from a prior email to the list from back in October
when I was first testing MySQL on the G5:
> query_ca
Yes, MySQL is capable of using more than 2GB, but it still must obey
the limits of the underlying OS. This means file sizes, memory
allocation and whatever else. Have you heard of anybody allocating more
the 2GB using OSX? I've heard of quite a bit more using Linux or other
Unix flavors, but no
Others on this list have claimed to be able to set over 3G, and my
failure is with even less than 2G (though, I am unsure if there is a
combination of other memory settings working together to create a >2GB
situation)
Even at 1.6G, which seems to work (though, -not- why we got 4G of
You may be hitting an OSX limit. While you can install more than 2GB on
a system, I don't think any one process is allowed to allocate more
than 2GB of RAM to itself. It's not a 64-bit OS yet. You should be able
to search the Apple website for this limit.
On Jan 26, 2004, at 6:10 AM, Adam Gold
]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, January 18, 2004 8:31 AM
Subject: Re: Memory leaks using MySQL C Api
> Agreed, I am not calling mysql_store_result(). I attempted to add
> my_free() but the function does not seem to exist, it is also not liste
Agreed, I am not calling mysql_store_result(). I attempted to add
my_free() but the function does not seem to exist, it is also not listed
in the API docs for the c api. As such it still seems that there should
be no leak, but yet I do get one. Thanks for the idea anyway Chris,
maybe you can cla
Hey wait a minute. Where did you get the my_free(), may be you are
trying to say mysql_free(), but then that is used only if result set is
used/called.
But the code does not show any result set call. ie. mysql_use_result()
or mysql_store_result().
So, the question now, how come there is a leak
Hi!
You're looking for the function my_free(). Enjoy!
Regards,
Chris
John McCaskey wrote:
I have the following code:
//try the mysql connection
mysql_init(&mysql_connection);
if(!mysql_real_connect(&mysql_connection, db_host, db_user, db_pass,
db_db, 0, NULL, 0)) {
eikki Tuuri" <[EMAIL PROTECTED]>
Sent: Saturday, December 20, 2003 4:50 AM
Subject: Re: memory trap
> One more thing - I had the following command issued:
>
> LOCK TABLES old WRITE;
>
> and then I try to insert 20GB (1+ Million records) of data.
>
> Drago
>
&g
Hi!
What MySQL version you are running?
Can you show us your my.cnf, so that we see how much memory you had
allocated to the InnoDB buffer pool?
Did you run some big DELETE, UPDATE, or SELECT ... FOR UPDATE, or SELECT ...
LOCK IN SHARE MODE query at that time, so that you could really exhaust al
Hi,
I don't see anything wrong with that. If I were in your shoes, I'd make sure
I don't have any buffer overflows anywhere between the definitions and
where you use the variable - these are notorious for causing
segmentation faults only when there's no more memory to silently consume (e.g
variable to
it could be that err is initialized as > 0 or not initialized at all
-Original Message-
From: Lars Wenderoth [mailto:[EMAIL PROTECTED]
Sent: 23 September 2003 13:02
To: [EMAIL PROTECTED]
Subject: Memory and C API...
Hello there!
I have a problem with a little C program i am writing.. It
Hi,
Could you post your definitions for 'dbase' and 'err'...
Gerald.
On Tue, 23 Sep 2003, Lars Wenderoth wrote:
> Hello there!
>
> I have a problem with a little C program i am writing.. It uses
> SSL-encrypted connections to a special server. This server needs
> information from a MySQL databa
On Mon, Jun 23, 2003 at 07:21:25PM -0500, Miguel Perez wrote:
>
> Hi, I have a question about the memory that mysql uses,
> Here is the info that top command displays:
>
> 7:39pm up 55 days, 2:51, 1 user, load average: 0.18, 0.08, 0.02
> 54 processes: 53 sleeping, 1 running, 0 zombie, 0 stopp
Please check the manual.
http://www.mysql.com/doc/en/Memory_use.html
Edward Dudlik
Becoming Digital
www.becomingdigital.com
- Original Message -
From: "George Christoforakis" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, 12 June, 2003 04:07
Subject: memory setup
Hello,
an
- Original Message -
From: "Jeremy Zawodny" <[EMAIL PROTECTED]>
To: "Jeff Kilbride" <[EMAIL PROTECTED]>
Cc: "Lopez David E-r9374c" <[EMAIL PROTECTED]>; "'DeepBlue'"
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
S
On Fri, Mar 28, 2003 at 10:46:52AM -0800, Jeff Kilbride wrote:
> I've heard some bad things about turning off swap on Linux. I think it was
> on Jeremy Z.'s Blogger page. If he sees this maybe he can comment.
It really depends on the kernel too. Newer (2.4.19+) kernel are much
better than those a
;'DeepBlue'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, March 28, 2003 7:38 AM
Subject: RE: Memory Leak
DeepBlue
For 1000 simultaneous connections, your key_buffer seems way low.
It should be in the hundreds of megabytes. You may have to increase your
RA
ssage---
From: Egor Egorov
Date: Friday, 28 March 2003 10:13:25
To: [EMAIL PROTECTED]
Subject: re: Memory Leak
On Friday 28 March 2003 11:23, DeepBlue wrote:
> I'm experiencing a memory leak problem on a server which runs Mysql 3.23.55
> with 1000 simultaneous users.
>
> It
On Friday 28 March 2003 11:23, DeepBlue wrote:
> I'm experiencing a memory leak problem on a server which runs Mysql 3.23.55
> with 1000 simultaneous users.
>
> It's an athlon XP 1800 with 512 Mb ram.
>
> Befor starting Mysql, webmin shows 350 Mb free memory, but after starting
> mysql server it go
At 12:53 PM 11/19/2002, you wrote:
Yeah, I've experienced the same thing -- if I leave my computer
unattended it just saps memory until I start getting complaints from
Windows about VM usage. Same deal -- W2K with all the updates.
-JF
You can try a resident memory manager that will automatical
Yeah, I've experienced the same thing -- if I leave my computer
unattended it just saps memory until I start getting complaints from
Windows about VM usage. Same deal -- W2K with all the updates.
-JF
> -Original Message-
> From: Jon Finanger [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, N
If it makes you feel better, you're not the only one with this problem. I ran
into it as well and still have the problem from time to time. Any clues as
to why?
-Original Message-
From: Jon Finanger [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 19, 2002 5:56 AM
To: Dyego Souza do Carmo;
> -Original Message-
> From: Dicky Wahyu Purnomo [mailto:[EMAIL PROTECTED]]
> Subject: Memory Limit
> And what is the calculation for the memory also
The formula you want is (this does not account for InnoDB buffers either):
key_buffer_size + (record_buffer + sort_buffer)*max_conne
Adam,
Friday, September 27, 2002, 8:03:14 AM, you wrote:
AE> I am indeed in a very strange situation, to me at least. We've got a
AE> good-sized database (about 60gb) over 4k tables. We average about 300
AE> queries per second. We've recently noticed some strange behavior at around
AE> 430 con
linux, in cobalt raq4
- Original Message -
From: "walt" <[EMAIL PROTECTED]>
To: "SandraR" <[EMAIL PROTECTED]>
Sent: Monday, July 29, 2002 5:29 PM
Subject: Re: memory decay ..
> SandraR wrote:
>
> > Hi!!
> > Is ok that when
Hi.
On Mon 2002-07-29 at 16:33:40 -0300, [EMAIL PROTECTED] wrote:
>
> Hi!!
> Is ok that when I do a dump the memory utilized it is not released.
> what can I do to solve this problem??
> (mysql 2.23.37)
Please be more specific. How do you notice that the memory is not
released? Please quote
Oops, sorry, few mistakes here
Mike Blazer wrote:
>
> Hello guys,
> I'm runing mysql 4.0.0 under apache and mod_perl on solaris 2.28 - we
> are in a kinda testing/adjusting period. So, yesterday I've wrote an
> utility on top of
> `ps -o vsz,rss -e` to monitor the memory usage by my processes an
Hi!
> "domi" == domi <[EMAIL PROTECTED]> writes:
domi> Hi !
domi> Thank You for the response !
domi> so I don't think it's solaris-problem.
domi> Ofcource it can still be "my code" but I have not found
domi> the problem...
domi> So I took a chance and passed this question to list.
domi
somewhere.
40 - 50 MB in a week is not normal, or what do You think?
I'll test the hoard library ASAP, let's see what happens!
Thank You for Reading.
=d0Mi=
Original Message -
Date: 25-Apr-2002 18:30:57 +0200
From: Rick Flower <[EMAIL PROTECTED]>
To: MySQL Mailing List &l
dOMi writes:
>However, after only a week the memory usage of this process
>has grown to 40 - 50 MB, so there has to be leakage somewhere.
What you *may* be seeing is standard memory fragmentation that many
Unix systems have with the standard allocator... You don't mention
what platform you'
On Wed, Feb 06, 2002 at 02:57:51PM +0200, Sinisa Milivojevic wrote:
> Franklin, Kevin writes:
> > We are running an extremely large instance of mysql version 3.23.46 on
> > Solaris 2.8. We are attempting to use a software version compiled 64 bit
> > and have been experiencing memory related serve
hi!
> "Sinisa" == Sinisa Milivojevic <[EMAIL PROTECTED]> writes:
Sinisa> Albert Chin writes:
>>
>> $ grep SIZEOF_LONG config.h
>> #define SIZEOF_LONG 8
>> #define SIZEOF_LONG_LONG 8
>>
>> $ grep SIZEOF_LONG include/my_config.h
>> #define SIZEOF_LONG 8
>> #define SIZEOF_LONG_LONG 8
>>
>>
On Wed, Feb 06, 2002 at 07:17:13PM +0200, Sinisa Milivojevic wrote:
> Albert Chin writes:
> >
> > $ grep SIZEOF_LONG config.h
> > #define SIZEOF_LONG 8
> > #define SIZEOF_LONG_LONG 8
> >
> > $ grep SIZEOF_LONG include/my_config.h
> > #define SIZEOF_LONG 8
> > #define SIZEOF_LONG_LONG 8
> >
> >
Albert Chin writes:
>
> Ok. Guess we'll wait for a fix. Any idea when a fix for MySQL will be
> available?
>
> --
> albert chin ([EMAIL PROTECTED])
>
Just look into Changelog's of the versions that come out.
--
Regards,
Mr. Sinisa M
On Wed, Feb 06, 2002 at 06:12:22PM +0200, Sinisa Milivojevic wrote:
> Albert Chin writes:
> >
> > MySQL 3.23.46 was built with the Sun C++ compiler:
> > $ CC -V
> > CC: Sun WorkShop 6 update 2 C++ 5.3 Patch 111685-03 2001/10/19
> >
> > It was built as follows:
> > CC=cc CFLAGS="-mr -Qn -xs
Albert Chin writes:
> On Wed, Feb 06, 2002 at 07:17:13PM +0200, Sinisa Milivojevic wrote:
> Yes, SIZEOF_INT is 4:
> $ grep SIZEOF_INT config.h
> #define SIZEOF_INT 4
> $ grep SIZEOF_INT include/my_config.h
> #define SIZEOF_INT 4
>
> However, according to
>
>http://docs.sun.com/ab2/coll.4
Albert Chin writes:
> On Wed, Feb 06, 2002 at 07:17:13PM +0200, Sinisa Milivojevic wrote:
>
> Yes, SIZEOF_INT is 4:
> $ grep SIZEOF_INT config.h
> #define SIZEOF_INT 4
> $ grep SIZEOF_INT include/my_config.h
> #define SIZEOF_INT 4
>
> --
> albert chin ([EMAIL PROTECTED])
>
We shall ha