Got a packet bigger than 'max_allowed_packet' - why is mysql so unkind?

2007-08-01 Thread Hartleigh Burton

Hi Everyone,

I have had similar problems in the past and have usually managed to
stumble my way around them. But this one is officially just not
making any sense... to me at least ;)


Problem: I want to back up a database from server1 and restore it to
server2. The database size is ~10GB; the largest row at any time is ~100M.


Here is what I have been doing...

Creating the SQL backup
# mysqldump --host=192.168.1.61 --user=hburton -p --opt --single-transaction --verbose --max_allowed_packet=100M --net_buffer_length=100M mraentertainment > mraentertainment.sql


From my understanding, net_buffer_length should tell mysqldump the
maximum size that extended-insert rows are allowed to reach before it
begins a new INSERT row set. I have, however, also tried
--skip-extended-insert and --extended-insert=FALSE (believing that the
packet tripping max_allowed_packet was actually multiple rows packed
into a single INSERT).


The backup completes successfully and I have myself a 9.2GB SQL dump  
file ready to go.


Reloading the SQL backup
# mysql --host=localhost --user=hburton -p --max_allowed_packet=100M mraentertainment < mraentertainment.sql


Error: when trying to load the SQL file into the empty database on
the new server I receive the message ERROR 1153 (08S01) at line 92:
Got a packet bigger than 'max_allowed_packet' bytes. I see so many
people on forums everywhere who hit the same issue... but I cannot
seem to find an answer.


As you can see, max_allowed_packet for mysqldump (backup) and mysql
(restore) is the same. I have also set max_allowed_packet=1G in the
MySQL config files on both servers. I have tried values up to 1G, as
this is apparently the largest value supported according to the
documentation.
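
One thing worth checking first, since a client-side --max_allowed_packet
option can never raise the server-side limit: ask the destination server
what it is actually enforcing. A minimal check, run through any client:

SHOW VARIABLES LIKE 'max_allowed_packet';   -- the limit the server enforces
SHOW VARIABLES LIKE 'net_buffer_length';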


If anyone can help me out with this, the assistance would be greatly
appreciated.




Regards,
Hartleigh Burton
Resident Geek.

how to use all system RAM for myisam db?

2007-08-01 Thread Christian Parpart
Hi all,

we have two hosts with 16GB of RAM each. One is configured to use InnoDB,
where we can quite easily configure the amount of RAM to be used.
But the other machine is set up to use MyISAM, and we cannot really
tweak it to use all of the 16GB of RAM the server has.
There are, of course, quite a lot of MyISAM buffer sizes that can be
configured, and we have already increased some of them, but none of them
really makes use of the whole system RAM the host provides.
So, does anyone know a good pointer on how to exhaust the whole RAM (read
as: as much as possible) for the MyISAM-based DB?
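
For reference, the variables usually involved look roughly like this; a
sketch only, the sizes are placeholders and the right values depend on
the workload:

SET GLOBAL key_buffer_size = 4294967295;        -- MyISAM index cache (the replies below discuss a ~4GB cap)
SET GLOBAL sort_buffer_size = 8*1024*1024;      -- per-connection, so keep it modest
SET GLOBAL read_buffer_size = 2*1024*1024;      -- per-connection sequential-scan buffer
SET GLOBAL read_rnd_buffer_size = 8*1024*1024;  -- per-connection, used after sorts

Note that key_buffer_size only caches index blocks; MyISAM table data is
cached by the operating system, not by MySQL.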

Thanks in advance,
Christian Parpart.




segmentation fault when using mysql++

2007-08-01 Thread wangxu
Below is my code. The code works fine until the length of the
"content" field exceeds 30; then a segmentation fault happens.

How can I avoid this?

   mysqlpp::Query query = conn.query();
   query << "select content from pages where id=" << id;
   mysqlpp::ResUse res = query.use();
   if (!res) {
       std::cerr << "Result set is empty!" << std::endl;
       return 1;
   }
   mysqlpp::Row r;
   while (r = res.fetch_row()) {
       printf(r["content"]);
   }




Re: how to use all system RAM for myisam db?

2007-08-01 Thread Dan Nelson
In the last episode (Aug 01), Christian Parpart said:
 we have two hosts with 16GB of RAM each. One is configured to use
 InnoDB, where we can quite easily configure the amount of RAM to be
 used. But the other machine is set up to use MyISAM, and we cannot
 really tweak it to use all of the 16GB of RAM the server has. There
 are, of course, quite a lot of MyISAM buffer sizes that can be
 configured, and we have already increased some of them, but none of
 them really makes use of the whole system RAM the host provides. So,
 does anyone know a good pointer on how to exhaust the whole RAM (read
 as: as much as possible) for the MyISAM-based DB?

MySQL relies on the OS's filesystem cache to cache MyISAM table data,
so that is already being done.  The reason InnoDB has a separate cache
is that you can specify raw disk partitions as InnoDB tablespaces,
which bypasses any OS-level caching.

-- 
Dan Nelson
[EMAIL PROTECTED]




counting on a complex query

2007-08-01 Thread Les Fletcher
I have a nice little query that I can get the results from fine, but am 
having problems figuring out the best way to get the count directly from 
mysql.  The query looks like the following:


SELECT t1.id, t1.col1, t2.id, t2.col1, t2.col2, t2.col3, t2.col4
FROM t1
JOIN t2 ON ( t2.id = t1.col1 )
WHERE ( ... lots of OR's and AND's on t1 ... )
GROUP BY t1.col1
HAVING ( count(*) = t2.col2 )
ORDER BY t2.col3, t2.col4;



There is a one-to-many relationship between t2 and t1 (lots of entries
in t1 associated with an entry in t2).  The GROUP BY is just collapsing
the t1 matches to get unique entries in t2, while the HAVING then cuts
that result set down further based on some predefined criteria.

This gets the set of records that I want.  The problem is that I also
want to be able to page through the records but still know how many of
them there are, hence wanting to count the number of results in the
set.  Right now I know of two options: one is to just run the query
without any limits and count the records in my application.  The other
is to do the following:


SELECT COUNT(*) FROM (
    SELECT t1.id, t1.col1, t2.id, t2.col1, t2.col2, t2.col3, t2.col4
    FROM t1
    JOIN t2 ON ( t2.id = t1.col1 )
    WHERE ( ... lots of OR's and AND's on t1 ... )
    GROUP BY t1.col1
    HAVING ( count(*) = t2.col2 )
    ORDER BY t2.col3, t2.col4
) t3;



Not sure if this is an optimal way to do it or not.  Any suggestions on 
a better way of getting the count?


Les




Re: how to use all system RAM for myisam db?

2007-08-01 Thread Chris Kantarjiev
The real problem that you're running into (been there, done that) is
that the MyISAM index code is only 32-bit clean. You can try to use
more than 4GB for the key_buffer, but it won't work; I don't recall
if the code implicitly limits the value, or just falls over.

One possible workaround is to use multiple (alternate) key buffers,
if your schema is such that that makes sense. You will want to
spend some time looking at the statistics to understand just
where you're running out of memory - whether the pressure is
on the index, on the tables themselves, or on both.
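
A sketch of that setup using named key caches (available from MySQL 4.1;
the cache name hot_cache and the table name jobs are made-up examples):

SET GLOBAL hot_cache.key_buffer_size = 512*1024*1024;  -- create and size a second key cache
CACHE INDEX jobs IN hot_cache;         -- route this table's indexes to it
LOAD INDEX INTO CACHE jobs;            -- optionally preload them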

As Dan said, MyISAM tries to get the system to do caching of
the table data. That works reasonably well on Linux and NetBSD
(at least), but joins and sorts will be slow above a certain
size, and there seems to be nothing that can be done about it.

Good luck.

Best,
chris




RE: counting on a complex query

2007-08-01 Thread Jerry Schwartz
Will SELECT SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS() do what you want?
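
For the record, that pairing looks roughly like this (a sketch against
Les's query; the LIMIT values are placeholders, and FOUND_ROWS() reports
what the query would have returned without the LIMIT):

SELECT SQL_CALC_FOUND_ROWS t1.id, t1.col1, t2.id, t2.col1, t2.col2, t2.col3, t2.col4
FROM t1 JOIN t2 ON ( t2.id = t1.col1 )
WHERE ( ... lots of OR's and AND's on t1 ... )
GROUP BY t1.col1
HAVING ( count(*) = t2.col2 )
ORDER BY t2.col3, t2.col4
LIMIT 0, 20;          -- one page of results

SELECT FOUND_ROWS();  -- total rows before the LIMIT was applied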

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com
www.giiexpress.com
www.etudes-marche.com


 -----Original Message-----
 From: Les Fletcher [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, August 01, 2007 3:27 PM
 To: mysql@lists.mysql.com
 Subject: counting on a complex query

 I have a nice little query that I can get the results from fine, but am
 having problems figuring out the best way to get the count directly from
 MySQL.  The query looks like the following:

 SELECT t1.id, t1.col1, t2.id, t2.col1, t2.col2, t2.col3, t2.col4
 FROM t1
 JOIN t2 ON ( t2.id = t1.col1 )
 WHERE ( ... lots of OR's and AND's on t1 ... )
 GROUP BY t1.col1
 HAVING ( count(*) = t2.col2 )
 ORDER BY t2.col3, t2.col4;

 There is a one-to-many relationship between t2 and t1 (lots of entries
 in t1 associated with an entry in t2).  The GROUP BY is just collapsing
 the t1 matches to get unique entries in t2, while the HAVING then cuts
 that result set down further based on some predefined criteria.

 This gets the set of records that I want.  The problem is that I also
 want to be able to page through the records but still know how many of
 them there are, hence wanting to count the number of results in the set.
 Right now I know of two options: one is to just run the query without
 any limits and count the records in my application.  The other is to do
 the following:

 SELECT COUNT(*) FROM (
     SELECT t1.id, t1.col1, t2.id, t2.col1, t2.col2, t2.col3, t2.col4
     FROM t1
     JOIN t2 ON ( t2.id = t1.col1 )
     WHERE ( ... lots of OR's and AND's on t1 ... )
     GROUP BY t1.col1
     HAVING ( count(*) = t2.col2 )
     ORDER BY t2.col3, t2.col4
 ) t3;

 Not sure if this is an optimal way to do it or not.  Any suggestions on
 a better way of getting the count?

 Les







Re: counting on a complex query

2007-08-01 Thread Perrin Harkins
On 8/1/07, Jerry Schwartz [EMAIL PROTECTED] wrote:
 Will SELECT SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS() do what you want?

Be careful.  While it will give the correct number of rows that would
have been found if LIMIT had not been used, it makes the server
compute the whole result set, disabling the shortcuts that LIMIT
normally allows.  I found that with my large queries it was faster to
do two separate queries (a COUNT for the number of rows and a LIMIT
for one page of results) than to use LIMIT and FOUND_ROWS().
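
Concretely, the two-query pattern against Les's tables might look like
this (a sketch; the LIMIT values are placeholders):

-- query 1: the total, for the pager (Les's count query as-is)
SELECT COUNT(*) FROM ( ... big nasty query ... ) t3;

-- query 2: one page of actual rows
SELECT t1.id, t1.col1, t2.id, t2.col1, t2.col2, t2.col3, t2.col4
FROM t1 JOIN t2 ON ( t2.id = t1.col1 )
WHERE ( ... lots of OR's and AND's on t1 ... )
GROUP BY t1.col1
HAVING ( count(*) = t2.col2 )
ORDER BY t2.col3, t2.col4
LIMIT 0, 20;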

- Perrin




MySQL 5.0.41: cross-compile capable from source?

2007-08-01 Thread John Penman \(jopenman\)
Hello all,

I am trying to build mysql-5.0.41 in a cross-compiling environment.
Below is the beginning of the configure run:

 

-bash-2.05b$ ./configure --host=arm-none-linux-gnueabi --prefix=/view/mysql --with-mysqld-ldflags=-all-static

configure: WARNING: If you wanted to set the --build type, don't use
--host.

If a cross compiler is detected then cross compile mode will be used.

checking build system type... i686-pc-linux-gnu

checking host system type... arm-none-linux-gnueabi

checking target system type... arm-none-linux-gnueabi

 

All outward appearances suggest that building from source supports
cross compiling. But partway through, configure abruptly ends.
In reviewing the script I found the following code:

 

...

if test "$cross_compiling" = yes; then
  { { echo "$as_me:$LINENO: error: cannot run test program while cross compiling
See \`config.log' for more details." >&5
echo "$as_me: error: cannot run test program while cross compiling
See \`config.log' for more details." >&2;}
   { (exit 1); exit 1; }; }
else
  cat >conftest.$ac_ext <<_ACEOF

...

 

By default the $cross_compiling flag gets set to yes if the --host
parameter is used. Reviewing this code leads me to believe that either
there is no support for cross compiling, or I am missing something.
There are 15 locations in the script with code similar to the above.

 

Any help with this would be greatly appreciated.

 



Re: Got a packet bigger than 'max_allowed_packet' - why is mysql so unkind?

2007-08-01 Thread Hartleigh Burton

Hi Gerald,

I have set the max_allowed_packet to 1G on both servers. Any other  
ideas by any chance?





Regards,
Hartleigh Burton
Resident Geek

MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia
Phone: (07) 3457 5041
Fax: (07) 3349 8806
Mobile: 0421 646 978

www.mraentertainment.com



Internal Virus Database was built: Never
Checked by MAC OSX... we don't get viruses!


On 01/08/2007, at 11:09 PM, Gerald L. Clark wrote:

You must also set max_allowed_packet for the new server, not just
the new client.
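
For anyone hitting the same wall: a max_allowed_packet value placed in
the config file only takes effect after the server is restarted, and
setting it live only affects connections opened afterwards. A sketch of
the live route:

SET GLOBAL max_allowed_packet = 1073741824;        -- 1G; reconnect before retrying the import
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';   -- confirm what the server now enforces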




Re: segmentation fault when using mysql++

2007-08-01 Thread Warren Young
I'm replying to you both personally and to the MySQL++ mailing list, 
where this message is on topic.  Please reply only on the list, not to 
me directly.


wangxu wrote:
 Below is my code. The code works fine until the length of the
 "content" field exceeds 30; then a segmentation fault happens.


How certain are you about this threshold?  It seems a very odd number 
for a computer to care about.  If you'd said 65536 bytes, or 16.7 MB, 
I'd put a lot more credence in your assertion.



   mysqlpp::Row r;
   while (r = res.fetch_row()) {
       printf(r["content"]);
   }


I'm not sure it's the problem, but you really shouldn't use printf() for 
this.  The main reason is that printf() will scan the resulting string 
for % signs and try to interpret them as formatting options.  If it 
finds any, it will then try to find varargs, and fail; this would easily 
explain your segfault.  The byte count does affect the chances that this 
will happen, so maybe that's where your perception that it's data size 
related comes from.


Other reasons not to use printf() with MySQL++ data types are 
inefficiency and type safety.


See examples/cgi_jpeg.cpp for the right way to emit bulk MySQL++ to stdout.




Re: counting on a complex query

2007-08-01 Thread Les Fletcher
SQL_CALC_FOUND_ROWS isn't an option for us.  Right now I am doing two
queries, but I am just trying to see if there is a better way to do the
count query than to turn it into a derived table and count the
results, i.e. SELECT COUNT(*) FROM (big nasty query) t1.

I think what the question boils down to is how to do count queries
when using GROUP BY and/or HAVING clauses.


Les

Perrin Harkins wrote:
 On 8/1/07, Jerry Schwartz [EMAIL PROTECTED] wrote:
  Will SELECT SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS() do what you want?

 Be careful.  While it will give the correct number of rows that would
 have been found if LIMIT had not been used, it makes the server
 compute the whole result set, disabling the shortcuts that LIMIT
 normally allows.  I found that with my large queries it was faster to
 do two separate queries (a COUNT for the number of rows and a LIMIT
 for one page of results) than to use LIMIT and FOUND_ROWS().

 - Perrin





Re: Returning All Rows That Repeat

2007-08-01 Thread Brent Baisley
You have your count in the wrong spot. I'm not even sure how that
query would work. The count(*) should be part of a select field; by
putting it in the HAVING clause it's calculated after the query runs.

SELECT jobs.*, count(*) AS Cnt
FROM jobs
GROUP BY customer_number, job_number
HAVING Cnt > 1
ORDER BY customer_number;

You are doing a full table scan though, so it is going to be limited
by the speed of your disk. You should have an index on
customer_number + job_number if this is run frequently. If you don't
need all the fields, specify just the ones you need instead of *. If
you only need fields that are already in the index, then MySQL doesn't
need to access the table at all (although that may apply only to
InnoDB tables).
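
A sketch of that index (the name cust_job is made up; with the composite
index in place, the GROUP BY can walk the index in order instead of
scanning and sorting):

ALTER TABLE jobs ADD INDEX cust_job (customer_number, job_number);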



On Jul 29, 2007, at 10:31 PM, John Kopanas wrote:


Does it make sense that on a table of 100,000 rows my DB is
crapping out with the following query?

SELECT * FROM jobs GROUP BY customer_number, job_number HAVING
count(*) > 1 ORDER BY customer_number;

:-)

On 7/29/07, John Trammell [EMAIL PROTECTED] wrote:

From: John Kopanas [EMAIL PROTECTED]
Subject: Returning All Rows That Repeat

I want to be able to return all rows that have one or more other rows
with the same customer_number and job_number.

So for instance, if there are two jobs in my jobs table with
customer_number = '0123' and job_number = '12', then I want both of
those jobs to be returned one right after another so I can compare
their other fields.  And I want to run a query once a day over the
whole table to see if there are any repeats.


One way would be a self-join, e.g.:

SELECT *
FROM mytable t1, mytable t2                    -- same table twice
WHERE t1.customer_number = t2.customer_number  -- with same cust. no.
  AND t1.job_number = t2.job_number            -- and same job no.
  AND t1.id <> t2.id;                          -- but the records are distinct






--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info









Re: question about data partitioning in application level.

2007-08-01 Thread Brent Baisley
I would use replication with multiple masters. You could set up 2, 3,
or however many servers that all replicate to one server (which you
may already be doing). I would just set the auto_increment increment
larger than you need, to allow for growth, like 10, if you are using
auto_increment.
You then have a list of servers to connect to in your PHP code. Every
time it needs to connect, it picks a random server.
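
The staggered auto_increment settings would look like this on each
master (a sketch using the variables MySQL 5.0 provides for this; the
increment of 10 matches the headroom suggested above, and each server
gets its own offset):

-- on master 1:
SET GLOBAL auto_increment_increment = 10;
SET GLOBAL auto_increment_offset = 1;
-- on master 2:
SET GLOBAL auto_increment_increment = 10;
SET GLOBAL auto_increment_offset = 2;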


Splitting by user id ranges isn't going to split the load evenly. If
you are going to do it that way, I would have even ids go one way and
odd ids the other. Or use any other increment besides 2.



On Jul 30, 2007, at 12:37 AM, Hiroyuki Yamada wrote:


I have a question about data partitioning.

I'm currently running a database server that has too many writes, so
I am trying to partition the data at the application level.

What I'm trying to do is partition the data by user,
for example like the following:

user_id 1-10  -> server1
user_id 11-20 -> server2


When I have the two tables listed below,

user       - user profile and so on (few writes)
user_write - user write history (many writes)

and I am trying to partition the user_write table because of the many
writes:

             user
            /    \
  user_write1    user_write2


What would you do to execute a consistent insertion into both user and
user_write1 in this situation?
(Both insertions have to succeed; having one insertion fail is not
acceptable.)

Does two-phase commit in MySQL 5 work in this kind of situation?
(a transaction spanning multiple servers)

What would you do when using MySQL 4?


Thanks,

Hiroyuki Yamada









Re: counting on a complex query

2007-08-01 Thread Perrin Harkins
On 8/1/07, Les Fletcher [EMAIL PROTECTED] wrote:
 SQL_CALC_FOUND_ROWS isn't an option for us.  Right now I am doing two
 queries, but I am just trying to see if there is a better way to do the
 count query than to turn it into a derived table and count the
 results, i.e. SELECT COUNT(*) FROM (big nasty query) t1.

I don't think so.  That's the way I've done it.  You may want to
experiment with changing the SELECT on the inside query to minimize
the amount of data that gets put into the temp table.  I don't know if
the optimizer is smart enough to skip fetching those values or not.
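
That trimming might look like this (a sketch; the inner select keeps
only the column the HAVING references, and the inner ORDER BY is
dropped since it cannot affect the count):

SELECT COUNT(*) FROM (
    SELECT t2.col2                 -- only what the HAVING needs
    FROM t1 JOIN t2 ON ( t2.id = t1.col1 )
    WHERE ( ... lots of OR's and AND's on t1 ... )
    GROUP BY t1.col1
    HAVING ( count(*) = t2.col2 )
) t3;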

- Perrin




Re: segmentation fault when using mysql++

2007-08-01 Thread wangxu
Actually I am using this piece of code, but I replaced fprintf with
printf just to simplify the problem.

The "content" field actually stores string content.
Does this mean something different from your explanation?
   while (r = res.fetch_row()) {
       FILE *fp1;
       char *fname = new char[50];
       fname[0] = '\0';
       strcat(fname, HTML_HOME.c_str());
       strcat(fname, id);
       strcat(fname, ".html");
       fp1 = fopen(fname, "w");
       fprintf(fp1, r["content"]);
       fclose(fp1);
       delete fname;
   }

Warren Young wrote:
I'm replying to you both personally and to the MySQL++ mailing list, 
where this message is on topic.  Please reply only on the list, not to 
me directly.


wangxu wrote:
Below is my code. The code works fine until the length of the
"content" field exceeds 30; then a segmentation fault happens.


How certain are you about this threshold?  It seems a very odd number 
for a computer to care about.  If you'd said 65536 bytes, or 16.7 MB, 
I'd put a lot more credence in your assertion.



   mysqlpp::Row r;
   while (r = res.fetch_row()) {
       printf(r["content"]);
   }


I'm not sure it's the problem, but you really shouldn't use printf() 
for this.  The main reason is that printf() will scan the resulting 
string for % signs and try to interpret them as formatting options.  
If it finds any, it will then try to find varargs, and fail; this 
would easily explain your segfault.  The byte count does affect the 
chances that this will happen, so maybe that's where your perception 
that it's data size related comes from.


Other reasons not to use printf() with MySQL++ data types are 
inefficiency and type safety.


See examples/cgi_jpeg.cpp for the right way to emit bulk MySQL++ to 
stdout.




