Re: some problem of InnoDB performance optimization

2010-03-22 Thread Peter Zaitsev
Hi,

Let's see if I can help.

Overly long queries (transactions
>  in general) are bad for performance as a lot of unpurged versions
> accumulate.
>
> In this sentence, I don't understand the meaning of 'unpurged versions
> accumulate'.
>

When rows are updated, new versions are created. They are later removed
by the purge thread - but only once no active transaction may still need
them. This is why long open transactions are expensive.
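A minimal sketch of the effect (table and column names are hypothetical):

```sql
-- Session 1 opens a transaction and leaves it running:
START TRANSACTION;
SELECT val FROM t WHERE id = 1;  -- establishes a consistent snapshot

-- Session 2 keeps updating; every UPDATE creates a new row version:
UPDATE t SET val = val + 1 WHERE id = 1;

-- Until session 1 commits or rolls back, the purge thread cannot remove
-- the old versions session 1 may still need, so undo history accumulates.
```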



>
>
> And I don't know how to understand 'open transaction'?
>

This is a transaction which has been started (opened) but not yet ended
by a commit or rollback.



>
>
> Required for logical level replication to work properly.
>
> What's the meaning about logical level replication?
>

MySQL has statement-level replication, also called logical replication,
as well as row-based replication. Statement-level replication requires
updates to be serializable to work.


>
>
> Can give problems for portable applications if you port from MySQL-4.0 to
> later
>
> What's the meaning about this sentence?
>
>
>
This means you can run into problems if you upgrade from MySQL 4.0 to a
later version. It is probably not very relevant any more.



-- 
Peter Zaitsev, CEO, Percona Inc.
Tel: +1 888 401 3401 ext 501   Skype:  peter_zaitsev
24/7 Emergency Line +1 888 401 3401 ext 911

Percona Training Workshops
http://www.percona.com/training/


Re: MySQL Load Balancing

2006-08-07 Thread Peter Zaitsev

Ed Pauley II wrote:

Continuent's m/cluster will not work for me as it does not allow 
replication across a WAN. We have an offsite backup that needs to be in 
the replication (2-way to make switching back and forth easy) chain. I 
am thinking of a master, slave setup at each location where the masters 
continue to replicate both ways and then each replicates to its own 
slaves. I would like to load balance these slaves on each end. I have 
not been able to find an appliance that will balance the reads for me. I 
have looked into possible solutions such as Sequoia, which I know would 
require a different setup. Is anyone actually using Sequoia? Does anyone 
use an appliance for load balancing MySQL? LVS seems like a good 
possibility but I would rather have an out-of-box solution since I will 
be repeating this setup at a remote location.




Sorry, I did not try Sequoia so I can't tell how well it works.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Text search + transactions?

2006-08-07 Thread Peter Zaitsev
On Mon, 2006-08-07 at 09:49 +0100, Lexington Luthor wrote:
> Peter Zaitsev wrote:
> > Hi,
> > 
> > One of the approaches is to have a "shadow" MyISAM table in addition to your
> > Innodb table to perform full text search, which you can update in bulk
> > or via triggers.
> 
> How can I ensure isolation for queries on this "shadow" table? The 
> documentation says that the table type does not support transactions.

Right. If you want full text search to follow transaction isolation as
well, you're in trouble. In most search applications, however, it is
not that critical.

For some cases, extra filtering (e.g. by a join with the Innodb table)
can help to ensure row versions match each other.

If even that would not work, you would need to implement your own
little search engine in SQL (i.e. creating a dictionary table + a word
list table) - with this you can make it follow transaction isolation,
but it will be very slow.
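A rough sketch of such a home-grown search schema (all names are
hypothetical; both tables Innodb so they follow the same isolation as
the data):

```sql
CREATE TABLE dictionary (
  word_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  word    VARCHAR(64) NOT NULL,
  UNIQUE KEY (word)
) ENGINE=InnoDB;

CREATE TABLE word_list (
  word_id INT NOT NULL,     -- reference into dictionary
  doc_id  INT NOT NULL,     -- row in the searched table
  PRIMARY KEY (word_id, doc_id)
) ENGINE=InnoDB;

-- Find documents containing the word 'mysql':
SELECT wl.doc_id
FROM dictionary d
JOIN word_list wl ON wl.word_id = d.word_id
WHERE d.word = 'mysql';
```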



> Sorry for being a bit dense here, but what do you mean exactly? Will 
> updates to the "shadow" table only be visible in their own transaction 
> until commit? Will they be rolled back on transaction abort?

No. MyISAM does not support transactions, so you would need to handle
shadow table updates in some way yourself.



> 
> > You can also try Sphinx: http://www.sphinxsearch.com/ which works with
> > any storage engine and is also much faster.
> 
>  From what I can tell from the Sphinx docs, it is not transactional 
> either. Not only that, it does not support SQL tables at all, it is 
> simply a wrapper for a search API using the mysql storage engine 
> interface. Can you please elaborate on what you mean?

Right. I assumed you wanted to use Innodb tables because you want
transactions, but did not really need search queries to follow the same
isolation mode.



-- 
Peter Zaitsev,  MySQL Performance Expert
MySQL Performance Blog - http://www.mysqlperformanceblog.com
MySQL Consulting http://www.mysqlperformanceblog.com/mysql-consulting/





Re: maximum files size for longblob - what is bigger?

2006-08-07 Thread Peter Zaitsev
On Mon, 2006-08-07 at 09:18 +0100, [EMAIL PROTECTED] wrote:
> what is the maximum file size for longblobs in KB? Is there anything bigger?

http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html

As you can see, in theory it is about 4GB.

It is however also limited by max_allowed_packet, which is 16M by default.

I would be very careful using blobs larger than 100MB. MySQL will need
something like 3 times this size of memory allocated on the server for
blob processing.
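For example, checking and raising the limit (the variable is
max_allowed_packet; the same value should also go into my.cnf and be
raised on the client side, since both ends enforce it):

```sql
SHOW VARIABLES LIKE 'max_allowed_packet';  -- current limit in bytes

-- Raise it for the running server (requires the SUPER privilege):
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;
```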



-- 
Peter Zaitsev,  MySQL Performance Expert
MySQL Performance Blog - http://www.mysqlperformanceblog.com
MySQL Consulting http://www.mysqlperformanceblog.com/mysql-consulting/





Re: Text search + transactions?

2006-08-07 Thread Peter Zaitsev
On Mon, 2006-08-07 at 08:49 +0100, Lexington Luthor wrote:
> Hi,
> 
> I need fast text searching on a transactional table. Is it possible to 
> use transactions and text-search on a table together yet in any 
> production stable version of mysql?

Hi,

One of the approaches is to have a "shadow" MyISAM table in addition to
your Innodb table to perform full text search, which you can update in
bulk or via triggers.

You can also try Sphinx: http://www.sphinxsearch.com/ which works with
any storage engine and is also much faster.
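A minimal sketch of the shadow-table approach with a trigger (names are
hypothetical; triggers require MySQL 5.0+, and note the MyISAM side does
not roll back with the transaction):

```sql
CREATE TABLE articles (
  id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  body TEXT
) ENGINE=InnoDB;

CREATE TABLE articles_search (
  id   INT NOT NULL PRIMARY KEY,
  body TEXT,
  FULLTEXT KEY (body)
) ENGINE=MyISAM;

-- Keep the shadow table in sync on insert (similar triggers would be
-- needed for UPDATE and DELETE):
CREATE TRIGGER articles_ai AFTER INSERT ON articles
FOR EACH ROW
  INSERT INTO articles_search (id, body) VALUES (NEW.id, NEW.body);

-- Full text searches then go against the shadow table:
SELECT id FROM articles_search
WHERE MATCH(body) AGAINST ('replication');
```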



-- 
Peter Zaitsev,  MySQL Performance Expert
MySQL Performance Blog - http://www.mysqlperformanceblog.com
MySQL Consulting http://www.mysqlperformanceblog.com/mysql-consulting/





Re: MySQL Load Balancing

2006-08-04 Thread Peter Zaitsev
On Fri, 2006-08-04 at 15:54 -0400, Ed Pauley II wrote:
> I am looking into a scale-out solution for MySQL. I have read white 
> papers and searched the web but I can't find a load balancer that claims 
> to work well for MySQL.  MySQL's white paper shows NetScaler in the 
> scale-out stack but nothing on Citrix.com mentions MySQL. I also read 
> that friendster wrote a custom script for NetScaler to work in a MySQL 
> environment. I would rather not have to do that. Is there an out-of-box 
> solution for load balancing MySQL. My understanding is that MySQL is a 
> little more complicated than HTTP load balancing, which we already do 
> with Coyote Point Equalizers. I have thought about LVS. Has anyone had 
> any experience with load balancing MySQL? Any recommendations? Thanks in 
> advance.

As some people have mentioned, there is the "Continuent" solution; this
is what was previously Emic Networks.

If you're building a solution on your own, such as a master and a number
of slaves, there are plenty of things to think about regarding load
balancing; for example, if replication breaks for any reason or falls
behind on one of the slaves, you might want to kick it out of rotation.

For a very basic setup you can even use DNS for load balancing, though
that does not solve many of the problems described.

The same kind of simple load balancing is built into the MySQL JDBC driver.

In general, everyone seems to implement something of their own that
works well for their application.




-- 
Peter Zaitsev,  MySQL Performance Expert
MySQL Performance Blog - http://www.mysqlperformanceblog.com
MySQL Consulting http://www.mysqlperformanceblog.com/mysql-consulting/





Re: MySQL 5.0 25% slower

2006-02-24 Thread Peter Zaitsev
On Mon, 2005-12-26 at 09:27 +0800, Chenzhou Cui wrote:

Hi,

Sorry for long delay with reply.

I guess it is similar to
http://bugs.mysql.com/bug.php?id=17229

The problem is basically confirmed and we're now working to find a
solution.



> Dear Peter,
> 
> Thanks very much for your concern. Answers to your questions are listed 
> below. Here, I am facing another serious problem: Should I interrupt the 
> Indexing work, which has been running for about 19 days? I don't know 
> how long it will take to finish the job. The table contains 
> 1,045,175,762 rows and there is 3GB memory in my server.
> 
> There are two important fields, `RAdeg` and `DEdeg`, in the table. The 
> source data is ordered by `DEdeg`. It cost me 22 hours 14 min 37.27 
> sec to add an index on `DEdeg`. The `RAdeg` values are random. I don't 
> know how many days it will take to create the index on that field.
> 
> Some information about my database and server are provided below.
> 
> Happy New Year,
> Chenzhou
> ===
> 
> 
> show processlist;
> +------+------+-----------+-------+---------+---------+-------------------+----------------------------------------+
> | Id   | User | Host      | db    | Command | Time    | State             | Info                                   |
> +------+------+-----------+-------+---------+---------+-------------------+----------------------------------------+
> | 1524 | cb   | localhost | USNOB | Query   | 1630664 | copy to tmp table | alter table `main` add index (`RAdeg`) |
> | 4486 | cb   | localhost | USNOB | Query   |       0 | NULL              | show processlist                       |
> +------+------+-----------+-------+---------+---------+-------------------+----------------------------------------+
> 2 rows in set (0.00 sec)
> 
> show table status from USNOB;
>            Name: main
>          Engine: MyISAM
>         Version: 10
>      Row_format: Fixed
>            Rows: 1045175762
>  Avg_row_length: 157
>     Data_length: 164092594634
> Max_data_length: 44191571343572991
>    Index_length: 6073899008
>       Data_free: 0
>  Auto_increment: NULL
>     Create_time: 2005-12-06 08:31:40
>     Update_time: 2005-12-07 06:41:01
>      Check_time: NULL
>       Collation: latin1_swedish_ci
>        Checksum: NULL
>  Create_options:
>         Comment:
> 
> #>free
>  total   used   free sharedbuffers cached
> Mem:   3116424    3110228   6196  0  412922528564
> -/+ buffers/cache: 5403722576052
> Swap:  1020088  20548 999540
> 
> 
> Peter Zaitsev wrote:
> 
> >Hi, 
> >
> >I'm not on the MySQL list so let me write to you directly.
> >
> >Are you using MyISAM Tables ? 
> >  
> >
> Yes. I am using the default format.
> 
> >How does SHOW CREATE TABLE looks like ? 
> >
> >  
> >
> CREATE TABLE `main` (
>   `USNO_B1_0` char(12) NOT NULL default '',
>   `Tycho_2` char(12) default NULL,
>   `RAdeg` double(10,6) default NULL,
>   `DEdeg` double(10,6) default NULL,
>   `e_RAdeg` smallint(3) default NULL,
>   `e_DEdeg` smallint(3) default NULL,
>   `Epoch` float(6,1) default NULL,
>   `pmRA` mediumint(6) default NULL,
>   `pmDE` mediumint(6) default NULL,
>   `muPr` tinyint(1) default NULL,
>   `e_pmRA` smallint(3) default NULL,
>   `e_pmDE` smallint(3) default NULL,
>   `fit_RA` tinyint(1) default NULL,
>   `fit_DE` tinyint(1) default NULL,
>   `Ndet` tinyint(1) default NULL,
>   `Flags` char(3) default NULL,
>   `B1mag` float(5,2) default NULL,
>   `B1C` tinyint(1) default NULL,
>   `B1S` tinyint(1) default NULL,
>   `B1f` smallint(3) default NULL,
>   `B1s_g` tinyint(2) default NULL,
>   `B1xi` float(6,2) default NULL,
>   `B1eta` float(6,2) default NULL,
>   `R1mag` float(5,2) default NULL,
>   `R1C` tinyint(1) def

Re: MySQL not using optimum disk throughput.

2005-05-31 Thread Peter Zaitsev
On Sat, 2005-05-07 at 08:18, Greg Whalin wrote:
> Hi Peter,
> 
> As for reporting bugs ...
> http://bugs.mysql.com/bug.php?id=7437
> http://bugs.mysql.com/bug.php?id=10437
> 
> We have found Opteron w/ Mysql to be an extremely buggy platform, 
> especially under linux 2.6, but granted, we are running Fedora.  Perhaps 
> we will try Suse, but I feel I have heard similar reports (from 
> Friendster) about their use of Suse 2.6 and Opterons being similarly slow.


Well, if I'm not mistaken, Friendster had been running into some bugs
with the Linux kernel, but it was not directly Opteron-related.



> 
> We are currently running MyIsam tables, but plan on switching to Innodb 
> in the next month or two btw, so our performance problems are w/ MyIsam.

Do you still have the problem?

I've seen you're using FC1, which is rather old. I have not heard of
much success with this version on Opteron.

Also, did you run mysql-test on your MySQL server? Does it pass at
all? If it does not, it is likely your build is broken or incompatible
with your system.

There are unfortunately two problems which affect both self-compiled
binaries and our binaries. Self-compiled binaries could be affected by
GLIBC bugs and compiler bugs, which we saw a lot of when the platform
first appeared. Our static RPM may however have another problem -
Opteron distributions are not 100% binary compatible for statically
linked binaries, and e.g. a binary compiled on SuSE SLES is known to
crash on RH AS in some cases.

We have great adoption of the Opteron platform among our customers, with
a great success rate, so I'm quite surprised by the extent of the
problems you're having.


-- 
Peter Zaitsev, Senior Performance Engineer
MySQL AB, www.mysql.com





Re: MySQL not using optimum disk throughput.

2005-05-06 Thread Peter Zaitsev
On Fri, 2005-05-06 at 19:01, Greg Whalin wrote:
> What drives are you using?  For SCSI RAID, you definitely want the deadline 
> scheduler.  That said, even after the switch to deadline, we saw our 
> Opteron's running way slow (compared to older slower Xeons).  Whatever 
> the problem is, we fought it for quite a while (though difficult to test 
> too much w/ production dbs) and ended up rolling back to 2.4.

One more thing to try, if you have a smart RAID controller, would be the
"noop" scheduler, to let the hardware do the job. The smart optimizations
the OS does to reduce head movement may not make sense for RAID. In
practice, however, I've seen close results.
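On 2.6 kernels the scheduler can be switched per device at runtime; a
sketch (the device name is an example):

```shell
# See which scheduler is active (the bracketed entry):
cat /sys/block/sda/queue/scheduler

# Switch to noop, letting the RAID controller do the reordering:
echo noop > /sys/block/sda/queue/scheduler
```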

Also, which storage engine are you using?

One of the things which changed in 2.6 for some hardware configurations
is fsync() performance. With 2.4 it was faked in some cases, so it was
nearly instant.

This for example explains in many cases why people moving from IDE
devices to much faster SCSI devices may observe a performance
degradation (IDE with 2.4 typically has a fake fsync).


In general we have very positive feedback on using Opterons with
MySQL at this point. Sometimes it takes time to make it work right -
this was especially the case when they were new - but then it flies.
Practically the same applies to EM64T - it is very good to now have two
inexpensive 64-bit platforms available.

We're getting some feedback about problems on some Fedora Core versions;
well, this is a "bleeding edge" distribution, so I'm not at all surprised.

SuSE, in both the SLES and Professional variants, seems to work very
well with Opterons, as does recent RH EL.

Speaking about MySQL problems - if you have any MySQL issues on
Opterons, please report them as bugs and we'll troubleshoot them.

> 
> Kevin Burton wrote:
> > Kevin Burton wrote:
> > 
> >> Greg Whalin wrote:
> >>
> >>>
> >>> Deadline was much faster.  Using sysbench:
> >>>
> >>> test:
> >>> sysbench --num-threads=16 --test=fileio --file-total-size=20G 
> >>> --file-test-mode=rndrw run
> >>
> >>
> >>
> > So... FYI.  I rebooted with elevator=deadline as a kernel param.
> > 
> > db2:~# cat /sys/block/sda/queue/scheduler
> > noop anticipatory [deadline] cfq
> > 
> > (which I assume means I'm now running deadline.  Is there any other way 
> > to find out?)
> > 
> > And no performance diff.  Note that your benchmarks only show a 20M 
> > additional overhead.  We're about 60x too slow for these drives so I'm not 
> > sure what could be going on here :-/
> > 
> > Kevin
> > 
-- 
Peter Zaitsev, Senior Performance Engineer
Come to hear my talk at MySQL UC 2005  http://www.mysqluc.com/
MySQL AB, www.mysql.com






Re: MySQL not using optimum disk throughput.

2005-05-06 Thread Peter Zaitsev
On Fri, 2005-05-06 at 22:16, John David Duncan wrote:
> > And no performance diff.  Note that you're benchmarks only show a 20M
> > addition overhead.  We're about 60x too slow for these drives so I'm 
> > not
> > sure what could be going on here :-/
> 
> 
> 
> I know of a site that encountered a similar performance issue:
> The OS was reading in a lot more data from the disk than the
> database really needed.
> 
> The culprit turned out to be the stripe size on a 4-disk RAID.
> By reducing the stripe size from 768K to 32K, they obtained a
> 200% increase in mysql throughput.

Hi,

This is actually an interesting point, as we typically recommend large
stripes with MySQL (RAID 10 is best).

This may sound like a contradiction, but it is not. You need a large
stripe size (256-1024K+) but a small RAID controller cache line
(16K for Innodb tables).

The thing is, by default many RAID controllers set the cache line size
equal to the stripe size, and some may not even allow changing it.

If that is the case, MySQL will have to read a lot of unnecessary data,
which will kill performance.


-- 
Peter Zaitsev, Senior Performance Engineer
Come to hear my talk at MySQL UC 2005  http://www.mysqluc.com/
MySQL AB, www.mysql.com






Re: CREATE performance degradation from 4.0.17 -> 4.0.20

2004-08-04 Thread Peter Zaitsev
On Mon, 2004-08-02 at 14:57, Tinley, Jeremy wrote:
> We're upgrading from 3.23.58 to 4.0.20 and found that although the
> ALTER test results of sql-bench had been greatly improved, CREATE has
> shown nasty performance degradation.  Just before needing to make the
> decision to revert back to 3.23.58, we found a post here where someone
> had a similar problem when using SAN storage.  We see the problem using
> hardware RAID, shared storage or local SCSI disks.
> 
> The machine in question is a 3ghz, 4GB RAM, reiserfs.  The data and
> application reside on local SCSI disks, 10k rpm. All installations are
> the MySQL provided linux-binary (x86), Standard releases.

Hi,

This is a known issue.
In MySQL 4.0.17, calling fsync() on .frm files was added during table
creation. This was done so CREATE TABLE is more durable when used with
transactional tables such as Innodb. It however affects all tables at
this point.

In most cases new tables are created rarely, so this is not a problem;
if it is in your case, the --skip-sync-frm option can be used to avoid
this behavior.

On the other hand, the B->C changes for some tests surprise me. Are the
results stable if you repeat the run? In some cases, especially for
short tests, deviation can be pretty large.
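If table creation speed matters more than durable CREATE TABLE, the
option can go into the config file; a sketch:

```ini
# my.cnf fragment (MySQL 4.0.17+): skip the fsync() of .frm files
# during CREATE TABLE
[mysqld]
skip-sync-frm
```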


> 
> Here is an excerpt of sql-bench results:
> 
> Test                            A     B     C     D     E
> ---------------------------------------------------------
> alter_table_add                60     2     6     8     8
> alter_table_drop               43     1     5     8     8
> create+drop                    12    11    11   240   223
> create_MANY_tables             10    11    10   220   228
> create_index                    1     1     1     1     1
> create_key+drop                14    15    15   231   221
> create_table                    0     0     0     0     0
> select_1_row                    0     8     8     8     9
> select_2_rows                   1     9     9     9     9
> select_column+column            1     9     9     9     9
> select_group_when_MANY_tables   5     9    11    10    10
> 
> 
> Column A is MySQL 3.23.58
> Column B is MySQL 4.0.15
> Column C is MySQL 4.0.16
> Column D is MySQL 4.0.17
> Column E is MySQL 4.0.20
> 
> 
> The biggest problem is the create set.  That's a HUGE difference in the
> exact same hardware.  Thoughts?
> 
> 
> -J
-- 
Peter Zaitsev, Senior Support Engineer
MySQL AB, www.mysql.com







New version of sql-bench is available from BitKeeper

2004-03-16 Thread Peter Zaitsev
Dear MySQL users,

  The MySQL benchmark team is proud to announce the availability of a new
version of the sql-bench suite. It is currently available only in a
BitKeeper tree named "mysql-bench".
  This is still a development release and we would really appreciate
your testing, bug reports, and any comments you might have. Please
mail these to [EMAIL PROTECTED]

For additional instructions on how to work with a MySQL BitKeeper tree,
please refer to the http://www.mysql.com/doc/en/Installing_source_tree.html
manual page.

After a couple of months of testing we plan to replace the older version
currently shipped with the MySQL distribution with this one.


The command you can use to clone the mysql-bench tree is:

bk clone bk://mysql.bkbits.net/mysql-bench mysql-bench

To compile the tree you'll need to run:

cd mysql-bench
aclocal
autoconf
automake
./configure
make

To quickly run all tests with default options you may use
./run-all-tests
script.


The changes in this new sql-bench version include:

  - ability to run with large database sizes
  - support for new MySQL 4.0 and 4.1 features, such as UNION,
Subqueries
  - AS3AP test
  - Separate Multi-User AS3AP test 
  - Many new test cases 




-- 
Peter Zaitsev, Senior Support Engineer
MySQL AB, www.mysql.com

Meet the MySQL Team at User Conference 2004! (April 14-16, Orlando,FL)
  http://www.mysql.com/uc2004/





Re: Benchmarking/optimization of MySQL

2004-03-02 Thread Peter Zaitsev
> select_column+column (10)  12  20
> Why is the older version that much faster in such a simple query?
> 
> Also note that when I installed MySQL 3.23.58 on machine TWO with exactly the same 
> options as on machine ONE, the results were almost identical - 
> meaning hardware has no noticeable impact whatsoever.
> 
> 
> Does anyone know where these (and other) differences come from?
> 
> 
> PS: I would be very pleased if I could see hardware description / my.cnf / 
> sql-bench results from you to see if I am on the right way and how much 
> headroom do I still have. (Currently my "run-all-tests" script finishes with 
> just above 1500 seconds on server TWO. Details I will post tomorrow as this 
> message is already way too long and it is 4o'clock here and I can already see 
> my bed in front of me although it is still 15 km away:).
> 
> 
> 
> Best regards,
> 
> Bostjan Skufca
> system administrator
> 
> Domenca d.o.o. 
> Phone: +386 4 5835444
> Fax: +386 4 5831999
> http://www.domenca.com
-- 
Peter Zaitsev, Senior Support Engineer
MySQL AB, www.mysql.com

Meet the MySQL Team at User Conference 2004! (April 14-16, Orlando,FL)
  http://www.mysql.com/uc2004/





Re: Table Name Case Sensitivity

2004-02-23 Thread Peter Zaitsev
On Mon, 2004-02-23 at 08:56, Tim Hayes wrote:
> OK
> 
> There is still the possibility of an in-compatability between the 2
> platforms.
> 
> However - in both Linux and Windows (MySQL 4.0.17) the variable is not
> recognized / updateable using the set command!
> 
> 
> I get - Unknown system variable 'lower_case_table_names'
> 

This is a startup option. Did you use it as such (in my.cnf)?



-- 
Peter Zaitsev, Senior Support Engineer
MySQL AB, www.mysql.com

Meet the MySQL Team at User Conference 2004! (April 14-16, Orlando,FL)
  http://www.mysql.com/uc2004/





Re: Indexing Woes

2004-02-11 Thread Peter Zaitsev
On Wed, 2004-02-11 at 09:29, Chris Fossenier wrote:
> Hello,
>  
> I had a question about indexing a while back and everyone screamed
> "normalize"!!
>  
> Well...I've normalized as much as I'm going to, and at most I have 3 indexes on
> any one table. My database has 120 million records in it and the index
> creation is taking a ridiculous amount of time. I can create the same
> indexes on MS SQL or Oracle in a fraction (a small fraction) of the time.
>  
> Any tips? If I look at the PROCESSLIST, I can see that MySQL is using Key
> Cache instead of File Sort. I've read that File Sort is faster but have no
> idea how to force MySQL to use this method.
>  
> When MySQL indexes, does it actually create a copy of the table first (same
> size as original .MYD) and then prune it back to a smaller size for the
> .MYI? The reason I ask is because one table that I'm indexing has been
> running for a long time and the .MYI is only 3GB and the .MYD is 12GBnot
> a good sign.
>  

Check the descriptions and values of myisam_max_sort_file_size,
myisam_max_extra_sort_file_size and myisam_sort_buffer_size.

You should be able to make the repair happen by sorting (which is much
faster), unless it is a unique index.
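A hedged sketch of bumping these before a big index build (example
values only; exact sizes depend on RAM and disk space, and on older
servers these may need to be set at startup rather than via SET):

```sql
SET GLOBAL  myisam_max_sort_file_size = 100 * 1024 * 1024 * 1024;
SET SESSION myisam_sort_buffer_size   = 256 * 1024 * 1024;

ALTER TABLE big_table ADD INDEX (some_col);

-- While it runs, SHOW PROCESSLIST should report "Repair by sorting"
-- rather than "Repair with keycache".
```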


-- 
Peter Zaitsev, Senior Support Engineer
MySQL AB, www.mysql.com

Meet the MySQL Team at User Conference 2004! (April 14-16, Orlando,FL)
  http://www.mysql.com/uc2004/





Re: Maximum tables in a join (4.0.x)

2004-02-10 Thread Peter Zaitsev
On Tue, 2004-02-10 at 07:47, Dan Nelson wrote:
> In the last episode (Feb 10), Andrew Loughe said:
> > What is the maximum number of tables allowed in a join for MySQL
> > 4.0.x?
> 
> It's not in the docs as far as I can see, but the feature comparison
> page says 31: http://www.mysql.com/information/crash-me.php?res_id=49 ,
> search for "tables in join".

One can get 63 by recompiling MySQL, if that helps.



-- 
Peter Zaitsev, Senior Support Engineer
MySQL AB, www.mysql.com

Meet the MySQL Team at User Conference 2004! (April 14-16, Orlando,FL)
  http://www.mysql.com/uc2004/





Re: SQL2000 and MySql

2004-02-10 Thread Peter Zaitsev
On Tue, 2004-02-10 at 08:38, Martijn Tonies wrote:

> > * Assuming that my points below regarding performance are correct (I'm
> > sure that Heikki will stand by InnoDB and back up anyone preaching it's
> > performance benefits), the lower hardware costs are an important factor
> > (as in lower for a given performance target).
> 
> Note: when using InnoDB in 24x7 environments, you need to purchase an
> additional hot-backup tool to do your backups. Not expensive at all though.

Martin,

This is not exactly the case. There are several ways to get a hot and
consistent Innodb backup. The commercial InnoDB Hot Backup tool by
Innobase Oy is the easiest to use and the fastest.

Alternatively, you can use LVM (or a similar tool) to get a consistent
read-only snapshot and use it as a backup, or use
"mysqldump --single-transaction" to get a consistent text backup.
As Innodb has versioning, these selects will not lock anything.
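The mysqldump variant, as a sketch (useful for Innodb tables; MyISAM
tables in such a dump would not be consistent):

```shell
# One consistent snapshot of all databases, without locking writers;
# relies on Innodb versioning to keep the snapshot stable.
mysqldump --single-transaction --all-databases > backup.sql
```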



-- 
Peter Zaitsev, Senior Support Engineer
MySQL AB, www.mysql.com

Meet the MySQL Team at User Conference 2004! (April 14-16, Orlando,FL)
  http://www.mysql.com/uc2004/





Re: MySQL benchmarks

2004-01-13 Thread Peter Zaitsev
On Tue, 2004-01-13 at 01:58, Prasad Budim Ram wrote:
> Hi All,
> 
> Is there any AS3AP benchmark suite readily available for MySQL?

Ram,

Yes, but you're asking it in the wrong place :)

It is still not published to the public (we plan to open our benchmark
BitKeeper tree to the public later this month).

Ranger, could you please send Ram our current version?

You can also try OSDB - an AS3AP implementation in C.

P.S. For benchmark issues it is better to use the [EMAIL PROTECTED]
list, which is dedicated to this purpose.


-- 
Peter Zaitsev, Full-Time Developer
MySQL AB, www.mysql.com

Want to swim with the dolphins? (April 14-16, 2004)
  http://www.mysql.com/uc2004/






Re: optimizer bug in the index used by mysql/Innodb in the search

2003-03-19 Thread Peter Zaitsev
On Tue, 2003-03-18 at 21:38, [EMAIL PROTECTED] wrote:
>  Description:
> 
>  Hello Peter,
> 
>  Have you explained to Heikki this problem?
>  Have you fixed it?
> 
>  Please, tell me about it.
> 

Dear Rafarife,

This problem is qualified as a wrong optimizer plan selection.
Happily, you can select the proper plan by using the FORCE INDEX() clause.

We will improve the optimizer to handle this case, but I can't promise
how soon this will happen - it is a very tricky task, as you need to
make sure your changes do not worsen behavior in other cases.
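For illustration, a hedged example of the workaround (table and index
names are hypothetical):

```sql
-- Force the plan the optimizer should have chosen:
SELECT *
FROM orders FORCE INDEX (idx_customer)
WHERE customer_id = 42
  AND created > '2003-01-01';
```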

Also, I would recommend posting your message to the single most
appropriate mailing list instead of both. Many users are subscribed to
both mailing lists, and they will get your message twice.


-- 
 MySQL 2003 Users Conf. -> http://www.mysql.com/events/uc2003/
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /   Peter Zaitsev <[EMAIL PROTECTED]>
  / /|_/ / // /\ \/ /_/ / /__   MySQL AB, Full-Time Developer
 /_/  /_/\_, /___/\___\_\___/   Moscow, Russia
<___/   www.mysql.com   


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: MySQL 4.0.5(a) is released

2002-11-29 Thread Peter Zaitsev
On Friday 29 November 2002 00:32, Stefan Hinz, iConnect \(Berlin\) wrote:
> Dear Lenz,
> 
> > Removed variable `safe_show_database' as it was not used anymore.
> 
> What will ISPs say about this one? They use 'safe_show_database' for their
> MySQL setups, so their customers on virtual MySQL hosts cannot see other
> customers' databases. (It's more likely that you won't attack something
> which you cannot see.)
> 
> Or am I missing out on something?

It is not that bad :)

Now the safe_show_database behavior is essentially the default, and if
you need a user who can see all databases you can grant him the SHOW
DATABASES privilege.

So ISPs should only be happy with this change :)


-- 
    __  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /Mr. Peter Zaitsev <[EMAIL PROTECTED]>
  / /|_/ / // /\ \/ /_/ / /__   MySQL AB, Full-Time Developer
 /_/  /_/\_, /___/\___\_\___/   Moscow, Russia
<___/   www.mysql.com   






Re: BUG report (select distinct...) 4.0.2 and latest bk snapshot

2002-07-23 Thread Peter Zaitsev

On Tuesday 23 July 2002 19:39, Sergey S. Kostyliov wrote:
> >Description:
>
>   ERROR 2013: Lost connection to MySQL server during query.
>
> >How-To-Repeat:
>
>   select distinct s.supplier_id, s.who_pay, r.name as rname, s.name,
> s.nick, s.address, s.contact_person, s.email, s.fax,
> s.comment
> from supplier s, product_supplier ps, shop_product_supplier sps
> use index(product_supplier_id)
> left join route r on r.route_id=s.route_id
> where ps.product_supplier_id=sps.product_supplier_id and
> sps.shop_id=10 and ps.supplier_id=s.supplier_id
> order by name
> limit 0, 10
>
>
> Result:
> ERROR 2013: Lost connection to MySQL server during query
>   Note:
>   The same results with oficial mysql-4.0.2 and latest bk snapshot,
>   mysql was compiled with both gcc-3.1 and gcc-295.3
>
>

Unfortunately we can't test this bug report, as we do not have the tables to
run the query against.

Please check your tables first, to rule out a corrupted table as the source of
the problem; if the problem persists, upload them to the secret directory at
ftp://support.mysql.com

If you can create a small repeatable test case, you may send it by mail.


-- 
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /Mr. Peter Zaitsev <[EMAIL PROTECTED]>
  / /|_/ / // /\ \/ /_/ / /__   MySQL AB, Full-Time Developer
 /_/  /_/\_, /___/\___\_\___/   Moscow, Russia
<___/   www.mysql.com   M: +7 095 725 4955






Re: declining insertion speed of large amounts of data

2002-06-06 Thread Peter Zaitsev
JK> Or, does creating tables capable of growing past 4GB carry an inherent
JK> penalty? Or are the non-word-size integers slowing things down with the
JK> overhead of bit packing?

The main issue for your application is disk I/O, so these details do not
matter much.

One possible optimization for your application may be a series of tables, each
small enough for its index to fit in memory, with a MERGE table across them.
This will speed up inserts a lot but slow down selects a bit. You will also
have to handle auto_increment and uniqueness manually.
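
A sketch of that layout, assuming the MyISAM/MERGE syntax of the 3.23/4.0 era
(table and column names are made up):

```sql
-- Several small underlying tables, each with an index that fits in memory:
CREATE TABLE log_a (id INT NOT NULL, msg CHAR(200), KEY(id)) TYPE=MYISAM;
CREATE TABLE log_b (id INT NOT NULL, msg CHAR(200), KEY(id)) TYPE=MYISAM;

-- One MERGE table spanning them for reads; INSERT_METHOD=LAST (4.0+)
-- sends inserts to the last underlying table:
CREATE TABLE log_all (id INT NOT NULL, msg CHAR(200), KEY(id))
       TYPE=MERGE UNION=(log_a,log_b) INSERT_METHOD=LAST;
```

Nothing enforces auto_increment or uniqueness across the underlying tables,
so the application has to handle that itself.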

JK> Not having much experience with the MySQL product, it's hard for me to
JK> finger the likely suspect. As another idea, would it be advisable to
JK> insert all the data into an un-indexed table (except for the primary key)
JK> and then perform one large indexing operation after all the data is loaded?

For a one-time bulk load, LOAD DATA is the best solution. It behaves in
almost the same way - postponing index creation.
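
For example (file path and table name are hypothetical):

```sql
-- Bulk-load a tab-separated dump; index maintenance is batched,
-- which is far faster than inserting row by row:
LOAD DATA INFILE '/tmp/bulk.txt' INTO TABLE big_table;
```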

JK> I suspect you have some customers that have already encountered issues in
JK> managing large table sizes. What solutions have worked in the past?

The solutions I presented are the most commonly used. Customers also often
spread large volumes of data across a server farm.

I hope these solutions help you. I'm sure you're going to benchmark them;
could you please post the results to the list, so all customers can see what
benefit they may get?





-- 
For technical support contracts, visit https://order.mysql.com/?ref=mpza
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Mr. Peter Zaitsev <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__   MySQL AB, Full-Time Developer
/_/  /_/\_, /___/\___\_\___/   Moscow, Russia
   <___/   www.mysql.com   M: +7 095 725 4955






Re: Very large BLOBS in MySQL 4.0.1

2002-03-30 Thread Peter Zaitsev

Hello Adriano,

Friday, March 29, 2002, 11:38:32 PM, you wrote:

As I remember, there is a max_allowed_packet option for both the client and
the server. You may wish to try raising both of them.
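
In the my.cnf of that era, both sides could be raised roughly like this (the
values and groups are only examples):

```ini
[mysqld]
set-variable = max_allowed_packet=64M

[client]
set-variable = max_allowed_packet=64M
```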


AM> I've been using a perl script to insert a very large blob (up to about 8 
AM> MB) into a DB successfully, but now I need to do it with a 34 MB BLOB, 
AM> so I upgraded to MySQL 4.0.1. First off, I can't get 
AM> max_allowable_packet to go higher than 64M, regardless of what I set it 
AM> to, but that shouldn't be the problem, right? But I still keep getting

AM> DBD::mysql::st execute failed: MySQL server has gone away

AM> when it tries to do the insert or update. It continues to work fine for 
AM> smaller files. The field is a longblob, BTW. Any help would be greatly 
AM> appreciated. And please don't tell me not to store large blobs in the 
AM> database. I wouldn't be doing it if I didn't have to.




-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






Re[2]: DRDB vs. MySQL replication

2002-03-30 Thread Peter Zaitsev

Hello Jeremy,

Friday, March 29, 2002, 3:10:35 AM, you wrote:



>> Also a good thing with DRBD you will not have to store and track
>> replication log files and the setup will not require to take MySQL
>> down to copy the snapshot of database.

JZ> Why would you need to take down MySQL for a backup (assuming you use
JZ> MyISAM tables at least)?

To establish replication you need a snapshot of all the tables at a specific
point in time. You can take it without shutting MySQL down:

lock all tables read;
flush all tables;
flush the logs and copy the old one to a separate place;
copy all the data for replication;
unlock tables;

As you can see, this blocks any updates to MySQL for the time it takes to
copy the data (which may be rather long).

You need the same procedure if you want a really consistent snapshot. If you
just copy the tables one by one (even with mysqlhotcopy), the backed-up data
may not be logically consistent.
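
Spelled out as SQL, the sequence is roughly the following (run it in one
client session and keep that session open, so the read lock is held while the
files are copied; statement availability assumed as of 3.23):

```sql
FLUSH TABLES WITH READ LOCK;  -- or LOCK TABLES t1 READ, t2 READ, ...
FLUSH LOGS;                   -- rotate the binary log at a known point
-- ... copy the data files from another shell while the lock is held ...
UNLOCK TABLES;
```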




-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






Re: Backups

2002-03-29 Thread Peter Zaitsev

Hello Tab,

Friday, March 29, 2002, 10:05:20 PM, you wrote:

Which table type do you use? If you do not know, it's probably MyISAM. In
that case you can use BACKUP TABLE to make a quick consistent backup of a
table. If you want a consistent snapshot of several tables, you can use
LOCK TABLES together with BACKUP TABLE.

Also, mysqldump --tab works for any table type and is rather fast.
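
A minimal sketch of the first approach (paths and table names are made up;
BACKUP TABLE is the MyISAM-only statement of that era):

```sql
LOCK TABLES orders READ, customers READ;
BACKUP TABLE orders, customers TO '/var/backup/mysql';
UNLOCK TABLES;
```

The mysqldump alternative would be something like
`mysqldump --tab=/var/backup/mysql mydb` from the shell.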

TA> What do you folks out there do to backup your MySQL databases?  I'm new
TA> to it and am wondering if there's a good utility for doing it, or some
TA> simple procedure that is already known to be effective and efficient.
TA>  
TA> Thanks,
TA> Tab





-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






Re: CPU - hog / hangup with replication

2002-03-29 Thread Peter Zaitsev

Hello Michael,

Friday, March 29, 2002, 10:22:27 PM, you wrote:

MZ> Hello all,

MZ> sorry, if I nerve you with a problem, which is allready known -
MZ> I didn't find appropr. info in the FAQs (or was too stupid to look
MZ> at the right place?)

MZ> Trying database replication with cross-wise update (A is master for B
MZ> and B is master for A) situations occur after some sucessfule updates, 
MZ> where the system seems to 'hang' and one mysql process uses 99,9% CPU.

MZ> Is my mysql or OS too outdated,
MZ> or what would you do to debug the situation?

Use the "top" program to find out which thread is hogging the CPU. Then
attach gdb to it with "gdb /path/to/mysqld <pid>" and run the "bt" command,
then detach.
Repeat the above a couple of times (as the "next" command often hangs) and
send several bt outputs here. This may help to find out at least where it is
looping.
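
As a session sketch (binary path and pid are placeholders):

```
$ top                          # find the pid of the CPU-hogging thread
$ gdb /usr/sbin/mysqld 12345   # attach to that thread
(gdb) bt
(gdb) detach
(gdb) quit
```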




-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






Re[4]: RH 7.2 connections problems w 16 web servers to 2 MySQL servers

2002-03-28 Thread Peter Zaitsev

Hello Sam,

Thursday, March 28, 2002, 10:21:39 AM, you wrote:

SI>  >   The good solution for this would be to put OOPS behind of apache.

SI> What is OOPS ?

OOPS is a squid-like proxy application, mostly used as a web accelerator
(server-side proxy). The main idea of using it with PHP/MySQL is to decrease
the number of clients Apache has (how many do you have now?); this also
decreases the number of MySQL connections needed, and so the number of MySQL
threads. It works because OOPS can fetch the result from the server quite
fast, buffer it, and then slowly push it out to the dialup client, while the
same Apache process goes on to serve other requests.

SI> Is it like SQL Relay ? Others have said that we should be using a 
SI> connection pooler & that it's a PHP/Apache config problem that we're 
SI> running into. We may also move to BSD from Redhat.

I think you're mostly running into a performance problem. You should at least
look at vmstat output when the problem occurs.

SI>  > P.S Are you using mysql binary or version compiled with patched GLIBC
SI>  > if not the threads limit should be the cause.

SI> We had to use the patched glibc. We have over 1,000 connections 
SI> supported now.

Well, that's what I was afraid of. Unfortunately Linux has trouble scheduling
large numbers of threads, which is why keeping the number of threads as small
as possible is good. You can look at the results I posted to the list a week
or so ago, or do your own runs with the fork_XXX.pl scripts that come with
the MySQL distribution.

The solutions I have for this problem so far:

- Decrease the number of running threads: use OOPS as advised above, lower
timeouts, limit the number of mysql_pconnect connections in the PHP config,
and so on. A common mistake here is using PHP objects which connect to MySQL
- remember they will open extra connections to MySQL when not
passed/returned by reference.
- Speed up thread handling. You can use Ingo's scheduler, available for
recent kernels, or the NGPT thread package. However, these packages are not
yet well tested with MySQL.

SI> - Sam.

SI> On Monday, March 11, 2002, at 06:06 AM, Peter Zaitsev wrote:

>> Hello Michael,
>>
>> Monday, March 11, 2002, 3:38:28 PM, you wrote:
>>
>>
>> I had a close problem once -  then having many active connections (and
>> so threads) mysql speed may degrade a lot because of scheduling and
>> convergency problem. This does not explain the mysql lock itself, but
>> may be the reason (i.e too many threads may make mysql to lock or
>> crush because of GLIBC limits)
>>   The good solution for this would be to put OOPS behind of apache.
>> This gives two positive issues:
>>   - your apache server will have much less children and so will require
>>   much less memory and will basically work faster. In my case the
>>   number of children have dropped from 150 to 16 and required memory
>>   from about 1G to 200M (I'm running very complicated PHP scripts)
>>   - you will need much less number of connections for mysql.  In my
>>   case the number have dropped from about 500 connections from web
>>   server to 50, and load average on mysql server fell from 3.0-4.0 to
>>   0.5-1.0.
>>
>>
>> P.S Are you using mysql binary or version compiled with patched GLIBC
>> if not the threads limit should be the cause.
>>
>>
>> MW> Hi!
>>
>> Sam>> We have a very high volume site (3 million page views a day) 
>> that's run
>> Sam>> on 16 Apache / PHP web servers & 2 MySQL servers. We are using 
>> PHP with
>> Sam>> persistent connections. Each MySQL serves 8 web servers & is 
>> supposed to
>> Sam>> act as a failover machine for the other group of 8 web servers.
>> Sam>>
>> Sam>> The failover won't work now as if one MySQL goes down the cost of 
>> the 8
>> Sam>> web servers switching over is so high the other MySQL locks up.
>> Sam>>
>> Sam>> Each Apache / PHP server takes up hundreds of connections even 
>> when
>> Sam>> they're idle so we ran into the Linux connection limit of 1000 & 
>> had to
>> Sam>> recompile to get past that.
>> Sam>>
>> Sam>> Our actual MySQL CPU useage is low but the goes when with the 
>> connection
>> Sam>> overhead when starting up or failing over a bank of machines.
>> Sam>>
>> Sam>> We get a mysterious MySQL lockup once a week at the same day & 
>> time.
>>
>> MW> Could you please describe the lookup, exactly what happens ?
>>
>> MW> What does 'mysqladmin var ext' show w

Re: DRDB vs. MySQL replication

2002-03-27 Thread Peter Zaitsev

Hello Jens,

Wednesday, March 27, 2002, 2:43:22 PM, you wrote:

We have tried DRBD with MySQL/EXT3/NFS and some other applications.
Generally it works rather well, showing quite nice performance.

The only problem we had was EXT3 corruption, which occurred in earlier
versions (2-3 months old). So you should test carefully how it works in your
environment, including the failover/recovery process under notable load.

One problem specific to MySQL is that you will need to check the tables
before using them if you're using MyISAM; using InnoDB or BDB will let MySQL
recover quite fast.

Also note that with MySQL replication you may execute SELECT queries on the
slave, which is a really nice option for statistical scripts that run long
selects, or for database backups. With DRBD you will miss that.

On the other hand, in this case you will not have to care about the features
which have trouble with replication - like LOAD DATA INFILE, variables, etc.
Another good thing with DRBD is that you will not have to store and track
replication log files, and the setup will not require taking MySQL down to
copy a snapshot of the database.

JV> has anyone used DRDB (http://www.linbit.com/) instead of the standard MySQL
JV> replication?
JV> DRDB implements a virtual disk mirrored across a local disk and a disk in a
JV> remote stand-by server.
JV> On first thought, it adds some complexity to setup, but it can also
JV> replicate information not written to MySQL but e.g. directly to the
JV> filesystem.

JV> Any thoughts on this?


-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






MYSQL & NGPT

2002-03-23 Thread Peter Zaitsev

Hello mysql,

 I've found a benchmark of MySQL on Linux with NGPT - the Solaris-like
  pthread-compatible threads for Linux, of which stable version 1.2.0
  was just released.

  The user tested the mysql server from the binary distribution vs. one
  self-compiled with NGPT.  The 2.4.19-pre3 kernel was used.

  fork_big.pl was used with --loop-cont=5000

  MySQL-LinuxThreads
  1 - 0m9.8s
  10 - 1m24s
  20 - 4m21s
  50 -14m47s


  MySQL-NGPT
  1 - 0m9.5s
  10 - 0m51s
  20 - 1m44s
  50 - 4m29s
  100- 9m50s


  So NGPT shows almost linear scalability, winning several times over
  MySQL's optimized LinuxThreads in some cases.

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






MYSQL,Linux & large threads number

2002-03-15 Thread Peter Zaitsev

Hello mysql,

  Some time ago I wrote about the slowdown of MySQL with a large number
  of threads, which is quite common in the Linux-Apache-MySQL-PHP
  environment.

  The test simulates the worst case of concurrency - all threads modify
  a global variable in a loop, 500 times in total, using a standard
  mutex for synchronization. A yield is used in the loop to force an
  even fairer distribution of lock usage among threads and to increase
  context switches; however, it did not change much with a large number
  of threads, e.g. with 64 threads the time without yield is 1:33.5.

  The test was run on a PIII-500 with 1G RAM, kernel 2.4.18. Three runs
  were made for each number of threads and the best results were taken:

 Num threads.   Time  Peak cs rate.
2   53.4  179518
4   53.8  144828
16  1:06.3 85172
64  1:48.1 48394
256 8:10.6 10235
1000   36:46.2  2602


The surprising thing is that the time grows in a less-than-linear way for up
to 64 threads, but beyond that it starts to grow linearly or even worse.
Maybe this is because other processes sleeping in the system also figure
into scheduling.

For the next test I'll try Ingo's scheduler to see if it helps solve the
problem; I'll also test a real MySQL server to see what slowdown it shows.




CODE: (Dumb one just for test)

  
#include <pthread.h>
#include <stdio.h>
#include <sched.h>
#define NUM_TH 1000

#define TOTAL_VALUE 500

#define LOOP (TOTAL_VALUE/NUM_TH)

pthread_t th[NUM_TH];
int thread_data[NUM_TH];

int rc,rc2;

int global=0;

pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t start = PTHREAD_MUTEX_INITIALIZER;

/* The archive truncated the message from this point; the loop body and
   main() are reconstructed from the description above: each thread
   increments the global counter LOOP times under the mutex, yielding
   on every iteration. */
void *f(void *thn)
 {
 int i;
 pthread_mutex_lock(&start);   /* block until main() releases all threads */
 pthread_mutex_unlock(&start);
 for (i=0;i<LOOP;i++)
  {
  pthread_mutex_lock(&mut);
  global++;
  pthread_mutex_unlock(&mut);
  sched_yield();
  }
 return NULL;
 }

int main()
 {
 int i;
 pthread_mutex_lock(&start);   /* hold the gate while threads are created */
 for (i=0;i<NUM_TH;i++)
  pthread_create(&th[i],NULL,f,&thread_data[i]);
 pthread_mutex_unlock(&start); /* start all threads at once */
 for (i=0;i<NUM_TH;i++)
  pthread_join(th[i],NULL);
 printf("global=%d\n",global);
 return 0;
 }

-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






Re[2]: RH 7.2 connections problems w 16 web servers to 2 MySQL servers

2002-03-11 Thread Peter Zaitsev

Hello Michael,

Monday, March 11, 2002, 3:38:28 PM, you wrote:


I had a similar problem once - with many active connections (and so threads)
MySQL speed may degrade a lot because of scheduling and convergence
problems. This does not explain the MySQL lock itself, but may be the reason
(i.e. too many threads may make MySQL lock up or crash because of GLIBC
limits).
  A good solution for this would be to put OOPS in front of Apache.
This gives two benefits:
  - your Apache server will have far fewer children and so will require
  much less memory and will basically work faster. In my case the
  number of children dropped from 150 to 16 and the required memory
  from about 1G to 200M (I'm running very complicated PHP scripts);
  - you will need far fewer MySQL connections. In my case the number
  dropped from about 500 connections from the web server to 50, and the
  load average on the MySQL server fell from 3.0-4.0 to 0.5-1.0.


P.S. Are you using the MySQL binary, or a version compiled with patched
GLIBC? If not, the thread limit may be the cause.


MW> Hi!

Sam>> We have a very high volume site (3 million page views a day) that's run 
Sam>> on 16 Apache / PHP web servers & 2 MySQL servers. We are using PHP with 
Sam>> persistent connections. Each MySQL serves 8 web servers & is supposed to 
Sam>> act as a failover machine for the other group of 8 web servers.
Sam>> 
Sam>> The failover won't work now as if one MySQL goes down the cost of the 8 
Sam>> web servers switching over is so high the other MySQL locks up.
Sam>> 
Sam>> Each Apache / PHP server takes up hundreds of connections even when 
Sam>> they're idle so we ran into the Linux connection limit of 1000 & had to 
Sam>> recompile to get past that.
Sam>> 
Sam>> Our actual MySQL CPU useage is low but the goes when with the connection 
Sam>> overhead when starting up or failing over a bank of machines.
Sam>> 
Sam>> We get a mysterious MySQL lockup once a week at the same day & time.

MW> Could you please describe the lookup, exactly what happens ?

MW> What does 'mysqladmin var ext' show when this happens?
MW> What do you have in your log files ?

Sam>> Questions :
Sam>> 
Sam>> - Is our configuration of 2 sets of 8 Apache/PHP web servers & 1 MySQL 
Sam>> servers just not a good idea ?

MW> This should not be a problem.

Sam>> - Would we better off with FreeBSD ?

MW> If you are running a CMP machine, then Linux preforms normally better
MW> than FreeBSD.

MW> To be able to give some recommendations we need to know more about
MW> this setup.

Sam>> - Is there anyone doing any similar setups with lots of web servers & a 
Sam>> few MySQLs ?

MW> We have several hundred of paying support customers with this setup.

Sam>> - Is there any way to get Apache / PHP to use fewer connections ?

MW> Stevens Rousseys exellent answer should help you with this

Sam>> We pay for MySQL support but haven't had much help from them.

MW> I checked our support email archive, but couldn't find anything from
MW> you or your company in it.

MW> Could you please use our supportwizard interface to make a ticket of
MW> this problem so that we can help you with it?
MW> If you have already a ticket, please email me the ticket number so
MW> that we can check this up.

MW> Regards,
MW> Monty




-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






Re[2]: Innodb/Replication problems.

2002-01-06 Thread Peter Zaitsev

Hello Heikki,

Saturday, January 05, 2002, 11:55:02 PM, you wrote:



>>Hello mysql,
>>
>>  I'm using mysql 3.23.47 with Innodb and trying to play around with
>>  replication. The replication options are default.
>>
>>  I got one table content occasionally deleted on slave, and note what


HT> Who deleted the table contents? You?

Yes. I did it.

>>  this did not stop the replication: The inserts are successfully
>>  passed, therefore the updates to nonexistent rows simply ignored.


HT> But MySQL passes complete SQL statements to the slave. It is no error if an
HT> update does not update any rows. I do not believe that MySQL checks that the
HT> master updated as many rows as the slave.

Maybe this is the problem: for an INSERT, MySQL checks whether it was able
to insert the data (no duplicates or whatever), but this check does not
apply to UPDATEs.

>>  I remember then I tried this half a year ago with myisam, the
>>  behavior was to print an error and stop.
>>

Well, probably I was a bit mistaken - the error is checked for INSERT,
because a duplicate key error is an error, whereas an UPDATE which does not
update anything is quite OK.
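
The asymmetry is easy to see on a slave (table and column names here are
hypothetical, modeled on the quickstat example in this thread):

```sql
-- A duplicate-key INSERT is an error, and stops a 3.23 slave:
INSERT INTO quickstat (counter_id) VALUES (1);
-- An UPDATE matching no rows is still a success
-- ("Query OK, 0 rows affected"), so the slave silently drifts out of sync:
UPDATE quickstat SET loads = loads + 1 WHERE counter_id = 999999;
```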





-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






Innodb/Replication problems.

2002-01-05 Thread Peter Zaitsev

Hello mysql,

  I'm using mysql 3.23.47 with InnoDB and trying to play around with
  replication. The replication options are at their defaults.

  I got one table's contents deleted on the slave, and note that this
  did not stop replication: the inserts were successfully applied,
  while the updates to nonexistent rows were simply ignored.

  I remember that when I tried this half a year ago with MyISAM, the
  behavior was to print an error and stop.

SLAVE:

mysql> select count(*) from counter.quickstat;
+--+
| count(*) |
+--+
|  661 |
+--+
1 row in set (0.04 sec)

MASTER:

mysql> select count(*) from counter.quickstat;
+--+
| count(*) |
+--+
|   294787 |
+--+
1 row in set (2.65 sec)



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re[2]: Mysql reconnect hang

2002-01-03 Thread Peter Zaitsev

Hello Sinisa,

Wednesday, January 02, 2002, 6:21:59 PM, you wrote:


No. This happens when mysql is completely restarted.
I understand this should not happen, but somehow it does.


>> I've found the following problem often happens to me:
>> Then mysqld restarted unclear (sigsegv or server reset) the mysql
>> client library may hang during the automatic reconnect. The backtrace
>> is similar to this and does not changes other time (at least 2 hours
>> is not enough for it)
>> 
>> Does anyone have any ideas about this ? It's quite annoying to check
>> and restart all applications connected after single mysql restart.
>> 
>> 

SM> [skip]

>> -- 
>> Best regards,
>>  Peter  mailto:[EMAIL PROTECTED]

SM> Hi!

SM> The above happens when there is one thread left hanging when mysqld
SM> crashes for some reason. 

SM> If you kill that thread a client will exit.




-- 
Best regards,
 Peter                          mailto:[EMAIL PROTECTED]






Mysql reconnect hang

2002-01-01 Thread Peter Zaitsev

Hello mysql,



I've found the following problem often happens to me:
when mysqld restarts uncleanly (sigsegv or server reset), the mysql client
library may hang during the automatic reconnect. The backtrace is similar to
the one below and does not change over time (at least 2 hours is not enough
for it).

Does anyone have any ideas about this? It's quite annoying to check and
restart all connected applications after a single mysql restart.




(gdb) bt
#0  0xe17d224 in __libc_read () from /lib/libc.so.6
#1  0xe04733c in __DTOR_END__ () from /lib/libpthread.so.0
#2  0x80c5e02 in vio_read (vio=0xac4f6e8, buf=0x82a4250 "Á\fÏ\bXIÝ\n", size=4) at 
violite.c:185
#3  0x80c55b0 in my_real_read (net=0xdf1ff364, complen=0xdf1fef50) at net.c:478
#4  0x80c57c9 in my_net_read (net=0xdf1ff364) at net.c:624
#5  0x80c1d7b in net_safe_read (mysql=0xdf1ff364) at libmysql.c:297
#6  0x80c2e3c in mysql_real_connect (mysql=0xdf1ff364, host=0xad3f308 "maindb.local", 
user=0xcd9db50 "layers", 
passwd=0x9069bf0 "", db=0xd51a390 "counter", port=3306, unix_socket=0x0, 
client_flag=8333) at libmysql.c:1351
#7  0x80c373b in mysql_reconnect (mysql=0xb0bdb20) at libmysql.c:1576
#8  0x80c1ebf in simple_command (mysql=0xb0bdb20, command=COM_QUERY, 
arg=0x85f5810 "select loads from quickstat where counter_id=16766", length=50, 
skipp_check=1 '\001') at libmysql.c:429
#9  0x80c4a7f in mysql_real_query (mysql=0xb0bdb20, query=0x85f5810 "select loads from 
quickstat where counter_id=16766", 
length=50) at libmysql.c:1767
#10 0x80c4a33 in mysql_query (mysql=0xb0bdb20, query=0x85f5810 "select loads from 
quickstat where counter_id=16766")
at libmysql.c:1698
#11 0x8057bce in Query::Open (this=0xdcbf100) at query.cpp:113
#12 0x80ad3cd in GetLoadsFromQuickStat (Q=0xdcbf100, cid=@0xdf1ff860) at 
startbuild.cpp:227
#13 0x80ac208 in startBuild (pRI=0xe589d20, pQSet=0x82b57f8) at startbuild.cpp:166
#14 0x8080355 in TblQueue::start (this=0x84fcba8) at controller.cpp:321
#15 0x807fd4e in TblQueue::add (this=0x84fcba8, pRI=0xe589d20) at controller.cpp:242
#16 0x8081867 in QueueController::add (this=0xdaf4, pRI=0xe589d20) at 
controller.cpp:467
#17 0x80a1cb0 in addRequest (pRI=0xe589d20) at server.cpp:434
#18 0xe038ca3 in pthread_start_thread () from /lib/libpthread.so.0
(gdb) quit
The program is running.  Quit anyway (and detach it)? (y or n) y
Detaching from program: /usr/spylog/bin/layers, process 32444

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Bug report mysql 3.23.45

2001-12-10 Thread Peter Zaitsev

Hello mysql,

  Running  Linux 2.4.x SMP/innodb.


This query works OK.

mysql> explain SELECT cp.counter_id, tg.tbl, cs.host
    -> FROM counter.table_groups AS tg, counter.counterproperty AS cp, counter.servers AS cs
    -> WHERE tg.tblid=cp.tblid AND tg.server=1 AND cs.SERVER_ID=tg.server limit 10;
+-------+-------+----------------+---------+---------+----------+------+-------------------------+
| table | type  | possible_keys  | key     | key_len | ref      | rows | Extra                   |
+-------+-------+----------------+---------+---------+----------+------+-------------------------+
| cs    | const | PRIMARY        | PRIMARY |       2 | const    |    1 |                         |
| tg    | ref   | PRIMARY,SERVER | SERVER  |       2 | const    |  120 | where used              |
| cp    | ref   | tblid          | tblid   |       2 | tg.TBLID |  302 | where used; Using index |
+-------+-------+----------------+---------+---------+----------+------+-------------------------+
3 rows in set (0.00 sec)


Now I'm trying to select data only for some of counter_id:

mysql> explain SELECT cp.counter_id, tg.tbl, cs.host
    -> FROM counter.table_groups AS tg, counter.counterproperty AS cp, counter.servers AS cs
-> WHERE cp.counter_id IN
-> (138520,49237,56459,63677,152964,25710,25712,79327,108442)
-> AND tg.tblid=cp.tblid AND tg.server=1 AND cs.SERVER_ID=tg.server;
+---+++-+-+--+--++
| table | type   | possible_keys  | key | key_len | ref  | rows | Extra  |
+---+++-+-+--+--++
| cs| const  | PRIMARY| PRIMARY |   2 | const|1 ||
| cp| range  | PRIMARY,tblid  | PRIMARY |   4 | NULL |9 | where used |
| tg| eq_ref | PRIMARY,SERVER | PRIMARY |   4 | cp.tblid |1 | where used |
+---+++-+-+--+--++
3 rows in set (0.00 sec)


It still work and return correct result.

But if I use a long list of counter ids (30,000 counters), then I get a
different explain output:

table   typepossible_keys   key key_len ref rowsExtra
cs  const   PRIMARY PRIMARY 2   const   1
tg  ref PRIMARY,SERVER  SERVER  2   const   120 where used
cp  range   PRIMARY,tblid   PRIMARY 4   NULL30522   where used

And the result is weird:


counter_id  tbl host
135824  st74.g09ss01.local
135824  st74.g09ss01.local
135824  st74.g09ss01.local
(... the same row repeated, about 70 times in total ...)
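When the IN list grows this large, the optimizer switches cp from an eq_ref lookup to a range scan, as the two EXPLAIN outputs above show. A common workaround is to split the list into smaller batches; a sketch with a hypothetical table t (the thread's actual query is not shown here):

```sql
-- Hypothetical sketch: batching a huge IN list. With a short list the
-- optimizer keeps the cheap eq_ref/const plan; 30,000 values in one
-- statement can push it onto a wide range scan instead.
SELECT * FROM t WHERE counter_id IN (135824, 135825, 135826);   -- batch 1
SELECT * FROM t WHERE counter_id IN (135827, 135828, 135829);   -- batch 2
-- ... and so on, merging the results in the application.
```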

Re[2]: innodb and use of indices

2001-11-26 Thread Peter Zaitsev

Hello Heikki,

Wednesday, November 21, 2001, 8:34:13 PM, you wrote:

I see some more users are interested in getting OPTIMIZE TABLE to work.
Why don't you just map OPTIMIZE TABLE to ALTER TABLE for InnoDB tables,
so the user does not have to care about the table type? Currently one
has to check the table type and run OPTIMIZE TABLE or ALTER TABLE
depending on it.
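Written out, the rebuild that such a mapping would perform looks like this; a minimal sketch, assuming a hypothetical InnoDB table t:

```sql
-- Rebuilds (and so defragments) the table by copying every row into a
-- fresh copy - effectively a dump and reimport in one statement.
ALTER TABLE t TYPE=InnoDB;
```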

HT> Hi!

>>On Wed, 2001-11-21 at 05:19, Heikki Tuuri wrote:
>>> The way to defragment InnoDB tables, or tables in any database, is from time
>>> to time to dump and reimport them. That can give a significant performance
>>> boost.
>>> 
>>
>>That is actually not entirely true.  For MyISAM tables, one simply needs
>>to run "OPTIMIZE TABLE table_name" from time to time.  I think the end
>>result is the same, but it's atomic.
>>
>>Does OPTIMIZE TABLE work for InnoDB?

HT> Sorry, no. ALTER TABLE can be used to rebuild the table, but essentially it
HT> is equivalent to a dump and reimport.

>>Steve

HT> Regards,

HT> Heikki
HT> --
HT> Order commercial MySQL/InnoDB support at https://order.mysql.com/






-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Heap tables & replication

2001-10-01 Thread Peter Zaitsev

Hello mysql,

  Could anyone tell me if HEAP tables currently work with
  replication?

  As I remember, some time ago you could get problems when the master
  restarts (and so empties its HEAP tables) while the slave does not,
  so the slave gets duplicate key errors on inserts into these tables,
  which is correct behaviour since it already has those values.
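The failure mode described above can be sketched as follows (hypothetical table name; HEAP contents live only in memory, so a master restart empties the table while the slave's copy survives):

```sql
-- On the master:
CREATE TABLE sessions (id INT NOT NULL PRIMARY KEY) TYPE=HEAP;
INSERT INTO sessions VALUES (1);   -- replicated to the slave
-- master restarts: its HEAP table is now empty, the slave's is not
INSERT INTO sessions VALUES (1);   -- fine on the master...
-- ...but the replicated INSERT hits a duplicate key error on the
-- slave, since the slave still holds the old row with id=1.
```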
  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Bad table and mysql 3.23.42

2001-09-18 Thread Peter Zaitsev

Hello mysql,


  I seem to be quite lucky at running into bad tables which do not
  repair properly or show other strange behavior.

  Here is one more:

  mysql 3.23.42

gnome:~ # mysqldump -d layers_st57 g01kw_engines
# MySQL dump 8.16
#
# Host: localhostDatabase: layers_st57
#
# Server version3.23.42-log

#
# Table structure for table 'g01kw_engines'
#

CREATE TABLE g01kw_engines (
  id int(10) unsigned NOT NULL auto_increment,
  counter_id int(10) unsigned NOT NULL default '0',
  engine_id smallint(5) unsigned NOT NULL default '0',
  kw_id int(11) NOT NULL default '0',
  PRIMARY KEY  (id),
  KEY search (counter_id,kw_id)
) TYPE=MyISAM PACK_KEYS=1;

Note that id and counter_id are NOT NULL.

Let's now select some data from the table:

mysql> select * from layers_st57.g01kw_engines;
+--++---+---+
| id   | counter_id | engine_id | kw_id |
+--++---+---+
| NULL |   NULL |   232 | 41895 |
+--++---+---+
1 row in set (1.33 sec)

mysql> select * from layers_st57.g01kw_engines;
ERROR 2013: Lost connection to MySQL server during query


... almost every query against this table crashes mysqld (I have a
backtrace, but I think it's better to upload the table).

CHECK TABLE treats this table as good:

mysql> check table layers_st57.g01kw_engines type=EXTENDED;
+---+---+--+--+
| Table | Op| Msg_type | Msg_text |
+---+---+--+--+
| layers_st57.g01kw_engines | check | status   | OK   |
+---+---+--+--+
1 row in set (0.31 sec)

mysql> repair table layers_st57.g01kw_engines;
+---++--+--+
| Table | Op | Msg_type | Msg_text |
+---++--+--+
| layers_st57.g01kw_engines | repair | status   | OK   |
+---++--+--+
1 row in set (0.33 sec)


MYISAMCHK:

gnome:/home/pz/strange_table # myisamchk -ce *.MYI
myisamchk: error: 22 when opening MyISAM-table 'g01kw_pages.MYI'
Press any key to continue..

So I was really surprised.

I've copied the .MYI file from a table with the same structure and the
table then got repaired OK. So it looks like this is a damaged index
definition again.


I've uploaded the table as badtable.tar.gz  so you may look at this.

P.S. I tried to send this message to BUGS but got a rejection from
the robot :(

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re[2]: GLIBC 2.2.4

2001-08-23 Thread Peter Zaitsev

Hello Trond,

Thursday, August 23, 2001, 8:01:33 PM, you wrote:

TEG> Peter Zaitsev <[EMAIL PROTECTED]> writes:

>>   I'm trying to compile mysql 3.23.41 with recent glibc 2.2.4 but this
>>   somehow does not work with the following diagnose:

TEG> It builds just fine with glibc 2.2.4 on a current development version
TEG> of Red Hat Linux (Roswell + a few minor updates).

Yes, I'm sure it builds OK when glibc is installed as the standard one.
I have a system with glibc 2.2.0 and am trying to compile MySQL against
glibc 2.2.4 living in a separate directory, but it fails. The strangest
thing is that it somehow builds something dynamic even though I selected
to build everything static.

TEG> Did you compile glibc yourself or have you used gcc 3?
Yes, I compiled it myself using gcc 2.95.2, which should be OK?




-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re[2]: GLIBC 2.2.4

2001-08-23 Thread Peter Zaitsev

Hello Sinisa,

Thursday, August 23, 2001, 3:42:53 PM, you wrote:

SM> Peter Zaitsev writes:
>> Hello mysql,
>> 
>>   I'm trying to compile mysql 3.23.41 with recent glibc 2.2.4 but this
>>   somehow does not work with the following diagnose:
>> 
>> l/var\""-DSHAREDIR="\"/usr/local/mysql/share/mysql\""   
>-DHAVE_CONFIG_H -I../bdb/build_unix -I../innobase/include 
>  -I./../include  
>>-I./../regex-I. -I../include -I.. -I.-O3 -DDBUG_OFF 
>-g -O6 -mpentiumpro -fno-omit-frame-pointer  -pipe  -I/usr/local/glibc-2.2.4/include 
>-c password.c
>> /bin/sh ../libtool --mode=link gcc  -O3 -DDBUG_OFF -g -O6 -mpentiumpro 
>-fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti -pipe  
>-fno-implicit-templates -fno-exceptions
>> -fno-rtti -fpermissive -I/usr/local/glibc-2.2.4/include -static 
>-L/usr/local/glibc-2.2.4/lib  -o mysqlbinlog  mysqlbinlog.o mini_client.o net_serv.o 
>mini_client_errors.o violite.o password.o
>> ../isam/libnisam.a  ../merge/libmerge.a 
>../myisam/libmyisam.a   ../myisammrg/libmyisammrg.a ../heap/libheap.a 
> 
>> ../mysys/libmysys.a ../dbug/libdbug.a   
>../regex/libregex.a ../strings/libmystrings.a   -lpthread -lz 
>-lcrypt -lnsl -lm  -lpthread
>> mkdir .libs
>> gcc -O3 -DDBUG_OFF -g -O6 -mpentiumpro -fno-omit-frame-pointer -felide-constructors 
>-fno-exceptions -fno-rtti -pipe -fno-implicit-templates -fno-exceptions -fno-rtti 
>-fpermissive
>> -I/usr/local/glibc-2.2.4/include -L/usr/local/glibc-2.2.4/lib -o mysqlbinlog 
>mysqlbinlog.o mini_client.o net_serv.o mini_client_errors.o violite.o password.o 
>../isam/libnisam.a ../merge/libmerge.a
>> ../myisam/libmyisam.a ../myisammrg/libmyisammrg.a ../heap/libheap.a 
>../mysys/libmysys.a ../dbug/libdbug.a ../regex/libregex.a ../strings/libmystrings.a 
>-lpthread -lz -lcrypt -lnsl -lm -lpthread
>> /usr/local/glibc-2.2.4/lib/libpthread.so: undefined reference to 
>`_dl_cpuclock_offset'
>> /usr/local/glibc-2.2.4/lib/libc.so.6: undefined reference to 
>`[EMAIL PROTECTED]'
>> /usr/local/glibc-2.2.4/lib/libc.so.6: undefined reference to 
>`[EMAIL PROTECTED]'
>> collect2: ld returned 1 exit status
>> make[3]: *** [mysqlbinlog] Error 1
>> make[3]: Leaving directory `/home/mysql/mysql-3.23.41g/sql'
>> make[2]: *** [all-recursive] Error 1
>> make[2]: Leaving directory `/home/mysql/mysql-3.23.41g/sql'
>> make[1]: *** [all-recursive] Error 1
>> make[1]: Leaving directory `/home/mysql/mysql-3.23.41g'
>> make: *** [all-recursive-am] Error 2
>> 
>> This is rather strange, as the -static option is not passed to the
>> final link even though I'm compiling with the --all-static option.
>>   
>> 
>> -- 
>> Best regards,
>>  Peter  mailto:[EMAIL PROTECTED]
>> 

SM> Can't help you, as I can't install 2.2.4: they clearly stated that
SM> it must not be compiled with gcc 3.0, and I use 3.0.1.


SM> The above error happens when some of the libs or modules still have
SM> references to the older glibc version, like for example GCC objects
SM> and other ones.

This is strange, as I've compiled plain glibc 2.2.4 into a separate
directory and am now building with --with-other-libc=, yet it complains
about glibc 2.2.3, even though my system glibc is 2.2.0 and I have
never used 2.2.3.

SM> Check also version of your /lib/libdl.so.*




-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






GLIBC 2.2.4

2001-08-23 Thread Peter Zaitsev

Hello mysql,

  I'm trying to compile MySQL 3.23.41 with the recent glibc 2.2.4, but
  this somehow does not work, with the following diagnostics:

l/var\""-DSHAREDIR="\"/usr/local/mysql/share/mysql\""  
 -DHAVE_CONFIG_H -I../bdb/build_unix -I../innobase/include   
-I./../include  -I./../regex-I. -I../include -I.. 
-I.-O3 -DDBUG_OFF -g -O6 -mpentiumpro -fno-omit-frame-pointer  -pipe  
-I/usr/local/glibc-2.2.4/include -c password.c
/bin/sh ../libtool --mode=link gcc  -O3 -DDBUG_OFF -g -O6 -mpentiumpro 
-fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti -pipe  
-fno-implicit-templates -fno-exceptions -fno-rtti -fpermissive 
-I/usr/local/glibc-2.2.4/include -static -L/usr/local/glibc-2.2.4/lib  -o mysqlbinlog  
mysqlbinlog.o mini_client.o net_serv.o mini_client_errors.o violite.o password.o 
../isam/libnisam.a  ../merge/libmerge.a 
../myisam/libmyisam.a   ../myisammrg/libmyisammrg.a ../heap/libheap.a  
 ../mysys/libmysys.a ../dbug/libdbug.a 
  ../regex/libregex.a ../strings/libmystrings.a   
-lpthread -lz -lcrypt -lnsl -lm  -lpthread
mkdir .libs
gcc -O3 -DDBUG_OFF -g -O6 -mpentiumpro -fno-omit-frame-pointer -felide-constructors 
-fno-exceptions -fno-rtti -pipe -fno-implicit-templates -fno-exceptions -fno-rtti 
-fpermissive -I/usr/local/glibc-2.2.4/include -L/usr/local/glibc-2.2.4/lib -o 
mysqlbinlog mysqlbinlog.o mini_client.o net_serv.o mini_client_errors.o violite.o 
password.o ../isam/libnisam.a ../merge/libmerge.a ../myisam/libmyisam.a 
../myisammrg/libmyisammrg.a ../heap/libheap.a ../mysys/libmysys.a ../dbug/libdbug.a 
../regex/libregex.a ../strings/libmystrings.a -lpthread -lz -lcrypt -lnsl -lm -lpthread
/usr/local/glibc-2.2.4/lib/libpthread.so: undefined reference to `_dl_cpuclock_offset'
/usr/local/glibc-2.2.4/lib/libc.so.6: undefined reference to 
`[EMAIL PROTECTED]'
/usr/local/glibc-2.2.4/lib/libc.so.6: undefined reference to 
`[EMAIL PROTECTED]'
collect2: ld returned 1 exit status
make[3]: *** [mysqlbinlog] Error 1
make[3]: Leaving directory `/home/mysql/mysql-3.23.41g/sql'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/mysql/mysql-3.23.41g/sql'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/mysql/mysql-3.23.41g'
make: *** [all-recursive-am] Error 2

This is rather strange, as the -static option is not passed to the
final link even though I'm compiling with the --all-static option.
  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Query optimization issue.

2001-08-22 Thread Peter Zaitsev

Hello mysql,

  I was trying to dump a very large table (5 GB, 45 million rows) to a
  text file in sorted order (by primary key).
  The problem is that MySQL somehow does not want to scan the table by
  primary key to produce sorted output row by row, but prefers to use
  a filesort, which takes quite a long time in this case:

mysql> explain select * from dominf.domip order by ip desc;
 
+---+--+---+--+-+--+--++
| table | type | possible_keys | key  | key_len | ref  | rows | Extra  |
+---+--+---+--+-+--+--++
| domip | ALL  | NULL  | NULL |NULL | NULL | 44840332 | Using filesort |
+---+--+---+--+-+--+--++
1 row in set (1.10 sec)

mysql> explain select * from dominf.domip order by ip; 
+---+--+---+--+-+--+--++
| table | type | possible_keys | key  | key_len | ref  | rows | Extra  |
+---+--+---+--+-+--+--++
| domip | ALL  | NULL  | NULL |NULL | NULL | 44840332 | Using filesort |
+---+--+---+--+-+--+--++
1 row in set (0.00 sec)


CREATE TABLE domip (
  ip int(10) unsigned NOT NULL default '0',
  domip char(80) NOT NULL default '',
  tz tinyint(4) NOT NULL default '0',
  fldom tinyint(3) unsigned NOT NULL default '0',
  ts timestamp(14) NOT NULL,
  rt tinyint(4) unsigned NOT NULL default '0',
  LCHECKED timestamp(14) NOT NULL,
  CHANGED tinyint(3) unsigned NOT NULL default '0',
  network int(11) NOT NULL default '-1',
  PRIMARY KEY  (ip)
) TYPE=MyISAM;
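One workaround for the full-table filesort above; a sketch, under the assumption that a range condition on the primary key lets the optimizer walk the index in order (the chunk boundaries here are arbitrary):

```sql
-- Export in primary-key ranges instead of one giant ORDER BY; each
-- chunk can be resolved by scanning the PRIMARY index, and the chunk
-- files are concatenated afterwards in range order.
SELECT * FROM domip WHERE ip <  1000000000 ORDER BY ip
  INTO OUTFILE '/tmp/domip_part1.txt';
SELECT * FROM domip WHERE ip >= 1000000000 AND ip < 2000000000 ORDER BY ip
  INTO OUTFILE '/tmp/domip_part2.txt';
-- ... continue until the full 32-bit ip range is covered.
```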


diamond:/spylog/db # mysqladmin processlist | grep -v Sleep
++---+-+-+-+--++--+
| Id | User  | Host| db  | Command | Time | State  | Info  
|  
| |
++---+-+-+-+--++--+
| 6  | titan | ss23.local  | counter | Query   | 0| statistics | SELECT 
|page_id,title FROM st90.g00pages WHERE page_prot=1 AND page_domain=11884 AND 
|page='close.php3 |
| 39 | root  | localhost   | | Query   | 195  | Sorting result | select * from 
|dominf.domip order by ip desc into outfile "dominf.txt"   
| |
| 48 | root  | localhost   | | Query   | 0|| show 
|processlist   
|  |
++---+-+-+-+--++--+



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re: HOWTO enable InnoDB

2001-08-10 Thread Peter Zaitsev

Hello,

Friday, August 10, 2001, 2:27:56 PM, you wrote:

DISABLED means you're running with --skip-innodb  AFAIK :)
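A quick way to check the state, with the meanings of the values as I understand them (a sketch, not authoritative):

```sql
SHOW VARIABLES LIKE 'have_innodb';
-- YES      : InnoDB is compiled in and active
-- DISABLED : compiled in, but the server was started with --skip-innodb
--            (or InnoDB could not be initialized)
-- NO       : the server binary was built without InnoDB support
```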

½> Hi everyone.

½> I uninstalled mysql and reinstalled using --with-innodb configuration.

½> After install completed, I entered the command mysqladmin variables and
½> found that value of "have_innodb" is DISABLED.
½> So when I create table with type=InnoDB, this table has type=MyISAM. That
½> is failure to create InnoDB type table.

½> Mysql version is 3.23.40.

½> Please answer to me.

½> Thanks in advance!!





-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re: Timezone problem.

2001-08-07 Thread Peter Zaitsev

Hello Michael,

Tuesday, August 07, 2001, 10:20:18 PM, you wrote:

OK guys, I think I've found the problem.
It seems like the new InnoDB version (.40b) does something bad with time.

I've tested 3 binaries:

1) Plain .40 with normal glibc -> OK
2) Plain .40 with patched glibc -> OK
3) Plain .40 with new InnoDB with doublewrite -> this binary shifts
the time.

Could you, Heikki, please check if you see something similar?




MW> Hi!

>>>>>> "Peter" == Peter Zaitsev <[EMAIL PROTECTED]> writes:

Peter>> Hello mysql,
Peter>>   It seems like you made an incomportable changes in 3.23.40 without
Peter>>   taking a time to write a release note :(

Peter>>   The mysql 3.23.39  used the system timezone by default:

Peter>> maindb:/spylog/mysql/logs # date
Peter>> Tue Aug  7 13:31:56 MSD 2001

Peter>> | transaction_isolation   | READ-COMMITTED 
   
|
Peter>> | timezone| MSD
   
|
Peter>> | tmp_table_size  | 4194304


Peter>> Therefore MYSQL 3.23.40 does not determinate the timezone correctly:

Peter>>   |
Peter>> | transaction_isolation   | READ-COMMITTED 
   
|
Peter>> | timezone| Local time zone must be set--see manual 
page   
   |
Peter>> | tmp_table_size  | 4194304
   
|
MW> Are you sure you are not starting mysqld differently ?

MW> According to my knowing, we have not changed anything in timezone
MW> usage in MySQL.

MW> The name if the timezone mysqld displays is what we get from the
MW> following call:

MW>   {
MW> struct tm tm_tmp;
MW> localtime_r(&start_time,&tm_tmp);
MW> strmov(time_zone,tzname[tm_tmp.tm_isdst == 1 ? 1 : 0]);
MW>   }

MW> I don't know why localtime_r() doesn't work in your case, but I would guess
MW> something in your setup that has changed.

MW> The only way to affect the timezone is to set the TZ variable before
MW> starting mysqld/safe_mysqld or by starting safe_mysqld with the
MW> --timezone= option.

Peter>> It uses GMT in this case which could dammage the data (as went in my
Peter>> case)

Peter>> The other thing is the manual  does not contains much info about
Peter>> setting timezone - only option to safe_mysqld which exports TZ
Peter>> variable, which does not work:

Peter>> root 19380 1  0 13:43 pts/800:00:00 sh 
/usr/local/mysql/bin/safe_mysqld --mysqld=mysqld --user=mysql 
--pid-file=/spylog/db/mysqld.pid --timezone=MSD --datadir=/spylog/db

Peter>> It set's the timezone according to required value but date still bad:

Peter>> | transaction_isolation   | READ-COMMITTED 
   
|
Peter>> | timezone| MSD
   
|
Peter>> | tmp_table_size  | 4194304


mysql>> select now();
Peter>> +-+
Peter>> | now()   |
Peter>> +-+
Peter>> | 2001-08-07 09:44:44 |
Peter>> +-+
Peter>> 1 row in set (0.00 sec)

Peter>> Therefore date returns correct value.

Peter>> rat:/spylog/layers # date
Peter>> Tue Aug  7 13:47:05 MSD 2001
Peter>> rat:/spylog/layers #

Peter>> Do you have any ideas about this ?  How I can fix the problem ?

MW> Sorry, no ideas;  It looks like there is some problem with your glibc
MW> library.

MW> Did you try the MySQL 3.23.39 and 3.23.40 binaries on the same
MW> machine?
MW> If not, then I think this is a glibc problem!

MW> Regards,
MW> Monty



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re[2]: Timezone problem.

2001-08-07 Thread Peter Zaitsev

Hello Sinisa,

Tuesday, August 07, 2001, 4:48:56 PM, you wrote:

1) SuSE distributions do not have the TZ variable set by default. And
the point is that it worked quite well without it with .39.
2) The strange thing is: why does setting TZ make MySQL print the
correct timezone in its variables but still produce the wrong time?

SM> Peter Zaitsev writes:
>> Hello mysql,
>> 
>>   It seems like you made an incomportable changes in 3.23.40 without
>>   taking a time to write a release note :(
>> 
>>   The mysql 3.23.39  used the system timezone by default:
>> 
>> maindb:/spylog/mysql/logs # date
>> Tue Aug  7 13:31:56 MSD 2001
>> 
>> | transaction_isolation   | READ-COMMITTED  
>  
>|
>> | timezone| MSD 
>  
>|
>> | tmp_table_size  | 4194304
>> 
>> 
>> Therefore MYSQL 3.23.40 does not determinate the timezone correctly:
>> 
>>   |
>> | transaction_isolation   | READ-COMMITTED  
>  
>|
>> | timezone| Local time zone must be set--see manual page  
>  
>  |
>> | tmp_table_size  | 4194304 
>  
>|
>> 
>> 
>> It uses GMT in this case which could dammage the data (as went in my
>> case)
>> 
>> The other thing is the manual  does not contains much info about
>> setting timezone - only option to safe_mysqld which exports TZ
>> variable, which does not work:
>> 
>> root 19380 1  0 13:43 pts/800:00:00 sh /usr/local/mysql/bin/safe_mysqld 
>--mysqld=mysqld --user=mysql --pid-file=/spylog/db/mysqld.pid --timezone=MSD 
>--datadir=/spylog/db
>> 
>> It set's the timezone according to required value but date still bad:
>> 
>> | transaction_isolation   | READ-COMMITTED  
>  
>|
>> | timezone| MSD 
>  
>|
>> | tmp_table_size  | 4194304
>> 
>> 
>> mysql> select now();
>> +-+
>> | now()   |
>> +-+
>> | 2001-08-07 09:44:44 |
>> +-+
>> 1 row in set (0.00 sec)
>> 
>> Therefore date returns correct value.
>> 
>> rat:/spylog/layers # date
>> Tue Aug  7 13:47:05 MSD 2001
>> rat:/spylog/layers #
>> 
>> 
>> Do you have any ideas about this ?  How I can fix the problem ?
>> 
>> 
>> 
>>  Peter  mailto:[EMAIL PROTECTED]

SM> Hi Pjotr,

SM> I just tested the above, and it worked just fine for me with 4.0.

SM> But I have noticed something. If I mess up my shell, it will mess up
SM> all programs started from it, including MySQL. We can not do anything
SM> about it ...

SM> If a shell that starts up MySQL does not have TZ set properly then
SM> there is not much we can do about it ...




-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Timezone problem.

2001-08-07 Thread Peter Zaitsev

Hello mysql,

  It seems like you made an incompatible change in 3.23.40 without
  taking the time to write a release note :(

  The mysql 3.23.39  used the system timezone by default:

maindb:/spylog/mysql/logs # date
Tue Aug  7 13:31:56 MSD 2001

| transaction_isolation   | READ-COMMITTED 
|  
| |
| timezone| MSD
|  
| |
| tmp_table_size  | 4194304


However, MySQL 3.23.40 does not determine the timezone correctly:

  |
| transaction_isolation   | READ-COMMITTED 
|  
| |
| timezone| Local time zone must be set--see manual page 
|  
|   |
| tmp_table_size  | 4194304
|  
| |


It uses GMT in this case, which can damage the data (as happened in my
case).

The other thing is that the manual does not contain much info about
setting the timezone - only the option to safe_mysqld, which exports
the TZ variable, and that does not work:

root 19380 1  0 13:43 pts/800:00:00 sh /usr/local/mysql/bin/safe_mysqld 
--mysqld=mysqld --user=mysql --pid-file=/spylog/db/mysqld.pid --timezone=MSD 
--datadir=/spylog/db

It sets the timezone variable to the required value, but the date is still wrong:

| transaction_isolation   | READ-COMMITTED 
|  
| |
| timezone| MSD
|  
| |
| tmp_table_size  | 4194304


mysql> select now();
+-+
| now()   |
+-+
| 2001-08-07 09:44:44 |
+-+
1 row in set (0.00 sec)

Whereas the system date command returns the correct value:

rat:/spylog/layers # date
Tue Aug  7 13:47:05 MSD 2001
rat:/spylog/layers #


Do you have any ideas about this? How can I fix the problem?
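A quick diagnostic for this kind of problem (a sketch): compare what the server reports with the system clock; if the timezone variable looks right but NOW() is off by exactly the UTC offset, the server is effectively running in GMT.

```sql
SHOW VARIABLES LIKE 'timezone';   -- what mysqld thinks the zone is
SELECT NOW();                     -- compare against `date` in the shell;
                                  -- a constant offset points at TZ handling
```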






-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re[2]: I got it hanged !

2001-07-17 Thread Peter Zaitsev

Hello Jamie,

Tuesday, July 17, 2001, 7:40:03 AM, you wrote:

If I'm not mistaken, this was a different bug. At least I remember I've
reported a very similar-looking bug which sometimes makes MySQL hang
after OPTIMIZE or REPAIR TABLE (maybe ALTER TABLE had it as well).


JM> I've been experiencing a simmilar problem only with MyISAM tables.
JM> This is under 3.23.35 and 3.23.36 on Sparc/Solaris 2.7.

JM> For me the problem occurs when doing an ALTER TABLE or a simmilar
JM> operation (eg OPTIMIZE) on relatively large tables (>4 millon rows)
JM> with one or two indeces.

JM> The table in question is occasionally accessed by other queries;
JM> these access could occur during the ALTER TABLE.  In this case,
JM> the application running the other queries would not mind seeing the
JM> old structure (for read) or wating until the ALTER is complete
JM> (for write).

Yes. All of them were read-only queries, so that should not be the
problem. But I'm still not quite sure how InnoDB works in this
case...


JM> The symptom is the same as described below; with ALL threads
JM> (not just those touching the ALTERed table) getting stuck in
JM> "opening tables" or "closing tables".


JM> To recover it is necessary to terminate mysqld a SIGKILL;
JM> though data loss can usually be avoided by doing a
JM> FLUSH TABLES first (which also hangs) and also sending
JM> a SIGTERM and waiting a minute before sending the KILL.

Well, I also used this with the above bug. But with this bug it looks
like killing mysqld does not make it flush any tables, nor does FLUSH
TABLES work. So the only way is to kill it with signal 9.



JM> Would upgrading to 3.23.39 fix this problem?  Or is it
JM> still a live bug in the current version?

Your bug is probably already fixed, as I didn't see such a bug using
only MyISAM with MySQL 3.23.39.



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re: InnoDB behaviour with multiple clients

2001-07-10 Thread Peter Zaitsev

Hello Denis,

Tuesday, July 10, 2001, 12:51:40 PM, you wrote:

If I'm not mistaken, the problem is the conflicting transactions which
you have here. I'm really not sure InnoDB's deadlock detection is very
fine-grained. Also, it may well roll back the longer-running
transaction. But in any case you should be able to handle this.



DP> Hi all,

DP> I begin to validate MySQL with InnoDB on our platform. I use a DBI perl
DP> test which manages a data pool reflecting exactly the rows stored in DB.
DP> This test internally builds some new data to insert and chooses rows to
DP> delete and update (the tables are chosen randomly). Then all these
DP> operations are executed in one transaction. If it fails, a rollback is
DP> executed in DB and in client memory. Client also check that all data in
DP> its pool are in the DB. That's my test...

DP> It works very well with one client, but with two... things become
DP> harder! clients received many 'Table handler error 100' and slow
DP> down terrifically. 

DP> I try the same test with 3 clients on MyISAM tables and there is no
DP> noticeable problems.

DP> I have reduced number of queries per transaction. I have changed
DP> innodb_unix_file_flush_method to O_DSYNC. I have increased buffer pool
DP> size and log buffer size but I noticed onlky small improvements. I'm
DP> sure I forgot something, but I don't see what ! does somebody know ??

DP> Denis
 



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re[3]: Trouble with mysql

2001-07-05 Thread Peter Zaitsev

Hello Michael,

Wednesday, July 04, 2001, 1:49:25 AM, you wrote:


MW> Next time the table gets corrupt, please upload a copy to 'secret'
MW> before you repair it so that I can take a look at this when I come
MW> back.

OK.

MW> Do you have anything in the logs about the table being automaticly repaired?

No. It's not auto-repaired (only after mysqld crashes), but a lot of
error 134 messages appear.


-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






mysql 3.23.38 bug

2001-07-05 Thread Peter Zaitsev

Hello mysql,

  It seems that if LOAD DATA INFILE is initiated against a damaged
  table, then sometimes after the repair it does not unblock itself:
  

maindb:/spylog/db/tmpd # mysqladmin processlist | grep -v Sleep
++--++---+-+---+--+--+
| Id | User | Host   | db| Command | Time  | State| 
|Info  
|   |
++--++---+-+---+--+--+
| 30 | root | localhost  | tmpd  | Query   | 71545 |  | 
|load data infile  '/spylog/db/tmpd/domip.txt' ignore into table domip 
|   |
| 12122  | root | localhost  | tmpd  | Query   | 67021 | Locked   | 
|select count(*) from domip
|   |
| 58048  | titan| hydra.local|   | Query   | 0 | preparing| 
|select id,cy,ts from misc.yandex_sites where url='auto.dp.ua' 
|   |
| 109643 | titan| emerald.local  |   | Query   | 1 | Sending data | 
|select 
 lsl.status,
 lsl.source_id
   fr |
| 111042 | titan| emerald.local  | counter   | Query   | 0 | preparing| 
|SELECT * from statistics.user_options where stat_id='53' and SITE_ID='153529' 
|   |
| 111337 | root | localhost  |   | Query   | 0 |  | 
|show processlist  
|   |
++--++---+-+---+--+--+

If I'm not mistaken, this problem showed up a couple of months ago for
a SELECT statement; maybe it's the same sort of bug.

During these almost 20 hours MySQL has shown no disk/CPU load, which it
should if this process were making progress; the data file for the
domip table did not change either.




-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re[2]: Trouble with mysql

2001-07-02 Thread Peter Zaitsev

Hello Michael,

Thursday, June 28, 2001, 11:41:57 PM, you wrote:

I'm getting this again and again with two tables which are used in
quite the same fashion - some selects, frequent updates, rare inserts,
no deletes:




mysql> repair table counter.quickstat;
+-------------------+--------+----------+----------------------------------------------+
| Table             | Op     | Msg_type | Msg_text                                     |
+-------------------+--------+----------+----------------------------------------------+
| counter.quickstat | repair | warning  | Number of rows changed from 168034 to 168022 |
| counter.quickstat | repair | status   | OK                                           |
+-------------------+--------+----------+----------------------------------------------+
2 rows in set (10.72 sec)

mysql> check table counter.counterlayers type=EXTENDED;
+-----------------------+-------+----------+-----------------------------------------------------+
| Table                 | Op    | Msg_type | Msg_text                                            |
+-----------------------+-------+----------+-----------------------------------------------------+
| counter.counterlayers | check | error    | Record-count is not ok; is 168138 Should be: 168181 |
| counter.counterlayers | check | warning  | Found 43 deleted blocks   Should be: 0              |
| counter.counterlayers | check | error    | Corrupt                                             |
+-----------------------+-------+----------+-----------------------------------------------------+
3 rows in set (2.51 sec)

mysql> repair table counter.counterlayers;
+-----------------------+--------+----------+----------------------------------------------+
| Table                 | Op     | Msg_type | Msg_text                                     |
+-----------------------+--------+----------+----------------------------------------------+
| counter.counterlayers | repair | warning  | Number of rows changed from 168181 to 168138 |
| counter.counterlayers | repair | status   | OK                                           |
+-----------------------+--------+----------+----------------------------------------------+
2 rows in set (1.55 sec)




MW> Any change you can start mysqld with --log-bin, so that you can verify
MW> this if this happens again?

It's already running in that mode. I've checked the logs for delete
statements; there aren't any.

Now I've started the .38 version to check whether it's a MySQL fault.




-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]






Re: Trouble with mysql

2001-06-28 Thread Peter Zaitsev

Hello Michael,

Thursday, June 28, 2001, 1:52:09 PM, you wrote:


MW> Hi!

>>>>>> "Peter" == Peter Zaitsev <[EMAIL PROTECTED]> writes:

Peter>> Hello monty,
Peter>>   after upgrading MySQL 3.23.36 to MySQL 3.23.39 I started to get
Peter>>   core dumps, but I can't work out why it's crashing.
Peter>>   Of course I'll try to roll it back, but do you have any ideas:


Peter>> 0x806c4c4 handle_segfault__Fi + 428
Peter>> 0x822a2e8 pthread_sighandler + 168
Peter>> 0x82559df chunk_alloc + 255
Peter>> 0x8255791 malloc + 209
Peter>> 0x821487e my_malloc + 30
Peter>> 0x8068236 my_net_init__FP6st_netP6st_vio + 30
Peter>> 0x806de12 handle_connections_sockets__FPv + 870
Peter>> 0x806d5f5 main + 3101
Peter>> 0x823d853 __libc_start_main + 179
Peter>> 0x8048101 _start + 33

Peter>> ---


Peter>> Other one:

Peter>> 0x806c4c4 handle_segfault__Fi + 428
Peter>> 0x822a2e8 pthread_sighandler + 168
Peter>> 0x82563af chunk_free + 591
Peter>> 0x8256123 free + 147
Peter>> 0x8214906 my_no_flags_free + 22
Peter>> 0x8214f18 free_root + 84
Peter>> 0x80aa083 test_quick_select__10SQL_SELECTUlUlUlb + 1551
Peter>> 0x808b3eb 
make_join_statistics__FP4JOINP13st_table_listP4ItemP16st_dynamic_arrayRt4List1Z15Item_func_match
 + 2595
Peter>> 0x8089a5d 
mysql_select__FP3THDP13st_table_listRt4List1Z4ItemP4ItemRt4List1Z15Item_func_matchP8st_orderT5T3T5UiP13select_result
 + 1837
Peter>> 0x807307a mysql_execute_command__Fv + 758
Peter>> 0x8077024 mysql_parse__FP3THDPcUi + 216
Peter>> 0x8072304 do_command__FP3THD + 1436
Peter>> 0x80716f7 handle_one_connection__FPv + 655

Peter>> It seems like there are some memory management problems happening?

MW> This looks serious.
MW> It's also quite strange as MySQL 3.23.39 has been quite stable for us.
MW> It has also been out almost 2 weeks without any bug reports for this
MW> and we know that lots of users are using this.

Even worse is the following situation:

I often started getting the following in the log file:

010628 13:49:28  Aborted connection 627619 to db: 'counter' user: 'titan' host: `pan.local' (Got an error reading communication packets)
010628 13:49:35  read_key: Got error 134 when reading table './counter/counterlayers'
010628 13:49:35  read_key: Got error 134 when reading table './counter/counterlayers'

Always for the same table.  The strange thing is that I do ONLY
selects and updates on this table, yet I get the following when I
try to check and repair it:


mysql> check table counter.counterlayers;
+-----------------------+-------+----------+-----------------------------------------------------+
| Table                 | Op    | Msg_type | Msg_text                                            |
+-----------------------+-------+----------+-----------------------------------------------------+
| counter.counterlayers | check | error    | Record-count is not ok; is 166853 Should be: 166912 |
| counter.counterlayers | check | warning  | Found 59 deleted blocks   Should be: 0              |
| counter.counterlayers | check | error    | Corrupt                                             |
+-----------------------+-------+----------+-----------------------------------------------------+
3 rows in set (0.49 sec)

mysql> repair table counter.counterlayers;
+-----------------------+--------+----------+----------------------------------------------+
| Table                 | Op     | Msg_type | Msg_text                                     |
+-----------------------+--------+----------+----------------------------------------------+
| counter.counterlayers | repair | warning  | Number of rows changed from 166912 to 166853 |
| counter.counterlayers | repair | status   | OK                                           |
+-----------------------+--------+----------+----------------------------------------------+
2 rows in set (1.94 sec)

As you can see, 59 rows were deleted, yet no one deleted them.



MW> Can you repeat this by issuing the query again ?
MW> Have you upgraded anything else on your server like kernel or glibc ?

Well, yes. I've upgraded the kernel on this machine to the latest one,
2.2.19, though it has been running for months on many machines.

MW> Have you compiled MySQL yourself ?
Yes, I always have, for more than a year now. The build parameters have
not changed from the defaults, and the funny thing is that this
executable is NFS-shared across more than 25 machines and works without
any problems.


MW> Have you patched glibc to have a small stack ?
No I did not.

MW> Are you testing some new feature, like text search?

No, I do not.  BDB and InnoDB are compiled in but disabled on this
machine.

MW> Have you cha

MYSQLDUMP feature request :)

2001-06-24 Thread Peter Zaitsev

Hello mysql,

  Recently MySQL introduced transaction support, which at least
  with InnoDB allows consistent backup/recovery.  The bad thing
  about this is that MYSQLDUMP does not support any of it, so I
  had to do the same things by hand.

  The idea is quite simple: start a transaction at the beginning of
  the backup and at the beginning of the recovery. This would take a
  consistent backup of all the tables and likewise allow a consistent
  recovery, at least when not using DROP TABLE.
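The idea above can be sketched directly in SQL; a minimal, hypothetical example (table names and output paths are illustrative, and `START TRANSACTION WITH CONSISTENT SNAPSHOT` is the InnoDB syntax later MySQL versions settled on; mysqldump's later `--single-transaction` option implements exactly this):

```sql
-- Open one consistent snapshot, then dump every table inside it.
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT * INTO OUTFILE '/tmp/backup/t1.txt' FROM t1;
SELECT * INTO OUTFILE '/tmp/backup/t2.txt' FROM t2;
COMMIT;
```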
  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






transactions and pconnect (PHP)

2001-06-15 Thread Peter Zaitsev

Hello mysql,

  I'm planning to use transactions in an application written in PHP,
  which of course uses pconnect to optimize performance.

  Some time ago I had to stop using "lock tables", as it led to a
  locked-up system whenever the PHP script was aborted before "unlock
  tables" ran.

  With transactions we could have the same bad situation if it's not
  handled; it's even worse, because the result is unpredictable:
  depending on the following statements, the transaction may be
  committed or rolled back.

  I'm speaking about the following problem:

  I issue a begin statement and some further statements, then the PHP
  script is aborted before the transaction is committed; but since the
  connection is not terminated, MySQL does not roll it back. The next
  statements come from another PHP script, so they could commit the
  transaction or roll it back.

  The question is: does this problem exist or not? If it does, I think
  it would be nice to modify the behavior so that PHP unlocks all the
  tables and rolls back any transaction left over from the previous
  script.
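Until the client library does this automatically, a defensive workaround is for the application to reset connection state itself; a minimal sketch of what could be issued at the start of every request that reuses a persistent connection:

```sql
-- Clear whatever a previous, possibly aborted script left behind.
ROLLBACK;        -- abandon any transaction still open on this connection
UNLOCK TABLES;   -- release any table locks still held
```

Both statements are harmless no-ops when there is nothing to roll back or unlock.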

  

  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Some more info about backup.

2001-06-02 Thread Peter Zaitsev

Hello mysql,

  I'm repeating my tests now with more information and on a more
  powerful machine with no load, which should give more accurate
  results.
  The machine is a dual PIII-750 with 1GB RAM.

  I'm trying the following table  structure:

CREATE TABLE g00hits (
  id int(10) unsigned NOT NULL auto_increment,
  counter_id int(10) unsigned NOT NULL default '0',
  visitor_id int(10) unsigned NOT NULL default '0',
  server_id smallint(5) unsigned NOT NULL default '0',
  ip int(10) unsigned NOT NULL default '0',
  ts timestamp(14) NOT NULL,
  method_id tinyint(3) unsigned NOT NULL default '0',
  http_ver_id tinyint(3) unsigned NOT NULL default '0',
  page_id int(10) unsigned NOT NULL default '0',
  referer_page_id int(10) unsigned NOT NULL default '0',
  status smallint(5) unsigned NOT NULL default '0',
  bytes int(10) unsigned NOT NULL default '0',
  browser_id mediumint(8) unsigned NOT NULL default '0',
  language smallint(5) unsigned NOT NULL default '0',
  local_visitor_id bigint(10) unsigned NOT NULL default '0',
  process_time int(10) unsigned NOT NULL default '0',
  proxy_software_id mediumint(8) unsigned NOT NULL default '0',
  proxy_client_ip int(10) unsigned NOT NULL default '0',
  auth_user_id int(10) unsigned NOT NULL default '0',
  flag int(10) unsigned NOT NULL default '0',
  session_id int(10) unsigned NOT NULL default '0',
  doc_type smallint(5) unsigned NOT NULL default '0',
  online_users smallint(5) unsigned NOT NULL default '0',
  src_id int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (counter_id,ts,id),
  KEY visitor_id (visitor_id)
) TYPE=MyISAM PACK_KEYS=1 DELAY_KEY_WRITE=1;

which has:
mysql> select count(*) from g00hits;
+--+
| count(*) |
+--+
|  3722867 |
+--+
1 row in set (0.01 sec)

The MYD file takes 300MB; the text output takes 400MB.

With mysqldump which uses the following query to do this:

SELECT * INTO OUTFILE '/spylog/test/g00hits.txt' FROM g00hits

It takes:
real1m23.177s   (first time)
real1m20.934s   (second time)

The backup table for the same table takes:

first time:
mysql> backup table g00hits to '/spylog/test';
+--++--+--+
| Table| Op | Msg_type | Msg_text |
+--++--+--+
| la00.g00hits | backup | status   | OK   |
+--++--+--+
1 row in set (22.60 sec)
second time:

mysql> backup table g00hits to '/spylog/test';
+--++--+--+
| Table| Op | Msg_type | Msg_text |
+--++--+--+
| la00.g00hits | backup | status   | OK   |
+--++--+--+
1 row in set (21.34 sec)


Well, we have a 4x difference here, which is a bit strange, as I would
expect disk speed to be the limit here, not converting the rows to text
format, which in this case is done at a rate of 45,000 rows/second;
that is not really much, I think, for this type of operation. I hope
MySQL does not use strcat to form the strings?


Well. Let's try to do the restore:
mysql> restore table g00hits from '/spylog/test';
+--+-+--+--+
| Table| Op  | Msg_type | Msg_text |
+--+-+--+--+
| test.g00hits | restore | status   | OK   |
+--+-+--+--+
1 row in set (1 min 31.16 sec)

and again:

mysql> restore table g00hits from '/spylog/test';
+--+-+--+--+
| Table| Op  | Msg_type | Msg_text |
+--+-+--+--+
| test.g00hits | restore | status   | OK   |
+--+-+--+--+
1 row in set (1 min 30.41 sec)

mysql> load data infile '/spylog/test/g00hits.txt' ignore into table g00hits;
Query OK, 3722867 rows affected (8 min 35.73 sec)

mysql> load data infile '/spylog/test/g00hits.txt'  ignore into table g00hits;
Query OK, 3722867 rows affected (3 min 35.26 sec)

So it seems the cache really matters here; this would indicate a poor
disk access pattern, as the difference is too large and does not appear
with BACKUP/RESTORE TABLE.

Now without ignore:

mysql> load data infile '/spylog/test/g00hits.txt'  into table g00hits;
Query OK, 3722867 rows affected (3 min 34.75 sec)

So it seems like it does not really matter.


Well, let's drop all indexes and try to load the data the fastest way:

mysql> load data infile '/spylog/test/g00hits.txt'  ignore into table g00hits;
Query OK, 3722867 rows affected (1 min 56.16 sec)
Records: 3722867  Deleted: 0  Skipped: 0  Warnings: 0

mysql> load data infile '/spylog/test/g00hits.txt'   into table g00hits;
Query OK, 3722867 rows affected (1 min 56.38 sec)
Records: 3722867  Deleted: 0  Skipped: 0  Warnings: 0


mysql> alter table g00hits add primary key(counter_id,ts,id),add key(visitor_id);
Query OK, 3722867 rows affected (1 min 54.38 sec)
Records: 3722867  Duplicates: 0  Warnings: 0

So it does not give any improvement do ad
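A related trick worth noting: instead of physically dropping and re-adding the indexes, later MySQL versions (4.0+) let you suspend non-unique index maintenance around a bulk load. A sketch, reusing the table and file names from above:

```sql
-- Bulk-load pattern: defer index maintenance, then rebuild by sort.
ALTER TABLE g00hits DISABLE KEYS;   -- affects non-unique indexes only
LOAD DATA INFILE '/spylog/test/g00hits.txt' INTO TABLE g00hits;
ALTER TABLE g00hits ENABLE KEYS;    -- indexes rebuilt in one pass
```

The primary key is still maintained row by row, so the gain is limited to the secondary index.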

BACKUP/RESTORE speed and delayed index creation.

2001-06-01 Thread Peter Zaitsev

Hello mysql,

  Today I played a little with two different ways of doing backups:
  the first is BACKUP TABLE (which works for MyISAM only) and the
  second is SELECT ... INTO OUTFILE / LOAD DATA.

  In both cases, if I'm not mistaken, the file is written by the
  mysqld server, so there is no communication overhead.

  The table was about 3 million rows, 250MB in size, with 2 indexes.

  So the speeds are:

  BACKUP TABLE:
  backup: 26 sec   restore: 3 min 15 sec

  SELECT INTO OUTFILE / LOAD DATA:
  dump: 4.5 min    restore: 40 min

  Then I dropped all indexes from the table and tried the restore
  again; it took 31 min.

  The output from BACKUP TABLE was 250MB; the dump file, 400MB.


  These speeds were really strange to me; I didn't expect such a huge
  difference, as I don't see where it would come from. The SELECT ...
  INTO OUTFILE operation should be quite fast, since the speed of text
  formatting should be the real limit. Also really surprising was the
  slow speed of importing data from text: even with no indexes on the
  table, the rate is only about 1000 rows per second, which is quite
  slow.

  


-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re: BDB tables bugs

2001-05-31 Thread Peter Zaitsev

Hello Michael,

Thursday, May 31, 2001, 11:19:03 PM, you wrote:


MW> Hi!

>>>>>> "Peter" == Peter Zaitsev <[EMAIL PROTECTED]> writes:

Peter>> Hello bugs,
Peter>> mysql 3.23.38
Peter>> Linux diamond 2.4.4.SuSE-12-4GB-SMP #8 SMP Sat May 19 21:24:05 MSD
Peter>> 2001 i686 unknown
Peter>> SUSE 7.1 (GLIBC 2.2.x)

Peter>> I've got to try to use BDB in my production-like enviroment and
Peter>> quickly got above error:

MW> 

MW> select last from
MW> layers_st90.g00layers_stat_hits where counter=159806 and type=2
MW> and page_group=255 order 
MW> by begin desc limit 1

MW> Peter, could you try to check if you can get the problem when just
MW> running the above query ?

Well, sorry, I forgot to write this. I do not get the crash when I run
just this query. There are many different queries in my log file which
finally lead to the crash, and for which applications get an error (#1)
beforehand.

So it happens like this: an application starts getting error #1 while
trying to execute the query, and then finally MySQL crashes.
The error is repeatable and persists until MySQL restarts or crashes.

Here are some more queries which lead to a crash with exactly the same
backtrace, a bit different from the first one:

thd->query at 0x8588aa8 = SELECT
 UNIX_TIMESTAMP(MAX(last)) AS uts
 FROM 
 layers_st90.g00layers_stat_pagesactions
 WHERE 
 counter = 138378
thd->thread_id = 42

thd->query at 0x856a160 = SELECT
 UNIX_TIMESTAMP(MAX(last)) AS uts
 FROM 
 layers_st90.g00layers_stat_hits
 WHERE 
 counter = 138378
thd->thread_id = 13

Backtrace is:

0x8234988 pthread_sighandler + 168
0x80ba166 rnd_next__11ha_berkeleyPc + 74
0x80ad63e rr_sequential__FP14st_read_record + 122
0x80964d9 join_init_read_record__FP13st_join_table + 89
0x80915de sub_select__FP4JOINP13st_join_tableb + 86
0x80913a0 do_select__FP4JOINPt4List1Z4ItemP8st_tableP9Procedure + 404
0x808a8eb 
mysql_select__FP3THDP13st_table_listRt4List1Z4ItemP4ItemRt4List1Z15Item_func_matchP8st_orderT5T3T5UiP13select_result
 + 5547
0x807308a mysql_execute_command__Fv + 758
0x8077074 mysql_parse__FP3THDPcUi + 216
0x8072314 do_command__FP3THD + 1436
0x8071707 handle_one_connection__FPv + 655


MW> 

I think this is a long-outstanding bug: I hit it when I was playing
around with BDB for the first time (about 3 months ago). I wrote about
it, but it was hard to reproduce.


Peter>> 010531 18:16:22  /usr/local/mysql/libexec/mysqld: Got error -30989
Peter>> during CHECKPOINT
Peter>> 010531 18:16:22  bdb:  log_archive: DB_ARCH_ABS: DB_RUNRECOVERY: Fatal
Peter>> error, run database recovery
Peter>> 010531 18:16:22  /usr/local/mysql/libexec/mysqld: Shutdown Complete

Peter>> So it seems the log files got damaged.

Peter>> This is even more surprising as I had no machine crash or reboot...

MW> I agree; The above is something that should never happen.

MW> How many bdb tables do you have now?

About 1200. But only 120 of them are really used now (in test script).

MW> To be able to find and fix this, we would need a copy of the logs +
MW> the bdb tables to be able to make a bug report for the Sleepycat team.
So I packed them and uploaded them to the secret directory as
bdb_tables.tar.gz
log.003366.gz


MW> Have you had any luck with testing of InnoDB tables ?
Well, I'll play with them more now, after finding out there are some
bugs in BDB.  I'm a bit afraid of InnoDB tables, as no recovery tools
exist and it's harder to make backups.
It also looks like I'll need to use many tablespace files to hold my
database, as files greater than 2GB are not supported yet.
Another thing which really bothers me is data migration, which I often
have to do when one of my servers gets overloaded: I can simply copy
databases with BDB or MyISAM tables, but can't do so with InnoDB.

My other tests show it's pretty fast, and I think I'll play with it
some more.
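The multiple-tablespace-files point can be expressed in the server configuration; a hypothetical my.cnf fragment (paths and sizes are illustrative, each file kept under the 2GB filesystem limit):

```ini
# Split the shared InnoDB tablespace across several files < 2GB each.
[mysqld]
innodb_data_home_dir  = /spylog/db/innodb
innodb_data_file_path = ibdata1:1900M;ibdata2:1900M;ibdata3:1900M
```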




-- 
Best regards,
 Petermailto:[EMAIL PROTECTED]







BDB tables and mysql.

2001-05-28 Thread Peter Zaitsev

Hello mysql,

  I'm testing BDB tables in MySQL to find out whether I'll be able to
  use them in my application.

  I was testing how big tables work with BDB, and so created a table
  about 10GB in size.  I could easily build it with multiple inserts
  from test-insert, so the earlier insert problem seems to be fixed
  now.

  So now I'm trying to do a select count(*) from the table and... get
  error #12 saying BDB is out of locks, even though my number of locks
  is set to 10, which is quite large.

  With bdb_locks=1,000,000 I was able to do it, but this resulted in
  ~230MB of memory spent on locks, which I think is too much
  (230 bytes per lock is a huge amount).

  If I'm not mistaken, the problem is that bdb_locks controls several
  BDB options at once, which is bad, as only one of them needs to be
  so huge while the others do not and just waste memory.


-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Optimize Folks.

2001-05-17 Thread Peter Zaitsev

Hello mysql,

  Currently MySQL waits for disk space to be freed for MyISAM tables
  in all situations except repair and optimize.
  Even worse, the table is marked as crashed and will not be repaired
  automatically even after disk space is freed.

  This situation is not really a good thing, I think: while REPAIR
  TABLE is usually run in maintenance mode, where you can handle the
  error, OPTIMIZE TABLE may be used just to shrink tables to free some
  space, or (in my case) to make concurrent inserts work again.

  Also, in principle, OPTIMIZE TABLE should be a safe procedure which
  leaves the old copy in place if it fails.

  My suggestions are:

  1) it might be nice to wait for the disk problem to be resolved, as
  happens in other cases, or make this configurable if you think it
  would be better not to eat all remaining space in this case;
  2) OPTIMIZE TABLE could be changed a bit to make a copy of the table
  before running the optimize on it. This would need a bit more disk
  space and be a bit slower, but would be much safer and would allow
  clients to read from the table the whole time the optimize runs.

  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






Re[2]: New user InnoDB comments

2001-05-16 Thread Peter Zaitsev

Hello Heikki,

Wednesday, May 16, 2001, 4:01:26 PM, you wrote:

HT> Andreas,

HT> sorry, it is the default MySQL behavior to create a MyISAM table if
HT> another handler is not present.

HT> There was a long thread of emails between MySQL developers if MySQL
HT> should give an error message if a table cannot be created to be of
HT> type BDB or InnoDB because the handler is not present, but I think
HT> there was no decision to change the behavior. I think it would be
HT> better if the database would give an error. Transactional applications
HT> cannot be used with MyISAM tables.

Well, this should at least be configurable (which is not really hard to
do, I think), as in some cases it may lead to real problems: if your
application relies only on automatic rollback when it crashes, you will
not be able to discover the error until it does so :)

Well, if I'm not mistaken, MySQL 4.0 will introduce warnings, which
will solve this.
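Warnings did eventually arrive (in the MySQL 4.1 line rather than 4.0); a sketch of how the silent table-type fallback then becomes visible (table name is hypothetical):

```sql
-- On a server without the BDB handler, the table is silently
-- created as MyISAM; SHOW WARNINGS exposes the substitution.
CREATE TABLE t_bdb (i INT) TYPE=BDB;
SHOW WARNINGS;   -- reports that the requested handler was unavailable
```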



-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]







MYISAM recovery.

2001-05-04 Thread Peter Zaitsev

Hello mysql,

  It seems that MySQL sometimes tries too hard to repair MyISAM
  tables with a fixed row format.

  Several times I have found the data in one of the tables completely
  destroyed; the data looked random.

mysql> select * from g01layers_stat_cities limit 10;
+------------+------+----------------+------------+----------------+------------+------------+
| id         | type | begin          | counter    | last           | last_hit   | page_group |
+------------+------+----------------+------------+----------------+------------+------------+
|  539167267 |   84 | 20231203141129 | 1920234272 | 20320611141829 | 1713399154 |        111 |
| 1650553888 |  108 | 20241103100405 | 1684091184 | 20311128204214 | 1936291423 |         39 |
| 1124731427 |   82 | 20061110120717 |     577632 | 20241029051540 | 1684091184 |        118 |
| 1985967218 |  105 | 19750527085147 | 1684611104 | 20311125200848 |  691024168 |         32 |
| 1734964078 |  110 | 20110715200141 | 1310741583 | 19870304184629 | 1869903201 |         95 |
| 1701995374 |  109 | 19930820102541 | 1663049738 | 20311125210119 | 1767862885 |        100 |
|  678719081 |   49 | 20320408165928 | 1734964078 | 19870323012758 | 1095124292 |         85 |
|  807870548 |   39 | 20141028110448 | 1280658976 | 19870113150012 |  544437280 |        116 |
| 1953719661 |   97 | 19960219154325 |  170666292 | 20340728025216 | 1948280176 |        105 |
| 1953393017 |   40 | 20320408165931 | 1734964078 | 19870323012758 | 1095124292 |         85 |
+------------+------+----------------+------------+----------------+------------+------------+
10 rows in set (0.00 sec)


It didn't look like a possible bug in MySQL, so I checked another
thing: I took a table and completely replaced the data file with a
file not connected with MySQL in any way (actually a mysqldump output
for one of the databases), and MySQL was happily able to "repair" it
the same way it repairs slightly damaged tables; some duplicate-key
records were removed, but otherwise it looked quite normal.

Of course, I had trash in the file afterwards.

I don't have a good solution for this yet, but it's not very nice
behavior :)
  


-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Question.

2001-05-03 Thread Peter Zaitsev

Hello mysql,

  Any ideas how this may happen:
  Mysql version 3.23.37, auto recovery turned on.

mysql> check table layers_st50.g00stat_sitepaths;
+-------------------------------+-------+----------+----------------------------+
| Table                         | Op    | Msg_type | Msg_text                   |
+-------------------------------+-------+----------+----------------------------+
| layers_st50.g00stat_sitepaths | check | warning  | Table is marked as crashed |
| layers_st50.g00stat_sitepaths | check | status   | OK                         |
+-------------------------------+-------+----------+----------------------------+
2 rows in set (24.32 sec)

I would also report the following problem with auto recovery:

If auto recovery fails because of an out-of-disk-space problem, the
behavior is not the MySQL default (waiting until there is some space);
instead the table becomes damaged and recovery is not retried even
after disk space is freed.





-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[2]: Some more about mysql perfomance.

2001-04-08 Thread Peter Zaitsev

Hello Michael,

Sunday, April 08, 2001, 4:21:18 AM, you wrote:



Peter>> No it's not but it has just a couple of collisions per 100.000
Peter>> records.

MW> The problem with a not unique key is that MySQL will do an extra check
MW> if it should use the key during the optimization stage.  If the key
MW> would be unique, MySQL can skip this stage.

That's life :)


Peter>> Both :)  I really expected about 1 of selects per second one by
Peter>> one, and much more scaling like I got with heap table, then I got up
Peter>> to 7 pages per second.

MW> A thing I forgot to add:

MW> - As a lot of times goes to sending/receiving data between
MW>   client/server I expect that you will get more than 3600 queries /
MW>   second if you are using more than one client. Have you tried this?

No, I didn't, but I'm sure I'd see an increase on a multi-CPU system.
On a single CPU, where I connected via a UNIX domain socket, almost no
CPU was idle, so I'm not sure there would be much increase there.






Peter>> Well, but when I select multiple pages, the optimizer does its
Peter>> work only once, and this shows clearly with heap tables; with
Peter>> myisam this somehow is not true.

MW> With MyISAM, there is a lot of more system calls involved than with
MW> HEAP tables ; It could be these that slows downs things.

Well, if I understand correctly, it only needs to do one additional
read in this case (as the table is open and the key data is in the
buffer); all of that should not take so much :)

Peter>> Another interesting thing: if I tell MySQL explicitly to use
Peter>> the index, will that reduce the optimization cost?

MW> Yes.

Peter>> It's a pity that MySQL lacks something like a PROFILE call for
Peter>> a query, so I could check how much time is spent in the
Peter>> different phases of query execution.

MW> Any suggestions for the output for this?

Well, yes. It should be a table with some columns: first the name of
the stage, second some additional info, and last the time taken.
The problem is that some info can be collected only on the client,
i.e. the time to send the query and the time to receive all the data.
Some interesting info about caches (misses/hits) could also be
printed.

In total it would be nice to have something like:

Send query to server
Parse query
Open tables
Make plan
Read keys  1024512 Hits 512 Misses
Read rows  10121056 Fragments
Send result to client


So the idea is to get not only timing data for the different stages,
but also some additional information from them, for example the number
of key cache misses and hits.

The other idea is to have a special "profile" compile option, so the
information is gathered and made available via a special interface
after each query and can be processed by the application in its own
way.
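For reference, MySQL did eventually grow a feature along exactly these lines; a sketch of the SHOW PROFILE interface added much later (MySQL 5.0.37+), using a query from this thread:

```sql
-- Per-stage query profiling, roughly the PROFILE call suggested above.
SET profiling = 1;
SELECT * FROM g00pages WHERE hash = 1;
SHOW PROFILES;              -- recent statements with their total durations
SHOW PROFILE FOR QUERY 1;   -- stage-by-stage breakdown (parsing, optimizing,
                            -- Sending data, ...) for statement 1
```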




MW>> Sergei is actually working on something interesting for people that
MW>> need all the speed they can get:  We are going to provide a interface
MW>> directly to the storage handler, without any optimizations.
MW>> This will of course not be SQL, but we expect this to be VERY fast as
MW>> long as you are searching on a specific key...

Peter>> Yes, that's nice.  The other possible speedups are prepared
Peter>> statements and an execution plan cache (classical solutions) :)

MW> We will add prepared statements in MySQL 4; The question is will these
MW> really help your basic setup?

Only profiling would show this.

Peter>> OK. This is not really pain - just comments :)
Peter>> My current pains are problems with repair tables and keycache :)

MW> No comments (yet).

:)

-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]







Re: Some more about mysql perfomance.

2001-04-07 Thread Peter Zaitsev

Hello Michael,

Saturday, April 07, 2001, 3:14:42 AM, you wrote:


MW> Hi!

>>>>>> "Peter" == Peter Zaitsev <[EMAIL PROTECTED]> writes:

Peter>> Hello mysql,
Peter>>   I made one more test of mysql perfomance.
  
Peter>>   PIII-700/768Mb/Linux 2.4.2 glbc 2.1.3
  
Peter>>   I had the following table with 12,000,000 rows in it (well, I
Peter>>   checked this with 1 rows as well, but the speed does not
Peter>>   differ much) and I checked how fast I can select data by hash
Peter>>   key ("select * from g00pages where hash=1"; the query was
Peter>>   constant and only one row matched).  The speed was about 1800
Peter>>   queries/sec.

MW> Is hash an unique key?

No, it's not, but it has just a couple of collisions per 100,000
records.

Peter>>   So I decided to check selecting many pages at a time, like
Peter>>   "select * from g00pages where hash in (1,2,3...)"; the best
Peter>>   result I got was when selecting about 100 pages/query, which
Peter>>   gave me about 2 times more performance than selecting pages
Peter>>   one by one.  This looked even more upsetting.

MW> Why is this upsetting? Because you only got 3600 pages per second or
MW> because it was 2 times faster than single queries?
Both :)  I really expected about 1 selects per second one by one,
and much better scaling, like I got with the heap table, where I got
up to 7 pages per second.



MW> The reason singly queries are slower are of course that the
MW> initialization phase (reading the data from a socket, parsing,
MW> locking, checking which keys to use) is about half of the query time.

Yes; the problem is that I thought this took much more than 50% on such
a very simple query, when all data is in memory.  The only thing that
needs to call the OS is reading the data, as MyISAM does not cache
data; so I tested my ReiserFS and got 25,000 file open+read+close
operations per second on a directory with 150 files of 10 bytes each :)

Peter>>   I've tried the same thing with a heap table; the results are
Peter>>   strange. The performance starts from about 3600 pages/sec;
Peter>>   as I increased the number of pages/query I got up to 7
Peter>>   pages/sec, which is quite good :)

Peter>>   The strange thing is why the results differ so much when ALL
Peter>>   data fits in memory without any problem.

Peter>>   Well, of course I'll soon try glibc 2.2.x with your patches,
Peter>>   but I don't see how it can improve things when I have only
Peter>>   one thread running, one CPU, and the query is relatively big.

Peter>>   Heikki made tests which also show some strange things - for
Peter>>   example why INSERT is FASTER than SELECT.

MW> I haven't seen the test but I can imagine this is true in some
MW> context.  The reason for this is that a SELECT has to go through many
MW> optimization stages to find out what indexes to use and what queries
MW> to do.  This is one basic fault with SQL;  The optimizer has to do a
MW> lot of work...

Well. But when I select multiple pages the optimizer does
its work only once, and this shows well with heap tables, but with
MyISAM somehow this is not true.

Another interesting thing: if I tell MySQL explicitly to use the
index, will it reduce the optimization cost?

It's a pity that MySQL lacks something like a PROFILE call for the
query, so I could check how much time is spent in different phases of
query execution.

MW> Sergei is actually working on something interesting for people that
MW> need all the speed they can get:  We are going to provide a interface
MW> directly to the storage handler, without any optimizations.
MW> This will of course not be SQL, but we expect this to be VERY fast as
MW> long as you are searching on a specific key...

Yes, it's nice.  The other possible speedups are prepared statements, or an
execution plan cache (classical solutions) :)

Peter>>   Even looking at the context switches does not explain the thing much
Peter>>   - my results show that this system can do about 300.000 context
Peter>>   switches per second - so 5.000-10.000 context switches per second
Peter>>   does not eat much CPU.
  
MW> To be able to comment, I would need to make a gprof of this.
MW> (This has to wait at least until the end of next week...)

OK. This is not really pain - just comments :)
My current pains are problems with repair tables and keycache :)


-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re[2]: Some more about mysql perfomance.

2001-04-07 Thread Peter Zaitsev

Hello Tim,

Saturday, April 07, 2001, 3:21:52 AM, you wrote:

>> The reason single queries are slower is of course that the
>> initialization phase (reading the data from a socket, parsing,
>> locking, checking which keys to use) is about half of the query time.
>> 
>> Peter>   Heikki made tests which  also shows some strange things - for
>> Peter>   example why INSERT is FASTER then SELECT.
>> 
>> I haven't seen the test but I can imagine this is true in some
>> context.  The reason for this is that a SELECT has to go through many
>> optimization stages to find out what indexes to use and what queries
>> to do.  This is one basic fault with SQL;  The optimizer has to do a
>> lot of work...


TB> Most high-end relational databases address this by...

TB> a) storing the query execution plan etc in a cache keyed by the sql
TB>statement text. That way, if another statement with the same text is
TB>executed a ready-made execution plan is available.

TB> b) to make that effective they support placeholders that abstract out
TB>literal values from the statement text, so the cached plan can be
TB>reused regardless of the literal values boind to the placeholders
TB>for a particular execution.

Well. This is not the real limitation, at least at this point. I've
checked two things:

1) just "select 10", which of course does not need much optimization
2) a select from a heap table, the same query as I did from MyISAM.

The first query is about 4 times faster, and the second is at least 2
times faster than the stock query and scales much better than selecting
multiple pages by   hash in (XX,XX,XX).

Both these things show that SQL parsing/communication is not the only
cost.


TB> I appreciate that doing (b) would require major changes to the protocol
TB> etc, but it's just occured to me that there's a very simple way to
TB> avoid that but still get the benefits of (a)...

TB> Imagine if, when a statement arrived, mysqld made a char-by-char copy,
TB> but in that copy skipped out the literal values and kept a separate
TB> list of those. That would be a very fast and simple piece of code.

TB> That 'abstracted' statement could then be used as the key to the
TB> statement cache. If it matched an entry in the cache then mysql
TB> could skip the generation of the query execution plan!

TB> (To simplify access rights issues you could also add the username to
TB> the abstracted statement.)

TB> What do you think Monty?

TB> Tim.
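Tim's idea - stripping the literal values out of the statement text so the normalized form can key a plan cache - can be sketched in a few lines. This is a toy tokenizer, not MySQL's actual parser; the quoting and numeric rules are deliberately simplified.

```python
import re

# Replace quoted strings and numeric literals with '?' so that
# statements differing only in literal values share one cache key.
_LITERAL = re.compile(r"'(?:[^'\\]|\\.)*'|\b\d+(?:\.\d+)?\b")

def abstract_statement(sql):
    """Return (cache_key, literals) for a SQL statement text."""
    literals = _LITERAL.findall(sql)
    key = _LITERAL.sub("?", sql)
    return key, literals

key1, lits1 = abstract_statement("select * from g00pages where hash=1")
key2, lits2 = abstract_statement("select * from g00pages where hash=42")
assert key1 == key2                     # same plan-cache key
assert lits1 == ["1"] and lits2 == ["42"]
```

As the thread notes, the cache key would also need the username (and current database) folded in before it could safely index a shared plan cache.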



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Some more about mysql perfomance.

2001-04-06 Thread Peter Zaitsev

Hello mysql,

  I made one more test of mysql perfomance.
  
  PIII-700/768Mb/Linux 2.4.2 glbc 2.1.3
  
  I had the following table  with 12.000.000 rows (well I checked
  this with 1 rows as well but the speed does not differ much)
  in it, and I checked how fast I can select data by hash key "select *
  from g00pages where hash=1" - the query was constant and only one
  row matched.  The speed was about 1800 queries/sec.

  So I decided to check what happens when selecting many pages at a time,
  like  "select * from g00pages where hash in (1,2,3...)"  - the best
  result I got was when selecting about 100 pages/query; this gave me
  about 2 times more performance than selecting pages one by one.  This
  looked even more upsetting.

  I've tried the same thing with a heap table - the results are
  strange. The performance starts from about 3600 pages/sec; when
  I increased the number of pages/query I got up to 7
  pages/sec  which is quite good :)

  The strange thing is why the results differ so much when ALL data fits in
  memory without any problem


  Well. Of course I'll soon try glibc 2.2.x with your patches but I
  don't see how it can improve things, as I have only one thread
  running and one CPU and the query is relatively big.


  Heikki made tests which also show some strange things - for
  example why INSERT is FASTER than SELECT.

  Even looking at the context switches does not explain the thing much
  - my results show that this system can do about 300.000 context
  switches per second - so 5.000-10.000 context switches per second
  does not eat much CPU.
  



  This is the table structure.

CREATE TABLE g00pages (
  page_id int(10) unsigned NOT NULL auto_increment,
  page_prot tinyint(3) unsigned NOT NULL default '0',
  page_domain int(10) unsigned NOT NULL default '0',
  page varchar(255) NOT NULL default '',
  TS timestamp(14) NOT NULL,
  LCHECKED timestamp(14) NOT NULL,
  RETRY tinyint(3) unsigned NOT NULL default '0',
  UPDATED timestamp(14) NOT NULL,
  SIZE int(10) unsigned NOT NULL default '0',
  TITLE varchar(100) NOT NULL default '',
  port smallint(5) unsigned NOT NULL default '0',
  QUERY varchar(255) NOT NULL default '0',
  hash int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (page_id),
  KEY hash (hash)
) TYPE=MyISAM PACK_KEYS=1;





  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Mysql speed :)

2001-04-05 Thread Peter Zaitsev

Hello mysql,

  I've recently tried to benchmark MySQL in a really simple case.
  I need to select by index really fast, so I did "select * from pages
  where hash=11"; there was a key on hash and the query returned only
  one row. The query was constant and the server had no other load.

  I ran this on a 2-CPU PIII-700 under 2.2.18aa2 and on a one-CPU
  PIII-700 on 2.4.2. The results were about 3300 req/sec for the first and
  about 1800 req/sec for the second. Unix domain sockets were used.

  I found a rather huge number of context switches in the first case:

 2  0  0   3296 265244 104848 111472   0   0 0 0  103 27570  41  19  40
 2  0  0   3296 265244 104848 111472   0   0 0 0  103 27690  38  21  41
 0  0  0   3296 265344 104848 111472   0   0 0 0  104 26405  37  18  46

 the second one looked better, but not by much if you consider the roughly
 2 times speed difference:

 1  0  0 24  58656   1480 298180   0   0 0 0  105  7946  84  16   0
 1  0  0 24  58656   1480 298180   0   0 0 0  106  7942  88  12   0
 2  0  0 24  58656   1480 298180   0   0 0 4  110  7968  82  18   0
 1  0  0 24  58656   1480 298180   0   0 0 0  105  7966  81  19   0
 1  0  0 24  58656   1480 298180   0   0 0 0  105  7965  81  19   0
 1  0  0 24  58656   1480 298180   0   0 0 0  105  7948  83  17   0
 1  0  0 24  58656   1480 298180   0   0 0 0  105  7964  85  15   0

 I tried to run the dumb query "select 10" and got about 7000 queries/sec on
 the second machine - which is 4 times faster than the query which touches
 the table.

 Does anyone have any idea whether this speed is the peak - I have heard of
 people getting 1 of inserts/sec into a table with no indexes with
 mysql and so on, so I was a bit surprised by this low performance,
 which is more likely not due to connection or parsing
 speed, as "select 10" works fast, and the I/O needed should not
 take so much when everything is in memory.

 Another story: a while ago I tried the read speed of ReiserFS
 - I created 100 of files, each ten bytes in size, and I was able
 to read this file set at a speed of 25000/sec - I mean open/read/close
 - so a select speed of about 2K queries per second looks quite
 surprising, since most of this time is spent not on communication
 but on really accessing the table.

 

 

 

 
  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Strange problem.

2001-04-05 Thread Peter Zaitsev

Hello ,

A while ago I started to get the following problems, which have several
times already led to data loss for me.

Here is the example:

I found the followings in my application error log:


Query - #1017 [opal.local] - Can't find file: 'g04stat_languages.MYD' (errno: 2)
select visitors,loads,id ,hits from layers_st45.g04stat_languages where layer_id=19573
05.Apr 16:30:28 : pid:5224 h -- 65972/0 AllInOne rec:0 l:0 tm:1627.20 #
05.Apr 16:30:28 : pid:5224 h  65972/0 [03.23 17:10] - [01.01 02:59] q:3.73 i:0.00 
a:3.26 #


So if I check MySQL, it's been running quite well and for a long time:

opal:/spylog/layers # mysqladmin status
Uptime: 255087  Threads: 74  Questions: 28129440  Slow queries: 1453  Opens: 225540  
Flush tables: 1  Open tables: 1024 Queries per second avg: 110.274
opal:/spylog/layers #

The logfile contains NO errors or anything about this


I went to look at what is wrong with the file: the problem is the index file
is gone. And there are MANY such situations - I get several files lost
each day.

opal:/spylog/db/layers_st45 # ls -la g04stat_lang*
-rw---   1 mysqldaemon1473241 Mar 26 18:54 g04stat_languages.MYD
-rw---   1 mysqldaemon   8664 Jan 12 19:09 g04stat_languages.frm

This is not the worst case - in several tables the .MYD file did not
exist, but a TMD file was left - so I just rename it and it works
fine.  Once or twice I didn't find even a TMD file, so I had to go
to my backups.


Looking at the tables, I found they are usually concentrated in one table
group - for example I get many from  layers_stXX.gXX* lost while no
other tables are lost, so I think this is related to my purge program (this
is still a mysql bug anyway :)) ). The thing is, after deleting some data
from a table the program uses REPAIR TABLE - this is from the times
when OPTIMIZE TABLE was several times slower.

I'm using mysql 3.23.36, and nothing has changed in the system, only mysql;
also I had this problem with mysql 3.23.32 if I'm not mistaken.

I'll try to make a repeatable example but I'm not sure I'll be able
to do so, as this does not happen to all tables this program works with
- this may happen only in some specific cases - maybe when another
program is working with the table, or something.







-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







mysql 3.23.36 problem

2001-03-29 Thread Peter Zaitsev

Hello mysql,

  Recently I found a problem when I was trying to upgrade from
  3.23.32 to 3.23.36; well, I had the same problem when I tried to upgrade
  to 3.23.33, so I decided to wait a bit, but now I think I should look
  into this more.

  The problem is the following: sometimes when I rename a table I get
  the strange error below, though sometimes I do not. I do not
  use transactions and do not lock anything at this point, but it
  sometimes happens:

mysql> create table test1 ( id int );
Query OK, 0 rows affected (0.00 sec)

mysql> create table test2 ( id int );
Query OK, 0 rows affected (0.00 sec)

mysql> rename tables test1 to test3, test2 to test1, test3 to test2;
ERROR 1192: Can't execute the given command because you have active locked
tables or an active transaction

The same problem may appear even when using just the first part of the rename
statement.

I tried many different things, like flushing tables before
trying to rename them, or starting and committing a transaction before,
or locking and unlocking tables - nothing helps, but after restarting mysql
I get rid of this problem.

Also I found that after this problem appears there is no way to get
rid of it without restarting mysql.





-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[2]: Benchmarking innobase tables

2001-03-19 Thread Peter Zaitsev

Hello Heikki,

Monday, March 19, 2001, 4:40:30 PM, you wrote:


>>Also the problem with innobase_flush_log_at_trx_commit=0 should be that
>>there is no guarantee the last transaction committed will be in its
>>place if the power is lost.  Also I don't know whether it is possible in
>>this case for the database to be corrupted, as some transactions may modify
>>the database but are not in the logfile (Let's ask Heikki about this).

HT> The database does not get corrupted even if you use
HT> innobase_flush_logs_at_trx_commit=0 and it crashes: Innobase always writes
HT> the appropriate log segment to disk before writing a modified database
HT> page to disk. In this sense the log on disk is always 'ahead' of the disk
HT> image of the database. But, of course, you may lose the updates of the
HT> latest transactions in a crash, if the database has not yet written the
HT> relevant log segment to disk.


OK. The only question is: in this case can only the last transactions be
lost, and can a transaction only be lost completely?

I'm speaking about this situation: if on one connection I have
transactions 1,2,3,4 as a sequence - may it happen that changes made
by transaction 4 take place while those of transaction 3 do not?
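Heikki's invariant - the log on disk is always 'ahead' of the data pages - can be illustrated with a toy write-ahead-logging model. The class and its fields are invented purely for illustration, but they capture why a crash can only lose a *suffix* of unflushed commits: a later transaction can never survive while an earlier one is lost.

```python
class ToyWAL:
    """Minimal write-ahead log: commits append to an in-memory log;
    flushing makes everything up to that point durable."""
    def __init__(self):
        self.log = []        # in-memory log records, one per commit
        self.flushed = 0     # number of log records safely on "disk"

    def commit(self, txn, flush=True):
        self.log.append(txn)
        if flush:            # flush_log_at_trx_commit=1 behaviour
            self.flushed = len(self.log)

    def flush(self):         # periodic flush (trx_commit=0 behaviour)
        self.flushed = len(self.log)

    def crash_recover(self):
        # Only the flushed prefix of the log survives a crash; each
        # surviving transaction is replayed completely or not at all.
        return self.log[:self.flushed]

wal = ToyWAL()
wal.commit("t1", flush=False)
wal.commit("t2", flush=False)
wal.flush()                  # t1, t2 now durable
wal.commit("t3", flush=False)
wal.commit("t4", flush=False)
assert wal.crash_recover() == ["t1", "t2"]   # t3 and t4 lost together
```

In this model the situation Peter asks about - transaction 4 surviving while transaction 3 is lost - cannot happen on a single connection, because commits enter the log in order and durability advances as a prefix.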





-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[2]: Innobase in MySQL

2001-03-18 Thread Peter Zaitsev

Hello Greg,

Sunday, March 18, 2001, 9:29:45 PM, you wrote:

>> The only problem I see here - I don't know how Innobase/BDB will handle
>> tables without a primary key... I think performance would not be as good as
>> with MyISAM. Still, there is the possibility to load all the data into
>> MyISAM and then run an ALTER TABLE which adds all required indexes and
>> converts the table to the required format in the same pass.
>> then converts the table to format required on the same pass.
>> 

GC> Agreed, but it would be nice if this was automatic with mysqldump ?!

GC> Although a gut feeling would be that create myisam table, insert, create
GC> indexes, convert will be slower. 


Well. Index creation and conversion can be done in one pass if I'm not
mistaken.



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[2]: Benchmarking innobase tables

2001-03-18 Thread Peter Zaitsev

Hello Christian,

Sunday, March 18, 2001, 12:22:44 PM, you wrote:

>>
>>If you are going to be committing on every record, you'll want your
>>tablespace and logfile directories on separate disks to avoid
>>thrashing.  If you only have one disk and don't care if you lose the
>>last few transactions if your system crashes, try setting
>>innobase_flush_log_at_trx_commit=0 in my.cnf.

CJ> Wow, thanks. With innobase_flush_log_at_trx_commit=0, the benchmark now shows:

CJ> autocommit=0, rollback after each insert:   1587 inserts+rollbacks/sec
CJ> autocommit=1:   2764 inserts/sec.

CJ> That's even faster than myisam (2487 inserts/sec today)!!!

In this case you should compare it to MyISAM created with
delay_key_write=1; also the size of the key_buffer matters.

Also the problem with innobase_flush_log_at_trx_commit=0 should be that
there is no guarantee the last transaction committed will be in its
place if the power is lost.  Also I don't know whether it is possible in
this case for the database to be corrupted, as some transactions may modify
the database but are not in the logfile (Let's ask Heikki about this).



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[2]: Innobase in MySQL

2001-03-18 Thread Peter Zaitsev

Hello Andreas,

Saturday, March 17, 2001, 6:36:17 PM, you wrote:



AV> I noticed that in 3. and 4. the machine load was constantly at 100% during
AV> the inserts.
AV> but during index recreation the load springs wildly between 0 and 100%, so
AV> the machine is not maxed out in terms of CPU utilization here (in other
AV> words, the machine is waiting for something to do :-). The same applies to
AV> 2. when commit() was sent.

AV> I tested also 1. and 3. with a myisam-table. Index recreation draws
AV> constantly 100% load from the machine here, therefore it's faster.

AV> results:
AV> 5. same as 1. but with myisam type => 116 seconds
AV> 6. same as 3. but with myisam type => 65 seconds + additional 33 seconds for
AV> index recreation.

Could you tell me what you mean by index recreation? Did you drop
all indexes, or only the secondary indexes while leaving the primary in place?



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[2]: Innobase in MySQL

2001-03-18 Thread Peter Zaitsev

Hello Greg,

Saturday, March 17, 2001, 3:31:53 AM, you wrote:


GC> It would be very handy if Innobase (and the GEMINI when it comes along)
GC> where to support mysqldump in the standard way, as I assume it works as
GC> such and I and many others would have to change thier backup scripts. 
GC> Delayed index creation is very usefull (in saving time) in larger DB
GC> loads via a mysqldump - Hiekki is this difficult ?

Well, as insertion speed is 90% limited by index creation (at least
for MyISAM), there could be an optimization generic to all table
handlers, done at the MySQL level - first create the table without any
keys, then insert all the data into it, and then run ALTER TABLE to add all
the keys that should be in this table.

The only problem I see here - I don't know how Innobase/BDB will handle
tables without a primary key... I think performance would not be as good as
with MyISAM. Still, there is the possibility to load all the data into
MyISAM and then run an ALTER TABLE which adds all required indexes and
converts the table to the required format in the same pass.

I'll try to do some benchmarks on this to see if it gives any
speed improvement.
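The load-then-index pattern discussed above can be sketched generically. SQLite again stands in for MySQL (the absolute timings will differ a lot by engine), and the table and index names are invented; the point is only the contrast between maintaining an index per insert and building it once in bulk.

```python
import sqlite3
import time

# 100k synthetic rows; `t` and `h_idx` are made-up names.
rows = [(i, i % 1000) for i in range(100_000)]

def load(index_first):
    """Load `rows` into a fresh in-memory table, building the index
    either before the inserts (maintained row by row) or after (bulk)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, h INTEGER)")
    if index_first:
        conn.execute("CREATE INDEX h_idx ON t (h)")
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    if not index_first:
        conn.execute("CREATE INDEX h_idx ON t (h)")
    conn.commit()
    return conn

for index_first in (True, False):
    start = time.perf_counter()
    load(index_first)
    label = "index before load" if index_first else "index after load"
    print(label, round(time.perf_counter() - start, 3), "sec")
```

Either way the final table is identical; only the total load time differs, which is exactly what a mysqldump "create without keys, insert, then ALTER TABLE" mode would exploit.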



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[3]: Innobase in MySQL

2001-03-16 Thread Peter Zaitsev

Hello Michael,

Wednesday, March 14, 2001, 4:41:21 PM, you wrote:


Peter>> Well I tried to do so, but it still takes quite some time to add the
Peter>> indexes...

Peter>> I think a good idea would be to add to mysqldump an option to create
Peter>> the table without indexes, then insert all the data, and then add the
Peter>> keys required. This would be a good addition to --opt mode or as
Peter>> another option :)

MW> This already happens when using LOAD DATA INFILE and MyISAM tables,
MW> but the Innobase table handler can't yet do this.

Well, probably. But the problem is I can't back up a database in a
comfortable way doing this (I can write a script of course, but there is
one which does almost the same thing already, and it's mysqldump),

so it looks like it would be nice either for mysqldump to be able to
operate with the LOAD DATA INFILE format (you'll need a number of files to
back up a database), or a special format in which you at first create the
table without keys, then do all the inserts into it, and then do an ALTER
TABLE to add the keys.

At least it would be a standard way to quickly back up data and recover
it for all table handlers (BACKUP probably does not work for all table
types yet)



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Innobase table check.

2001-03-15 Thread Peter Zaitsev

Hello Heikki,

  Finally I was able to check the Innobase tables included in mysql
  3.23.34. Well, at first I was quite happy about them several times, but
  later understood that this was because the option  --create-options is
  broken :)

  So now I must say that on my system Innobase seems to work as badly as it
  worked before :(
  
  - ATIS test fails.
Retrieving data
Warning: Query 'select city.city_name,state.state_name,city.city_code from state,city where city.state_code=state.state_code' returned 1 rows when it should have returned 11 rows

Got error:  when executing select flight.flight_code,aircraft.aircraft_type from flight,aircraft where flight.aircraft_code=aircraft.aircraft_code
 
got 0 instead of 579 ***

  - mysqld is restarted during alter table test and one more time
  during the tests:

Innobase: Started
/usr/local/mysql/libexec/mysqld: ready for connections
Innobase: Warning: out of memory in additional memory pool.
Innobase: Innobase will start allocating memory from the OS.
Innobase: You should restart the database with a bigger value in
Innobase: the MySQL .cnf file for innobase_additional_mem_pool_size.
010314 19:00:00  Warning: Checking table:   './oldgoodcounter/stop_recs'
010314 19:00:01  Warning: Checking table:   './oldgoodcounter/registrants_stats'
mysqld got signal 11;
The manual section 'Debugging a MySQL server' tells you how to use a
stack trace and/or the core file to produce a readable backtrace that may
help in finding out why mysqld died.
Attempting backtrace. You can use the following information to find out
where mysqld died.  If you see no messages after this, something went
terribly wrong...
Bogus stack limit or frame pointer, aborting backtrace.
Thread 5126 stopped in file buf0lru.c line 371
Thread 5126 stopped in file buf0lru.c line 371

Number of processes running now: 0
010314 21:13:33  mysqld restarted
Innobase: Database was not shut down normally.
Innobase: Starting recovery from log files...
Innobase: Starting log scan based on checkpoint at
Innobase: log sequence number 0 3385030377

 - It seems like the error message for error 139 should be changed,
 because it talks about 16M while Innobase and Gemini have their own
 limits for it:
 139 = Too big row (>= 16 M)

Now a few words about reasons why this may happen (I'll try to check
them out soon):

1) I'm using a 2.4.2 kernel, SMP - so there may be some incompatibilities.
2) I'm using GLIBC patched for the 2GB limit.
3) I'm using the ReiserFS file system.
4) The parameters I'm using. (Like BDB, which does not work with a big
table cache.)

Anyway, MySQL with MyISAM works with no problem on this system, and I
use the same system in production on 20 machines, so this looks to me
more like an incompatibility problem.


  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[2]: Innobase in MySQL

2001-03-14 Thread Peter Zaitsev

Hello Dan,

Tuesday, March 13, 2001, 6:37:16 PM, you wrote:

DN> In the last episode (Mar 13), Peter Zaitsev said:
>> Well guys, mysqldump has one serious problem - the speed.
>> 
>> The backup speed is quite upsetting and loads the system heavily, but the
>> worst thing is recovery speed.
>> In my case the data is added in realtime - most queries are inserts
>> which utilize the system quite hard. So to recover the data I have gathered
>> for a month would take about 1 week of feeding mysql with mysqldump
>> output, even with extended inserts. So at least this is not a complete
>> solution.

DN> Make sure you remove all your indexes during the load; that can really
DN> slow down inserts.

Well I tried to do so, but it still takes quite some time to add the
indexes...

I think a good idea would be to add to mysqldump an option to create
the table without indexes, then insert all the data, and then add the keys
required. This would be a good addition to --opt mode or as another option
:)




-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[7]: More Innobase questions (Was: RE: Innobase)

2001-03-13 Thread Peter Zaitsev

Hello Heikki,

Monday, March 12, 2001, 11:51:54 PM, you wrote:

HT> Hi Peter,

HT> and sorry that this response to your Big Mail comes some 12 days late.
HT> We have put together the release 3.23.34a and that has occupied
HT> my mind.

No Problem. At least the progress is going :) I'll try to test
innobase more shortly. Possibly later today.

>>The good thing is innobase is allready winning on some queries, there
>>it is not so optimized to use within mysql as MYISAM is.

HT> Some queries in sql-bench are a bit artificial, because they also
HT> test the bugginess of the database. If you select single rows from
HT> the database, then the CPU time is usually dominated by the communications
HT> overhead between your client and the database server. If the database is
HT> very big, it may be dominated by the disk seek needed to fetch the row.

Yes, of course. But anyway it's a true slowdown in some things. So I just
point this out - it may mean something, or may be expected (I think no
table handler will be efficient for every application)


HT> My basic test has been a COUNT(*) on a big join in main memory. That
HT> measures the speed of the database server kernel, and communications overhead is
HT> negligible. The TPC-H query benchmark contains many such queries.

HT> I have to look at the MySQL query optimizer. It does not always pick
HT> the best table access strategy for Innobase, maybe because table
HT> statistics are calculated in somewhat different way.

OK. It's the point of optimization I think :)

By the way, does Innobase optimize queries like select count(*) from
a table?  BDB needs to scan the whole table for this, which is not good for
some usages. If not, it would be nice to have AVG_COUNT(*) available to
the application - one of my applications uses it to decide whether it
needs to purge the table or not.


HT> Also some insert tests are artificial. I have typically used a table
HT> with a primary key, and possibly a couple of non-unique secondary indexes.
HT> Rows usually come in the ascending order of the primary key. In performing
HT> inserts, a transactional database has the extra burden of writing log records
HT> for the inserts. The log is needed to provide rollback and recovery, of course.

Yes. Of course. That's why I'm saying I'm quite happy with the insert
speed.


HT> Yes, it could handle, but currently you cannot define such tables
HT> in MySQL.

Of course :)

>>Is database clustered index page is the same as data page ?

HT> Yes, all data is stored in the pages of the clustered index.
HT> In Oracle such tables are called index organized tables, but in Sybase
HT> they use the term clustered tables. The idea is that the data is always
HT> kept in the order of the primary (cluster key). It makes accesses
HT> through the cluster key faster, since you do not need to latch and
HT> access a separate data page when you fetch a row.

Yes. This is good. But what about scanning the table completely? Does it
slow down a lot? (Have you done any tests on it?)


>>What I would do with MySQL is to add a hash column to a table and make an
>>index on it, and then compute a hash function for each
>>inserted/updated row and also use it on search. This gives me a
>>significant performance improvement as well as better disk utilization.
>>I think hash indexes are an extremely good feature even if the hash
>>values are really stored in a btree.

HT> It is best to do that in the application. Adding a new access method to
HT> a database server is a major task, requiring 3 man-months of work, at
HT> the minimum :).

Well, of course I can do it in the application, but this is a common
problem: hash indexes are much more efficient for equality lookups and
take much less space, which is why many databases have them, and MySQL
4.0, as I remember, is also going to have this.

Doing this in the application is a bad idea, as you can access a table from
many different places (for example, from the console), and it's hard to
keep the hash synced with its row by hand.
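
A minimal sketch of the application-side workaround being discussed (the table, the column names, and the choice of MD5() are my illustration, not from the thread):

```sql
-- Emulate a hash index over a long string column by indexing
-- a short hash of it.
ALTER TABLE pages
  ADD url_hash CHAR(32) NOT NULL DEFAULT '',
  ADD INDEX (url_hash);

-- The hash must be maintained on every insert/update, from every
-- access path - exactly the synchronization burden noted above.
UPDATE pages SET url_hash = MD5(url);

-- Lookups probe the short hash index first, then re-check the full
-- value to rule out hash collisions.
SELECT * FROM pages
 WHERE url_hash = MD5('/some/long/request/path')
   AND url = '/some/long/request/path';
```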


>>I meant: what will happen if I specify a wrong size in my.cnf ?

HT> When Innobase starts, it checks that the existing data files are of
HT> the size specified in my.cnf.

OK. So it will just not start in this case :)

>>HT> ALTER TABLESPACE has to edit also the .cnf file.
>>That's impossible, because you can't find the exact place where you got
>>the value - it could be specified just as a command line parameter. Also
>>the problem is that my.cnf is usually only readable by the mysql user and
>>not writable by it (that would be a security hole). Also, in my case several
>>machines share one my.cnf over NFS. So if you don't like to
>>have a system tablespace and system table, it would be nice to have a
>>special database config in the innobase directory which is updated by
>>mysql. This is the same way mysql handles replication - the current status
>>is stored in a special file.

HT> Monty wanted me to put the Innobase config parameters to my.cnf. Maybe
HT> we should do so that Innobase would internally remember the data files
HT> and when you start up,

Re[2]: Innobase in MySQL

2001-03-13 Thread Peter Zaitsev

Hello Heikki,

Tuesday, March 13, 2001, 1:31:04 AM, you wrote:

HT> Joshua,

>>I hope you can also use MySQL dump, in which case, you don't have to shut 
>>down, right?

HT> yes, you can use mysqldump without shutting down. It did not come to my
HT> mind that actually mysqldump is a kind of online backup mechanism :).
HT> Since Innobase is multiversioned, you will get consistent snapshots of
HT> your tables, and since the consistent read does not set any locks, your
HT> users should be able to update the tables concurrently. Here I have
HT> to check if mysqldump sets a full table read lock on the table you dump:
HT> for Innobase that is not needed, but maybe MySQL currently does this because of
HT> other table types.

Well, guys, mysqldump has one serious problem: speed.

The backup speed is quite upsetting and loads the system heavily, but the
worst part is recovery speed.
In my case the data is added in real time - most queries are inserts,
which load the system quite hard. So to recover the data I have gathered
over a month, it would take about a week to feed mysql the mysqldump
output, even with extended inserts. So at least this is not a complete
solution.
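
One common way to speed up the restore path (a sketch only; the file path is illustrative, and g03dirs is just an example table name) is to dump tab-delimited data and bulk-load it back instead of replaying INSERT statements:

```sql
-- Dump one table's rows to a tab-delimited file on the server...
SELECT * INTO OUTFILE '/tmp/g03dirs.txt' FROM g03dirs;

-- ...and bulk-load them back; this is typically much faster than
-- parsing and executing millions of INSERT statements.
LOAD DATA INFILE '/tmp/g03dirs.txt' INTO TABLE g03dirs;
```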


-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re[2]: MYSQL Feature :)

2001-03-01 Thread Peter Zaitsev

Hello Sinisa,

Thursday, March 01, 2001, 3:42:46 PM, you wrote:

SM>  > I'm not really sure if this is a bug or correct behavior, but still
SM>  > it leads to the following problem in this case:
SM>  > 
SM>  > insert into g02layers_stat_hits set counter=1, type=1, page_group=1;
SM>  > ERROR 1062: Duplicate entry '4294967295' for key 1
SM>  > 
SM>  > This is quite a reasonable result, but if we use replace in this case
SM>  > (which works quite OK when the table does not have the maximum id in use):
SM>  > 
SM>  > mysql> replace into g02layers_stat_hits set counter=1, type=1, page_group=1;
SM>  > Query OK, 2 rows affected (0.33 sec)
SM>  > 
SM>  > then the old row with id=4294967295 is destroyed.
SM>  > 
SM>  > Therefore this works quite nicely when the table does not have the
SM>  > last id used:
SM>  > 
SM>  > mysql> replace into g02layers_stat_hits set counter=1, type=1, page_group=1;
SM>  > Query OK, 1 row affected (0.02 sec)
SM>  > 
SM>  > in this case a new ID is generated and everything is just fine :)
SM>  > 


SM> Hi!

SM> Yes, the above is well known behaviour. 

SM> What do you suggest, how should we change that ??

I think in this case, since you allow a REPLACE with an auto-increment
value (which in this case is equivalent to an INSERT, as no such value
should exist), it should return a duplicate key error, because it was
unable to insert a new record into the database, which is what it was
expected to do.




-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







MYSQL Feature :)

2001-02-28 Thread Peter Zaitsev

Hello mysql,

  MySQL 3.23.32 running on Linux.

I've got a table with the auto_increment counter stuck - well, this is no
surprise: there is a record in this table with the MAX_UINT value.

I'm not really sure if this is a bug or correct behavior, but still
it leads to the following problem in this case:

insert into g02layers_stat_hits set counter=1, type=1, page_group=1;
ERROR 1062: Duplicate entry '4294967295' for key 1

This is quite a reasonable result, but if we use replace in this case
(which works quite OK when the table does not have the maximum id in use):

mysql> replace into g02layers_stat_hits set counter=1, type=1, page_group=1;
Query OK, 2 rows affected (0.33 sec)

then the old row with id=4294967295 is destroyed.

Therefore this works quite nicely when the table does not have the last
id used:

mysql> replace into g02layers_stat_hits set counter=1, type=1, page_group=1;
Query OK, 1 row affected (0.02 sec)

in this case a new ID is generated and everything is just fine :)
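
The situation can be reproduced with a small hypothetical table (the original schema of g02layers_stat_hits is not shown in the report):

```sql
-- An UNSIGNED INT auto_increment column that has reached its
-- maximum value, 4294967295.
CREATE TABLE t (
  id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  counter INT
);
INSERT INTO t (id, counter) VALUES (4294967295, 1);

-- A further insert generates 4294967295 again and fails with a
-- duplicate-key error...
INSERT INTO t (counter) VALUES (2);

-- ...while REPLACE generates the same value and silently deletes
-- the existing row instead of reporting the problem.
REPLACE INTO t (counter) VALUES (3);
```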





-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re: Select bug?

2001-02-28 Thread Peter Zaitsev

Hello Mike,

Wednesday, February 28, 2001, 3:55:35 AM, you wrote:

You should know that you can't do direct comparisons between a float
column and a float constant, as the number really stored in the database
can be slightly different, e.g. something like 355.6199951 instead of
355.62.
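
A sketch of two ways around it, using the table from the report that follows (the tolerance value is my choice):

```sql
-- FLOAT(10,2) stores a binary approximation of 355.62, so '=' against
-- the decimal literal may not match. Compare with a tolerance:
SELECT custom_price FROM service
 WHERE accountid = 2625
   AND custom_price BETWEEN 355.615 AND 355.625;

-- Or store exact decimal values in the first place:
ALTER TABLE service
  MODIFY custom_price DECIMAL(10,2) NOT NULL DEFAULT -1.00;
```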



MM> There seems to be a matching bug in MySQL.

MM> Why will it match if I do a > against a number smaller than the one in the
MM> custom_price column, or if I do a strcmp(custom_price, '355.62'), but
MM> will NOT match if I do "WHERE custom_price = '355.62'" OR if I try
MM> "WHERE custom_price = 355.62"? This happens in both 3.23.28-gamma and
MM> the latest 3.23.33. Tested on RedHat 6.1, 6.2 and 7.0.

MM> Here is an outline of the problem:


MM> mysql> desc service;
MM> +------------------+-------------+------+-----+---------------------+----------------+
MM> | Field            | Type        | Null | Key | Default             | Extra          |
MM> +------------------+-------------+------+-----+---------------------+----------------+
MM> | id               | int(10)     |      | PRI | NULL                | auto_increment |
MM> | accountid        | int(10)     |      | MUL | 0                   |                |
MM> | servicetypeid    | int(10)     |      | MUL | 0                   |                |
MM> | status           | varchar(32) |      |     | Pending             |                |
MM> | date             | date        |      |     | 0000-00-00          |                |
MM> | statuschangedate | datetime    |      |     | 0000-00-00 00:00:00 |                |
MM> | addedby          | int(10)     |      |     | 0                   |                |
MM> | updatedby        | int(10)     |      |     | 0                   |                |
MM> | referral         | varchar(50) |      |     |                     |                |
MM> | serverid         | int(10)     |      |     | 0                   |                |
MM> | chargedate       | date        |      |     | 0000-00-00          |                |
MM> | chargeinterval   | varchar(5)  |      |     |                     |                |
MM> | quantity         | float(10,1) |      |     | 0.0                 |                |
MM> | chargedsetup     | int(1)      |      |     | 0                   |                |
MM> | discountid       | int(10)     |      |     | 0                   |                |
MM> | custom_price     | float(10,2) |      |     | -1.00               |                |
MM> +------------------+-------------+------+-----+---------------------+----------------+
MM> 16 rows in set (0.00 sec)


MM> (not work)
MM> mysql> select custom_price from service where accountid = 2625 and
MM> custom_price = '355.62';
MM> Empty set (0.01 sec)

MM> mysql> select custom_price from service where accountid = 2625 and
MM> custom_price = 355.62;
MM> Empty set (0.00 sec)


MM> (work)
MM> mysql> select custom_price from service where accountid = 2625 and
MM> custom_price > '355.61';
MM> +--+
MM> | custom_price |
MM> +--+
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> +--+
MM> 9 rows in set (0.01 sec)


MM> mysql> select custom_price from service where accountid = 2625 and
MM> strcmp(custom_price,'355.62') = 0;
MM> +--+
MM> | custom_price |
MM> +--+
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> |   355.62 |
MM> +--+
MM> 9 rows in set (0.00 sec)





-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[3]: More Innobase questions (Was: RE: Innobase)

2001-02-26 Thread Peter Zaitsev

Hello Heikki,

Monday, February 26, 2001, 5:08:52 PM, you wrote:

HT> Hi!

HT> There were more questions about Innobase:

>>Here is the coming question - can you say some more about the space
>>allocation concepts used in innobase. For example how it would
>>work with a huge number (10,000+) of tables and how space allocation

HT> The data dictionary is hashed, a huge number of tables is no problem.
HT> Space allocation is done for small tables one database page at a time
HT> (default 16 kB). For big tables space is allocated in chunks of
HT> consecutive pages, up to 256 pages big.
Good. This sounds like it should reduce fragmentation.

Anyway, as I look at the benchmarks, the table creation test currently runs
about 5 times faster on MyISAM, though innobase is itself about 5 times
faster than BDB. Maybe the speed difference is because of the interface or
transaction handling...

>>is done - I know innobase supports clustering, but if I'm not

HT> Yes, all tables in Innobase are clustered by their primary key, that is,
HT> rows are physically in the order of their primary key. If you do not
HT> specify a primary key, then a generated row id is internally used as
HT> the primary key: the rows will be ordered according to their insertion
HT> sequence. The support for the Oracle-like clustering, physically mixing
HT> several tables, is built in but I do not have a user interface for it.
Hmm. How does this fit with one page being used by only one table (or am
I mistaken)?

>>mistaken only for rows from one table. The next thing is: is there a
>>difference in how dynamic and static rows are handled, and how is
>>fragmentation avoided for the latter ?

HT> You probably mean what happens if the size of a row changes?
HT> Innobase uses no overflow pages. If a row is updated so that it
HT> cannot fit on the page, we split the B-tree page to get more space.
HT> This will of course slow down table scans. Recreating tables where
HT> this happens may be sensible from time to time.
A better way would be to have the possibility of online optimization.
If you have an online backup feature, this should also be possible - the
idea is almost the same.

Another thing: you probably keep some space free on each page so that
new rows can be inserted into the same page? Another question is what
you do with holes when, for example, a row grows and you've copied it to
another location. What happens if there would be room on the page for a
row, but the free space is fragmented? Do you somehow defragment the
page?


>>As far as I know some space allocation problems are quite different in
>>file systems and in databases, so I'm surprised you're speaking about a
>>similar algorithm.

HT> Yes, in file systems you cannot grow a file in the middle, only at the
HT> end.

That is not the only difference :) For example, file systems usually
work with a large number of files, while databases have a limited number
of them; a database does not have directories. A database has different
object types (table data, indexes), which have somewhat different access
methods.

>>Other thing is - does innobase support hash indexes internally ?

HT> Yes, it automatically creates hash indexes, no user configuration is
HT> needed.

Hmm. What do you mean by this? Does it somehow track the queries and
create indexes of the needed type automatically? This is a good feature,
but someone will still want to turn it off, wanting to decide the index
layout himself.

I'm speaking about user-level hash indexes. For example, if I'm using an
index on a 200-byte string for reference only, and do not do range scans
on it, I can save a lot of space and gain a lot of speed using a hash
index, whereas in the other case I would want a btree one.

>>Other thing - can I increase a file's size instead of adding new files ?
>>How will innobase handle it if I do so ?

HT> Currently, you cannot increase a size of a data file, but you have
HT> to add a new. Increasing the size of the last data file would of course
HT> be technically easy to implement.

Yes, that's not the problem. The question is what innobase will do in
this case - take the actual length, or panic?

>>Few words about creation. As I looked at the innobase files, they are
>>mostly zero-filled when created. I think it would be nice to add the
>>possibility to create files not by write() but by lseek(); this will
>>make creation faster and the files take less space on file systems which
>>support holes in files - the latter is good as you can just create huge
>>files at the very beginning and not care much about wasting the space.

HT> The problem is that the OS may fragment the files if they are not
HT> physically allocated.

Yes, of course - but some people may not need real speed from innobase;
they just want transactions with mysql and do not want to bother with
space much.
So I think this may be good as an option for some users. Not a really
important one, of course. It's also important for debugging purposes -
currently it's quite annoying to wait while the 512Mb database file
required for a test is created.

Re[2]: More Innobase questions (Was: RE: Innobase)

2001-02-24 Thread Peter Zaitsev

Hello Heikki,

Friday, February 23, 2001, 6:51:33 PM, you wrote:

HT> Peter and Sander,

HT> relevant questions, I will try to answer them.

>>Good questions - I have a few more :)
>>A) why does it seem to use fixed-size storage units. (The files)

HT> I have copied the concept of a tablespace consisting of a small
HT> number of files from Oracle. When the database itself manages
HT> disk space, it can reduce disk fragmentation by using its own
HT> file space allocation algorithm. Innobase uses an algorithm similar
HT> to the Fast File System used in some Unixes. Also, one can use
HT> raw disks to totally eliminate the disk fragmentation caused by the
HT> operating system.

Here is the coming question: can you say some more about the space
allocation concepts used in innobase? For example, how would it
work with a huge number (10,000+) of tables, and how is space allocation
done? I know innobase supports clustering, but if I'm not
mistaken, only for rows from one table. The next thing is: is there a
difference in how dynamic and static rows are handled, and how is
fragmentation avoided for the latter?
As far as I know, some space allocation problems are quite different in
file systems and in databases, so I'm surprised you're speaking about a
similar algorithm.
Another thing: does innobase support hash indexes internally?


Another thing - can I increase a file's size instead of adding new files?
How will innobase handle it if I do so?


>>B) what happens when they ar full?

HT> You have to shut down the database and add a new file to the
HT> configuration file my.cnf.

>>C) can it auto-create new files as demand grows?

HT> Not currently. I think no database currently can auto-create
HT> disk drives either :).

A few words about creation. As I looked at the innobase files, they are
mostly zero-filled when created. I think it would be nice to add the
possibility to create files not by write() but by lseek(); this will
make creation faster and the files take less space on file systems which
support holes in files - the latter is good as you can just create huge
files at the very beginning and not care much about wasting the
space.

>>D) can you safely add new files when there is data in them already?

HT> Yes you can, but you have to shut down the database first.

Surely a special command may be added for this later :)
Another thing: don't you think it's not quite a good idea to store the
database configuration in the config file? For now it's quite simple and
can be easily recovered by looking at the files and their sizes, but when
you have a more complex configuration (i.e. several tablespaces
with external mirroring), it will be hard to recover.

Another question - file sizes. Does innobase itself support 4GB+ files?


>>Sander> -----Original Message-----
>>Sander> From: Peter Zaitsev [mailto:[EMAIL PROTECTED]]
>>Sander> Sent: 22 February 2001 19:52
>>Sander> To: [EMAIL PROTECTED]
>>Sander> Subject: Innobase
>>
>>   Hello mysql,
>>   Today I got a chance to compile mysql 3.23.34 with innobase, so
>>   although it's not out yet I'll give some of my comments.
>>   The version I'm speaking about comes from work.mysql.com
>>   1) It does not even configure when trying to configure with
>>   --with-innobase-db; the problem is that autoconf is not called in the
>>   innobase directory, so the configure script is not created

HT> In the source distribution there will be the files generated by
HT> autoheader etc., you will not need to generate them in your machine.
Well of course :)


>>   2) innobase_data_home_dir is used as a prefix, so if we do not end it
>>   with "/" we get files prefixed by the directory name, created in the
>>   upper-level directory. This may be expected behavior, but if so it
>>   should be described in the manual.

HT> I have to modify the code so that it adds the '/' or '\'.

>>   3) Data files are somehow created with the "x" (execute) permission
>>   bit, which I think is not quite right
>> drwxr-xr-x  19 root root  366 Feb 22 21:32 ..
>> -rwxrwx--x   1 root root 1073741824 Feb 22 21:36 ibdata1
>> -rwxrwx--x   1 root root 1073741824 Feb 22 21:08 ibdata2
>> -rwxrwx--x   1 root root 1073741824 Feb 22 21:09 ibdata3
>> -rwxrwx--x   1 root root 1073741824 Feb 22 21:09 ibdata4

HT> Sorry, I will fix that. You are not supposed to execute database
HT> data files :).

>>   4) Currently the ATIS test fails with an innobase table:
>> Retrieving data
>> Warning: Query 'select city.city_name,state.state_name,city.city_code
>> from state,city where city.state_code=state.state_code' returned 1 rows
>> when it should have returned 11 rows
>> Got error:  when executing select
>> f

Innobase

2001-02-22 Thread Peter Zaitsev

Hello mysql,

  Today I got a chance to compile mysql 3.23.34 with innobase, so
  although it's not out yet I'll give some of my comments.

  The version I'm speaking about comes from work.mysql.com

  1) It does not even configure when trying to configure with
  --with-innobase-db; the problem is that autoconf is not called in the
  innobase directory, so the configure script is not created.
  2) innobase_data_home_dir is used as a prefix, so if we do not end it
  with "/" we get files prefixed by the directory name, created in the
  upper-level directory. This may be expected behavior, but if so it
  should be described in the manual.
  3) Data files are somehow created with the "x" (execute) permission bit,
  which I think is not quite right:
drwxr-xr-x  19 root root  366 Feb 22 21:32 ..
-rwxrwx--x   1 root root 1073741824 Feb 22 21:36 ibdata1
-rwxrwx--x   1 root root 1073741824 Feb 22 21:08 ibdata2
-rwxrwx--x   1 root root 1073741824 Feb 22 21:09 ibdata3
-rwxrwx--x   1 root root 1073741824 Feb 22 21:09 ibdata4
  4) Currently the ATIS test fails with an innobase table:
Retrieving data
Warning: Query 'select city.city_name,state.state_name,city.city_code from state,city
where city.state_code=state.state_code' returned 1 rows when it should have
returned 11 rows
Got error:  when executing select flight.flight_code,aircraft.aircraft_type from
flight,aircraft where flight.aircraft_code=aircraft.aircraft_code
got 0 instead of 579 ***

 5) When started for the first time, mysql spends a long time creating
 the innobase data files during startup, which I think should also be
 mentioned in the manual - I thought it had hung.

 6) There is currently a serious lack of documentation - is there any
 additional information about innobase tables? For example, how are
 tables distributed over the data files (i.e. whether putting them on
 different disks will improve performance)? Another thing: what should I
 do if I somehow lose one file - will I be able to recover the data at
 least partially? What is the idea? Is it better to make small files or
 one big file if your OS supports this? How can I back up a table? Only
 with mysqldump?
 7) Currently no check/repair tools are provided, which would be a real
 showstopper for production usage even if the tables turn out to be quite
 stable - what should I do if I have file system damage and need to
 check things over? Here we have even more problems, as many tables
 share one file, so we probably can't just check them one by one.
 8) As I understand it, innobase works with large files, managing the
 space in them itself - so is it planned to support RAW devices with
 innobase?
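
Points 2 and 6 above concern the data-file configuration; for reference, a my.cnf fragment for the early innobase releases looked roughly like this (the paths and sizes are examples only, and the exact parameter syntax may differ by version):

```
[mysqld]
# Trailing '/' matters, as noted in point 2 above.
innobase_data_home_dir = /var/lib/mysql/
# Fixed-size files; innobase checks these sizes at startup.
innobase_data_file_path = ibdata1:1G;ibdata2:1G
```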

  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re[2]: iNNOBASE

2001-02-19 Thread Peter Zaitsev

Hello Heikki,

Monday, February 19, 2001, 5:24:10 PM, you wrote:

HT> Peter,

HT> I guess you mean the source tree at www.bitkeeper.com? The interface
HT> file ha_innobase.cc can be found there, but not yet the Innobase
HT> source, I think. Then it cannot yet be compiled from the source tree,
HT> because it needs Innobase header files.

Well, it's not. I'm speaking about the one from work.mysql.com, and it
seems like the whole innobase directory is there, but the configure
scripts still do not know about it.

HT> But we should have the source out by Wednesday, when Monty leaves for
HT> a trip to the Rio carnival.

Well, that's what I'm speaking about. I was afraid he would leave an
incomplete version in a hurry :)



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







iNNOBASE

2001-02-19 Thread Peter Zaitsev

Hello mysql,

  Good day.
  I'm now trying to build MySQL 3.23.34, available in the BitKeeper tree,
  to get a chance to check the innobase table handler before the release
  is out, to give the release a chance to be more stable :)

  Currently I have the following problem: innobase is already in the
  tree, but the configure script does not know anything about it, and so
  it's not compiled in.

  I hope you have an idea of how to make it work.


  Thanks in advance.

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Merge table problem.

2001-02-16 Thread Peter Zaitsev

Hello monty,

  I'm trying to use a merge table for logging, to have the possibility of
  rotating log files and quickly deleting old data. For this I'm going to
  set up a number of underlying tables, each containing data for one
  day; inserts are done to the last table using its real table
  name, while selects run on the merged table to cover all the data.

  Currently I have found the following problem, which may lead to
  trouble:

mysql> create table t1 (val char(10));
Query OK, 0 rows affected (0.00 sec)

mysql> create table t2 (val char(10)); 
Query OK, 0 rows affected (0.00 sec)

mysql> create table test (val char(10)) type=merge union=(t1,t2);
Query OK, 0 rows affected (0.00 sec)


mysql> insert into t1 values("a");
Query OK, 1 row affected (0.01 sec)

mysql> insert into t2 values("b");  
Query OK, 1 row affected (0.00 sec)


mysql> select * from test;   
+--+
| val  |
+--+
| a|
| b|
+--+
2 rows in set (0.00 sec)

mysql>


As you see, the result is correct and the merge table reflects all changes:

mysql> insert into t2 values("b");
Query OK, 1 row affected (0.00 sec)

mysql> select * from test;
+--+
| val  |
+--+
| a|
| b|
| b|
+--+
3 rows in set (0.00 sec)

Duplicates are also handled fine.

Let's add the key:

mysql> alter table t2 add key(val);
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> insert into t2 values("b"); 
Query OK, 1 row affected (0.00 sec)

mysql> select * from test; 
+--+
| val  |
+--+
| a|
| b|
| b|
+--+
3 rows in set (0.00 sec)


As you see, we're starting to get incorrect results. The same thing
happens if I insert other, different rows.


The only thing that fixes this is FLUSH TABLE test;
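
The workaround can be shown directly (test is the MERGE table defined earlier in this report):

```sql
-- Force the MERGE table to be re-opened so it picks up the rebuilt
-- underlying MyISAM table again.
FLUSH TABLE test;

-- After the flush, the select sees all the rows again.
SELECT * FROM test;
```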

Another thing which also seems strange:

mysql> delete from t1;
Query OK, 0 rows affected (0.00 sec)

mysql> delete from t2;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(*) from test;  
+--+
| count(*) |
+--+
|2 |
+--+
1 row in set (0.00 sec)

mysql> select * from test;
+--+
| val  |
+--+
| zzz  |
| zzz  |
+--+
2 rows in set (0.00 sec)

This last result is not repeatable, but I got it once during the tests.


Other tests show an even stranger thing (this seems not to be key-
related):

mysql> alter table t2 drop key val;
Query OK, 0 rows affected (0.00 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> insert into t1 values("zzz");
Query OK, 1 row affected (0.00 sec)

mysql> insert into t2 values("zzz"); 
Query OK, 1 row affected (0.00 sec)

mysql> select * from test;
+--+
| val  |
+--+
| zzz  |
+--+
1 row in set (0.00 sec)

mysql>



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re: BDB

2001-02-09 Thread Peter Zaitsev

Hello Michael,

Friday, February 09, 2001, 3:48:53 AM, you wrote:



Peter>> As you can see, one thread is doing a repair, as this happened after
Peter>> a mysqld crash, but note the first insert query in the "System lock" state
Peter>> (I've seen several queries at this place, and all of them spent several
Peter>> tens of seconds in system lock). This can have nothing to do with
Peter>> recovery (this table was just created by mysqldump a couple of minutes
Peter>> before). The only explanation which looks plausible is that the system lock
Peter>> takes place while many inserts are going to the other table (g03dirs) - as
Peter>> soon as that table changed, all the other queries which were in the system
Peter>> lock state got a chance to run. The other thing is mysqldump does not use
Peter>> lock tables to insert data, so this looks like a real performance
Peter>> (concurrency) problem. Some
Peter>> more examples:

MW> The "System Lock" means that you got some problem with fnctl() on your
MW> system.

MW> The fix is to restart mysqld with --skip-locking and without
MW> --enable-locking to ensure that fcntl() lock is not used.

MySQL is started with --skip-locking, as is recommended on Linux, so
this should not be the problem.

MW> If you are using 'mysqldump --opt', it should generate locks for the
MW> tables.  What command did you use ?

At first I used mysqldump --all --opt, but then mysqld crashed on the
multiple-row inserts. The next time I just ran it with the --all
--add-drop-table options, and that is the case shown here.



Peter>> onyx:/spylog/db # mysqladmin processlist | grep -v Sleep
Peter>> +-----+------+-----------+-------------+---------+------+--------+------+
Peter>> | Id  | User | Host      | db          | Command | Time | State  | Info |
Peter>> +-----+------+-----------+-------------+---------+------+--------+------+
Peter>> | 36  | root | localhost | lacontrol   | Query   | 384  | update | replace into layers_la00.g00stat_404refs (layer_id,id,visitors,loads,hits) values (2,2,0,4,0),(2,9, |
Peter>> | 39  | root | localhost | layers_la00 | Query   | 0    | update | INSERT INTO g03stat_404pages VALUES (149,1563,0,1,0) |
Peter>> | 272 | root | localhost |             | Query   | 0    |        | show processlist |
Peter>> +-----+------+-----------+-------------+---------+------+--------+------+

Peter>> As you see, here the replace is "hung" - it's a simple query which should
Peter>> not take so long. But just a few seconds later:

MW> Do you know if some other thread was using the table at this point ?

layers_st00.g00stat_404refs ? No. This is the only thread using it.

Peter>> onyx:/spylog/db # mysqladmin processlist | grep -v Sleep
Peter>> +-----+-------+------------+-------------+---------+------+-------------+------+
Peter>> | Id  | User  | Host       | db          | Command | Time | State       | Info |
Peter>> +-----+-------+------------+-------------+---------+------+-------------+------+
Peter>> | 36  | root  | localhost  | lacontrol   | Query   | 16   | update      | replace into layers_la00.g00stat_enterpages (layer_id,id,hits,loads) values (2,2048,2,2),(2,1,60,60 |
Peter>> | 39  | root  | localhost  | layers_la00 | Query   | 0    | update      | INSERT INTO g03stat_404refs VALUES (6,76851,0,1,0) |
Peter>> | 271 | titan | mail.local | counter     | Query   | 1    | System lock | select visitors, visitors7d, visitors30d, |
Peter>> | 273 | root  | localhost  |             | Query   | 0    |             | show processlist |
Peter>> +-----+-------+------------+-------------+---------+------+-------------+------+




Peter>> As you see, here mysqldump has moved on to loading another table, the
Peter>> replace passed, and now the other replace is waiting for the insert. And
Peter>> so the thing continues with each table.

MW> Ok, I see what you mean.  This looks VERY strange.

BDB

2001-02-08 Thread Peter Zaitsev

Hello mysql,

  1) Today I tried to reload my tables, backing up and recovering the
  whole database with mysqldump - it seems it does not work:
  mysqld crashed while loading the data back, even after I had removed
  all bad tables and BDB logs to have a clean system.

  This looks like the same problem as I reported with the insert test -
  BDB hangs/crashes during huge multiple-row insert queries.
  I've uploaded the mysqldump output which crashes mysqld during load
  to the secret directory as layers_la00.sql.gz.
  The thing is, after I dumped the same data without extended
  inserts, I could load it back without any problems.


  2) While I was loading the data from the .sql file, I saw the following:

onyx:/spylog/db # mysqladmin processlist | grep -v Sleep
+-----+-------+-----------+-------------+------------+------+-------------------+----------------------------------------------------------------------------------+
| Id  | User  | Host      | db          | Command    | Time | State             | Info                                                                             |
+-----+-------+-----------+-------------+------------+------+-------------------+----------------------------------------------------------------------------------+
| 34  | root  | localhost | la00        | Field List | 494  | Repair by sorting |                                                                                  |
| 36  | root  | localhost | lacontrol   | Query      | 64   | System lock       | insert into layers_la00.g00keywords (counter_id,keyword) values (106339,'RSBAC') |
| 38  | root  | localhost | la00        | Field List | 468  | Waiting for table |                                                                                  |
| 39  | root  | localhost | layers_la00 | Query      | 0    | update            | INSERT INTO g03dirs VALUES (110912,8288,'pictures/company/itartass/calendar')    |
| 81  | root  | localhost | la00        | Field List | 296  | Waiting for table |                                                                                  |
| 121 | titan | php.local | counter     | Query      | 5    | System lock       | SELECT lsh.begin AS period, sh.hosts7d,sh.visitors7d FROM layers_la00.g00layers_stat_hits AS ls |
| 125 | root  | localhost |             | Query      | 0    |                   | show processlist                                                                 |
+-----+-------+-----------+-------------+------------+------+-------------------+----------------------------------------------------------------------------------+

As you can see, one thread is doing a repair, as this happened after
a mysqld crash, but note the first insert query in the "System lock" state
(I've seen several queries at this place, and all of them spent several
tens of seconds in system lock). This can have nothing to do with
recovery (this table was just created by mysqldump a couple of minutes
before). The only explanation which looks plausible is that the system lock
takes place while many inserts are going to the other table (g03dirs) - as
soon as that table changed, all the other queries which were in the system
lock state got a chance to run. The other thing is mysqldump does not use
lock tables to insert data, so this looks like a real performance
(concurrency) problem. Some more examples:
more examples:

onyx:/spylog/db # mysqladmin processlist | grep -v Sleep
+-----+------+-----------+-------------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| Id  | User | Host      | db          | Command | Time | State  | Info                                                                                                 |
+-----+------+-----------+-------------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| 36  | root | localhost | lacontrol   | Query   | 384  | update | replace into layers_la00.g00stat_404refs (layer_id,id,visitors,loads,hits) values (2,2,0,4,0), (2,9, |
| 39  | root | localhost | layers_la00 | Query   | 0    | update | INSERT INTO g03stat_404pages VALUES (149,1563,0,1,0)                                                 |
| 272 | root | localhost |             | Query   | 0    |        | show processlist                                                                                     |
+-----+------+-----------+-------------+---------+------+--------+------------------------------------------------------------------------------------------------------+

as you see here the replace is "hung" - it's a simple query which should
not take so long. But just a few seconds later:

onyx:/spylog/db # mysq

Re: Strange bug with BDB

2001-02-08 Thread Peter Zaitsev

Hello Michael,

Thursday, February 08, 2001, 1:58:24 AM, you wrote:



Peter>> I've done mysqladmin "flush logs" and then copied the .frm and .db
Peter>> files into other database directory - and the bug was unable to be
Peter>> repaired.

MW> I assume you mean 'unable to be repeated' ?

Yes of course.

MW> I haven't seen anything like this with BDB tables before :(

Peter>> also I find one of mysql processes hanged - it took 100% of one of the
Peter>> processors.

Peter>> I've connected to the process and made BT several times:

Peter>> 0x8137b17 in _mi_get_binary_pack_key (keyinfo=0x0, nod_flag=0, page_pos=0x0, 
key=0x0) at mi_search.c:1035
Peter>> 1035*page_pos= page+length-tmp;

MW> Do you think this is related ?

No, now I don't - yesterday I saw the same behavior without any hung
thread.


MW> Did you do a 'mysqladmin proc' when this happened?

When I got the problem with BDB - yes, there were no active processes.
As for when I saw the hung process - no, that may possibly have been
another program which had started to work mysql hard. Sorry :)

Peter>> After I've restarted mysqld the problem removed itself

MW> :(

MW> It could have been the automatic repair that fixed this table.
MW> Do you think this is possible?

No. As far as I know there is no auto-repair for BDB tables.

I'll try today to back up the table and reload it with mysqldump. If
this helps, it would mean something is wrong with the table, but that
it only starts to show up after the table has seen some work.





-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re[2]: Strange bug with BDB

2001-02-06 Thread Peter Zaitsev

Hello Miguel,

Wednesday, February 07, 2001, 5:52:50 AM, you wrote:


MAS> I have made the following table on Win2000 machine:

C:\mysql\bin>>mysql
MAS> Welcome to the MySQL monitor.  Commands end with ; or \g.
MAS> Your MySQL connection id is 3 to server version: 3.23.30-gamma

MAS> Type 'help;' or '\h' for help. Type '\c' to clear the buffer

mysql>> use test;
MAS> Database changed
mysql>> create table layers (
MAS>  ->  updated timestamp,
MAS>  ->  counter_id int,
MAS>  ->  stat_id int not null primary key,
MAS>  ->  lasthit_ts timestamp,
MAS>  ->  lasthit_id int) type=BDB;
MAS> Query OK, 0 rows affected (0.61 sec)

MAS> I have inserted 50 rows with the data you sent. I wasn't able to
MAS> repeat your result, see below. I am using 3.23.30 version, but
MAS> before I build a server with your same version, Can you send me
MAS> your table definition ?. Notice also that I change the name of
MAS> the table, because Windows doesn't permit the syntax that you
MAS> have used.

Well, if this bug appeared every time I would of course send you a
working example. If this were at least a table damage problem I
would upload the table to the appropriate directory. But as I wrote in my
previous email this has nothing to do with that - after I
restarted mysql the problem disappeared, so it looks like mysql's
internal memory structures went wrong. Also I reported that some process
got looped - and I think this might be the source of the problem.

This was the first problem of this type with BDB tables I have seen
during 2 weeks of pre-production testing of them.



-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]







Strange bug with BDB

2001-02-06 Thread Peter Zaitsev

Hello mysql,

  mysql 3.23.32 + BDB 3.2.3h


mysql> select * from  layers_la00.g03last_hit;
+++-+++
| updated| counter_id | stat_id | lasthit_ts | lasthit_id |
+++-+++
| 20010206162625 | 110912 |   1 | 20010103141348 | 25 |
| 20010206162624 | 110912 |   2 | 20010103141348 | 25 |
| 20010206162624 | 110912 |   3 | 20010103141348 | 25 |
| 20010206162624 | 110912 |   6 | 20010103141348 | 25 |
| 20010206162624 | 110912 |   7 | 20010103141348 | 25 |
| 20010206162625 | 110912 |   8 | 20010103141348 | 25 |
| 20010206162624 | 110912 |   9 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  10 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  11 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  12 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  15 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  16 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  18 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  19 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  20 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  21 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  24 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  26 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  27 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  29 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  30 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  31 | 20010103141348 | 25 |
| 20010206162625 | 110912 |  32 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  33 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  34 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  35 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  36 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  39 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  42 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  43 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  45 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  46 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  47 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  49 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 101 | 20010103141348 | 25 |
| 20010206155808 | 110912 | 102 | 20010103141348 | 25 |
| 20010206155808 | 110912 | 103 | 20010103141348 | 25 |
| 20010206155808 | 110912 | 104 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 105 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 106 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 107 | 20010103141348 | 25 |
| 20010206155808 | 110912 | 108 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 109 | 20010103141348 | 25 |
| 20010206155808 | 110912 | 110 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 111 | 20010103141348 | 25 |
| 20010206155808 | 110912 | 112 | 20010103141348 | 25 |
| 20010206155808 | 110912 | 113 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 114 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 115 | 20010103141348 | 25 |
| 20010206155808 | 110912 | 116 | 20010103141348 | 25 |
+++-+++
50 rows in set (0.00 sec)


mysql> select * from layers_la00.g03last_hit where counter_id=110912;
+++-+++
| updated| counter_id | stat_id | lasthit_ts | lasthit_id |
+++-+++
| 20010206155808 | 110912 |  35 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  36 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  39 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  42 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  43 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  45 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  46 | 20010103141348 | 25 |
| 20010206155808 | 110912 |  47 | 20010103141348 | 25 |
| 20010206162624 | 110912 |  49 | 20010103141348 | 25 |
| 20010206162625 | 110912 | 101 | 20010103141348

Re[4]: Serious MySQL internal deadlock

2001-02-06 Thread Peter Zaitsev

Hello Sinisa,

Sunday, February 04, 2001, 3:15:21 PM, you wrote:

SM> Peter Zaitsev writes:
SM>  > Hello Andreas,
SM>  > 
SM>  > Thursday, February 01, 2001, 7:42:31 PM, you wrote:
SM>  > 
SM>  > 
SM>  > I must confirm the problem with table locks. Mysql realy may deadlock
SM>  > sometimes, and the funny thing is the solution to this case is to kill
SM>  > the oldest locked thread waiting this condition - afterwards
SM>  > everything resolves. So this may mean something like broadcast is lost
SM>  > sometimes (?)
SM>  > 
SM>  > 


SM> Hi!

SM> I guess we may sound to be obnoxious, but can you make a repeatable
SM> case out of it ??

Well, I wish I could - I have never been able to repeat it, although it
appears again and again - across my 15 servers running mysql I usually see
it about once per week, so it seems to be a rare one :)


-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]







Re[2]: Serious MySQL internal deadlock

2001-02-03 Thread Peter Zaitsev

Hello Andreas,

Thursday, February 01, 2001, 7:42:31 PM, you wrote:


I must confirm the problem with table locks. MySQL really may deadlock
sometimes, and the funny thing is that the solution in this case is to kill
the oldest locked thread waiting on the condition - afterwards
everything resolves. So this may mean something like a broadcast is lost
sometimes (?)
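As an aside, the kill-the-oldest-waiter workaround can be sketched in a few lines. This is illustrative Python, not anything from the thread; the tuple layout and the "Locked" state string are assumptions about what a client would read out of SHOW PROCESSLIST:

```python
def pick_thread_to_kill(processlist):
    """Given rows of (id, command, state, time_in_state), return the id of
    the oldest thread stuck waiting on a table lock, or None if nothing
    is stuck. The 'Locked' state string and the tuple layout are
    assumptions for illustration only."""
    waiting = [row for row in processlist if row[2] == "Locked"]
    if not waiting:
        return None
    # Oldest waiter first - killing it lets the rest resolve, per the post.
    return max(waiting, key=lambda row: row[3])[0]
```

A real script would then issue KILL <id> against the returned thread id.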


AS> On 01-Feb-2001 Sinisa Milivojevic wrote:
>> 
>> HI!
>> 
>> Most probably processes are waiting for the slave to get updated.
>> 
>> To circumvent the problem, you should : 
>> 
>> - use our binary (if possible)
>> 
>> - avoid LOCK TABLES, which truly is necessary only in some rare cases
>> 
>> 
>> 
>> Regards,
>> 
>> Sinisa
>> 
>>     __ _   _  ___ ==  MySQL AB
>>  /*/\*\/\*\   /*/ \*\ /*/ \*\ |*| Sinisa Milivojevic
>> /*/ /*/ /*/   \*\_   |*|   |*||*| mailto:[EMAIL PROTECTED]
>>/*/ /*/ /*/\*\/*/  \*\|*|   |*||*| Larnaka, Cyprus
>>   /*/ /*/  /*/\*\_/*/ \*\_/*/ |*|
>>   /*/^^^\*\^^^
>>  /*/ \*\Developers Team
>> 

AS> Actually I did avoid using lock tables but got hit by this problem. Thus I
AS> tried with lock tables.

AS> Using supplied binaries is no choice as for certain reasons all systems
AS> involved are completely built from source.

AS> I can't really see a reason why a slave being updated should lock the master
AS> for good. Nevertheless there's just one programmable switch between the master
AS> and the slaves involved and the network speed is 200MBits/s (channel bonding) so
AS> this can't really be the reason for the problem.

AS> If I can do anything to help to sort this out I'll happily do so.


AS> Andreas Steinmetz
AS> D.O.M. Datenverarbeitung GmbH





-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]







Autorepair bug

2001-01-28 Thread Peter Zaitsev

Hello monty,

It seems like something changed in the auto_repair code in 3.23.31
and .32.

I see on many servers, after a crash (I recently got an unexpected
power-down on 10 of my machines, so it worked as a good test case),
tables that failed to be repaired (or rather, selects fail on them):

28.Jan 12:26:34 : pid:13836 counter:61005 AllInOne MySqlEr
Query - #1016 [ruby.local] - Can't open file: 'g06layers_stat_providers.MYD'. (errno: 144)
select id from layers_st45.g06layers_stat_providers where type=2 and begin=2001011700 and counter=61005 and page_group=255
28.Jan 12:26:34 : pid:13836 h -- 61005/0 AllInOne rec:0 l:0 tm:5.54 
28.Jan 12:26:34 : pid:13836 h  61005/0 [17.1 18:34] - [] q:0.28 i:0.00 a:5.13

the check table for this table also reports

mysql> check table layers_st45.g06layers_stat_providers;
+--------------------------------------+-------+----------+---------------------------------------------------+
| Table                                | Op    | Msg_type | Msg_text                                          |
+--------------------------------------+-------+----------+---------------------------------------------------+
| g06layers_stat_providers             | check | warning  | Table is marked as crashed and last repair failed |
| g06layers_stat_providers             | check | error    | Key 1 doesn't point at all records                |
| layers_st45.g06layers_stat_providers | check | error    | Corrupt                                           |
+--------------------------------------+-------+----------+---------------------------------------------------+
3 rows in set (6.60 sec)

The message seems to be similar for many tables (I've checked
about 10 tables I found in my application error log file):

mysql> check table layers_st27.g07layers_stat_framepages;
+---------------------------------------+-------+----------+---------------------------------------------------+
| Table                                 | Op    | Msg_type | Msg_text                                          |
+---------------------------------------+-------+----------+---------------------------------------------------+
| g07layers_stat_framepages             | check | warning  | Table is marked as crashed and last repair failed |
| g07layers_stat_framepages             | check | error    | Key 1 doesn't point at all records                |
| layers_st27.g07layers_stat_framepages | check | error    | Corrupt                                           |
+---------------------------------------+-------+----------+---------------------------------------------------+
3 rows in set (2.94 sec)


However, the log file indicates the table was checked, with no further
notes about it afterwards:

010127 13:11:48  Warning: Checking table:   './layers_st27/g07layers_stat_framepages'


Once I check and repair all the tables explicitly, no more error
messages appear in the log files - so this points to the auto-repair
procedure.

Also I found that I get this case even after a single crash, so it is
not connected to a crash during repair and so on...
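The manual check pass can be scripted. A hypothetical Python sketch - the row layout just mirrors the CHECK TABLE output quoted above, nothing here is from the original mail - that picks out tables still needing an explicit REPAIR TABLE:

```python
def tables_needing_repair(check_results):
    """check_results: iterable of (table, op, msg_type, msg_text) rows in
    the shape of CHECK TABLE output. Returns the set of tables whose check
    reported an error or a 'marked as crashed' warning - candidates for an
    explicit REPAIR TABLE, since auto-repair apparently missed them."""
    bad = set()
    for table, op, msg_type, msg_text in check_results:
        if msg_type == "error" or "marked as crashed" in msg_text:
            bad.add(table)
    return bad
```

A real script would fetch these rows through a MySQL client and then run REPAIR TABLE on each flagged table.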







  


  

-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







Re: BDB & Check tables

2001-01-25 Thread Peter Zaitsev

Hello Michael,

Friday, January 26, 2001, 12:20:43 AM, you wrote:


MW> Hi!

>>>>>> "Peter" == Peter Zaitsev <[EMAIL PROTECTED]> writes:

Peter>> Hello monty,
Peter>>   As I remember according to documentation MYSQL should support check
Peter>>   table on BDB tables:

Peter>> `CHECK TABLE' only works on `MyISAM' and `BDB' tables. On `MyISAM'
Peter>> tables it's the same thing as running `myisamchk -m table_name' on the
Peter>> table.

MW> I tried to implement this; I had got everything coded when I
MW> noticed that one can only check BDB tables when there is no active
MW> transaction and one is the only user of the table.  This
MW> is impossible to guarantee with the current MySQL code; to do this we
MW> have to add a new layer on top of transactions, which isn't something
MW> we plan to do until 4.x

MW> I have now removed the notes about BDB tables in the CHECK command.

OK. But just note that to really use BDB tables in production, people
need a way to recover data if a BDB table somehow gets damaged. I know
there is a chance to recover data with ALTER TABLE or, if that does not
help, with a mysqldump and restore (which is of course worse than with
MyISAM, where REPAIR TABLE sometimes helps even when mysqldump does not
work). To do the repair you of course need to find out that something is
going wrong, and it's better to find this out before the errors block
normal execution.

As far as I know there are no tools right now to fix BDB tables (the
ones provided with BDB do not work with MySQL tables), so it was a nice
option anyway - to start another mysqld and check the table :) And as
far as I know this is safe anyway, as CHECK TABLE does not change
anything in BDB?

Also - doesn't LOCK TABLES ... WRITE ensure there are no other active
transactions?

-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]







BDB & Check tables

2001-01-25 Thread Peter Zaitsev

Hello monty,

  As I remember according to documentation MYSQL should support check
  table on BDB tables:

`CHECK TABLE' only works on `MyISAM' and `BDB' tables. On `MyISAM'
tables it's the same thing as running `myisamchk -m table_name' on the
table.

However, it does not:


mysql> check table layers_la01.g00layers_stat_hits;
+---------------------------------+-------+----------+--------------------------------------------------------+
| Table                           | Op    | Msg_type | Msg_text                                               |
+---------------------------------+-------+----------+--------------------------------------------------------+
| layers_la01.g00layers_stat_hits | check | error    | The handler for the table doesn't support check/repair |
+---------------------------------+-------+----------+--------------------------------------------------------+
1 row in set (0.00 sec)


and this is also true for any table:

mysql> create table t (t int) type=BDB;
Query OK, 0 rows affected (0.30 sec)

mysql> check table t;
++---+--++
| Table  | Op| Msg_type | Msg_text   |
++---+--++
| test.t | check | error| The handler for the table doesn't support check/repair |
++---+--++
1 row in set (0.01 sec)




-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]







mysql 3.23.32 & table problems

2001-01-24 Thread Peter Zaitsev

Hello monty,

  I'm using mysql 3.23.32 on Linux and it looks like there is still a
  problem with check/auto-recovery:

  On many of my servers I find the following in a log file:

24.Jan 23:45:07 : pid:1412 counter:32256 AllInOne MySqlEr
Query - #1034 [tor.local] - Incorrect key file for table: 'g09stat_sitepaths'. Try to repair it
delete from layers_st37.g09stat_sitepaths where layer_id=14704
24.Jan 23:45:07 : pid:1412 h -- 32256/0 AllInOne rec:290 l:0 tm:4.03 
24.Jan 23:45:07 : pid:1412 h  32256/0 [24.1 1:42] - [] q:0.23 i:0.00 a:3.32


When I look at the .err log file I see this:

tor:/spylog/layers # tail -1 /spylog/db/tor.err | grep g09stat_sitepaths
010124  2:13:46  Warning: Checking table:   './layers_st26/g09stat_sitepaths'
010124  2:17:10  Warning: Checking table:   './layers_st38/g09stat_sitepaths'
010124  3:10:08  Warning: Checking table:   './layers_st37/g09stat_sitepaths'
010124  9:17:48  Warning: Checking table:   './layers_st49/g09stat_sitepaths'

So the table was checked and was found OK.

Then I do check table now:

mysql> check table layers_st37.g09stat_sitepaths;
+---+---+--++
| Table | Op| Msg_type | Msg_text   |
+---+---+--++
| g09stat_sitepaths | check | warning  | Table is marked as crashed |
| layers_st37.g09stat_sitepaths | check | status   | OK |
+---+---+--++
2 rows in set (9.11 sec)


So it is still marked as crashed where it's really OK - and what's more,
it was checked and found correct before.



-- 
Best regards,
 Peter  mailto:[EMAIL PROTECTED]






