RE: Possible to DoS a slave by using multiple connections on the master!.

2005-06-22 Thread Dathan Pattishall
Something doesn't sound like it's set up properly. There was a time when
I had more than 100 slaves connected to a master. Many reads and writes
occurred on the masters and all slaves kept up.

I suspect that you're running out of IO bandwidth on a slave from a table
scan or huge and frequent sorts.
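A hedged way to confirm where the slave is falling behind (both are
standard statements; Seconds_Behind_Master needs MySQL 4.1.1 or later):

  SHOW SLAVE STATUS\G   -- watch Seconds_Behind_Master and the relay log position
  SHOW PROCESSLIST;     -- see what the slave SQL thread is actually doing

Pair that with iostat/vmstat on the slave to confirm an IO-bound table
scan or sort.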




DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Kevin Burton [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, June 21, 2005 6:01 PM
> To: mysql@lists.mysql.com; [EMAIL PROTECTED]
> Subject: Possible to DoS a slave by using multiple 
> connections on the master!.
> 
> Not sure if this is a known issue or not.. but I haven't seen 
> it documented anywhere.
> 
> Anyway. My past thinking was that you should always use as many 
> connections as you have tables (at least with MyISAM).  This 
> way, in the worst-case scenario, you could have locks open on all tables 
> instead of one lock on one table holding back all other tables.
> 
> This is a BAD idea if you're trying to reach max qps with INSERTs.
> 
> What happens is that the disk controller is able to command 
> queue (and other operations) to optimize IO on the master, 
> since technically you have multiple INSERTs happening at once 
> (one for each table).  Then on the slave, since there's only 
> one thread replaying the DML, it will back up, since the disk 
> controller isn't able to optimize the IO.
> 
> We were actually running a master with RAID5 and a slave with 
> RAID0 and the slave still couldn't keep up.  The problem was 
> only fixed when we told our client to only use one connection. 
> 
> While this is a temporary fix this limits the scalability of 
> MySQL as I could easily see a LOT more QPS going through these boxes. 
> 
> It might be possible to place an intelligent scheduler to bulk 
> up INSERTs and use FK relationships to allow non-dependent 
> SQL to pass forward.  
> You could also have one thread per slave per connection on 
> the master.  
> Then in the binary log you could flag the thread ID that 
> performed the modification on the master and use the same 
> thread on the slave.
> 
> The downside being that you'd need more resources on SLAVE boxes.
> 
> Seems like a wiki page in the making
> 
> -- 
> 
> 
> Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
> See irc.freenode.net #rojo if you want to chat.
> 
> Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
> 
>Kevin A. Burton, Location - San Francisco, CA
>   AIM/YIM - sfburtonator,  Web - http://peerfear.org/ GPG 
> fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Query Complexity (big 'O')

2005-06-21 Thread Dathan Pattishall
It's Big O of N, i.e. linear.


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, June 21, 2005 9:39 AM
> To: Dan Bolser
> Cc: mysql@lists.mysql.com
> Subject: Re: Query Complexity (big 'O')
> 
> Dan Bolser <[EMAIL PROTECTED]> wrote on 06/21/2005 09:51:06 AM:
> 
> > Hello,
> 
> > I am interested in the theoretical time / space complexity of SQL
> queries
> > on indexed / non-indexed data.
> 
> > I think I read somewhere that a JOIN on an indexed column 
> is something 
> > like O[mn*log(mn)] (m rows joined to n).
> 
> > I assume without an index it is just O[m*n]
> 
> > Specifically I want to know the complexity of a query that does a 
> > 'cross tabulation'
> 
> > SELECT
> > X,
> > SUM(if(Y=1,Z,0)) AS s1,
> > SUM(if(Y=2,Z,0)) AS s2,
> > SUM(if(Y=3,Z,0)) AS s3,
> > ...
> > FROM
> > T1
> > GROUP BY
> > X;
> > 
> > Assuming both X and Y are indexed, how does the complexity 
> grow with 
> > increasing 's' (more if clauses). More basic, what is the 
> complexity 
> > of the group by statement?
> 
> > Can anyone point me to a good online guide to complexity of SQL?
> 
> > Thanks very much for any suggestions :)
> 
> > Dan.
> 
> I believe you will see a nearly linear growth as you add 
> terms to your query (but only to a point). My reasoning:
> 
> I think I can accurately say the total time spent on any 
> query would be the sum of the times spent on each phase of 
> the query analysis:
> 
> Tp  - time required to parse the query
> Tj  - time to process any table JOINs (initial column identifications, too)
> Tw  - time to process the WHERE clause against your source tables
> Tgb - time to process the GROUP BY clause
> Th  - time to process the HAVING clause
> Tob - time to perform the final sort
> 
> Tquery = Tp + Tj + Tw + Tgb + Th + Tob  (total time for query)
> 
> Each one of those phases has a BIG O expression associated 
> with it and each phase has at least one opportunity for query 
> optimization. Some are optimized by table design (field 
> definitions + indexes), others are optimized through 
> efficient SQL (proper JOIN definitions, multistage queries, etc.)
> 
> In your case you are doing at least four things when you add 
> another term to your query:
> 1) you have added another term that needs parsing
> 2) you add another IF() evaluation that must occur for each 
> row's evaluation during the Tw phase.
> 3) you increase how much data must be transferred from the 
> output of the Tw phase, and each phase after that, by one column.
> 4) you add another SUM() evaluation to the Tgb phase of the query.
> 
> So, incrementally you have increased the time required to 
> perform the query by adding another evaluated column but to 
> estimate how much depends on both your hardware and your 
> settings. If, for example, the next column you added forced 
> the engine to evaluate the Tj portion of this query using 
> paged memory, you have just changed the slope of your 
> equation based on meeting a physical limitation and your 
> "time estimate formula" would now have a different set of 
> coefficients. The only way to honestly know how many 
> summarized columns it takes to reach that point is to 
> benchmark the query on your equipment and with your settings 
> under your normal production load. 
> 
> I know I really didn't answer your theory question (not 
> exactly) but I believe that evaluating the "BIG O" of a SQL 
> query is a bit more complex than you presented in your post. 
> I encourage you to keep asking questions and I am sure that 
> you will either tap the list dry or find out what you want to know.
> 
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
> 
> 




RE: Design of a Client-side MySQL Java Load Balancer

2005-06-01 Thread Dathan Pattishall
We have a client-side load balancer, as well as a central load balancer
for mySQL traffic. I've done many different models at a variety of
companies, all of which still use those solutions. Building on top of the
NDB protocol would take a very long time to produce an HA solution native
to mySQL, when other technologies that do this for you already exist, for
free.
For example use heartbeat from linux-ha.org for failover. 

Now for client-side load balancing, memcache from danga.com has some
interesting ideas. DBI::Multiplex also has some interesting ideas.

One of the easiest solutions is to load a database definitions file from a
shared drive. Have an external program add or remove hosts from each
cluster in the shared definitions file. Then use a random algorithm to
pick a host to connect to from the list defined for a cluster, e.g.:

array('mycluster' => array(array('host' => 'myhost1', 'db' => 'mydb', ...),
                           array('host' => 'myhostN', ...)));


If your algorithm is random enough and spread across enough servers, the
results coalesce into an even distribution among all the servers in the
list. Very basic, very easy to implement, and it works.

The monitor program, run out of cron or similar, can hold the logic for
what to keep in a pool and what to take out of a pool.





DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, June 01, 2005 1:21 PM
> To: Kevin Burton
> Cc: [EMAIL PROTECTED]; mysql@lists.mysql.com
> Subject: Re: Design of a Client-side MySQL Java Load Balancer
> 
> Kevin Burton <[EMAIL PROTECTED]> wrote on 06/01/2005 02:31:54 PM:
> 
> > I'd love to get some feedback here:
> > 
> > > MySQL currently falls down by not providing a solution to 
> > > transparent MySQL load balancing. There are some hardware 
> solutions 
> > > but these are expensive
> and
> > > difficult to configure. Also none of them provide any information 
> > > about the current state of your MySQL configuration. For example 
> > > they can't
> handle
> > > transparent query failover if a MySQL box fails. They also can't 
> > > disconnect and reconnect to another host if the load 
> grows too high.
> > >
> > > To that end I think it makes a lot of sense to have a MySQL 
> > > client-side load balancer.
> > >
> > > This area is difficult to implement. There are a lot of 
> > > design issues. 
> 
> > > Also the
> > > issues WRT distributed connection management start to make the 
> > > problem difficult.
> > >
> > > The other day I had a bit of an epiphany on this topic.
> > >
> > 
> > 
> > http://peerfear.typepad.com/blog/2005/06/design_of_a_cli.html
> > 
> > --
> > 
> > 
> > Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
> > See irc.freenode.net #rojo if you want to chat.
> > 
> > Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
> > 
> >Kevin A. Burton, Location - San Francisco, CA
> >   AIM/YIM - sfburtonator,  Web - http://peerfear.org/ GPG 
> > fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412
> > 
> > 
> 
> Wouldn't it make better sense to build on the NDB protocol 
> and keep the native messaging infrastructure than it would be 
> to build a similar wrapper from scratch?  I mean to use the 
> NDB communications on top of regular MySQL servers to provide 
> for failover, hotswaps, and client registration. Haven't they 
> already solved some or most of your client registration and 
> reconnection issues during the development of that tool? 
> There is also the "Federated" database engine that allows for 
> queries to be run across more than one server at a time (not 
> only can a query include a table from another database on the 
> same server but also from a table in a database hosted on a 
> different server).
> 
> I think that between the two of those projects you have 
> 80-90% of the wrapper you will need to run a MySQL cluster in 
> a HA environment. The biggest problem I see is to prevent 
> data collisions between servers (users connected to different 
> servers trying to update the same record at the same time on 
> different servers). Part of the NDB code has to contain the 
> necessary cross-server locking messaging. And Federated 
> servers will also do the same (at least to some degree).
> 
> If you can get a JDBC version working, that would be great 
> but I think the best dev path would be to re-use what is 
> already there as much as possible. I am not discouraging the 
> effort as much as I am suggesting another body of work to 
> consider (or incorporate) in your design.
> 
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
> 




RE: Call a function from a select statement?

2005-05-31 Thread Dathan Pattishall
Are you getting a "Lost connection to MySQL server" error? If you are,
reconnect to the server every time you issue the SQL request, or issue
the command SET GLOBAL wait_timeout=28000; and then issue the SELECT.
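A minimal sketch of the timeout route (28000 seconds is just the value
suggested above; tune it to your workload):

  SHOW VARIABLES LIKE 'wait_timeout';   -- check the current idle timeout
  SET GLOBAL wait_timeout = 28000;      -- raise it for connections made from now on

Note that a GLOBAL change does not alter the session you set it from, so
reconnect (or also SET SESSION wait_timeout) before rerunning the SELECT.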



DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Scott Klarenbach [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, May 31, 2005 10:58 AM
> To: My SQL
> Subject: Call a function from a select statement?
> 
> Is this not the proper way to use a function in a select statement?
> 
> SELECT
>  t.field1,
>  t.field2,
>  functionPerformAdditionalQueryFromField(t.field2) AS 'customField'
> FROM Table t
> 
> I'd like to perform the function on every row in the result 
> set, and store the returned value of that function in EACH 
> row, as a custom field.
> 
> It works from the command line, but in PHP...
> 
> the result set comes back the FIRST time, but then I lose my 
> connection to the database, and need to restart the service 
> in 2003 server.
> 
> Very frustrating... any help is appreciated.
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: MYISAM MYI

2005-05-23 Thread Dathan Pattishall
Are the FKs on your largest columns?

So if you have a char(255) and it's indexed, then for each row added you
will have added an additional 255 bytes to your index as well as to your
data file.


If you want to optimize a table, use OPTIMIZE TABLE or 

myisamchk -r -S -a /path/to/table.MYI   # when the server is off
                                        # (path is illustrative; point it at the table's .MYI file)



DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Paul Beer [mailto:[EMAIL PROTECTED] 
> Sent: Monday, May 23, 2005 1:57 PM
> To: mysql@lists.mysql.com
> Subject: MYISAM MYI
> 
> I have a MYISAM table that has both the MYD and MYI files 
> growing at similar rates.  We have this same database 
> installed in other locations and the MYI file stays static 
> but the MYD grows.  From the docs I can see that the MYI file 
> contains the index information.  There are three foreign keys 
> in the table, but I can't see why this MYI file should ever 
> grow in size.
> I have already run "myisamchk.exe -r" on the table and it 
> said it fixed the index, but then started growing again. 
> Has anyone experienced anything like this?  Anyone have any thoughts?
> I'm using MYSQL Version 4.1.8;
> 
> Thx,
> Paul
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: slow DELETE query

2005-05-20 Thread Dathan Pattishall
DELETE LOW_PRIORITY


But since you're using MyISAM, the table will still have to lock. 
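A hedged sketch against the cdr table from the question below; LOW_PRIORITY
makes the DELETE wait until no other clients are reading the table:

  DELETE LOW_PRIORITY FROM cdr WHERE accountcode = '';

Deleting in batches (adding a LIMIT and repeating until no rows are
affected) also keeps each lock short, at the cost of more statements.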

DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Matthew Simpson [mailto:[EMAIL PROTECTED] 
> Sent: Friday, May 20, 2005 11:22 AM
> To: mysql@lists.mysql.com
> Subject: slow DELETE query
> 
> I use Mysql to store call detail records from telephone 
> calls.  I have around 20 voice switches that send the call 
> detail records in real time using INSERT statements.
> 
> I am having a problem where I need to delete "junk" call 
> records that get generated [old call records, call records 
> with no accountcode, etc.], but when I try to run the DELETE 
> query, SQL grinds to a halt which causes my voice switches to 
> halt because they can't run the INSERT queries.  Is this 
> because of table locking?  An example delete query:
> 
> DELETE from cdr WHERE accountcode=''
> 
> Is there a way to make the DELETE query run at a lower 
> priority and allow the INSERTs?
> 
> Here is the table description:
> 
> mysql> describe cdr;
> +-------------+--------------+------+-----+---------------------+-------+
> | Field       | Type         | Null | Key | Default             | Extra |
> +-------------+--------------+------+-----+---------------------+-------+
> | uniqueid    | varchar(32)  |      |     |                     |       |
> | userfield   | varchar(255) |      |     |                     |       |
> | accountcode | varchar(20)  |      | MUL |                     |       |
> | src         | varchar(80)  |      | MUL |                     |       |
> | dst         | varchar(80)  |      |     |                     |       |
> | dcontext    | varchar(80)  |      |     |                     |       |
> | clid        | varchar(80)  |      |     |                     |       |
> | channel     | varchar(80)  |      |     |                     |       |
> | dstchannel  | varchar(80)  |      |     |                     |       |
> | lastapp     | varchar(80)  |      |     |                     |       |
> | lastdata    | varchar(80)  |      |     |                     |       |
> | calldate    | datetime     |      |     | 0000-00-00 00:00:00 |       |
> | duration    | int(11)      |      |     | 0                   |       |
> | billsec     | int(11)      |      |     | 0                   |       |
> | disposition | varchar(45)  |      |     |                     |       |
> | amaflags    | int(11)      |      |     | 0                   |       |
> +-------------+--------------+------+-----+---------------------+-------+
> type is MyISAM
> 
> indexes:
> mysql> show index from cdr;
> +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
> | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Comment |
> +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
> | cdr   |          1 | cdr_idx  |            1 | src         | A         |        NULL | NULL     | NULL   |         |
> | cdr   |          1 | cdr_idx  |            2 | dst         | A         |        NULL | NULL     | NULL   |         |
> | cdr   |          1 | cdr_idx  |            3 | calldate    | A         |        NULL | NULL     | NULL   |         |
> | cdr   |          1 | cdr_idx  |            4 | accountcode | A         |        NULL | NULL     | NULL   |         |
> | cdr   |          1 | i1       |            1 | accountcode | A         |        NULL | NULL     | NULL   |         |
> | cdr   |          1 | i1       |            2 | calldate    | A         |        NULL | NULL     | NULL   |         |
> +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Troubleshooting FullText Slowness

2005-05-19 Thread Dathan Pattishall
Wow, you're going past the 2 GB barrier on a 32-bit server. I would not do
that. Reduce your key buffer to 1700M or you will eventually crash.


Then run REPAIR TABLE PC1_Text QUICK, or ALTER TABLE PC1_Text
ENGINE=MyISAM. This helps by rebuilding the full-text indexes.
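The two rebuild options named above, as statements (PC1_Text is the table
from the question below):

  REPAIR TABLE PC1_Text QUICK;          -- repairs only the index file
  ALTER TABLE PC1_Text ENGINE=MyISAM;   -- rebuilds the whole table, FULLTEXT index included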


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Dan Salzer [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, May 19, 2005 7:48 AM
> To: mysql@lists.mysql.com
> Subject: Troubleshooting FullText Slowness
> 
> Hello all,
>  I'm having a bit of trouble with a full-text query being 
> slow. At first I thought it was a problem with a join, then I 
> thought it was a problem with a sort - but I've boiled down 
> the query and it seems like plain-old slowness.
>  This is the table:
> 
> CREATE TABLE `PC1_Text` (
> `AssetID` int(11) NOT NULL default '0',
> `Content` text NOT NULL,
> PRIMARY KEY (`AssetID`),
> FULLTEXT KEY `Content` (`Content`)
> ) ENGINE=MyISAM DEFAULT CHARSET=latin1
>  And here is its status:
>  Name: PC1_Text
> Engine: MyISAM
> Version: 9
> Row_format: Dynamic
> Rows: 5906
> Avg_row_length: 15849
> Data_length: 93608372
> Max_data_length: 4294967295
> Index_length: 49875968
> Data_free: 0
> Auto_increment: NULL
> Create_time: 2005-04-09 12:25:41
> Update_time: 2005-05-19 10:22:14
> Check_time: 2005-05-19 09:58:00
> Collation: latin1_swedish_ci
> Checksum: NULL
> Create_options:
> Comment:
> The table has only 5900 rows of text and I'm trying to use 
> the full-text index to find hits on a search term using:
> 
> SELECT SQL_NO_CACHE AssetID FROM PC1_Text as FT WHERE 
> MATCH(Content) AGAINST('+"after dinner" ' IN BOOLEAN MODE);
> 
> This query takes about 1.6 seconds to execute. More general 
> queries take even longer. Performing an explain on the query 
> shows exactly what I would expect:
> +----+-------------+-------+----------+---------------+---------+---------+------+------+-------------+
> | id | select_type | table | type     | possible_keys | key     | key_len | ref  | rows | Extra       |
> +----+-------------+-------+----------+---------------+---------+---------+------+------+-------------+
> |  1 | SIMPLE      | FT    | fulltext | Content       | Content |       0 |      |    1 | Using where |
> +----+-------------+-------+----------+---------------+---------+---------+------+------+-------------+
>  I've even taken the time to run:
>  LOAD INDEX INTO CACHE PC1_Text;
>  And it doesn't make a difference.
>  All this leads me to believe I am dealing with a system 
> configuration issue. The server is a Dual Xeon 2.8 EM64T, it 
> has 4GB of memory and 15K Drives. And I am currently using 
> --memlock to ensure that mysql stays in real memory. My 
> config looks like the following:
>  [mysqld]
> key_buffer_size=2500M
> tmp_table_size=128M;
> max_heap_table_size=128M;
> max_allowed_packet = 1M
> table_cache = 512
> sort_buffer_size = 2M
> read_buffer_size = 2M
> read_rnd_buffer_size = 8M
> thread_concurrency = 8
> thread_cache = 8
> query_cache_size = 64M
> max_connections=1
> ft_min_word_len=3
>  Anyone have any idea what I might try to increase the 
> performance of this query? I'm quite certain it should be 
> able to do better than this.
>  Thanks as always!
>  -Dan
> 




RE: Restoring a database from binlogs

2005-05-18 Thread Dathan Pattishall
Dump the binarylogs into a text file greping all the log data in order
of oldest to newest (minus the massive delete). Then reply the events
backinto mysql


mysqlbinlog <binlog files, oldest first> | grep <your tablename> > BIGSQLFILE.sql

mysql -uroot < BIGSQLFILE.sql


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Jeff McKeon [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, May 18, 2005 11:31 AM
> To: mysql@lists.mysql.com
> Subject: Restoring a database from binlogs
> 
> Hey all,
> 
> I've got a big problem.  Seems one of our programmers decided 
> to write a script that deletes all records from a log table 
> older than 3 months.
> Problem is, we need old data from this log to reconcile our 
> customer accounts.  
> 
> Our backups only go back 2 weeks.  What I do have however is 
> replication running and therefore old binlogs.  These binlogs 
> go back as far as Nov 12, 2004.  The "cleaning" of the log 
> table didn't start until about a month ago and has deleted 
> any data prior to Jan 18, 2005.  
> 
> The only snapshot I can find of the database is from Sept 10, 2004.  
> 
> So.. (shooting off into the dark) I have binlogs starting 
> from Nov 12, a snapshot from sept 10th.  Is there some way 
> anyone can suggest that I can reconstruct the database table 
> with what I have?
> 
> Best Reguards,
> 
> Jeff
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Union Performance Question

2005-05-11 Thread Dathan Pattishall

> 
> Use one big table. A merge table will run the same query over 
> all 10 tables. The key buffer is filled from the top down so 

He is using a fulltext index, so he can't use merge tables.

If he were to UNION across only the tables that actually hold the data
being used, he would get a good boost in performance. The performance
comes from a shorter lock held on each table, and only the data that is
accessed the most would stay in the buffer.

He has to pick a good hash, such as date or country code, for splitting
the tables on. 

To explain why, let's look at the key buffer structure.

Having multiple tables does not pollute the buffer, and this is why:

mySQL pulls MYI file "blocks" from the OS, then puts the frequently used
blocks in the buffer. The index file itself stores the data in a B+Tree (or
R-Tree), so a smaller index file means more of it can live in the
key buffer. Since an index file is associated with a table, the most
frequently used tables' indexes will get more of the key buffer. That
assumed, if the application hashes the tables correctly, then a boost in
performance can be gained by spending more memory on index blocks that
hold more pertinent data than not.


If you're using InnoDB (no FULLTEXT index option), use 1 big table.
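A hedged sketch of the UNION approach, with hypothetical per-range table
names and search term; UNION ALL skips the duplicate-elimination pass that
a plain UNION would add:

  SELECT AssetID FROM PC1_Text_2005q1
   WHERE MATCH(Content) AGAINST('+"after dinner"' IN BOOLEAN MODE)
  UNION ALL
  SELECT AssetID FROM PC1_Text_2005q2
   WHERE MATCH(Content) AGAINST('+"after dinner"' IN BOOLEAN MODE);

The application then only names the tables whose hash (date, country code,
etc.) can contain matching rows.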








  


> if you have a key buffer that looks like this:
>          a
>        /   \
>       b     c
>      / \   / \
>     d   e f   g
> 



> 
> Almost all queries for that index will be able to use the 
> buffer for 'a' 
> and 'b'. It's not until they get things very narrowed down 
> that you have to actually hit the index files for specific 
> leaves. Creating ten small tables creates ten duplicate 'a' 
> and 'b' sections, which pollutes the key buffer.
> 
> -Eric
> 
> Dathan Pattishall wrote:
> 
> >Use 10 smaller tables and perform a union. It's faster to look up in 
> >smaller tables than larger ones, generally. Additionally, more of the 
> >key_buffer can be used for the tables with the most hits over the tables 
> >with fewer hits, letting the lookup sustain its speed over time.
> >
> > 
> >
> >  
> >
> >>-Original Message-
> >>From: Dan Salzer [mailto:[EMAIL PROTECTED]
> >>Sent: Wednesday, May 11, 2005 11:46 AM
> >>To: mysql@lists.mysql.com
> >>Subject: Union Performance Question
> >>
> >>Hi everyone,
> >>
> >>I have a question regarding the performance of UNION queries:
> >>
> >>I need to do a full-text search against a large number of 
> rows. Is it 
> >>faster to have one table with 10,000,000 text rows and perform one 
> >>full-text search. Or, am I better off having 10 smaller 
> more managable 
> >>tables and performing UNION'ed full-text searches against them.
> >>
> >>A merge table seems like it would be the ideal solution, but the 
> >>limitation on full-text indexing prohibits it.
> >>
> >>Thanks!
> >>
> >>-Dan
> >>
> >>--
> >>MySQL General Mailing List
> >>For list archives: http://lists.mysql.com/mysql
> >>To unsubscribe:
> >>http://lists.mysql.com/[EMAIL PROTECTED]
> >>
> >>
> >>
> >>
> >
> >  
> >
> 
> 




RE: SATA vs SCSI

2005-05-11 Thread Dathan Pattishall
Forget using drives altogether for heavy-hit applications.

Build data that can fit on a RAM drive (8GB) and then you're able to do 20K
qps. 

For instance, have a main master that holds the majority of tables; call it
MASTER. Then a sub-master that holds the tables you desire to run out of
memory; call it SUBMASTER. Each slave connected to SUBMASTER then does a
LOAD DATA FROM MASTER on startup, driven by an init_file= setting in its
my.cnf file.
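A hedged sketch of that startup hook; the file path is hypothetical, and
LOAD DATA FROM MASTER is the 4.x statement named above:

  # my.cnf on each slave of SUBMASTER
  [mysqld]
  init_file = /etc/mysql/preload.sql

  # /etc/mysql/preload.sql, executed on every server start
  LOAD DATA FROM MASTER;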

Loading 2GB of data across the network on a GigE is very fast. So now
you have the ability to handle 20K qps on a single box (assuming the box
is a 4GB X86 Opteron with 2 processors and the main lookup is on a
primary key) and also have redundancy with real time replication.




For instance: 

I have 5 servers with 4 GB of RAM; the table that displays user names,
which is seen on Friendster, only takes 1.6 G of memory. All DB access is
real time, producing 3000 qps during peak per server, where one server can
handle the site, since that query type benchmarks for 2 qps. More
than enough head room to scale, since I can LB the reads across the 5
boxes. The front end will fall over before the backend. An easy, cheap
solution that can handle a crap load of load.


 

DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Kevin Burton [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, May 11, 2005 12:30 PM
> To: mysql@lists.mysql.com
> Subject: SATA vs SCSI
> 
> Were kicking around using SATA drives in software RAID0 config.  
> 
> The price diff is significant.  You can also get SATA drives 
> in 10k RPM form now.,
> 
> Kevin
> 
> -- 
> 
> 
> Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
> See irc.freenode.net #rojo if you want to chat.
> 
> Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
> 
>Kevin A. Burton, Location - San Francisco, CA
>   AIM/YIM - sfburtonator,  Web - http://peerfear.org/ GPG 
> fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Opteron HOWTO?!

2005-05-11 Thread Dathan Pattishall

On some boxes we do more. Some we do less.

DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Kevin Burton [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, May 11, 2005 12:24 PM
> To: Dathan Pattishall
> Cc: Jochem van Dieten; mysql@lists.mysql.com
> Subject: Re: Opteron HOWTO?!
> 
> Dathan Pattishall wrote:
> 
> >We do about 70K qps at peak for about 1 Billion Queries per 
> day (only 
> >on 30 servers BOOYA). So, it's pretty stable.
> >
> >  
> >
> Also... based on my math.. this yields ~ 2300 qps per MySQL box...  
> which is pretty good.
> 
> Kevin
> 
> -- 
> 
> 
> Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
> See irc.freenode.net #rojo if you want to chat.
> 
> Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
> 
>Kevin A. Burton, Location - San Francisco, CA
>   AIM/YIM - sfburtonator,  Web - http://peerfear.org/ GPG 
> fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 
> 
> 




RE: Opteron HOWTO?!

2005-05-11 Thread Dathan Pattishall


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Kevin Burton [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, May 11, 2005 12:10 PM
> To: Dathan Pattishall
> Cc: [EMAIL PROTECTED]; Jochem van Dieten; mysql@lists.mysql.com
> Subject: Re: Opteron HOWTO?!
> 
> Dathan Pattishall wrote:
> 
> >>Are you using NPTL?
> >>
> >>
> >No that sucks we use the other one. Can't make a static 
> build with NPTL.
> >
> What type of performance boost are you getting from running a 
> static build.

5-7%, plus the security of knowing that if another package updates a shared
lib, mySQL will not crash.


> 
> Kevin
> 
> -- 
> 
> 
> Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
> See irc.freenode.net #rojo if you want to chat.
> 
> Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
> 
>Kevin A. Burton, Location - San Francisco, CA
>   AIM/YIM - sfburtonator,  Web - http://peerfear.org/ GPG 
> fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 
> 
> 




RE: Union Performance Question

2005-05-11 Thread Dathan Pattishall
Use 10 smaller tables and perform a union. It's faster to look up in
smaller tables than larger ones, generally. Additionally, more of the
key_buffer can be used for the tables with the most hits over the tables
with fewer hits, letting the lookup sustain its speed over time.

 

> -Original Message-
> From: Dan Salzer [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, May 11, 2005 11:46 AM
> To: mysql@lists.mysql.com
> Subject: Union Performance Question
> 
> Hi everyone,
> 
> I have a question regarding the performance of UNION queries:
> 
> I need to do a full-text search against a large number of 
> rows. Is it faster to have one table with 10,000,000 text 
> rows and perform one full-text search. Or, am I better off 
> having 10 smaller, more manageable tables and performing 
> UNION'ed full-text searches against them.
> 
> A merge table seems like it would be the ideal solution, but 
> the limitation on full-text indexing prohibits it.
> 
> Thanks!
> 
> -Dan
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Opteron HOWTO?!

2005-05-10 Thread Dathan Pattishall

> -Original Message-
> From: Greg Whalin [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, May 10, 2005 3:12 PM
> To: Dathan Pattishall
> Cc: Jochem van Dieten; mysql@lists.mysql.com
> Subject: Re: Opteron HOWTO?!
> 
> Care to share any secrets?  You guys are running Suse w/ 2.4 
> kernel yes? 

Yes. We run RedHat with a Suse Kernel and pure Suse.

>   Any specifics as far as kernel/glibc/gcc versions. 

Kernel - 2.4.21-215-default #5 SMP
Glibc  - 
 rpm -qa |grep glib
glibc-profile-2.3.2-95.6
glibc-2.3.2-95.20
glibc-headers-2.3.2-95.6
glib2-2.2.3-2.0
glibc-kernheaders-2.4-8.34
glibc-devel-2.3.2-95.6
glibc-common-2.3.2-95.20
glibc-devel-2.3.2-95.6
glib-1.2.10-11.1
glibc-utils-2.3.2-95.6
glib-1.2.10-11.1
glibc-2.3.2-95.6

Gcc - gcc3.3


> Are you running mysql 4.1.*?  

Yes

> Are you using NPTL?
No, that sucks; we use the other one (LinuxThreads). You can't make a static build with NPTL.
> You using the 
> binary from mysql, or building yourself? 

I build it myself using gcc 3.3; gcc 3.4 will crash mysql with -O at any
level.

> Are you running 
> Innodb or Myisam.  

Both

> You mentioned reiserfs correct?  Any 
> problems w/ ext3?

You can't use O_DIRECT on ext3 and 2.4, and there is a bug in ext3 where,
when used under heavy load, the volume will lock.


> 
> 
> Thanks,
> Greg
> 
> Dathan Pattishall wrote:
> >>Subject: Re: Opteron HOWTO?!
> >>
> >>On 5/9/05, Kevin Burton wrote:
> >>
> >>>So... it sounds like a lot of people here (Dathan and Greg)
> >>
> >>have had
> >>
> >>>problems deploying MySQL on Opteron in a production environment.
> >>
> >>To me it sounds more like a lot of people have had problems running 
> >>Linux on x86-64 systems.
> >>
> > 
> > 
> > We don't have any problems running Opterons at all. 
> > With all the tests me and my team have done, we know the 
> ins and outs 
> > of getting Opterons up, running-stable, and blazing fast. 
> Our entire 
> > datacenter are (about 200 servers)
> > 
> > Dual Opterons with at least 4GB of memory running in 64-bit mode.
> > 
> > All the databases (about 30)
> >  - Are dual opterons with 8 GB of memory connected to a 
> Hitachi 9980 
> > SAN-through a McData Switch.
> > 
> > We do about 70K qps at peak for about 1 Billion Queries per 
> day (only 
> > on 30 servers BOOYA). So, it's pretty stable.
> > 
> > 
> > --
> > 
> > Dathan V Pattishall
> > Sr. Database Engineer / Sr. Software Engineer Friendster Inc.
> > 
> 
> 




RE: Opteron HOWTO?!

2005-05-10 Thread Dathan Pattishall
> Subject: Re: Opteron HOWTO?!
> 
> On 5/9/05, Kevin Burton wrote:
> > So... it sounds like a lot of people here (Dathan and Greg) 
> have had 
> > problems deploying MySQL on Opteron in a production environment.
> 
> To me it sounds more like a lot of people have had problems 
> running Linux on x86-64 systems.
> 

We don't have any problems running Opterons at all. 
With all the tests my team and I have done, we know the ins and outs of
getting Opterons up, running stable, and blazing fast. Our entire
datacenter (about 200 servers) consists of

dual Opterons with at least 4GB of memory running in 64-bit mode.

All the databases (about 30)
 - are dual Opterons with 8 GB of memory connected to a Hitachi 9980
SAN, through a McData switch.

We do about 70K qps at peak for about 1 Billion Queries per day (only on
30 servers BOOYA). So, it's pretty stable.


--

Dathan V Pattishall
Sr. Database Engineer / Sr. Software Engineer
Friendster Inc.




RE: Dual Opteron, linux kernels, 64 bit, mysql 4.1, InnoDB

2005-05-09 Thread Dathan Pattishall
 

> -Original Message-
> From: Richard Dale [mailto:[EMAIL PROTECTED] 
> Sent: Sunday, May 08, 2005 9:37 PM
> To: mysql@lists.mysql.com
> Subject: Dual Opteron, linux kernels, 64 bit, mysql 4.1, InnoDB
> 
> A new server is about to arrive here and will have have 8x15K 
> RPM spindles, dual Opteron processors and 4GB of RAM, and 
> will have around 100GB of database (primarily stock market 
> prices) - the SCSI controller will also have battery-backed 
> RAM too.  InnoDB will be used exclusively.
> 
> I've searched the list and seen varying reports of which 
> Linux kernels work best etc.
> 
> I'd be intersted to know the following:
> a) Which 64 bit Linux distributions are good for the task?

   Suse 8.1 2.4.21-215-smp #1 SMP Tue Apr 27 16:05:19 UTC 2004
x86_64 unknown

> b) Which 64 bit Linux distributions are bad for the task?

2.6 - the IO scheduler is still messed up.
  RedHat AS / Suse 9.x are messed up as well

> c) Any comments on kernels, particularly with regard to 64 
> bit support and schedulers?  Any problems with the latest 
> kernels (2.6.11 & 2.6.12-rcX)?

> d) Any recommendations for RAID volume setup

Use RAID-10; split the disks evenly across each channel


> e) Any MySQL optimisations for multiple spindles, onboard 
> caching, stripe sizes, RAID5 vs RAID10.

Don't use RAID5; use ReiserFS if you're using SuSE

> f) Any MySQL reliability settings to take into account the 
> battery-backed RAM on the RAID controller?
> 
> I'm happy to collate the responses into a summary too.
> 
> I'm aware of the following discussions which describes a 
> reasonably grunty Dual AMD system with a similar 
> configuration to mine:
> http://meta.wikimedia.org/wiki/Hardware_ordered_April_2005
> http://meta.wikimedia.org/wiki/Hardware_order_May_2004
> 
> Best regards,
> Richard Dale.
> Norgate Investor Services
> - Premium quality Stock, Futures and Foreign Exchange Data for
>   markets in Australia, Asia, Canada, Europe, UK & USA - 
> www.premiumdata.net
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: MySQL not using optimum disk throughput.

2005-05-06 Thread Dathan Pattishall
What kernel are you running?

If you're running 2.6.x, use the deadline scheduler or downgrade to
2.4.23aavm; 2.6.[0-9] has major problems with the IO scheduler since the
process scheduler is very fast now.
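A hedged sketch of selecting the deadline scheduler on 2.6; the kernel
image and root device below are hypothetical, only the elevator=deadline
parameter matters:

  # kernel command line, e.g. in grub.conf
  kernel /vmlinuz-2.6.x ro root=/dev/sda1 elevator=deadline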


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Kevin Burton [mailto:[EMAIL PROTECTED] 
> Sent: Friday, May 06, 2005 1:58 PM
> To: mysql@lists.mysql.com
> Subject: MySQL not using optimum disk throughput.
> 
> 
> We have a few of DBs which aren't using disk IO to optimum capacity.
> 
> They're running at a load of 1.5 or so with a high workload 
> of pending queries.
> 
> When I do iostat I'm not noticing much IO :
> 
> Device:rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/srkB/s 
>wkB/s 
> avgrq-sz avgqu-sz   await  svctm  %util
> sda  0.00  13.73 128.43 252.94 1027.45 1695.10   513.73   
> 847.55 7.1490.13  285.00   2.53  96.57
> 
> 
> 
> This is only seeing about 500k -> 1M per second throughput.
> 
> When I run bonnie++ on these drives they're showing 20M->40M 
> throughput.
> 
> Which is really strange.
> 
> Most of our queries are single INSERTS/DELETES.  I could 
> probably rewrite these to become batch operations but I think 
> I'd still end up seeing the above iostat results but with 
> higher throughput.
> 
>  so I'd like to get to the bottom of this before moving forward?
> 
> I ran OPTIMIZE TABLE on all tables but nothing. 
> 
> The boxes aren't paging.
> 
> They're running on a RAID5 disk on XFS.
> 
> Could it be that the disks are having to do a number of HEAD 
> seeks since we have large tables?
> 
> -- 
> 
> 
> Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
> See irc.freenode.net #rojo if you want to chat.
> 
> Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
> 
>Kevin A. Burton, Location - San Francisco, CA
>   AIM/YIM - sfburtonator,  Web - http://peerfear.org/ GPG 
> fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: DB Export multiple tables to CSV

2005-03-03 Thread Dathan Pattishall
SELECT * INTO OUTFILE '/dir/they/can/get/to' FROM TABLE WHERE ...
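A hedged, Excel-friendly sketch (table name and output path are
hypothetical; the FIELDS/LINES clauses produce quoted CSV):

  SELECT *
    FROM table1
    INTO OUTFILE '/tmp/table1.csv'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n';

Run one statement per table. The server writes the file, so the directory
must be writable by mysqld, and the file must not already exist.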


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: H Ba [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 03, 2005 11:15 AM
> To: mysql@lists.mysql.com
> Subject: DB Export multiple tables to CSV
> 
> I have multiple tables set up and need to export these to csv 
> in order to import them into Excel. Since all tables are 
> related to each other and I can only export one at a time, is 
> there a way to somehow get a "readable" file after importing 
> into Excel, which non-programmers can edit and use for 
> further processing?
> 
> Regards,
> Holger
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Relationship between 'table_cache' and 'max_connections'

2005-02-02 Thread Dathan Pattishall
Table_cache is a pool of file descriptors held open, so the overhead of
opening a table is avoided, since the open is cached. 

Max_connections relates to table_cache because they both use
descriptors. Thus at least the sum of the two is used by mySQL to set the
ulimit on file descriptors higher than the default, if it is executed as a
user that can change that value.
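A hedged sketch for checking how those settings are holding up (both are
standard statements):

  SHOW VARIABLES LIKE 'open_files_limit';  -- the descriptor budget mysqld obtained
  SHOW STATUS LIKE 'Opened_tables';        -- climbing quickly suggests table_cache is too small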



DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Mark [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, February 02, 2005 4:09 PM
> To: mysql@lists.mysql.com
> Subject: Relationship between 'table_cache' and 'max_connections'
> 
> I hope someone can clarify the relationship between 
> 'table_cache' and 'max_connections' (I use MySQL 4.0.23). The 
> manual says:
> 
>"table_cache is related to max_connections. For example, for
>200 concurrent running connections, you should have a table
>cache size of at least 200 * N, where N is the maximum
>number of tables in a join. You also need to reserve some
>extra file descriptors for temporary tables and files."
> 
> Does that mean, that when table_cache is set to, say, 256, 
> only 256 concurrent connections can be made? I mean, is it 
> restrictive for the amount of connections? It seems to say 
> so, but since this is also said to be a CACHE value, maybe not.
> 
> I have set table_cache to 256, max_connections to 512, and I 
> have an open_files_limit of 7408. Does that suffice?
> 
> Thanks,
> 
> - Mark
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: performance on query with ORDER BY clause

2005-02-02 Thread Dathan Pattishall
This tells the optimizer to do a table scan. If you use InnoDB, it's
already sorted by the primary key, since InnoDB supports clustered
indexes. Doing a table scan on InnoDB is very slow due to its MVCC
control.

It's going to take a long time.



DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Marc Dumontier [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, February 02, 2005 12:02 PM
> To: mysql@lists.mysql.com
> Subject: performance on query with ORDER BY clause
> 
> Hi,
> 
> I have a simple query with an ORDER BY clause, and it's 
> taking forever to run on this table. I hope i've included all 
> relevent information...it might just be one of the4 server 
> variables which need adjustment.
> 
> the query is
> 
> SELECT SubmitId from BINDSubmit ORDER BY SubmitId
> 
> SubmitId is the primary Key, about 150,000 records table type 
> is INNODB
> 
> mysql> describe BINDSubmit;
> +-----------------+---------------------+------+-----+---------------------+----------------+
> | Field           | Type                | Null | Key | Default             | Extra          |
> +-----------------+---------------------+------+-----+---------------------+----------------+
> | SubmitId        | int(10) unsigned    |      | PRI | NULL                | auto_increment |
> | BindId          | int(10) unsigned    |      | MUL | 0                   |                |
> | UserId          | int(10) unsigned    |      | MUL | 0                   |                |
> | Delegate        | int(10) unsigned    |      | MUL | 0                   |                |
> | Visible         | tinyint(1)          |      |     | 1                   |                |
> | Private         | tinyint(1)          |      |     | 0                   |                |
> | Compressed      | tinyint(1)          |      |     | 0                   |                |
> | Verified        | tinyint(1)          |      |     | 0                   |                |
> | Status          | tinyint(3) unsigned |      | MUL | 0                   |                |
> | CurationType    | tinyint(3) unsigned |      |     | 1                   |                |
> | RecordType      | tinyint(3) unsigned |      | MUL | 0                   |                |
> | DateCreated     | datetime            |      | MUL | 0000-00-00 00:00:00 |                |
> | DateLastRevised | datetime            |      | MUL | 0000-00-00 00:00:00 |                |
> | XMLRecord       | longblob            |      |     |                     |                |
> +-----------------+---------------------+------+-----+---------------------+----------------+
> 14 rows in set (0.00 sec)
> 
> mysql> select count(*) from BINDSubmit;
> +--+
> | count(*) |
> +--+
> |   144140 |
> +--+
> 1 row in set (5.09 sec)
> 
> mysql> explain select SubmitId from BINDSubmit ORDER BY SubmitId;
> +------------+-------+---------------+---------+---------+------+--------+-------------+
> | table      | type  | possible_keys | key     | key_len | ref  | rows   | Extra       |
> +------------+-------+---------------+---------+---------+------+--------+-------------+
> | BINDSubmit | index | NULL          | PRIMARY |       4 | NULL | 404947 | Using index |
> +------------+-------+---------------+---------+---------+------+--------+-------------+
> 1 row in set (0.00 sec)
> 
> 
> 
> # The MySQL server
> [mysqld]
> port= 3306
> socket  = /tmp/mysql.sock
> skip-locking
> key_buffer = 128M
> max_allowed_packet = 40M
> table_cache = 256
> sort_buffer_size = 1M
> read_buffer_size = 1M
> myisam_sort_buffer_size = 64M
> thread_cache = 8
> query_cache_size= 16M
> # Try number of CPU's*2 for thread_concurrency
> thread_concurrency = 4
> 
> # Uncomment the following if you are using InnoDB tables
> innodb_data_home_dir = /usr/local/mysql/data/
> innodb_data_file_path = ibdata1:100M:autoextend
> innodb_log_group_home_dir = /usr/local/mysql/data/
> innodb_log_arch_dir = /usr/local/mysql/data/
> # You can set .._buffer_pool_size up to 50 - 80 %
> # of RAM but beware of setting memory usage too high
> innodb_buffer_pool_size = 512M
> innodb_additional_mem_pool_size = 20M
> # Set .._log_file_size to 25 % of buffer pool size
> innodb_log_file_size = 64M
> innodb_log_buffer_size = 8M
> innodb_flush_log_at_trx_commit = 1
> innodb_lock_wait_timeout = 50
> 
> 
> 
> 
> Any help would be appreciated, so far query has been running 
> for 3000 seconds
> 
> Marc Dumontier
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Why does dropping indexes takes such a long time?

2005-02-02 Thread Dathan Pattishall


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Homam S.A. [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, February 02, 2005 11:50 AM
> To: mysql@lists.mysql.com
> Subject: Why does dropping indexes takes such a long time?
> 
> I have a non-primary-key index on a large MyISAM table, and 
> dropping the index takes a long time, in addition to maxing 
> out the CPU utilization in its final 1/3 interval.
> 
> Why is that

Almost all ALTERs cause mysql to rebuild the table. So it has to
remove the B-tree for that key, and at the same time it fixes the
other keys. As a result it rebuilds the table; take a look in your
datadir/database and look for the #sql* files: that is the consistent copy
that will get swapped in under the true table name.


> 
> In MS SQL Server for example, dropping the index is almost 
> instantaeous, unless it's clustered and you have other 
> indexes on the same table, because they have to be rebuilt. 
> But we're talking here about MyISAM tables with non-clustered indexes.
> 
> This defeats the whole idea of dropping an index on a table 
> before a large batch update.
> 
> 
> 
>   
> __
> Do you Yahoo!? 
> The all-new My Yahoo! - Get yours free! 
> http://my.yahoo.com 
>  
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: question

2005-02-02 Thread Dathan Pattishall
If you want to do raw writes to the disk device, yes you can. This option is 
available with InnoDB. Search on www.innodb.com; it should be in section 15 (or 
11, I forget) of the InnoDB manual.
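A hedged sketch of the raw-device setup described in that manual (device
and size are hypothetical). InnoDB expects "newraw" on the first start so
it can initialize the partition; after that you restart with "raw":

  [mysqld]
  innodb_data_home_dir =
  innodb_data_file_path = /dev/hdc:10Gnewraw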


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: João Borsoi Soares [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, February 02, 2005 8:57 AM
> To: mysql@lists.mysql.com
> Subject: question
> 
> Is it possible to use for example /dev/hdc directly as my 
> innodb database file? Is it worth?
> 
> Thanks,
> Joao.
> mysql
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Social Networking querys

2005-02-02 Thread Dathan Pattishall
DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Bruce Douglas [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, February 02, 2005 10:20 AM
> To: Dathan Pattishall
> Cc: mysql@lists.mysql.com
> Subject: RE: Social Networking querys
> 
> dathan...
> 
> given that you work at friendster, aren't you kind of 
> restricted from commenting on how one would go about 
> creating/implementing this kind of system??

We are going to present at the mySQL conference, and let people know how
we created the storage engine, not how we interact with our API via our
storage engine. I could talk to the business guys to see if we can open
up the API to allow other companies to use our sauce to run a social
network application, but it will require a partnership, etc.

> 
> if you aren't and you have information to share, then we'd 
> appreciate hearing it!!!
> 
> regards,
> 
> bruce
> 
> 
> -Original Message-
> From: Dathan Pattishall <[EMAIL PROTECTED]>
> Sent: Feb 2, 2005 10:04 AM
> To: Balazs Rauznitz <[EMAIL PROTECTED]>, listsql listsql 
> <[EMAIL PROTECTED]>
> Cc: mysql 
> Subject: RE: Social Networking querys
> 
> 
> 
> DVP
> 
> Dathan Vance Pattishall http://www.friendster.com
> 
>  
> 
> > -Original Message-
> > From: Balazs Rauznitz [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, February 02, 2005 9:06 AM
> > To: listsql listsql
> 
> > I work at a social networking company. We store the social 
> network as 
> > an edge graph too, but do not use SQL to run queries on it. It'd be 
> > just way too slow in huge networks.
> > There's a custom application created in-house written in C 
> to do that. 
> > Hope this was at least a little useful.
> > 
> > Also there's somebody from Friendster here, he might be 
> able to help 
> > you also.
> 
> We have our own storage engine that talks an API to our own 
> C++ server that stores the graph. Doing it with a pure myISAM 
> engine is possible but it's way too slow. Doing a self table 
> join or subselect is really the only way of doing it, unless you 
> want to have some application logic that issues 1000s of selects.
> 
> 
> > 
> > Balazs
> > 
> > 
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:
> > http://lists.mysql.com/[EMAIL PROTECTED]
> > 
> > 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 
> 




RE: Social Networking querys

2005-02-02 Thread Dathan Pattishall


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Balazs Rauznitz [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, February 02, 2005 9:06 AM
> To: listsql listsql

> I work at a social networking company. We store the social 
> network as an edge graph too, but do not use SQL to run 
> queries on it. It'd be just way too slow in huge networks. 
> There's a custom application created in-house written in C to 
> do that. Hope this was at least a little useful.
> 
> Also there's somebody from Friendster here, he might be able 
> to help you also.

We have our own storage engine that talks an API to our own C++ server
that stores the graph. Doing it with a pure myISAM engine is possible
but it's way too slow. Doing a self table join or subselect is really the
only way of doing it, unless you want to have some application logic that
issues 1000s of selects.
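For reference, a hedged sketch of the pure-SQL self-join approach, with a
hypothetical edge table friends(user_id, friend_id) holding one row per
directed edge:

  -- friends-of-friends of user 12345
  SELECT DISTINCT f2.friend_id
  FROM   friends f1
  JOIN   friends f2 ON f2.user_id = f1.friend_id
  WHERE  f1.user_id  = 12345
    AND  f2.friend_id <> 12345;

Each extra degree of separation adds another self-join, which is why this
gets slow on huge graphs.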


> 
> Balazs
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Fixed with Fields

2005-01-28 Thread Dathan Pattishall
Use CHAR.
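A hedged sketch (table and column names are hypothetical): a CHAR column is
stored at its full declared width, so an 11-character value still occupies
18 characters on disk. MySQL strips trailing spaces from CHAR values on
retrieval, so re-pad on output if the spaces themselves matter:

  CREATE TABLE example (code CHAR(18) NOT NULL);
  SELECT RPAD(code, 18, ' ') AS fixed_code FROM example;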

DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Marc Michalowski [mailto:[EMAIL PROTECTED] 
> Sent: Friday, January 28, 2005 10:28 AM
> To: mysql@lists.mysql.com
> Subject: Fixed with Fields
> 
> I was wondering if there is a way to create fixed width 
> fields. Example:
> The field is set to 18 but data contained is 11. I need the 
> length to remain 18. Is there anyway to do this? Thanks for your help.
> 
> -Marc
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




RE: Creating indexes

2005-01-27 Thread Dathan Pattishall
Database size and record counts are fine. But you're table scanning. Look at


Handler_read_rnd_next 1018281500


That indicates a table scan. It means you're not using your indexes effectively
or the tables are not set up with the proper indexes.

Perform SHOW FULL PROCESSLIST and/or enable log-slow-queries with log-long-format.
 
 
 

DVP



Dathan Vance Pattishall http://www.friendster.com

 

 





From: Ângelo M. Rigo [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 27, 2005 11:23 AM
To: Dathan Pattishall
Subject: RE: Creating indexes


Thanks again for sharing your MySQL experience!
 
Maybe you can point out whether my numbers are too high?
Best regards!!
  
             records   engine   size
  TABLE1     225,893   InnoDB   54.6 MB
  TABLE2         611   MyISAM   122.3 KB
  TABLE3         497   MyISAM   19.7 KB
  TABLE4      49,930   InnoDB   8.0 MB
  TABLE5         431   InnoDB   80.0 KB
  TABLE6     139,933   InnoDB   43.7 MB
  6 tables   417,295 (sum)      106.5 MB
 
Variable_name          Value
Handler_commit         112
Handler_delete         2969004
Handler_read_first     71073
Handler_read_key       41714285
Handler_read_next      2199647292
Handler_read_prev      6942
Handler_read_rnd       915605
Handler_read_rnd_next  1018281500
Handler_rollback       639
Handler_update         31994410
Handler_write          281417564

Dathan Pattishall <[EMAIL PROTECTED]> wrote:

No. Use SHOW FULL PROCESSLIST to see which query or set of queries 
takes the longest time.
 

DVP



Dathan Vance Pattishall http://www.friendster.com

 

 




From: Ângelo M. Rigo [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 27, 2005 11:14 AM
    To: Dathan Pattishall
Subject: RE: Creating indexes


Using the show status variables, can I discover which 
column or query is consuming resources and opening too many connections?

    Dathan Pattishall <[EMAIL PROTECTED]> wrote: 




Perform show status.

Show status has a few variables to take a 
global look at your mysql server and how keys are being used.


 show status like 'Hand%';
+---+--+
| Variable_name | Value|
+---+--+
| Handler_commit| 0|
| Handler_delete| 0|
| Handler_discover  | 0|
| Handler_read_first| 1|
| Handler_read_key  | 27287397 |
| Handler_read_next | 12891664 |
| Handler_read_prev  | 0|
| Handler_read_rnd  | 347638   |
| Handler_read_rnd_next | 1031461  |
| Handler_rollback  | 3|
| Handler_update| 7360212  |
| Handler_write | 1591558  |
+---+--+


Take special note of Handler_read_rnd_next. If 
it's high, you're doing a table scan.

Look online for the rest of these vars; they 
are very helpful. I personally graph them over time to make sure things are 
good.








DVP

Dathan Vance Pattishall 
http://www.friendster.com 



> -Original Message-
  

RE: Replication talk.

2005-01-27 Thread Dathan Pattishall
No, this is not a MySQL option directly. You're going to have to change
your database API layer to send writes only to the master and reads only
to the slave. The only thing MySQL will do is enforce read-only on a
slave via GRANTs, and write access on a master.
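
A sketch of the GRANT side of that enforcement (the database, account,
and host names are hypothetical):

-- on the slave: the application account can only read
GRANT SELECT ON mydb.* TO 'app'@'10.0.0.%' IDENTIFIED BY 'secret';

-- on the master: the same account can write as well
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.*
  TO 'app'@'10.0.0.%' IDENTIFIED BY 'secret';

The routing itself still lives in the application: open two connections
and send the SELECTs to the slave.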

DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Frederic Trudeau [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, January 27, 2005 10:25 AM
> To: mysql@lists.mysql.com
> Subject: Replication talk.
> 
> 
> Consider the simple case where we have a single master and a 
> single slave.
> 
> Ideally, we would redirect all writes to the master server 
> only, and reads on the master AND the slave.
> 
> Is it possible to redirect UPDATE, DELETE, INSERT queries 
> only to the master server automatically, and SELECTs to the 
> master and the slave? Is this a MySQL configuration option 
> that I missed in the docs?
> 
> Any pointers would be appreciated.
> 
> Thank you !
> 
> -
> Frederic Trudeau <[EMAIL PROTECTED]>
> Programmation / Departement reseau
> Communications Accessibles Montreal
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: oid or rowid equivalent

2005-01-26 Thread Dathan Pattishall
InnoDB has an Oracle-style rowid internally, but it can't be accessed.
You might want to use AUTO_INCREMENT, a column attribute that increments
with each inserted record.
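
A minimal sketch of the AUTO_INCREMENT approach (the table and column
names are hypothetical):

CREATE TABLE items (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  data VARCHAR(255) NOT NULL DEFAULT '',
  PRIMARY KEY (id)
) TYPE = InnoDB;  -- TYPE= is the 4.0-era spelling of ENGINE=

-- address a single row by its surrogate key, rowid-style
UPDATE items SET data = 'x' WHERE id = 42;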
 

DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Nupur Jain [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, January 26, 2005 2:27 PM
> To: mysql@lists.mysql.com
> Subject: oid or rowid equivalent
> 
> Hi,
> I was wondering if MySQL supports rowid like Oracle, or oid 
> like pgsql does, for updates? If not, is there a workaround for these?
> 
> Thanks!
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Adding user comments to database records

2005-01-19 Thread Dathan Pattishall

CREATE TABLE userComments
(
  -- comments_id int unsigned NOT NULL AUTO_INCREMENT,
  commenter_id int unsigned NOT NULL default 0,
  commented_id int unsigned NOT NULL default 0,
  relationship tinyint unsigned NOT NULL default 0,
  comment      TEXT NOT NULL,
  PRIMARY KEY (commented_id, commenter_id)
  -- means a commenter can leave only 1 comment per commented user
  -- PRIMARY KEY (commented_id, commenter_id, comments_id) gets a better
  -- index scan when looking up all comments, if a commenter can leave
  -- multiple comments
  -- INDEX comments_id (comments_id)
);
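 
A hypothetical query for the screen output described below, assuming a
people table with (id, name) columns:

SELECT p.name, c.relationship, c.comment
FROM userComments c
JOIN people p ON p.id = c.commenter_id
WHERE c.commented_id = 123
ORDER BY p.name;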
 

> -Original Message-
> From: Kentucky Families [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, January 19, 2005 6:13 AM
> To: mysql@lists.mysql.com
> Subject: Adding user comments to database records
> 
> I would like for logged-in site visitors to be able to submit 
> comments to be appended to my database records. I've seen 
> some databases that use a post-it-note style system. What 
> might be the best way to go about this?
>  
> The following conditions apply:
> Multiple users may post comments on the same record; User 
> will also choose a "relationship" to the subject of the record;
>  
> Screen output will be:
>  
> Record
> Comments/User/Relationship
> Comments/User/Relationship
>  
> etc.
>  
> Thanks.
> 
>   
> -
> Do you Yahoo!?
>  Yahoo! Mail - Find what you need with new enhanced search. 
> Learn more.
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Doubt about Performance

2005-01-18 Thread Dathan Pattishall
 




> -Original Message-
> From: Ronan Lucio [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, January 18, 2005 10:13 AM
> To: mysql@lists.mysql.com
> Subject: Doubt about Performance
> 
> Hello,
> 
> Trying to keep the database and the application fast, I'd 
> like to clarify my mind about it.
> 
> 1) Is a SELECT DISTINCT over 5,000 records a heavy
> query? (Supposing it has about 20 distinct values in the
> DISTINCT key.)

This is not bad; it's a MySQL function that uses a KEY if one exists.

> 
> 2) Is SELECT ORDER BY RAND() over 1,500 records
> a heavy query?
> 
> I need to put these two queries on the first page of our site,
> so I've been worried they could slow down our site at the peaks.

Do SELECT ... ORDER BY RAND() LIMIT N, where N is much less than 1,500
(on the order of 10).

This is a rather expensive operation and should be used with care.

I've gotten around this by generating a random number in my app and trying to 
match it to a known id by making multiple selects. That was less intensive than 
ORDER BY RAND().
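
A sketch of that trick, assuming a numeric primary key named id (the
table name items is hypothetical):

SELECT MIN(id), MAX(id) FROM items;   -- once, in the application
SELECT * FROM items WHERE id = 1234;  -- 1234 is the app's random pick
-- if the id fell into a gap left by a deleted row, pick again
-- (the "multiple selects" mentioned above)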

--
DVP
> 
> Thanks,
> Ronan
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: I seem to have lost a table somehow :-(

2005-01-18 Thread Dathan Pattishall
Somehow your table was zeroed out, i.e. TRUNCATE / DROP-CREATE.

.MYD - the table's data
.MYI - the table's indexes
.frm - the table definition

I hope you have a backup. 

> -Original Message-
> From: Vicki Brown [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, January 18, 2005 3:15 PM
> To: mysql@lists.mysql.com
> Subject: I seem to have lost a table somehow :-(
> 
> I have a Movable Type weblog at http://www.technofile.org/AverageJoe/
> I am using MySQL to store the data for this weblog;
> mysql  Ver 14.7 Distrib 4.1.7, for unknown-freebsd4.7 (i386)
> 
> 
> When I go to the MT Main Menu page, the weblog is not listed 
> as existing.
> When I go to the mysql database directory, several pertinent 
> .MYD files are empty; this concerns me (massive understatement).
> 
> -rw-rw 1 mysql mysql 672 Nov 6 20:13 mt_author.MYD
> -rw-rw 1 mysql mysql 4096 Nov 13 12:16 mt_author.MYI
> -rw-rw 1 mysql mysql 9350 Nov 6 20:13 mt_author.frm
> -rw-rw 1 mysql mysql 0 Nov 6 20:13 mt_blog.MYD
> -rw-rw 1 mysql mysql 1024 Nov 6 20:13 mt_blog.MYI
> -rw-rw 1 mysql mysql 11844 Nov 6 20:13 mt_blog.frm
> -rw-rw 1 mysql mysql 0 Nov 6 20:13 mt_category.MYD
> -rw-rw 1 mysql mysql 1024 Nov 6 20:13 mt_category.MYI
> -rw-rw 1 mysql mysql 8970 Nov 6 20:13 mt_category.frm
> -rw-rw 1 mysql mysql 0 Nov 6 20:13 mt_entry.MYD
> -rw-rw 1 mysql mysql 1024 Nov 6 20:13 mt_entry.MYI
> -rw-rw 1 mysql mysql 9678 Nov 6 20:13 mt_entry.frm
> 
> myisamchk certainly doesn't give me warm fuzzies...
> 
> % myisamchk mt_blog.MYI
> Checking MyISAM file: mt_blog.MYI
> Data records:   0   Deleted blocks:   0
> - check file-size
> - check record delete-chain
> - check key delete-chain
> - check index reference
> - check data record references index: 1
> - check data record references index: 2
> - check record links
> 
> The weblog data is still "out there" (i.e. the database 
> provides a backup and datastore for "published" .html files; 
> MT isn't, thank goodness, completely dynamic in nature). I'm 
> waiting in hopes that the MT tech support has a suggestion 
> for rebuilding the database from "published" files.
> 
> Is there some way I can fix this problem at the MySQL end? 
> The .frm files have data. Does that "mean" anything? Can I use that?
> 
> 
> -- 
> Vicki Brown ZZZJourneyman Sourceror:
> SF Bay Area, CAzz  |\ _,,,---,,_  Scripts & Philtres
> http://www.cfcl.com zz /,`.-'`'-.  ;-;;,_Code, Doc, Process, QA
> http://cfcl.com/vlb   |,4-  ) )-,_. ,\ ( `'-'Perl, Unix, Mac OS X, WWW
>  '---''(_/--'  `-'\_)  ___
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Listing all connected users?

2005-01-18 Thread Dathan Pattishall
Issue SHOW [FULL] PROCESSLIST
as the super user. The Host column shows where each connection
originates; FULL prints the whole query instead of the first 100
characters.


> -Original Message-
> From: sol beach [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, January 18, 2005 3:01 PM
> To: mysql@lists.mysql.com
> Subject: Listing all connected users?
> 
> How do I see who is currently connected to MYSQL & from where 
> they originate?
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Will "Alter Table ... enable keys" reload data to table?

2005-01-13 Thread Dathan Pattishall
DISABLE / ENABLE KEYS is very fast even for your dataset, and it does not
copy the table. While keys are disabled, only unique-type indexes are
maintained during the load; ENABLE KEYS then rebuilds the non-unique
index B-trees in place.
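
A sketch of the bulk-load pattern in question (the table and file names
are hypothetical):

ALTER TABLE big_table DISABLE KEYS;
LOAD DATA INFILE '/tmp/data.txt' INTO TABLE big_table;
ALTER TABLE big_table ENABLE KEYS;  -- rebuilds non-unique indexes in place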


Dathan 

> -Original Message-
> From: mos [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, January 13, 2005 5:24 PM
> To: mysql@lists.mysql.com
> Subject: Will "Alter Table ... enable keys" reload data to table?
> 
> I am importing data into a large table, 100 million rows, and 
> I want to use "Alter table disable keys" prior to executing a 
> Load Data...,. But after loading the data, if I execute a 
> "Alter Table ... enable keys" will MySQL create a second 
> table, load the data into the second table, and then rebuild 
> the keys like it does for most Alter Tables? I don't want to 
> use Alter Table ... if it is going to copy the data over to 
> another table. I'm thinking MySQL should be smart enough just 
> to rebuild the index without moving data. Am I right?
> 
> TIA
> 
> Mike
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Memory tables much slower in 4.1.7 ?

2005-01-13 Thread Dathan Pattishall
Hmm, that's a range; that would do a table scan in 4.0.18, since a MEMORY
table's index is just a hash and a hash can't serve range predicates. In
4.1 I believe it supports ranges, since the table can use BTREE indexes,
more like a MyISAM type.

Is there an index on TIMESTAMP?

Does the range cover more than 30% of the table?
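
A sketch of the 4.1 syntax for a range-capable index on a MEMORY table
(the column names are hypothetical; USING BTREE is the part that matters,
since the default MEMORY index type is HASH):

CREATE TABLE foo_memory (
  ts   BIGINT UNSIGNED NOT NULL,
  link VARCHAR(255) NOT NULL,
  INDEX USING BTREE (ts)
) ENGINE = MEMORY;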


> -Original Message-
> From: Kevin A. Burton [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, January 13, 2005 4:01 PM
> To: mysql@lists.mysql.com
> Subject: Memory tables much slower in 4.1.7 ?
> 
> Under 4.0.18 we were loading about 800M of data into a memory 
> table to get better performance from some of our main queries.
> 
> Via crontab we would DELETE older links in the memory table 
> and then INSERT links from a myisam table.
> 
> This process under 4.1.7 is MUCH slower.  Specifically the 
> DELETE is taking forever. 
> 
> We would run:
> 
> DELETE FROM FOO_MEMORY WHERE FOO_MEMORY.TIMESTAMP < 1105055409729;
> 
> But now this takes forever...
> 
> I realize that no index is used on this query but due to the 
> fact that it's a memory table it should still be FAST as heck.
> 
> Has anything changed with rehashing that could affect the 
> performance here?
> 
> I could install a 4.0.18 node to benchmark but this would 
> take a while as they are all upgraded...
> 
> Kevin
> 
> -- 
> 
> Use Rojo (RSS/Atom aggregator).  Visit http://rojo.com. Ask 
> me for an invite!  Also see irc.freenode.net #rojo if you 
> want to chat.
> 
> Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
> 
> If you're interested in RSS, Weblogs, Social Networking, 
> etc... then you should work for Rojo!  If you recommend 
> someone and we hire them you'll get a free iPod!
> 
> Kevin A. Burton, Location - San Francisco, CA
>AIM/YIM - sfburtonator,  Web - http://peerfear.org/ 
> GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Tuning MySQL

2005-01-11 Thread Dathan Pattishall
 

> -Original Message-
> From: Eric Gunnett [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, January 11, 2005 9:36 AM
> To: mysql@lists.mysql.com
> Subject: Tuning MySQL
> 
>   I have a quad processor server, with 4 gigs of memory. 
> It is only running MySQL right now and seems really slow. Can 
> someone give me a few suggestions on optimizing My.cnf file 
> for this system.  We are running 
> mysql-standard-4.0.23-pc-linux-i68, on it. Here is the my.cnf file

1st, how many disks are in the server? What is your key cache hit
percentage? This can be figured out by:

Issue SHOW STATUS

Take 

Key_reads / Key_read_requests * 100. That is the miss percentage (the
hit percentage is 100 minus it).
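
A worked sketch with hypothetical counter values:

SHOW STATUS LIKE 'Key_read%';
-- suppose Key_reads = 6605 and Key_read_requests = 2131827:
-- 6605 / 2131827 * 100 = 0.31% misses, i.e. a 99.69% hit rate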


Next look at Handler_read_rnd_next.

If this number is high then you're doing table scans, so optimizing your
keys would be better.

Additional comments below.


> 
> # The MySQL server
> [mysqld]
> port= 3306
> socket  = /tmp/mysql.sock
> skip-locking
> key_buffer = 384M

You will probably have to raise the key buffer.

> max_allowed_packet = 2M
> table_cache = 512
> sort_buffer_size = 2M
> read_buffer_size = 2M
> myisam_sort_buffer_size = 64M

Add tmp_table_size=16M

> thread_cache = 8

Look at your connection rate; if it's high, raise the above.

> query_cache_size = 32M
> # Try number of CPU's*2 for thread_concurrency thread_concurrency = 8

thread_concurrency is only a valid option on Solaris.

> 
> user=mysql
> basedir=/usr/local/mysql
> datadir=/usr2/mysql/data
> max_connections=400
> max_connect_errors=300
> interactive_timeout=2400
> wait_timeout=60
> back_log=100
> #skip-networking
> server-id   = 2
> 
> [isamchk]
> key_buffer = 256M
> sort_buffer_size = 256M
> read_buffer = 2M
> write_buffer = 2M
> 
> [myisamchk]
> key_buffer = 256M
> sort_buffer_size = 256M
> read_buffer = 2M
> write_buffer = 2M
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: MySQL-4.1 and PHP

2005-01-10 Thread Dathan Pattishall
Nope, no issues. 

> -Original Message-
> From: Daniel Kasak [mailto:[EMAIL PROTECTED] 
> Sent: Sunday, January 09, 2005 6:51 PM
> To: mysql@lists.mysql.com
> Subject: MySQL-4.1 and PHP
> 
> Hi all.
> 
> Does anyone know if there are any issues with PHP-4.1 and MySQL-4.1?
> I assume I'll have to recompile PHP ( and Perl and other 
> stuff that talks to MySQL ), but other than that, are there 
> any gotchas I should know about?
> 
> --
> Daniel Kasak
> IT Developer
> NUS Consulting Group
> Level 5, 77 Pacific Highway
> North Sydney, NSW, Australia 2060
> T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
> email: [EMAIL PROTECTED]
> website: http://www.nusconsulting.com.au
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Limit drive usage per thread

2004-12-13 Thread Dathan Pattishall
From the brief sound of it you're using MyISAM, and the query taking the
most time is not indexed or not using an index properly.

ALTERs will lock the table, and once one starts it should finish, or
you're going to have to recover the table. I suggest taking an outage. If
you can't:

Make a replica of the server and put it in a master-slave role. Alter the
slave (ensure the new column has a default), then swap the roles, OR copy
the table if the application can handle having writes blocked. If not,
the slave must take the role of master, and the previous master can go
away.
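
One way to do the slave-side step (the table and column names are
hypothetical; run this on the slave only):

STOP SLAVE;
ALTER TABLE big_table ADD COLUMN new_col INT NOT NULL DEFAULT 0;
START SLAVE;
-- once the slave has caught up, promote it to master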



 

-Original Message-
From: matt_lists [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 13, 2004 10:05 AM
To: [EMAIL PROTECTED]
Subject: Limit drive usage per thread

Is there any way to limit drive usage per thread?

I have a problem where an update thread will use 100 % of the drive, and
simple index searches that should be instant will wait and wait and wait
before responding.

I don't want one user to kill everybody else.

I'm adding a column to a large table for a client, but every client is
getting hit with a database that seems to be locked up


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How to rebuild indexes in InnoDB

2004-12-10 Thread Dathan Pattishall
ALTER TABLE <table> TYPE = InnoDB;

This null ALTER will rebuild the table and all of its indexes at the same
time.

-Original Message-
From: Anil Doppalapudi [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 10, 2004 9:06 AM
To: [EMAIL PROTECTED]
Subject: How to rebuild indexes in InnoDB

Hi,

any one give me an idea how to rebuild indexes in InnDB database.
database users are complaining that response time is slow. right now
what I am doing is dropping indexes on table and recreating them. After
that Database is working fine for some days and again it is giving
problem. dropping and recreating indexes taking lot of time. any
alternative


Thanks in advance.
Anil
DBA



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: MYSQL is getting slow

2004-12-09 Thread Dathan Pattishall
This is a very broad question (I sometimes do this myself). But to answer
the question to the best of my ability I have to ask a few of my own.


Are you using RAID? If so what RAID level?

What are your my.cnf settings?

What type of Indexes are you using?

What does vmstat and iostat say?

What Filesystem are you using?

What are some typical queries for a given schema?


Typically, for most organizations, MySQL dedicated on a box like the one
below is wicked fast even when not optimized, because of the system cache
etc. But if you're running out of disk space then that's a problem in
itself.

-Original Message-
From: Patrick Marquetecken [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 09, 2004 7:49 AM
To: [EMAIL PROTECTED]
Subject: MYSQL is getting slow

Hi,

I have 3 snort sensors logging to a central MySQL database. After two
weeks the size of the database is about 3.3GB and the machine is getting
slow. As I'm not used to working with MySQL, my question is: how far can
I go before the machine stops responding?

HP Pentium4 - 2.2GHz - 512 MB RAM, 99% used and no swap, HD of 40GB.

TIA
Patrick 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: slow query issues

2004-12-08 Thread Dathan Pattishall
 

-Original Message-
From: Max Michaels [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 08, 2004 8:32 AM
To: [EMAIL PROTECTED]
Subject: slow query issues

Hello all,

Recently, I have been seeing some strange behavior from a particular
query on my 4.0.21 mysql server. Here is the query in question:

SELECT size_id, sum(imps) imps, sum(clicks) clicks, sum(convs) convs,
sum(imp_revenue) imp_revenue, sum(click_revenue) click_revenue,
sum(conv_revenue) conv_revenue, sum(international_imps)
international_imps, sum(adjustments) / 60 adjs,
sum(publisher_compensation) / 60 comp FROM publisher_summary WHERE ymdh
>= '2004-11-01 05:00:00' AND ymdh < '2004-12-01 05:00:00' AND
is_ym_advertiser=0 GROUP BY size_id;


The compound index I see that is useful is
(ymdh, is_ym_advertiser, size_id).

If the range covers more than 30% of the data in the table, MySQL will
turn off index lookups in favor of a table scan.

Set max_seeks_for_key=100 in your my.cnf file, and/or run ANALYZE TABLE
to recompute the cardinality of the keys. Additionally, you can perform
tighter ranges on ymdh and use a script to simulate the additional
summation to make it faster.
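
A sketch of the suggested index and maintenance, using the table and
column names from the post (the index name is mine):

ALTER TABLE publisher_summary
  ADD INDEX ymdh_adv_size (ymdh, is_ym_advertiser, size_id);

ANALYZE TABLE publisher_summary;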


212.561.6475 | mmichaels {at} rightmedia.com | www.rightmedia.com 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: slow date query

2004-12-07 Thread Dathan Pattishall
Additionally, you should fix your key structure. 

Since BindId is already your primary key, change the column order of the
(BindId, RecordType) index: make RecordType the leftmost prefix of the
compound index and you'll get a "free index" on RecordType out of it.
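
A sketch of that reordering, reusing the names from the SHOW INDEXES
output below (the new index name is mine):

ALTER TABLE BrowseData
  DROP INDEX bbs_bid_recordtype,
  ADD INDEX bbs_recordtype_bid (RecordType, BindId);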
 

-Original Message-----
From: Dathan Pattishall [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 07, 2004 12:05 PM
To: Marc Dumontier; [EMAIL PROTECTED]
Subject: RE: slow date query

Well, first of all, DATE_FORMAT() doesn't allow the use of a key.


Do this:

SELECT ..

WHERE DateLastRevised >= '2004-12-07' AND DateLastRevised <
'2004-12-08'; 



-Original Message-
From: Marc Dumontier [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 07, 2004 11:34 AM
To: [EMAIL PROTECTED]
Subject: slow date query

Hi,

I'd really appreciate any help in speeding up this type of query

SELECT BindId,RecordType from BrowseData WHERE
DATE_FORMAT(DateLastRevised,'%Y-%m-%d') = '2004-12-07';

On a MYISAM table of 122,000 rows, this query takes very long, in the
neighbourhood of 20 minutes.

I'm using mysqld 4.0.20.

I have an index on DateLastRevised


mysql> show indexes from BrowseData;
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
| Table      | Non_unique | Key_name           | Seq_in_index | Column_name     | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
| BrowseData |          0 | PRIMARY            |            1 | BindId          | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_dlr            |            1 | DateLastRevised | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_bid_recordtype |            1 | BindId          | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_bid_recordtype |            2 | RecordType      | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+



mysql> explain SELECT BindId,RecordType from BrowseData WHERE
DATE_FORMAT(DateLastRevised,'%Y-%m-%d') = '2004-12-07';
+------------+------+---------------+------+---------+------+--------+-------------+
| table      | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+------------+------+---------------+------+---------+------+--------+-------------+
| BrowseData | ALL  | NULL          | NULL |    NULL | NULL | 122850 | Using where |
+------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)


thanks,
Marc Dumontier


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: slow date query

2004-12-07 Thread Dathan Pattishall
Well, first of all, DATE_FORMAT() doesn't allow the use of a key.


Do this:

SELECT ..

WHERE DateLastRevised >= '2004-12-07' AND DateLastRevised <
'2004-12-08'; 



-Original Message-
From: Marc Dumontier [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 07, 2004 11:34 AM
To: [EMAIL PROTECTED]
Subject: slow date query

Hi,

I'd really appreciate any help in speeding up this type of query

SELECT BindId,RecordType from BrowseData WHERE
DATE_FORMAT(DateLastRevised,'%Y-%m-%d') = '2004-12-07';

On a MYISAM table of 122,000 rows, this query takes very long, in the
neighbourhood of 20 minutes.

I'm using mysqld 4.0.20.

I have an index on DateLastRevised


mysql> show indexes from BrowseData;
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
| Table      | Non_unique | Key_name           | Seq_in_index | Column_name     | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
| BrowseData |          0 | PRIMARY            |            1 | BindId          | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_dlr            |            1 | DateLastRevised | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_bid_recordtype |            1 | BindId          | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
| BrowseData |          1 | bbs_bid_recordtype |            2 | RecordType      | A         |      122850 |     NULL | NULL   |      | BTREE      |         |
+------------+------------+--------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+



mysql> explain SELECT BindId,RecordType from BrowseData WHERE
DATE_FORMAT(DateLastRevised,'%Y-%m-%d') = '2004-12-07';
+------------+------+---------------+------+---------+------+--------+-------------+
| table      | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+------------+------+---------------+------+---------+------+--------+-------------+
| BrowseData | ALL  | NULL          | NULL |    NULL | NULL | 122850 | Using where |
+------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)


thanks,
Marc Dumontier


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: When to cluster vs. replicate

2004-12-07 Thread Dathan Pattishall
 

-Original Message-
From: Richard Reina [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 07, 2004 11:30 AM
To: [EMAIL PROTECTED]
Subject: When to cluster vs. replicate

For some time I have been considering replication for added redundancy
so that should something happen to the server I could quickly deploy the
slave as the main server until the problem is fixed.  However, now I
reading about great MySQL clustering is.  What exactly is clustering and
should I look to employ it instead of replication?  


Clustering is designed for real-time redundancy of a transaction.
Replication is a batch redundancy system that works on top of any MySQL
storage engine. Use clustering if you need to provide multiple write
points and your system needs to stay alive in the event of a single
server crashing.

Use replication if you want a semi-realtime backup, or if you need to use
a storage engine that is incredibly fast, like MyISAM, to spread out
database READ access load. Replication currently doesn't allow a SLAVE
to have multiple MASTERS, thus it cannot be used out of the box as a
"clustered solution".


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Opinions: notes field ideal length?

2004-12-07 Thread Dathan Pattishall
Should be fine if you have enough resources.

Your options are 255 characters (VARCHAR/TINYTEXT) or:

2^16-1 bytes (TEXT, ~64K)
2^24-1 bytes (MEDIUMTEXT)
2^32-1 bytes (LONGTEXT)

So TEXT at 64K looks like a winner. I doubt notes will take more than a
few pages.
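
A minimal sketch (the table and column names are hypothetical):

CREATE TABLE notes (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  note TEXT NOT NULL,  -- caps each note at 65,535 bytes
  PRIMARY KEY (id)
);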
 

-Original Message-
From: Chris Kavanagh [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 07, 2004 10:08 AM
To: [EMAIL PROTECTED]
Subject: Opinions: notes field ideal length?

Dear list,

Quick question.  What's a reasonable length for a notes-type field?  I
couldn't really find any guidelines on the web so I'm thinking of just
setting it to 65,535.  Or is that ridiculously long?

Thanks list,
CK.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Selecting a random row

2004-12-06 Thread Dathan Pattishall

Pseudo code (Perl-flavored; assumes $dbh is an open DBI handle):

my ($min) = $dbh->selectrow_array("SELECT MIN(id) FROM fortunes");
my ($max) = $dbh->selectrow_array("SELECT MAX(id) FROM fortunes");

my ($row, $count) = (undef, 0);
while (!$row && $count < 3) {
    # pick a random id in [$min, $max]; gaps from deleted rows may miss
    my $id = $min + int(rand($max - $min + 1));
    $row = $dbh->selectrow_hashref(
        "SELECT * FROM fortunes WHERE id = ?", undef, $id);
    $count++;
}

# all three probes landed in gaps: fall back to the first row
$row ||= $dbh->selectrow_hashref(
    "SELECT * FROM fortunes ORDER BY id LIMIT 1");


-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Joshua Beall
Sent: Monday, December 06, 2004 12:40 PM
To: [EMAIL PROTECTED]
Subject: Selecting a random row

Hi All,

I understand that I can get a random row out of a table by doing
something like

SELECT * FROM fortunes ORDER BY RAND() LIMIT 1

But I have also been told that this is a very slow operation.

I am building a script that will display a random saying, user
testimonial, whatever, on a web page.  Since this is a public page
(i.e., not an admin backend), I have to be concerned about speed.  What
is the best way to get a random row out of a database for this sort of
application?  It's the sort of thing you see all the time, so I'm sure
others have thought about this before.  Any pointers?

Sincerely,
  -Josh 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: A newbie and his first MySQL schema

2004-12-06 Thread Dathan Pattishall
Don't think of it as a column that must hold multiple values (unless
you're using SETs or bitmasks); think of it as a table that will hold
multiple rows, where each row is one person's permission for one project.
So, a basic approach: for every project a person is authorized to see,
that person has a row indicating that they can use said project.
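
A minimal sketch of that junction table (the table and column names are
mine, not from the original post):

CREATE TABLE project_authorizations (
  person_id  INT UNSIGNED NOT NULL,
  project_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (person_id, project_id)
);

-- who is authorized to view project 7?
SELECT person_id FROM project_authorizations WHERE project_id = 7;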

 

-Original Message-
From: Chris Kavanagh [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 06, 2004 9:11 AM
To: [EMAIL PROTECTED]
Subject: A newbie and his first MySQL schema

Dear list,

So I'm having a bit of trouble with my first schema.  I'm sure I'm
missing something idiotic here, but days of learning MySQL and setting
up servers and working in UNIX have kind of fried my brain.  Okay, here
goes:

My project management system includes (among others) two tables: 
"people" and "projects".  I want each project to have a list of people
that are authorised to view it.  To my mind, the field ought to look a
bit like this:

--
Authorised list:
Chris Kavanagh
Joe Schmoe
Jane Doe
--

But fields can't hold multiple values, can they?  And on my schema, it
seems to be a many-to-many relationship between the two tables, and I
heard that they are the work of the Devil and must be shunned.  I'm sure
I need to make a new table or something, but I'm not really sure which
one.  Can anyone help me?

Many thanks in advance,
CK.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: wanted: back up script

2004-12-06 Thread Dathan Pattishall
Um, it's better to use the SQL BACKUP TABLE statement if the table is a
MyISAM table:

BACKUP TABLE [tables] TO [LOC];


Or, a more sophisticated approach:

FLUSH TABLES WITH READ LOCK;

(fork out and copy the MyISAM data files while the lock is held)

UNLOCK TABLES;


If it's InnoDB, then use the InnoDB hot backup tool offered by Heikki.
 

-Original Message-
From: Elim Qiu [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 06, 2004 9:44 AM
To: [EMAIL PROTECTED]
Subject: wanted: back up script

This is on windows 2000.
I did the following as a temp solution for the full back up of a
database. I know this is not safe and possibly not complete. Any
suggestions (how to lock/unlock a db for read here)?
Thanks

#include <stdio.h>   /* printf */
#include <stdlib.h>  /* system */

int main() {
    printf("start backup ESite data\n");
    /* tar up the MyISAM data directory for the ESite database, then gzip it */
    system("tar cf ESite_dt.tar F:/DBData/MySQLdata/ESite");
    system("gzip ESite_dt.tar");
    system("mv ESite_dt.tar.gz ESite_dt.tgz");
    printf("done\n");
    return 0;  /* 0 = success (the original returned 1) */
}



--
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.289 / Virus Database: 265.4.6 - Release Date: 12/5/2004



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: InnoDB tablespace Question.

2004-12-06 Thread Dathan Pattishall
Depends on your disk setup. Remember, a tablespace is a virtual
filesystem that sits on top of the OS. Having one large file and chopping
a contiguous block of the disk out enables better seeks, as well as
caching if the file doesn't bust the system cache. In your case it will.
One file needs to be autoextended, else your application will run into
errors once the data needs to grow past the tablespace.

Having multiple tablespace files on different spindles lets the data be
segmented a bit more, getting a few more bits of speed, but at the
possible detriment of needing to access both separate data files if the
requested data spans multiple files.

In essence, I have found that using multiple tablespace files is best
when the disk is starting to fill up and I need to put the data on a
different disk. You'll get a constant boost in performance if you put the
InnoDB log files on a different spindle, or set of spindles, than your
data file.
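
A sketch of the my.cnf lines in question (the paths and sizes are
hypothetical):

[mysqld]
innodb_data_home_dir      = /disk1/mysql/data
innodb_data_file_path     = ibdata1:30G:autoextend
# logs on a different spindle, per the advice above:
innodb_log_group_home_dir = /disk2/mysql/logs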

 

-Original Message-
From: Dave Juntgen [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 06, 2004 6:30 AM
To: [EMAIL PROTECTED]
Subject: InnoDB tablespace Question.

Hello!
 
I have what seems to be a trivial question, but have not been able to
find a definite answer and your help would be greatly appreciated.
 
Question:
 
When creating InnoDB table spaces, are there any advantages to using
multi table spaces for each table or is it better to create a few large
table spaces for all tables?
 
If the latter, then is it best to create a very large table space, say
30G (my OS supports LFS), rather than using the auto-extend feature for
table spaces in InnoDB?  What is the overhead of the InnoDB auto
extend?
 
Thanks!
 
--Dave J.
 
David W. Juntgen
Medical Informatics Engineering Inc.
Phone: 260.459.6270
Fax  : 260.459.6271
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Locking Issue?

2004-12-06 Thread Dathan Pattishall
 

-Original Message-
From: Terry Riley [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 06, 2004 10:12 AM
To: [EMAIL PROTECTED]
Subject: Locking Issue?

Can someone help, please?

We set up a server to handle a coldfusion web application (CFMX 6.1)
running against MySQL 4.1.3b-beta on WinNT.

When it is a little stretched, we are finding many instances of queries
listed as either 'Sending...' or 'Copying...' in the processlist, 


'Sending data' means MySQL is streaming the result set back: it found
rows and is still searching for more.
'Copying to tmp table' means the query is building an internal temporary
table; it is bounded by the tmp_table_size variable, and if it busts past
that, it will write to an on-disk temp table.


Since you're using InnoDB you need to increase your InnoDB buffer pool.
Additionally, increase your tmp_table_size buffer, and verify your
queries. You might need to tweak innodb_file_io_threads (a setting mainly
useful on Windows), and the AWE memory setting.

You might be system bound.
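
A sketch of the additions this implies for the [mysqld] section quoted
below (the sizes are hypothetical; fit them to the machine's RAM):

innodb_buffer_pool_size = 256M
tmp_table_size          = 64M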


[mysqld]

max_connections=1000

basedir=e:/mysql
datadir=e:/mysql/data

wait_timeout=60

# TR added next 6 lines on 27/07/04, after install of v4.1.3b
old-passwords
local-infile
query_cache_size=25M
query_cache_type=1
set-variable=max_allowed_packet=16M
set-variable=key_buffer=8M

log-bin=
log_slow_queries=


[mysql]
local-infile=1


Any pointers as to what I may be doing wrong? Please?

Yes, I know we should upgrade to 4.1.7, and we will - soon.

Cheers
Terry Riley


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Binlog question in replication setup

2004-12-02 Thread Dathan Pattishall


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: Sanjeev Sagar [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, December 02, 2004 4:04 PM
> To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Cc: Sanjeev Sagar
> Subject: Binlog question in replication setup
> 
> Hello everyone,
> 
>  
>
> I need to know how MySQL writes to the binlog file.
> 

It gets written in order of completion, so if the table exists on the
master and the ALTER takes 50 seconds to run, the write to the binlog
happens at the 50th second.


If the table doesn't exist on the slave, a slave error gets produced and
the SQL thread stops, while the I/O thread keeps running, downloading
the statements into a relay log "queue" where they wait to be applied.
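
A sketch of how to watch the two threads separately on the slave:

SHOW SLAVE STATUS;
-- Slave_IO_Running / Slave_SQL_Running show which thread stopped;
-- Read_Master_Log_Pos (I/O thread) vs. Exec_Master_Log_Pos (SQL thread)
-- show how far the relay log "queue" has run ahead of what was applied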

>  
> 
> Does it write before or after successful execution or 
> commit of the statement? Why does a syntax-error statement need to 
> be handed over to the slave I/O thread's relay log?

If a syntax error happens on the master, it should not show up in the
binlog.

> 
>  
> 
> Is there any control, like a parameter in the option file or 
> anything else, to ensure that only statements which ran 
> successfully on the master are written to the binary log?

That's its default behavior.

> 
>  
> 
> Appreciate it.
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]