Re: Optimising a very large table

2010-02-20 Thread walter harms


Nathan Harmston schrieb:
 Hi everyone,
 
 I am currently working on an application where I have a very large
 table called intervals (at the moment 80,000,000 records and growing)
 and a smaller table (token) which joins with it.
 
 interval is just an id, start, end, word
 token is id, interval_id, type, processed_by
 
 There is a many-to-one relationship, i.e. one interval can have many
 tokens. The idea is that as I use different tools I generate
 different tokens, but the underlying intervals tend to be the same.
 When I add a new token I first need to search the intervals table to
 see if one already exists there. Of course there's an index on
 intervals to deal with this, but as I add more and more new fields I
 notice a massive slowdown in processing. I think this is due to the
 increase in new records being added and indexed. The problem is I
 can't turn indexes off, as I have 80 million records.
 
 Does anyone have any suggestions for optimising this design? Or
 where to start? One option, and at the moment the only one I have,
 is to denormalise my schema, but that will complicate things at the
 application level considerably.
 


Hi,
your problem sounds like a time series, which is the same kind of
problem I am dealing with. IMHO there is no single real solution. We
have split the time series across several tables (one table per month
here, but it depends on your exact problem); that moves the problem
of choosing the right table into the application. Since MySQL 5.1
there is partitioning, which moves that problem back into the DB. The
right solution depends on your usage pattern; there is clearly no
magically clever SELECT statement that solves it.
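
For illustration only, a minimal sketch of range partitioning on the
intervals table (column types and partition boundaries are made up;
note that in MySQL 5.1 every unique key must include the partitioning
column, hence the composite primary key):

CREATE TABLE intervals (
  id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
  start INT UNSIGNED NOT NULL,
  `end` INT UNSIGNED NOT NULL,
  word  VARCHAR(64) NOT NULL,
  PRIMARY KEY (id, start)
)
PARTITION BY RANGE (start) (
  PARTITION p0 VALUES LESS THAN (10000000),
  PARTITION p1 VALUES LESS THAN (20000000),
  PARTITION p2 VALUES LESS THAN MAXVALUE
);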

For your token problem you can use InnoDB with foreign keys: if the
insert fails, simply check whether you need to add another token to
the token table.
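
A minimal sketch of that idea, assuming id is AUTO_INCREMENT and a
hypothetical unique key on the interval's natural key (the literal
values and token columns below are made up):

-- once: ALTER TABLE intervals ADD UNIQUE KEY uk_interval (start, `end`, word);
INSERT IGNORE INTO intervals (start, `end`, word)
VALUES (100, 110, 'foo');

-- fetch the id whether the row was just inserted or already existed
SELECT id INTO @iid FROM intervals
WHERE start = 100 AND `end` = 110 AND word = 'foo';

INSERT INTO token (interval_id, type, processed_by)
VALUES (@iid, 'pos-tag', 'tool-x');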

Nonetheless you need to decide how long you will store the data.
Let's assume you have 1e6 rows per day and you want to keep them for
40 years: that is 40 * 360 * 1e6 = 1.44e10 rows. Do you want to
handle that?

re,
 wh




set max_allowed_packet

2010-02-20 Thread Vikram A
Hi experts,

When I try to set the packet size to some other value, it is not getting 
updated.
show variables;

set max_allowed_packet = 2097152;
set global max_allowed_packet = 2097152;

It results in:
Query OK, 0 rows affected (0.00 sec)

Can you suggest how to set this value to a higher one? By default it
is 1 MB.

Thank you

VIKRAM A




Re: set max_allowed_packet

2010-02-20 Thread Paul DuBois

On Feb 20, 2010, at 11:22 AM, Vikram A wrote:

 Hi experts,
 
 When I try to set the packet size to some other value, it is not getting 
 updated.
 show variables;
 
 set max_allowed_packet = 2097152;
 set global max_allowed_packet = 2097152;
 
 It results in:
 Query OK, 0 rows affected (0.00 sec)

That's okay. What does this query return:

mysql> select @@global.max_allowed_packet;
+-----------------------------+
| @@global.max_allowed_packet |
+-----------------------------+
|                     2097152 |
+-----------------------------+
1 row in set (0.06 sec)
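
To make the change persist across server restarts, you can also put
it in the server option file (16M here is just an example value):

[mysqld]
max_allowed_packet=16M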

 
 Can you suggest how to set this value to a higher one? By default it is 1 MB.
 
 Thank you
 
 VIKRAM A

-- 
Paul DuBois
Sun Microsystems / MySQL Documentation Team
Madison, Wisconsin, USA
www.mysql.com





Re: set max_allowed_packet

2010-02-20 Thread Jesper Wisborg Krogh


Jesper Wisborg Krogh
Team Leader • Certified MySQL 5.0 Developer & DBA
Noggin IT • Level 8, 28 Foveaux St, Surry Hills NSW Australia 2010
T: +61 2 9219 8800 • F: +61 2 9280 4669 • E: jes...@noggin.com.au
http://www.noggin.com.au

On 21/02/2010, at 4:46 AM, Paul DuBois wrote:



On Feb 20, 2010, at 11:22 AM, Vikram A wrote:


Hi experts,

When I try to set the packet size to some other value, it is not  
getting updated.

show variables;

set max_allowed_packet = 2097152;
set global max_allowed_packet = 2097152;

It results in:
Query OK, 0 rows affected (0.00 sec)


That's okay. What does this query return:

mysql> select @@global.max_allowed_packet;
+-----------------------------+
| @@global.max_allowed_packet |
+-----------------------------+
|                     2097152 |
+-----------------------------+
1 row in set (0.06 sec)


Also remember that when you change a global variable, the change
won't affect existing connections, so you will need to log out and
back in to see it. Setting the session variable as well will solve
that for the existing connection, but not for other long-lived
connections:


With mysql1 and mysql2 being two different connections:

mysql1> select @@session.max_allowed_packet, @@global.max_allowed_packet;
+------------------------------+-----------------------------+
| @@session.max_allowed_packet | @@global.max_allowed_packet |
+------------------------------+-----------------------------+
|                      1048576 |                     1048576 |
+------------------------------+-----------------------------+
1 row in set (0.00 sec)

mysql2> select @@session.max_allowed_packet, @@global.max_allowed_packet;
+------------------------------+-----------------------------+
| @@session.max_allowed_packet | @@global.max_allowed_packet |
+------------------------------+-----------------------------+
|                      1048576 |                     1048576 |
+------------------------------+-----------------------------+
1 row in set (0.00 sec)

mysql1> set global max_allowed_packet = 2097152;
Query OK, 0 rows affected (0.00 sec)

mysql1> select @@session.max_allowed_packet, @@global.max_allowed_packet;
+------------------------------+-----------------------------+
| @@session.max_allowed_packet | @@global.max_allowed_packet |
+------------------------------+-----------------------------+
|                      1048576 |                     2097152 |
+------------------------------+-----------------------------+
1 row in set (0.00 sec)

mysql2> select @@session.max_allowed_packet, @@global.max_allowed_packet;
+------------------------------+-----------------------------+
| @@session.max_allowed_packet | @@global.max_allowed_packet |
+------------------------------+-----------------------------+
|                      1048576 |                     2097152 |
+------------------------------+-----------------------------+
1 row in set (0.00 sec)
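
To update the current connection as well, the session-level
assignment looks like this (a sketch; note that in some MySQL
versions the session value of max_allowed_packet is read-only, in
which case reconnecting after SET GLOBAL is the only option):

mysql1> set session max_allowed_packet = 2097152;
Query OK, 0 rows affected (0.00 sec)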

Jesper





Can you suggest how to set this value to a higher one? By default it
is 1 MB.


Thank you

VIKRAM A


--
Paul DuBois
Sun Microsystems / MySQL Documentation Team
Madison, Wisconsin, USA
www.mysql.com








Replication for backup

2010-02-20 Thread Jeetendra Mirchandani
Is it possible to replicate only DML statements? I don't want an
operator error such as a DROP TABLE to replicate to the slave.
I use 5.0.x.

ps: please cc me on the reply

Regards,
Jeetu

There is no good, there is no bad, there's just what happened. Dance with it.




Dumping table contents to stdout in tab-separated value format

2010-02-20 Thread Yang Zhang
Hi, I'm interested in piping out the contents of a mysql table to
stdout in tab-separated value format, but:

- using 'select * into outfile' can't write to stdout.
- mysqldump --fields-... requires --tab, which requires an output file path.
- I also tried mkfifo /tmp/pipe and select * into outfile
'/tmp/pipe', but mysql complains about the file existing already.

Is there any other quick way to do this without having to write a
client app? Thanks in advance.
-- 
Yang Zhang
http://www.mit.edu/~y_z/




Duplicate entries despite group by

2010-02-20 Thread Yang Zhang
I have the following table:

CREATE TABLE `graph` (
  `tableid1` varchar(20) NOT NULL,
  `tupleid1` int(11) NOT NULL,
  `tableid2` varchar(20) NOT NULL,
  `tupleid2` int(11) NOT NULL,
  `node1` int(11) NOT NULL,
  `node2` int(11) NOT NULL,
  `weight` int(10) NOT NULL,
  PRIMARY KEY (`tableid1`,`tupleid1`,`tableid2`,`tupleid2`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

and I'm running this query (note the 'group by'):

insert into graph (node1, node2, tableid1, tupleid1, tableid2,
tupleid2, weight)
select 0, 0, a.tableid, a.tupleid, b.tableid, b.tupleid, count(*)
from transactionlog a, transactionlog b
where a.transactionid = b.transactionid
  and (a.tableid, a.tupleid) <> (b.tableid, b.tupleid)
group by a.tableid, a.tupleid, b.tableid, b.tupleid

However, after running for a few hours, the query fails with the
following error:

com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:
Duplicate entry 'new_order-248642-order_line-13126643' for key
'group_key'

How is this possible? There were no concurrently running queries
inserting into 'graph'. I'm using mysql-5.4.3; is this a beta bug?
Does anyone else happen to know something about this? Thanks in
advance.
--
Yang Zhang
http://www.mit.edu/~y_z/




Re: Dumping table contents to stdout in tab-separated value format

2010-02-20 Thread Carlos Proal


I don't know if this can help, but if you are on Unix/Linux you can
call cat after the dump and easily get the output on stdout,

i.e.:
$ mysqldump ... > tempfile.txt && cat tempfile.txt

Carlos

On 2/20/2010 9:03 PM, Yang Zhang wrote:

Hi, I'm interested in piping out the contents of a mysql table to
stdout in tab-separated value format, but:

- using 'select * into outfile' can't write to stdout.
- mysqldump --fields-... requires --tab, which requires an output file path.
- I also tried mkfifo /tmp/pipe and select * into outfile
'/tmp/pipe', but mysql complains about the file existing already.

Is there any other quick way to do this without having to write a
client app? Thanks in advance.
   






Re: Dumping table contents to stdout in tab-separated value format

2010-02-20 Thread Jesper Wisborg Krogh

On 21/02/2010, at 2:03 PM, Yang Zhang wrote:


Hi, I'm interested in piping out the contents of a mysql table to
stdout in tab-separated value format, but:

- using 'select * into outfile' can't write to stdout.
- mysqldump --fields-... requires --tab, which requires an output  
file path.

- I also tried mkfifo /tmp/pipe and select * into outfile
'/tmp/pipe', but mysql complains about the file existing already.

Is there any other quick way to do this without having to write a
client app? Thanks in advance.


You can execute a query from the command line and use the -B (or
--batch) option.


$ mysql mysql -B -e "SELECT * FROM time_zone LIMIT 5"
Time_zone_id    Use_leap_seconds
1       N
2       N
3       N
4       N
5       N
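
Since the --batch output is tab-separated, it can be redirected
straight into a file or piped into other tools. A hypothetical
example (mydb and mytable are made-up names; -N, i.e.
--skip-column-names, drops the header row):

$ mysql -B -N -e "SELECT * FROM mytable" mydb > mytable.tsv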