Re: Best encryption method?

2010-09-28 Thread Vikram A
Sir,

I will adopt encryption and accept the additional storage cost.
Thank you for the guidance.

Vikram A





From: Johan De Meersman 
To: Vikram A 
Cc: MY SQL Mailing list 
Sent: Tue, 28 September, 2010 2:08:57 PM
Subject: Re: Best encryption method?




On Mon, Sep 27, 2010 at 3:43 PM, Vikram A  wrote:


>Normally, if I need to store an integer value I define it as INT. If I
>encrypt it, I must define it as a string whose size depends on the
>encryption output and is larger than the original, which increases the
>database size. I know that if I need security I have to choose
>encryption, but choosing encryption leads to more storage usage.
>

Well, yes :-) That's part of the price you're going to pay for the security.

As the programmer said: I can make it better, faster, and cheaper. Pick any two.
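For a concrete sense of the overhead, here is a sketch using MySQL's built-in AES_ENCRYPT()/AES_DECRYPT(); the table, column, and key are made up for illustration. AES output is padded to 16-byte blocks, so even a 4-byte INT costs at least 16 bytes of ciphertext:

```sql
-- Hypothetical table: the plaintext would fit in a 4-byte INT, but the
-- ciphertext needs VARBINARY(16) because AES pads to 16-byte blocks.
CREATE TABLE account (
  id     INT PRIMARY KEY,
  salary VARBINARY(16)   -- 4x the storage of the plain INT
);

INSERT INTO account VALUES (1, AES_ENCRYPT('52000', 'secret_key'));

-- Decrypting returns a string, not an INT, so cast if you need a number.
SELECT id, CAST(AES_DECRYPT(salary, 'secret_key') AS CHAR) AS salary
FROM account;
```

Note that plaintexts longer than 15 bytes need a correspondingly larger column, and an encrypted column can no longer be usefully indexed for range scans.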
 

-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




Re: How to extend innodb files?

2010-09-28 Thread Johan De Meersman
On Tue, Sep 28, 2010 at 6:24 PM, Jan Steinman  wrote:

> > From: "Jangita" 
> >
> > I do not think there is anything wrong with having one huge file is
> there?
>
> There is if you're doing incremental back-ups, in which case adding one
> byte to that file costs you 50GB of backup space.
>

That is only true if you're backing up at the filesystem level, which means
you have to quiesce the database first. The better/easier way to do
incremental MySQL backups is to grab the closed binlog files; the downside is
of course that replaying those takes more time than restoring the datafile.

That said, I don't know whether InnoDB files get their metadata updated even
when no DML happens in them.
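A minimal sketch of the binlog-based incremental backup described above; the statements are standard MySQL, and the binlog file name is illustrative:

```sql
-- Rotate the logs so the current binlog is closed and safe to copy.
FLUSH LOGS;

-- Every file listed here except the newest is closed; copy those files
-- to backup storage.
SHOW BINARY LOGS;

-- After a closed binlog has been safely backed up (and, on a replication
-- master, read by all slaves), it can be removed:
PURGE BINARY LOGS TO 'mysql-bin.000042';
```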


>
> 
> You don't have to take insults personally. You can sidestep negative
> energy; you can look for the good in others and utilize that good, as
> different as it may be, to improve your point of view and to enlarge your
> perspective. -- Stephen R. Covey
>  Jan Steinman, EcoReality Co-op 
>
>
>
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>
>


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: ORDER BY with field alias issue

2010-09-28 Thread Johnny Withers
Use ORDER BY `reservation`.`Time`; the table-qualified name refers to the underlying column rather than the alias.

JW
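Spelled out against the query from the original post, the fix sorts on the table-qualified column, which the column alias cannot shadow:

```sql
SELECT DATE_FORMAT(`Time`, '%h:%i%p') AS `Time`
FROM `reservation`
ORDER BY `reservation`.`Time`;  -- the real column, not the formatted alias
```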


-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net




Re: Not to show until a certain date

2010-09-28 Thread Chris W

 On 9/28/2010 10:04 AM, Patrice Olivier-Wilson wrote:

Figured it out

SELECT *
FROM announcements
WHERE announcements.announcements_expiredate >CURDATE() AND 
announcements.announcements_postdate
ORDER BY announcements_expiredate ASC




I think you probably should do it like this.

SELECT *
FROM announcements
WHERE announcements_expiredate > CURDATE()
AND announcements_postdate <= CURDATE()
ORDER BY announcements_expiredate ASC

Otherwise they won't show until after the post date.  I assume you want to 
display them on the post date and not the next day?  This of course 
assumes your field is of type DATE and not DATETIME.


Prefixing the field name with the table name is not needed unless you 
have a join with a table that has the same field names.  Based on your 
field-naming convention it appears that won't happen.  If it does, it is 
much less typing and easier to read if you alias the table name, like 
this:


SELECT *
FROM announcements a
WHERE a.announcements_expiredate >CURDATE()
AND a.announcements_postdate<=CURDATE()
ORDER BY a.announcements_expiredate ASC

It is also a good habit to enclose all field and table names in 
backticks, in case you have names that are SQL reserved words or would 
otherwise confuse MySQL:


SELECT *
FROM `announcements` a
WHERE a.`announcements_expiredate` >CURDATE()
AND a.`announcements_postdate` <= CURDATE()
ORDER BY a.`announcements_expiredate` ASC


Also, to me it is easier to read and understand if your second 
condition is rewritten like this:


AND CURDATE() >= announcements_postdate

Just my opinion on that.

Chris W





ORDER BY with field alias issue

2010-09-28 Thread Chris W

 I have the following query that is giving me problems.

SELECT DATE_FORMAT(`Time`, '%h:%i%p') as `Time`
FROM `reservation`
ORDER BY `Time`

Problem is it sorts wrong because of the date format function output 
with am and pm.  I guess I should have named things differently but I 
would rather not do that.  Is there a standard way to get around this 
and have it sort by the non-formatted time value?



Chris W




RE: Migrating my mindset from MyISAM to InnoDB

2010-09-28 Thread Gavin Towey
Also note that 5.5 isn't production-ready; 5.1 is the current GA release.


-Original Message-
From: Hank [mailto:hes...@gmail.com]
Sent: Tuesday, September 28, 2010 3:29 PM
To: mysql@lists.mysql.com
Subject: Migrating my mindset from MyISAM to InnoDB


This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com




RE: Migrating my mindset from MyISAM to InnoDB

2010-09-28 Thread Gavin Towey
1. Generally, reducing fragmentation in the data/index files will reduce the 
footprint of tables on disk and can make queries more efficient.  With InnoDB 
you need to be using the innodb_file_per_table option, and then you can use 
OPTIMIZE TABLE to rebuild a table.  You don't get detailed progress like with 
myisamchk, but that's not important anyway.  You can estimate how long it will 
take by keeping track of how long any given ALTER / OPTIMIZE takes in GB/hr.

2.  Don't stare at the screen.  Start it, script the process, and have it 
email your phone when it's done.  Do something else in the meantime.

3.  Yes, InnoDB tables will take more space on disk.  If you have a really 
long primary key and lots of secondary indexes, they can take a *lot* more.  
Disk is cheap; don't worry about it.
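A sketch of the rebuild described in point 1; the table name is illustrative, and innodb_file_per_table must already be in effect (set in my.cnf; older servers need a restart for it to apply):

```sql
-- Rebuilds the table and its indexes, defragmenting the .ibd file.
-- For InnoDB this is internally executed as ALTER TABLE ... ENGINE=InnoDB.
OPTIMIZE TABLE my_big_table;

-- From another session: the rebuild shows up with a state such as
-- 'copy to tmp table'; there is no per-index progress like myisamchk gives.
SHOW PROCESSLIST;
```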

Regards,
Gavin Towey

-Original Message-
From: Hank [mailto:hes...@gmail.com]
Sent: Tuesday, September 28, 2010 3:29 PM
To: mysql@lists.mysql.com
Subject: Migrating my mindset from MyISAM to InnoDB




Migrating my mindset from MyISAM to InnoDB

2010-09-28 Thread Hank
Primarily due to many positive posts I've seen about MySQL 5.5 and
advances in InnoDB, I'm seriously considering converting all my MyISAM
databases to InnoDB.  I don't need many of the InnoDB features, but
if I'm going to upgrade from 4.1.14 to 5.5, I might as well bite the
bullet, since that seems to be the direction of MySQL/Oracle.

I've been using MySQL 4.1.14 for years in my production environment,
including one master and several slaves for report and long running
queries.

Every 6 to 12 months the master MYI index files grow fairly large, so
I take the production database offline, and run myisamchk -r on the
index files to rebuild them and shrink them back down again. I usually
get a 20% to 30% space saving and improved performance after the
rebuilds. This has worked very well for me for, well, almost 10 years
now!

And when I say "large" my two main tables have about 200 million rows,
and the myisamchk can take between 60-160 minutes to complete.

I very much like how verbose myisamchk is in detailing which index it
is currently rebuilding, and the progress in terms of records
re-indexed.

So, my questions are these:

1. With InnoDB, do the indexes ever need to be rebuilt to reduce index
size and improve performance like I get with MyISAM?
2. If so, are there any tools like myisamchk to monitor the InnoDB
index rebuild process, other than issuing a "repair table..." and
staring indefinitely at a blank screen until it finishes hours later?
3.  I've been testing the rebuild process during upgrading using
"alter table  engine=innodb" to convert my tables from
4.1.14 to 5.5.6, and I'm seeing a 130% increase (more than double) in
the raw disk space required for the new InnoDB tables compared to
their old MyISAM counterparts. (I am using single-file-per-table).  Is
this normal?  If not, how can I adjust the space requirements for
these tables so they don't take up so much additional space?

I'm sure I'll have more questions later, but many thanks for your
comments and thoughts.

-Hank




How can I make mysql manager wait longer for mysql to stop

2010-09-28 Thread Bryan Cantwell
I'm using MySQL 5.0.51a (yes, I know it's old, but I cannot upgrade at the
moment) on Linux.

My challenge is that mysqladmin does not wait long enough for the MySQL
databases, which are all InnoDB and VERY large (billions of rows), to shut
down, and eventually just kills the db.
This causes InnoDB to try to repair itself on restart and really causes
trouble for us.

How can I make mysqladmin never give up on waiting for MySQL to stop
gracefully?

Bryancan



Re: filter slowquerylog on specific user

2010-09-28 Thread Max Bube
Try maatkit mk-query-digest

mk-query-digest --filter '($event->{user} || "") =~ m/user/' mysql.slow.log
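If maatkit isn't available, a plain awk pass over the slow log can approximate the same per-user filter. This is a sketch: the log below is fabricated, and real slow-log entries may carry more header lines (note the '# Time:' line itself is dropped from the output here):

```shell
# Fabricate a tiny slow log with entries from two users.
cat > /tmp/sample.slow.log <<'EOF'
# Time: 100928 12:00:01
# User@Host: my_user[my_user] @  [192.168.111.111]
# Query_time: 3  Lock_time: 0  Rows_sent: 0  Rows_examined: 146
SELECT * FROM orders;
# Time: 100928 12:00:02
# User@Host: other[other] @  [10.0.0.5]
# Query_time: 2  Lock_time: 0  Rows_sent: 1  Rows_examined: 99
SELECT * FROM customers;
EOF

# Print only the entries belonging to my_user: the keep flag is decided
# at each "# User@Host:" header and stays set until the next entry starts.
awk -v user='my_user' '
  /^# Time:/      { keep = 0 }                    # new entry: reset the flag
  /^# User@Host:/ { keep = (index($0, user) > 0) }
  keep                                            # print while the flag is set
' /tmp/sample.slow.log
```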



2010/9/28 Stefan Onken 

> Hello,
>
> is there any way to run a slow-query analysis with "mysqldumpslow" only on
> specific mysql user connections? We have multiple applications accessing
> the same server and sometimes even the same databases and tables. Now we
> want to analyse only one specific application, which uses a specific
> username for logging into the mysql DB.
>
> The slow query looks like:
>
> # u...@host: my_user[my_user] @  [192.168.111.111]
> # Query_time: 0  Lock_time: 0  Rows_sent: 0  Rows_examined: 146
> 
>
>
> Thanks!
> Stonki
>
> --
> www.stonki.de
> www.proftpd.de
> www.kbarcode.net
> www.krename.net
>
>
>
>
>


Re: Segmentation fault - Redhat Linux 64 Bit

2010-09-28 Thread Sharath Babu Dodda
Hi,
Any more solutions on this please.

Thanks,
Sharath.

On Mon, Sep 27, 2010 at 6:17 PM, Sharath Babu Dodda  wrote:

> Hi there,
>
> I installed Apache, MySQL and PHP on Redhat Linux 64 bit. However, when I
> try to invoke MySQL, I'm getting the Segmentation fault error and I'm not
> able to see the "mysql" prompt.
>
> Begin of problem:
>
> ###
>
> [...@xyz123 bin]$ sudo ./mysql -u root -p
>
> Enter password:
>
> Welcome to the MySQL monitor. Commands end with ; or \g.
>
> Your MySQL connection id is 6
>
> Server version: 5.0.91 Source distribution
>
> Segmentation fault
>
> ###
>
> End of problem:
>
> Could you please suggest a solution for this? Thanks in advance for your
> help.
>
> regards,
>
> Sharath
>



-- 
I have come into the world as a light, so that *NO ONE* who believes in me
should stay in darkness. (John 12:46)


Re: Howto optimize Distinct query over 1.3mil rows?

2010-09-28 Thread John Stoffel

Johnny> BIB_ID is VARCHAR, you should probably try
Johnny> WHERE BIB_ID='464' so MySQL treats the value as a string

Wow!  What a difference that makes!  Time to A) update my queries, or
B) fix the DB schema to NOT use varchar there.  

mysql> select SQL_NO_CACHE distinct(Call_No) from Newspaper_Issues
mysql> WHERE BIB_ID = 464;
+--+
| Call_No  |
+--+
| News | 
| NewsD CT | 
+--+
2 rows in set (3.06 sec)

mysql> select SQL_NO_CACHE distinct(Call_No) from Newspaper_Issues
mysql> WHERE BIB_ID = '464';
+--+
| Call_No  |
+--+
| News | 
| NewsD CT | 
+--+
2 rows in set (0.02 sec)

Thanks a ton for your help, I would never have figured this out, especially
since I was looking down all the wrong rat holes.

Thanks,
John
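Option B from above, sketched; this assumes every existing BIB_ID value really is numeric, so verify before converting:

```sql
-- Sanity check: this should return no rows if every BIB_ID is numeric.
SELECT BIB_ID FROM Newspaper_Issues
WHERE BIB_ID <> CONCAT('', BIB_ID + 0);

-- Then change the column type so integer comparisons use the index natively.
ALTER TABLE Newspaper_Issues MODIFY BIB_ID INT NOT NULL;
```

On a 1.3-million-row table the ALTER does a full table rebuild, so run it in a maintenance window.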




Re: How to extend innodb files?

2010-09-28 Thread Jan Steinman
> From: "Jangita" 
> 
> I do not think there is anything wrong with having one huge file is there?

There is if you're doing incremental back-ups, in which case adding one byte to 
that file costs you 50GB of backup space.


You don't have to take insults personally. You can sidestep negative energy; 
you can look for the good in others and utilize that good, as different as it 
may be, to improve your point of view and to enlarge your perspective. -- 
Stephen R. Covey
 Jan Steinman, EcoReality Co-op 










Re: Howto optimize Distinct query over 1.3mil rows?

2010-09-28 Thread Johan De Meersman
If Call_No is a recurring value, then yes, that is the way it should be done
in a relational schema.

Your index cardinality of 15,000 against 1.3 million rows is reasonable,
although not incredible; is your index cache large enough to accommodate all
your indices?



Re: Howto optimize Distinct query over 1.3mil rows?

2010-09-28 Thread Johnny Withers
BIB_ID is VARCHAR, you should probably try

WHERE BIB_ID='464' so MySQL treats the value as a string

JW



Howto optimize Distinct query over 1.3mil rows?

2010-09-28 Thread John Stoffel

Hi,

I'm running MySQL 5.0.51a-24+lenny3-log  on a Debian Lenny box with
2Gb of RAM and a pair of dual core 2.6Ghz CPUs.  I'm using a pair of
40Gb disks mirrored using MD (Linux software RAID) for both the OS and
the storage of the mysql DBs.  

My problem child is doing this simple query:

   mysql> select distinct Call_No from Newspaper_Issues
   mysql> WHERE BIB_ID = 464;
   +--+
   | Call_No  |
   +--+
   | News | 
   | NewsD CT | 
   +--+
   2 rows in set (2.98 sec)

The Newspaper_Issues table has 1.3 million rows, and has a structure
like this:

   mysql> describe  Newspaper_Issues;
   ++-+--+-+-++
   | Field  | Type| Null | Key | Default | Extra  |
   ++-+--+-+-++
   | Record_No  | int(11) | NO   | PRI | NULL| auto_increment | 
   | BIB_ID | varchar(38) | NO   | MUL | NULL|| 
   | Issue_Date | date| NO   | MUL | NULL|| 
   | Type_Code  | char(1) | NO   | | r   || 
   | Condition_Code | char(1) | NO   | | o   || 
   | Format_Code| char(1) | NO   | | p   || 
   | Edition_Code   | char(1) | NO   | | n   || 
   | Date_Type_Code | char(1) | NO   | | n   || 
   | Ed_Type| tinyint(1)  | NO   | | 1   || 
   | RS_Code| char(1) | NO   | | c   || 
   | Call_No| varchar(36) | YES  | MUL | NULL|| 
   | Printed_Date   | varchar(10) | YES  | | NULL|| 
   | Update_Date| date| NO   | | NULL|| 
   ++-+--+-+-++
   13 rows in set (0.00 sec)


I've tried adding various indexes, and reading up on howto optimize
DISTINCT or GROUP BY queries, but I'm hitting a wall here.  My current indexes 
are:

mysql> show index from Newspaper_Issues;
+------------------+------------+----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table            | Non_unique | Key_name             | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------------+------------+----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Newspaper_Issues |          0 | PRIMARY              |            1 | Record_No   | A         |     1333298 |     NULL | NULL   |      | BTREE      |         |
| Newspaper_Issues |          1 | BIB_ID               |            1 | BIB_ID      | A         |       14980 |     NULL | NULL   |      | BTREE      |         |
| Newspaper_Issues |          1 | Call_No              |            1 | Call_No     | A         |         927 |     NULL | NULL   | YES  | BTREE      |         |
| Newspaper_Issues |          1 | Issue_Date           |            1 | Issue_Date  | A         |       49381 |     NULL | NULL   |      | BTREE      |         |
| Newspaper_Issues |          1 | BIB_ID_Issue_Date    |            1 | BIB_ID      | A         |       14980 |     NULL | NULL   |      | BTREE      |         |
| Newspaper_Issues |          1 | BIB_ID_Issue_Date    |            2 | Issue_Date  | A         |     1333298 |     NULL | NULL   |      | BTREE      |         |
| Newspaper_Issues |          1 | call_no_short        |            1 | Call_No     | A         |          30 |        6 | NULL   | YES  | BTREE      |         |
| Newspaper_Issues |          1 | BIB_ID_call_no_short |            1 | BIB_ID      | A         |       14980 |     NULL | NULL   |      | BTREE      |         |
| Newspaper_Issues |          1 | BIB_ID_call_no_short |            2 | Call_No     | A         |       15503 |     NULL | NULL   | YES  | BTREE      |         |
| Newspaper_Issues |          1 | call_no_bib_id       |            1 | Call_No     | A         |         927 |     NULL | NULL   | YES  | BTREE      |         |
| Newspaper_Issues |          1 | call_no_bib_id       |            2 | BIB_ID      | A         |       15503 |     NULL | NULL   |      | BTREE      |         |
+------------------+------------+----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
11 rows in set (0.00 sec)


So now when I do an explain on my query I get:

mysql> explain select distinct(Call_No) from Newspaper_Issues WHERE BIB_ID 
= 464;

++-+--+---+---++-+--+-+--+
| id | select_type | table|

Re: Not to show until a certain date

2010-09-28 Thread Patrice Olivier-Wilson

On 9/28/10 10:52 AM, Patrice Olivier-Wilson wrote:

Hi... beginner here. Working on a php page and using this



$query_announcements = "SELECT * FROM announcements WHERE
announcements.announcements_expiredate >CURDATE() ORDER BY
announcements_expiredate ASC ";


Client now wants the announcement NOT to show until a specific date.

I have an announcements_postdate in the table. Just not sure what the
WHERE should be to not show until that date.

Thanks much,




Figured it out

SELECT *
FROM announcements
WHERE announcements.announcements_expiredate >CURDATE() AND 
announcements.announcements_postdate
ORDER BY announcements_expiredate ASC

thx


--
Patrice Olivier-Wilson
http://biz-comm.com




Not to show until a certain date

2010-09-28 Thread Patrice Olivier-Wilson

Hi... beginner here. Working on a php page and using this



$query_announcements = "SELECT * FROM announcements WHERE 
announcements.announcements_expiredate >CURDATE() ORDER BY 
announcements_expiredate ASC ";



Client now wants the announcement NOT to show until a specific date.

I have an announcements_postdate in the table. Just not sure what the 
WHERE should be to not show until that date.


Thanks much,



--
Patrice Olivier-Wilson
http://biz-comm.com




Re: How to extend innodb files?

2010-09-28 Thread Johan De Meersman
On Tue, Sep 28, 2010 at 1:46 PM, Vokern  wrote:

> Can I upgrade to innodb_file_per_table smoothly?
>

When you activate it, the server will keep reading and using your existing
InnoDB datafiles; all new tables will be created as .ibd files. Converting
your existing tables is done table by table using ALTER TABLE mytable
ENGINE=InnoDB, which basically does a full table copy, so keep I/O and lock
time in mind when doing this.
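A sketch of that conversion (table name hypothetical; on plain 5.1 the option is not dynamic and must be set in my.cnf before a restart, while the InnoDB Plugin also accepts SET GLOBAL):

```sql
-- In my.cnf under [mysqld]:  innodb_file_per_table
-- Then rebuild each existing table into its own .ibd file:
ALTER TABLE mytable ENGINE=InnoDB;  -- full table copy: expect I/O load and locking
```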


>
> thanks.
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>
>


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: How to extend innodb files?

2010-09-28 Thread Krishna Chandra Prajapati
Hi Vokern,

On a running MySQL server, enabling *innodb_file_per_table* makes no changes
to existing tables. Only newly created InnoDB tables are affected, getting
their own .ibd and .frm files.

Although you can enable it smoothly, it is better to have it from scratch,
so that you can reclaim free space when required simply by dropping a table.

_Krishna

On Tue, Sep 28, 2010 at 5:16 PM, Vokern  wrote:

> 2010/9/28 Krishna Chandra Prajapati :
> > Hi Vokern,
> >
> > I suggest to have a single ibdata1 file and use innodb_file_per_table to
> > have multiple .ibd tables.
> >
>
> Can I upgrade to innodb_file_per_table smoothly?
>
> thanks.
>


Re: How to extend innodb files?

2010-09-28 Thread Vokern
2010/9/28 Krishna Chandra Prajapati :
> Hi Vokern,
>
> I suggest to have a single ibdata1 file and use innodb_file_per_table to
> have multiple .ibd tables.
>

Can I upgrade to innodb_file_per_table smoothly?

thanks.




Re: How to extend innodb files?

2010-09-28 Thread Johan De Meersman
That's a very good point, actually, as that will also immediately free the
space from tables you delete.

My instinct says it's marginally slower, though, although honestly I don't
have any data to support that. Does anyone have benchmarks on this?


On Tue, Sep 28, 2010 at 1:26 PM, Krishna Chandra Prajapati <
prajapat...@gmail.com> wrote:

> Hi Vokern,
>
> I suggest to have a single ibdata1 file and use *innodb_file_per_table* to
> have multiple .ibd tables.
>
> _Krishna
>
> On Tue, Sep 28, 2010 at 11:29 AM, Vokern  wrote:
>
> > Hello,
> >
> > Currently I have the setting:
> >
> >
> >
> innodb_data_file_path=ibdata1:10G;ibdata2:10G;ibdata3:10G;ibdata4:10G:autoextend
> >
> > Because the last file of ibdata4 is very large (more than 50G), if I
> > want extend the data to more files, for example, ibdata5, ibdata6...
> > how  to do it?
> >
> > Thanks!
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:
> > http://lists.mysql.com/mysql?unsub=prajapat...@gmail.com
> >
> >
>



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: How to extend innodb files?

2010-09-28 Thread Krishna Chandra Prajapati
Hi Vokern,

I suggest to have a single ibdata1 file and use *innodb_file_per_table* to
have multiple .ibd tables.

_Krishna

On Tue, Sep 28, 2010 at 11:29 AM, Vokern  wrote:

> Hello,
>
> Currently I have the setting:
>
>
> innodb_data_file_path=ibdata1:10G;ibdata2:10G;ibdata3:10G;ibdata4:10G:autoextend
>
> Because the last file of ibdata4 is very large (more than 50G), if I
> want extend the data to more files, for example, ibdata5, ibdata6...
> how  to do it?
>
> Thanks!
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/mysql?unsub=prajapat...@gmail.com
>
>


Re: SUM Top 10 records

2010-09-28 Thread Евгений Килимчук
I'm sorry!

SELECT sum(rating) FROM (SELECT rating FROM your_table_name ORDER BY rating
DESC LIMIT 5) AS result;

2010/9/28 Евгений Килимчук 

> select `rating`/100+`id` as result from `your_table_name` order by `rating`
> desc LIMIT 5;
>
> ++
> | result |
> ++
> | 4.5000 |
> | 1.2500 |
> | 7.2000 |
> | 6.1500 |
> | 0.1000 |
> ++
>
> 2010/9/28 Tompkins Neil 
>
>> Hi
>>
>>
>> I've a basic table like and want to SUM the top 5 values.  For example if
>> I
>> have
>>
>> id, rating
>> 0, 10
>> 1, 25
>> 2, 5
>> 3, 10
>> 4, 50
>> 5, 1
>> 6, 15
>> 7, 20
>> 8, 9
>>
>> I want my query to sum the values
>>
>> 4,50
>> 1,25
>> 7,20
>> 6.15
>> 0,10
>>
>> Suming a value of 120
>>
>> Any suggestions on how to achieve this ?
>>
>> Cheers
>> Neil
>>
>
>
>
> --
> Best regards,
>
> Eugene Kilimchuk 
>



-- 
Best regards,

Eugene Kilimchuk 


Re: SUM Top 10 records

2010-09-28 Thread Евгений Килимчук
select `rating`/100+`id` as result from `your_table_name` order by `rating`
desc LIMIT 5;

++
| result |
++
| 4.5000 |
| 1.2500 |
| 7.2000 |
| 6.1500 |
| 0.1000 |
++

2010/9/28 Tompkins Neil 

> Hi
>
> I've a basic table like and want to SUM the top 5 values.  For example if I
> have
>
> id, rating
> 0, 10
> 1, 25
> 2, 5
> 3, 10
> 4, 50
> 5, 1
> 6, 15
> 7, 20
> 8, 9
>
> I want my query to sum the values
>
> 4,50
> 1,25
> 7,20
> 6.15
> 0,10
>
> Suming a value of 120
>
> Any suggestions on how to achieve this ?
>
> Cheers
> Neil
>



-- 
Best regards,

Eugene Kilimchuk 


Re: SUM Top 10 records

2010-09-28 Thread Tompkins Neil
Christoph, this SUMs all values?

On Tue, Sep 28, 2010 at 11:25 AM, Christoph Boget  wrote:

> > I've a basic table like and want to SUM the top 5 values.  For example if
> I
> > have
> >
> > Any suggestions on how to achieve this ?
> >
>
> SELECT SUM( rating ) as total_rating FROM my-table ORDER BY rating DESC
> LIMIT 5
>
> IIRC, that should work
>
> thnx,
> Christoph
>
>
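Neil's doubt is justified: without a derived table, SUM() aggregates every row first, and the ORDER BY ... LIMIT then applies to the single-row result, so the query above returns the total of all ratings. Restricting the sum to the top five needs a subquery in the FROM clause (note also that a hyphenated name like my-table must be backquoted):

```sql
SELECT SUM(rating) AS total_rating
FROM (SELECT rating
      FROM `my-table`
      ORDER BY rating DESC
      LIMIT 5) AS top5;  -- sums only the five highest ratings
```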


SUM Top 10 records

2010-09-28 Thread Tompkins Neil
Hi

I've a basic table like and want to SUM the top 5 values.  For example if I
have

id, rating
0, 10
1, 25
2, 5
3, 10
4, 50
5, 1
6, 15
7, 20
8, 9

I want my query to sum the values

4,50
1,25
7,20
6.15
0,10

Suming a value of 120

Any suggestions on how to achieve this ?

Cheers
Neil


Re: MySQL Community Server 5.1.51 has been released

2010-09-28 Thread Vokern
Yeah, I have been using the latest 5.1.51.

# ./mysqld -V
100928 17:07:55 [Warning] '--skip-locking' is deprecated and will be
removed in a future release. Please use '--skip-external-locking'
instead.
./mysqld  Ver 5.1.51-log for pc-linux-gnu on i686 (MySQL Community Server (GPL))

Thanks!


2010/9/28 Karen Langford :
>  Dear MySQL users,
>
> MySQL Community Server 5.1.51, a new version of the popular Open
> Source Database Management System, has been released.  MySQL 5.1.51 is
> recommended for use on production systems.
>
> For an overview of what's new in MySQL 5.1, please see
>
> http://dev.mysql.com/doc/refman/5.1/en/mysql-nutshell.html
>




MySQL Community Server 5.1.51 has been released

2010-09-28 Thread Karen Langford

 Dear MySQL users,

MySQL Community Server 5.1.51, a new version of the popular Open
Source Database Management System, has been released.  MySQL 5.1.51 is
recommended for use on production systems.

For an overview of what's new in MySQL 5.1, please see

http://dev.mysql.com/doc/refman/5.1/en/mysql-nutshell.html

For information on installing MySQL 5.1.51 on new servers or upgrading
to MySQL 5.1.51 from previous MySQL releases, please see

http://dev.mysql.com/doc/refman/5.1/en/installing.html

MySQL Server is available in source and binary form for a number of
platforms from our download pages at

http://dev.mysql.com/downloads/

Not all mirror sites may be up to date at this point in time, so if
you can't find this version on some mirror, please try again later or
choose another download site.

We welcome and appreciate your feedback, bug reports, bug fixes,
patches, etc.:

http://forge.mysql.com/wiki/Contributing

For information on open issues in MySQL 5.1, please see the errata
list at

http://dev.mysql.com/doc/refman/5.1/en/open-bugs.html

The following section lists the changes in the MySQL source code since
the previous released version of MySQL 5.1.  It may also be viewed
online at

http://dev.mysql.com/doc/refman/5.1/en/news-5-1-51.html

Enjoy!

===

C.1.1. Changes in MySQL 5.1.51 (10 September 2010)

   InnoDB Notes:

 * InnoDB Plugin has been upgraded to version 1.0.12. This
   version is considered of General Availability (GA) quality.
   In this release, the InnoDB Plugin is included in source and
   binary distributions, except RHEL3, RHEL4, SuSE 9 (x86,
   x86_64, ia64), generic Linux RPM packages, and any builds
   produced with the icc compiler. It also does not work for
   FreeBSD 6 and HP-UX or for Linux on generic ia64.

   Bugs fixed:

 * Incompatible Change: Replication: As of MySQL 5.5.6, handling
   of CREATE TABLE IF NOT EXISTS ... SELECT statements has been
   changed for the case that the destination table already
   exists:

  + Previously, for CREATE TABLE IF NOT EXISTS ... SELECT,
MySQL produced a warning that the table exists, but
inserted the rows and wrote the statement to the binary
log anyway. By contrast, CREATE TABLE ... SELECT (without
IF NOT EXISTS) failed with an error, but MySQL inserted
no rows and did not write the statement to the binary
log.

  + MySQL now handles both statements the same way when the
destination table exists, in that neither statement
inserts rows or is written to the binary log. The
difference between them is that MySQL produces a warning
when IF NOT EXISTS is present and an error when it is
not.
   This change in handling of IF NOT EXISTS results in an
   incompatibility for statement-based replication from a MySQL
   5.1 master with the original behavior and a MySQL 5.5 slave
   with the new behavior. Suppose that CREATE TABLE IF NOT EXISTS
   ... SELECT is executed on the master and the destination table
   exists. The result is that rows are inserted on the master but
   not on the slave. (Row-based replication does not have this
   problem.)
   To address this issue, statement-based binary logging for
   CREATE TABLE IF NOT EXISTS ... SELECT is changed in MySQL 5.1
   as of 5.1.51:

  + If the destination table does not exist, there is no
change: The statement is logged as is.

  + If the destination table does exist, the statement is
logged as the equivalent pair of CREATE TABLE IF NOT
EXISTS and INSERT ... SELECT statements. (If the SELECT
in the original statement is preceded by IGNORE or
REPLACE, the INSERT becomes INSERT IGNORE or REPLACE,
respectively.)
   This change provides forward compatibility for statement-based
   replication from MySQL 5.1 to 5.5 because when the destination
   table exists, the rows will be inserted on both the master and
   slave. To take advantage of this compatibility measure, the
   5.1 server must be at least 5.1.51 and the 5.5 server must be
   at least 5.5.6.
   To upgrade an existing 5.1-to-5.5 replication scenario,
   upgrade the master first to 5.1.51 or higher. Note that this
   differs from the usual replication upgrade advice of upgrading
   the slave first.
   A workaround for applications that wish to achieve the
   original effect (rows inserted regardless of whether the
   destination table exists) is to use CREATE TABLE IF NOT EXISTS
   and INSERT ... SELECT statements rather than CREATE TABLE IF
   NOT EXISTS ... SELECT statements.
   Along with the change just described, the following related
   change was made: Previously, if an exi

Re: Best encription method?

2010-09-28 Thread Johan De Meersman
On Mon, Sep 27, 2010 at 3:43 PM, Vikram A  wrote:

>
> Normally, if I need to store an integer value I define it as INT. If I
> encrypt it, I must define its type as a string of a different (larger)
> size, depending on the encryption output. This increases the db size. I
> know that if it must be secure I have to choose encryption, but choosing
> encryption leads to more memory usage.
>

Well, yes :-) That's part of the price you're going to pay for the security.

As the programmer said: I can make it better, faster, and cheaper. Pick any
two.


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: How to extend innodb files?

2010-09-28 Thread Johan De Meersman
On Tue, Sep 28, 2010 at 10:23 AM, Vokern  wrote:

> 2010/9/28 Jangita :
>
> >
> > I do not think there is anything wrong with having one huge file is
> there?
> > We have one innodb file of 85GB on ext3.
>

In and of itself, there is no problem with that. You may, however, prefer
multiple smaller files for manageability, for doing file-level backups, to
put files on different disks, and so on.

It works just as well, but there can be reasons to split, too.


> but what about the file increasing in size continuously?
>

The file size increases as you add data to InnoDB tables. This is nothing to
worry about - unless you *aren't* actually adding data :-)
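One way to check whether the growth really is table data: information_schema breaks the footprint down per table (sizes are approximate for InnoDB):

```sql
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.TABLES
WHERE engine = 'InnoDB'
ORDER BY (data_length + index_length) DESC;
```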


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


php - mysql tandem and multi core performance

2010-09-28 Thread Giulio Mastrosanti
Hi all,
I have to set up a new dedicated server for my job.
It will need to host some MySQL databases with PHP accessing them, and
for a number of reasons it needs to be a Windows server (2008).

The amount of data will not be very great (the biggest table will contain
between 100,000 and 200,000 rows), but PHP will perform some complex,
full-text queries against the databases.

Now I have a couple of choices for the server configuration:

1) Bi-Xeon E5504 - 8 cores at 2.4 GHz, with 24 GB RAM

2) Xeon i7 W3520 - 4 cores at 2.66 GHz, with 12 GB RAM

Obviously the second choice is quite a bit cheaper than the first.

My question is:

in your experience, how do PHP/MySQL behave on multiple cores?

Might a fewer-core, higher-frequency system be better than a more-core,
lower-frequency one?

Can a 12 GB vs. 24 GB RAM configuration make a difference? In what respects?

thank you for your answer,

Giulio

Giulio Mastrosanti
giu...@cantoberon.it









Re: How to extend innodb files?

2010-09-28 Thread Vokern
2010/9/28 Jangita :

>
> I do not think there is anything wrong with having one huge file is there?
> We have one innodb file of 85GB on ext3.
> Is there?
>

but what about the file increasing in size continuously?

thanks




Re: How to extend innodb files?

2010-09-28 Thread Jangita

--
From: "Carlos Proal" 
 You have to round the size of the last data file (ibdata4) and add the 
new ones.

You can find more information on the manual:
http://dev.mysql.com/doc/refman/5.1/en/innodb-configuration.html
http://dev.mysql.com/doc/refman/5.1/en/adding-and-removing.html

Carlos


I do not think there is anything wrong with having one huge file, is there? 
We have one InnoDB file of 85 GB on ext3.

Jangita | +254 76 918383 | MSN & Y!: jang...@yahoo.com
Skype: jangita | GTalk: jangita.nyag...@gmail.com 






filter slowquerylog on specific user

2010-09-28 Thread Stefan Onken
Hello,

is there any way to run a slow-query analysis with "mysqldumpslow" on
specific MySQL user connections only? We have multiple applications accessing
the same server, and sometimes even the same databases and tables. Now we
want to analyse only one specific application, which uses a specific
username to log in to the MySQL DB.

The slow query looks like:

# u...@host: my_user[my_user] @  [192.168.111.111]
# Query_time: 0  Lock_time: 0  Rows_sent: 0  Rows_examined: 146



Thanks!
Stonki
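mysqldumpslow itself has no per-user switch, but from 5.1 on the server can write the slow log to a table instead of a file (log_output=TABLE with the slow query log enabled), after which filtering by user is an ordinary query. A sketch, using the username from the post:

```sql
SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
WHERE user_host LIKE 'my_user%'  -- user_host is stored as 'user[user] @ host [ip]'
ORDER BY query_time DESC
LIMIT 20;
```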

-- 
www.stonki.de
www.proftpd.de
www.kbarcode.net
www.krename.net


