Sir,
I will adopt the encryption and bear the additional storage cost.
Thank you for the guidance.
Vikram A
From: Johan De Meersman
To: Vikram A
Cc: MY SQL Mailing list
Sent: Tue, 28 September, 2010 2:08:57 PM
Subject: Re: Best encryption method?
On Tue, Sep 28, 2010 at 6:24 PM, Jan Steinman wrote:
> > From: "Jangita"
> >
> > I do not think there is anything wrong with having one huge file is
> there?
>
> There is if you're doing incremental back-ups, in which case adding one
> byte to that file costs you 50GB of backup space.
>
That is
Order by reservation.time
JW
On Tuesday, September 28, 2010, Chris W <4rfv...@cox.net> wrote:
> I have the following query that is giving me problems.
>
> SELECT DATE_FORMAT(`Time`, '%h:%i%p') as `Time`
> FROM `reservation`
> ORDER BY `Time`
>
> Problem is it sorts wrong because of the date form
On 9/28/2010 10:04 AM, Patrice Olivier-Wilson wrote:
Figured it out
SELECT *
FROM announcements
WHERE announcements.announcements_expiredate >CURDATE() AND
announcements.announcements_postdate
ORDER BY announcements_expiredate ASC
I think you probably should do it like this.
SELECT *
FROM
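The reply above is cut off in the archive. Based on the requirement described later in the thread (announcements should not appear before their post date), the intended query was presumably along these lines; the extra condition on announcements_postdate is an assumption, and the column names follow Patrice's schema:

SELECT *
FROM announcements
WHERE announcements.announcements_expiredate > CURDATE()
  AND announcements.announcements_postdate <= CURDATE()
ORDER BY announcements_expiredate ASC;

An announcement then shows only between its post date and its expiry date.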
I have the following query that is giving me problems.
SELECT DATE_FORMAT(`Time`, '%h:%i%p') as `Time`
FROM `reservation`
ORDER BY `Time`
Problem is it sorts wrong because of the date format function output
with am and pm. I guess I should have named things differently but I
would rather not
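As JW suggests above, the fix is to sort on the underlying column rather than on the alias: because the SELECT alias is also called `Time`, the unqualified ORDER BY `Time` sorts the formatted '%h:%i%p' strings as text, which is exactly the wrong ordering Chris sees. Qualifying the column name sidesteps the alias without renaming anything, as in this sketch:

SELECT DATE_FORMAT(`Time`, '%h:%i%p') AS `Time`
FROM `reservation`
ORDER BY `reservation`.`Time`;

The formatted string is still returned, but the rows are now ordered by the actual time value.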
Also note, 5.5 isn't production ready. 5.1 is the current GA release.
-----Original Message-----
From: Hank [mailto:hes...@gmail.com]
Sent: Tuesday, September 28, 2010 3:29 PM
To: mysql@lists.mysql.com
Subject: Migrating my mindset from MyISAM to InnoDB
Primarily due to many positive posts I've
1. Generally reducing fragmentation in the data/index files will reduce the
footprint of tables on disk, and can be more efficient to query. With innodb
you need to be using the innodb-file-per-table option, and then you can use
OPTIMIZE TABLE table; to rebuild it. You don't get detailed progr
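A minimal sketch of that sequence, assuming innodb_file_per_table is already enabled in my.cnf (depending on the server version this may require a restart) and using a placeholder table name:

-- my.cnf, [mysqld] section:
--   innodb_file_per_table

OPTIMIZE TABLE my_table;
-- For an InnoDB table MySQL reports "Table does not support optimize, doing recreate
-- + analyze instead": the table is rebuilt, which rewrites data and indexes compactly
-- into the table's own .ibd file and removes the fragmentation.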
Primarily due to many positive posts I've seen about MySQL 5.5 and
advances in InnoDB, I'm seriously considering converting all my MyISAM
databases to InnoDB. I don't need many of the InnoDB features, but
if I'm going to upgrade from 4.1.14 to 5.5, I might as well bite the
bullet since that seems
I'm using mysql 5.0.51a (yes I know it's old but I cannot upgrade at the
moment) on linux.
My challenge is that mysql admin does not wait long enough for the mysql
database(s), which are all InnoDB and VERY large (billions of rows), and
eventually just kills the db.
This causes InnoDB to try and repair
Try maatkit mk-query-digest
mk-query-digest --filter '($event->{user} || "") =~ m/user/' mysql.slow.log
2010/9/28 Stefan Onken
> Hello,
>
> is there any way to run a slow query analysis with "mysqldumpslow" only on
> specific mysql user connections? We have multiple applications accessing
> the
Hi,
Are there any more solutions for this, please?
Thanks,
Sharath.
On Mon, Sep 27, 2010 at 6:17 PM, Sharath Babu Dodda wrote:
> Hi there,
>
> I installed Apache, MySQL and PHP on Redhat Linux 64 bit. However, when I
> try to invoke MySQL, I'm getting the Segmentation fault error and I'm not
> able to see
Johnny> BIB_ID is VARCHAR, you should probably try
Johnny> WHERE BIB_ID='464' so MySQL treats the value as a string
Wow! What a difference that makes! Time to A) update my queries, or
B) fix the DB schema to NOT use varchar there.
mysql> select SQL_NO_CACHE distinct(Call_No) from Newspap
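The speed-up comes from avoiding an implicit type conversion: comparing a VARCHAR column against a bare number forces MySQL to cast the column value for every row, so the index on BIB_ID cannot be used. A sketch of the two forms (the table name here is a placeholder, since the real one is truncated in the archive):

-- slow: numeric literal, every BIB_ID is cast to a number, index skipped
SELECT SQL_NO_CACHE DISTINCT(Call_No) FROM newspaper_items WHERE BIB_ID = 464;

-- fast: string literal matches the column type, the index on BIB_ID is used
SELECT SQL_NO_CACHE DISTINCT(Call_No) FROM newspaper_items WHERE BIB_ID = '464';

Running EXPLAIN on each statement shows the difference in the key column.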
> From: "Jangita"
>
> I do not think there is anything wrong with having one huge file is there?
There is if you're doing incremental back-ups, in which case adding one byte to
that file costs you 50GB of backup space.
You don't have to take insults personally. You can sideste
If Call_No is a recurring value, then yes, that is the way it should be done
in a relational schema.
Your index cardinality of 15,000 against 1.3 million rows is reasonable,
although not incredible; is your index cache large enough to accommodate all
your indices?
On Tue, Sep 28, 2010 at 5:02 PM
BIB_ID is VARCHAR, you should probably try
WHERE BIB_ID='464' so MySQL treats the value as a string
JW
On Tue, Sep 28, 2010 at 10:02 AM, John Stoffel wrote:
>
> Hi,
>
> I'm running MySQL 5.0.51a-24+lenny3-log on a Debian Lenny box with
> 2GB of RAM and a pair of dual-core 2.6GHz CPUs. I'm u
Hi,
I'm running MySQL 5.0.51a-24+lenny3-log on a Debian Lenny box with
2GB of RAM and a pair of dual-core 2.6GHz CPUs. I'm using a pair of
40GB disks mirrored using MD (Linux software RAID) for both the OS and
the storage of the mysql DBs.
My problem child is doing this simple query:
mys
On 9/28/10 10:52 AM, Patrice Olivier-Wilson wrote:
Hi... beginner here. Working on a php page and using this
$query_announcements = "SELECT * FROM announcements WHERE
announcements.announcements_expiredate >CURDATE() ORDER BY
announcements_expiredate ASC ";
Client now wants the announcement
Hi... beginner here. Working on a php page and using this
$query_announcements = "SELECT * FROM announcements WHERE
announcements.announcements_expiredate >CURDATE() ORDER BY
announcements_expiredate ASC ";
Client now wants the announcement NOT to show until a specific date.
I have an ann
On Tue, Sep 28, 2010 at 1:46 PM, Vokern wrote:
> Can I upgrade to innodb_file_per_table smoothly?
>
When you activate it, the db will keep reading and using your existing
innodb datafiles. All new tables will be created using .ibd files.
Converting your existing tables is done table per table us
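Johan's reply is cut off here; the table-by-table conversion he refers to is normally a null ALTER, sketched below with a placeholder table name:

-- with innodb_file_per_table already active:
ALTER TABLE some_table ENGINE=InnoDB;
-- rebuilds the table, moving its rows and indexes out of the shared ibdata files
-- and into a new some_table.ibd file

Note that the space this frees inside the shared ibdata files is reused by InnoDB but is never handed back to the operating system, so the ibdata files themselves do not shrink.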
Hi Vokern,
On a running MySQL server, enabling *innodb_file_per_table* makes no changes
to the existing tables. Only newly created (InnoDB) tables will be affected
and have their own .ibd and .frm files.
Although you can enable it smoothly, it's better to have it from scratch, so
that you can re
2010/9/28 Krishna Chandra Prajapati :
> Hi Vokern,
>
> I suggest to have a single ibdata1 file and use innodb_file_per_table to
> have multiple .ibd tables.
>
Can I upgrade to innodb_file_per_table smoothly?
thanks.
That's a very good point, actually, as that will also immediately free the
space from tables you delete.
My instincts say that it's marginally slower, though honestly I don't have
any data to support that. Does anyone have benchmarks on that?
On Tue, Sep 28, 2010 at 1:26 PM, Krishn
Hi Vokern,
I suggest to have a single ibdata1 file and use *innodb_file_per_table* to
have multiple .ibd tables.
_Krishna
On Tue, Sep 28, 2010 at 11:29 AM, Vokern wrote:
> Hello,
>
> Currently I have the setting:
>
>
> innodb_data_file_path=ibdata1:10G;ibdata2:10G;ibdata3:10G;ibdata4:10G:autoe
I'm sorry!
SELECT sum(rating) FROM (SELECT rating FROM your_table_name ORDER BY rating
DESC LIMIT 5) AS result;
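Run against Neil's sample data from later in the thread, this subquery version should return the expected total of 120. A quick self-contained check (your_table_name is the placeholder name used in the thread):

CREATE TABLE your_table_name (id INT, rating INT);
INSERT INTO your_table_name (id, rating) VALUES
  (0,10),(1,25),(2,5),(3,10),(4,50),(5,1),(6,15),(7,20),(8,9);

SELECT SUM(rating) AS total_rating
FROM (SELECT rating FROM your_table_name ORDER BY rating DESC LIMIT 5) AS top5;
-- top five ratings are 50, 25, 20, 15 and 10, so total_rating = 120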
2010/9/28 Евгений Килимчук
> select `rating`/100+`id` as result from `your_table_name` order by `rating`
> desc LIMIT 5;
>
> +--------+
> | result |
> +--------+
> | 4.5000 |
> | 1.25
select `rating`/100+`id` as result from `your_table_name` order by `rating`
desc LIMIT 5;
+--------+
| result |
+--------+
| 4.5000 |
| 1.2500 |
| 7.2000 |
| 6.1500 |
| 0.1000 |
+--------+
2010/9/28 Tompkins Neil
> Hi
>
> I've a basic table like and want to SUM the top 5 values. For example if
Christoph, this SUMs all values?
On Tue, Sep 28, 2010 at 11:25 AM, Christoph Boget wrote:
> > I've a basic table like and want to SUM the top 5 values. For example if
> I
> > have
> >
> > Any suggestions on how to achieve this ?
> >
>
> SELECT SUM( rating ) as total_rating FROM my-table ORDER
Hi
I've a basic table like and want to SUM the top 5 values. For example if I
have
id, rating
0, 10
1, 25
2, 5
3, 10
4, 50
5, 1
6, 15
7, 20
8, 9
I want my query to sum the values
4,50
1,25
7,20
6,15
0,10
Summing to a value of 120
Any suggestions on how to achieve this ?
Cheers
Neil
Yeah I have been using the latest 5.1.51 one.
# ./mysqld -V
100928 17:07:55 [Warning] '--skip-locking' is deprecated and will be
removed in a future release. Please use '--skip-external-locking'
instead.
./mysqld Ver 5.1.51-log for pc-linux-gnu on i686 (MySQL Community Server (GPL))
Thanks!
20
Dear MySQL users,
MySQL Community Server 5.1.51, a new version of the popular Open
Source Database Management System, has been released. MySQL 5.1.51 is
recommended for use on production systems.
For an overview of what's new in MySQL 5.1, please see
http://dev.mysql.com/doc/refman/5.1/en/mys
On Mon, Sep 27, 2010 at 3:43 PM, Vikram A wrote:
>
> Normally, if I need to store an integer value I have to define it as int.
> If I encrypt this, I must define its type as a string of a different size
> [it depends upon the encryption output] than its original size. It
> increases the db size. I
>
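The overhead Vikram describes is bounded if a block cipher is used. As a sketch of one common option (the thread never settles on a specific function, so this is an assumption, and the key literal is a placeholder): MySQL's AES_ENCRYPT() pads to the 16-byte AES block size, so any value shorter than 16 bytes encrypts to exactly 16 bytes and fits a VARBINARY(16) column:

CREATE TABLE secure_values (
  id      INT PRIMARY KEY,
  val_enc VARBINARY(16)   -- AES_ENCRYPT of anything under 16 bytes is exactly 16 bytes
);

-- 'placeholder_key' stands in for a real key managed by the application
INSERT INTO secure_values VALUES (1, AES_ENCRYPT('12345', 'placeholder_key'));

SELECT id, CAST(AES_DECRYPT(val_enc, 'placeholder_key') AS UNSIGNED) AS val
FROM secure_values;

The trade-off is that the encrypted column can no longer be used for range scans or arithmetic inside SQL.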
On Tue, Sep 28, 2010 at 10:23 AM, Vokern wrote:
> 2010/9/28 Jangita :
>
> >
> > I do not think there is anything wrong with having one huge file is
> there?
> > We have one innodb file of 85GB on ext3.
>
In and of itself, there is no problem with that. You may, however, prefer
multiple smaller f
Hi all,
I have to set up a new dedicated server for my job,
it will need to host some MySQL databases with PHP accessing them,
for a number of reasons it needs to be a Windows server (2008),
the amount of data will not be very great (the biggest table will contain
between 100,000 and 200,000 rows
2010/9/28 Jangita :
>
I do not think there is anything wrong with having one huge file, is there?
> We have one innodb file of 85GB on ext3.
> Is there?
>
but what about the problem of the file increasing continuously?
thanks
--
From: "Carlos Proal"
You have to round the size of the last data file (ibdata4) and add the
new ones.
You can find more information on the manual:
http://dev.mysql.com/doc/refman/5.1/en/innodb-configuration.html
http://dev.mysql.com/doc/refman
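A hedged my.cnf sketch of Carlos's point, with made-up sizes: before appending a new data file, the previously autoextending last file (ibdata4 in Vokern's setup) must be re-declared with the size it has actually grown to, rounded down to whole megabytes:

# [mysqld] section of my.cnf
# original setting:
#   innodb_data_file_path=ibdata1:10G;ibdata2:10G;ibdata3:10G;ibdata4:10G:autoextend
# suppose ibdata4 has autoextended to about 14336 MB (an invented figure):
innodb_data_file_path=ibdata1:10G;ibdata2:10G;ibdata3:10G;ibdata4:14336M;ibdata5:10G:autoextend

The server then has to be restarted so that the new ibdata5 file is created.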
Hello,
is there any way to run a slow query analysis with "mysqldumpslow" only on
specific mysql user connections? We have multiple applications accessing
the same server and sometimes even the same databases and tables. Now we
want to analyse only one specific application which uses a specific
user