On Thu, Dec 1, 2011 at 5:54 PM, Miguel Angel Nieto miguel.ni...@percona.com
wrote:
You should check pt-archiver.
+1. It works very well for this type of job.
- Perrin
On Mon, Oct 15, 2012 at 11:24 AM, Carlos Eduardo Caldi
ce_ca...@hotmail.com wrote:
Somebody knows how can I log or measure the index use ?
http://www.percona.com/doc/percona-toolkit/2.1/pt-index-usage.html
- Perrin
On Thu, Oct 11, 2012 at 7:29 AM, Markus Falb markus.f...@fasel.at wrote:
Should I raise innodb_lock_wait_timeout? What are the disadvantages?
The disadvantage is that if the locks still don't clear by the time
the timeout is reached, you're just making the other process wait
longer before
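For reference, the timeout can also be raised for a single session rather than server-wide (session-settable in MySQL 5.5+; the value here is only an example):
  SET SESSION innodb_lock_wait_timeout = 120;  -- seconds, default 50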
On Wed, Aug 8, 2012 at 1:12 PM, Elim Qiu elim@gmail.com wrote:
But when I use the browser for the system testing (perl_test.cgi and forum.pl),
both report that DBD::mysql is not installed (or not available)
It's possible that DBD::mysql is installed in a place where it's not
in the library
Hi,
This probably means you changed your MySQL libraries but didn't
recompile or reinstall DBD::mysql. Usually this kind of issue is
solved by reinstalling DBD::mysql in the same way that you previously
installed it, so it can build against your current libraries.
- Perrin
On Mon, Aug 6, 2012
On Thu, Feb 23, 2012 at 5:50 PM, Daevid Vincent dae...@daevid.com wrote:
Anyone have any thoughts on how I might optimize this query?
As always, it's all about the indexes. The index it chose on your
main table looks pretty weak. You probably should move those
non-joining columns out of your
On Fri, Mar 4, 2011 at 1:32 AM, Angela liu yyll2...@yahoo.com wrote:
Can Isolation levels be set per transaction? I know isolation levels can be
set per session or globally, but not sure at transaction level.
Given that you can only have one transaction at a time in a session,
there is no
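For completeness, MySQL does accept a per-next-transaction form, though as noted it amounts to much the same thing as setting it for the session:
  SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- applies only to the next transaction
  START TRANSACTION;
  -- ... work ...
  COMMIT;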
On Thu, Oct 14, 2010 at 3:28 AM, Johan De Meersman vegiv...@tuxera.be wrote:
That depends on the type of lock. If no lock type is specified, InnoDB will
prefer row locks, while MyISAM will do table locks.
That may help, unless all your queries are trying to access the same rows
anyway :-)
On Wed, Jun 2, 2010 at 10:28 AM, Lin Chun franks1...@gmail.com wrote:
*I don't know it very clearly, but I think the problem is the derived table; it
seems to take a full scan*
Yes, it has no indexes so it isn't good for very large subqueries.
You should create them as temporary tables instead and give them
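The advice is cut off here, but the usual pattern looks something like this (table and column names are invented for illustration):
  CREATE TEMPORARY TABLE tmp_totals (INDEX (customer_id))
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id;
  SELECT c.name, t.total
  FROM customers c JOIN tmp_totals t ON t.customer_id = c.id;
  DROP TEMPORARY TABLE tmp_totals;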
On Fri, May 28, 2010 at 11:38 AM, Andre Matos andrema...@mineirinho.org wrote:
I have a table that uses auto_increment to generate the Id automatically
working fine.
However, I need to create a new table where the Id must be a number generated
randomly, so I cannot use the auto_increment.
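The answer isn't shown here; two common approaches are a UUID-style string key, or a random integer retried on a duplicate-key error (schema is hypothetical):
  INSERT INTO widgets (id, name) VALUES (UUID(), 'example');   -- string key
  INSERT INTO widgets (id, name)                               -- integer key;
    VALUES (FLOOR(1 + RAND() * 999999999), 'example');         -- retry on duplicate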
On Wed, Apr 21, 2010 at 11:14 AM, shamu...@gmail.com shamu...@gmail.com wrote:
These are my current my.cnf settings. Could anyone take a quick peek and tell
me if I set anything awfully wrong?
If your tables are MyISAM (not InnoDB), then 128MB is much too small
for your key_buffer. You should
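For reference, key_buffer_size can be changed at runtime; the figure below is just an example, not a sizing recommendation:
  SET GLOBAL key_buffer_size = 1073741824;  -- 1GB; also set it in my.cnf to persist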
On Wed, Apr 14, 2010 at 10:22 AM, Dan Nelson dnel...@allantgroup.com wrote:
Switch to InnoDB :)
Seconded. No need to complicate your life with MyISAM workarounds
when InnoDB solves this problem already.
- Perrin
On Thu, Feb 25, 2010 at 2:48 AM, Dan Nelson dnel...@allantgroup.com wrote:
IN is fine (for example ... WHERE field1 IN (1,2,3,4,5,6) is extremely
efficient);
Yes, I meant to say IN/NOT IN subqueries, not value lists.
it's subqueries in general that are killers.
Subqueries in the FROM clause
On Wed, Feb 24, 2010 at 10:11 AM, Yang Zhang yanghates...@gmail.com wrote:
Any ideas on
how to optimize this by convincing mysql to see the independence and use a
const join?
http://www.xaprb.com/blog/2006/04/30/how-to-optimize-subqueries-and-joins-in-mysql/
You need to rewrite as a join or use a
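A minimal sketch of the join rewrite, with invented names:
  -- instead of: SELECT * FROM orders WHERE customer_id IN
  --               (SELECT id FROM customers WHERE region = 'EU');
  SELECT o.*
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
  WHERE c.region = 'EU';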
On Sat, Feb 6, 2010 at 12:11 PM, D. Dante Lorenso da...@lorenso.com wrote:
I have a system that imports about 40 million records every 2 days into a
single table in MySQL. I was having problems with LOAD DATA CONCURRENT
LOCAL INFILE where the table I was importing into would lock until the
On Thu, Jan 21, 2010 at 7:15 AM, Krishna Chandra Prajapati
prajapat...@gmail.com wrote:
1. Inserted data into a table A by user.
2. Selecting data from table A inserting data to table B after applying some
rules(update).
3. Deleting data from table A.
4. Selecting data from table B using some
On Fri, Jan 15, 2010 at 2:54 AM, Johan Machielse
johan.machie...@kpnplanet.nl wrote:
The problem is that multiple users can read and update the same field
simultaneously (worst case) which could lead to unpredictable problems.
There are other ways to handle most cases. For example:
UPDATE
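The example is truncated, but it was presumably an atomic relative update, which sidesteps the read-modify-write race entirely (names assumed):
  UPDATE accounts SET balance = balance + 10 WHERE id = 42;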
On Mon, Dec 14, 2009 at 8:32 PM, D. Dante Lorenso da...@lorenso.com wrote:
I have an items table that is heavily updated with 40 million records
every 1 or 2 days and I need all those items indexed so they can be
searched. The problem that I'm having is that the table is constantly
locked
On Tue, Dec 15, 2009 at 11:58 AM, Keith Murphy bmur...@paragon-cs.com wrote:
Writers do block readers. Just at the row level vs the table level of
MyISAM. It's just much less likely for writers to block readers.
No, they don't. Not unless you use an extreme isolation level.
InnoDB uses
On Mon, Sep 7, 2009 at 8:18 PM, Allen Fowler allen.fow...@yahoo.com wrote:
Note: In this scheme the worker winds up with all new records generated
since the last worker claimed any. Not sure how else to guarantee atomicity.
I would prefer only n records per request. Ideas?
SELECT...FOR UPDATE
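A sketch of how that could claim exactly n rows per request (schema is hypothetical):
  START TRANSACTION;
  SELECT id FROM jobs
  WHERE claimed_by IS NULL
  ORDER BY id LIMIT 10
  FOR UPDATE;              -- locks only these rows against other workers
  UPDATE jobs SET claimed_by = 'worker-1' WHERE id IN (/* ids from above */);
  COMMIT;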
On Thu, Aug 20, 2009 at 10:43 AM, Johnny Withers joh...@pixelated.net wrote:
MySQL can only use one index at a time.
That was fixed years ago, in MySQL 5.0.
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html
- Perrin
On Tue, Aug 18, 2009 at 5:08 AM, Simon Kimber si...@internetstuff.ltd.uk wrote:
I have indexes on siteid, datestamp and msgtype.
Queries such as the following are constantly appearing in the slow
queries log:
SELECT * FROM enquiries WHERE siteid = 59255 AND msgtype = 0 ORDER BY
datestamp
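The reply isn't shown, but the classic fix for this query shape is one composite index, equality columns first and the sort column last (the index name is made up):
  ALTER TABLE enquiries ADD INDEX idx_site_msg_date (siteid, msgtype, datestamp);
That lets MySQL both filter and satisfy the ORDER BY from the same index.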
Nope, Nathan is right: INSERT...SELECT is a locking statement due to
statement-based replication. There's lots of info on this if you
google it. The common solution is to dump to disk and load back in
with SELECT INTO OUTFILE and LOAD DATA INFILE.
- Perrin
On Thu, Jul 9, 2009 at 3:55 PM,
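A minimal sketch of that dump-and-reload approach (path and names are placeholders):
  SELECT col1, col2 INTO OUTFILE '/tmp/batch.txt'
  FROM source_table WHERE batch_id = 42;
  LOAD DATA INFILE '/tmp/batch.txt' INTO TABLE dest_table (col1, col2);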
On Fri, Jun 5, 2009 at 9:09 AM, Aaron Fischerafisc...@smith.edu wrote:
So I build the query in the order that I want it displayed. That is display
221593, then 221591, then CC3762, etc. However, when the query is executed
it looks like it automatically sorts the codes in ascending order, so I
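The reply is truncated, but the usual answer here is ORDER BY FIELD(), which sorts by position in an explicit list (table and column names are guesses; the codes are from the question):
  SELECT * FROM items
  ORDER BY FIELD(code, '221593', '221591', 'CC3762');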
On Wed, May 27, 2009 at 6:43 AM, Simon J Mudd sjm...@pobox.com wrote:
So is the format of the DELETE FROM .. WHERE ... IN ( ... ) clause I
propose valid and SHOULD the optimiser recognise this and be expected
to just find the 2 rows by searching on the primary key?
Not according to the docs:
On Fri, May 22, 2009 at 1:22 PM, Chris W 4rfv...@cox.net wrote:
Of course study to profile is a one to many relationship. How do I run an
update to set p.`Date` equal to s.`Date`?
This is covered in the docs for UPDATE. Read that and come back if
you're stuck.
- Perrin
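For reference, the multi-table UPDATE form looks roughly like this (the join column is assumed; with a one-to-many join, which s row wins is indeterminate unless you restrict it):
  UPDATE profile p
  JOIN study s ON s.profile_id = p.id
  SET p.`Date` = s.`Date`;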
On Tue, Mar 17, 2009 at 12:42 PM, BobSharp bobsh...@ntlworld.com wrote:
These have been written successfully with sub-queries;
I would like to know how they can be done with only JOINs?
http://dev.mysql.com/doc/refman/5.0/en/rewriting-subqueries.html
- Perrin
2009/3/12 Carl c...@etrak-plus.com:
I am still a little puzzled about how we could have a relatively large set
of records (100,000+) and yet not cause any table to be locked as the server
has only 8GB of memory.
What's the relationship you're implying between memory and locking?
Multi-version
I'm afraid enums are useless for anything except compressing your
data, since they don't reject bad values. If you want to limit a
field to a set of values, you need to use a lookup table and a foreign
key constraint.
- Perrin
On Fri, Mar 6, 2009 at 1:35 PM, David Karr
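A minimal sketch of the lookup table plus foreign key approach (names invented; InnoDB is required for the constraint to be enforced):
  CREATE TABLE statuses (
    id   TINYINT UNSIGNED PRIMARY KEY,
    name VARCHAR(20) NOT NULL UNIQUE
  ) ENGINE=InnoDB;
  CREATE TABLE orders (
    id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    status_id TINYINT UNSIGNED NOT NULL,
    FOREIGN KEY (status_id) REFERENCES statuses (id)
  ) ENGINE=InnoDB;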
On Wed, Mar 4, 2009 at 10:27 AM, Morten my.li...@mac.com wrote:
Hi, I was hoping that using SQL_NO_CACHE would help me bypass the query
cache, but judging from the below it doesn't.
You probably just brought the data into the cache and are not hitting
the query cache.
- Perrin
On Wed, Mar 4, 2009 at 11:23 AM, Thomas Spahni t...@lawbiz.ch wrote:
SQL_NO_CACHE means that the query result is not cached. It does not mean
that the cache is not used to answer the query.
Oh, right, he's looking for this:
SET SESSION query_cache_type=off;
- Perrin
2009/3/4 Carl c...@etrak-plus.com:
However, when I had all the pieces in the query
(copy attached), I could easily see it was locking tables using the Server
Monitor in Navicat.
I don't know what that is, but I think you'd better look at something
closer to the bone, like SHOW INNODB STATUS.
On Tue, Mar 3, 2009 at 10:53 AM, Carl c...@etrak-plus.com wrote:
A query that is selecting data for a report locks the files that it accesses
forcing users who are attempting to enter transactions to wait until the
select query is finished.
Is it an INSERT INTO...SELECT FROM? Those lock.
My guess would be that your table is too small to bother using an
index on. There's some information in the MySQL docs about when it
chooses to use an index. For small tables, using one makes the query
slower.
- Perrin
On Tue, Mar 3, 2009 at 7:58 PM, David Karr davidmichaelk...@gmail.com
On Tue, Feb 3, 2009 at 12:24 PM, Jerry Schwartz
jschwa...@the-infoshop.com wrote:
Somebody, I think it was somebody from MySQL, said that you should never put
anything into a WHERE clause that could be put into the ON clause of a JOIN.
My guess is that this helps with the optimization, but it
On Fri, Jan 30, 2009 at 9:32 AM, lance raymond lance.raym...@gmail.com wrote:
basically I have an old fc5 machine running mysql 4.1 and a
newer server running centos5 / mysql 5.0.45.
So, different hardware, different OS, different database server...
could be anything. I suggest you run EXPLAIN
SELECT EXISTS(
SELECT * FROM table WHERE condition
)
- Perrin
On Thu, Oct 23, 2008 at 10:31 PM, Fayland Lam [EMAIL PROTECTED] wrote:
B one SQLs with some operation
SELECT col FROM table WHERE $where
while $count is scalar @cols and real cols is splice(@cols, $start, $rows)
If you're talking about Perl/DBI, doing that normally loads the entire
result set
On Mon, Oct 20, 2008 at 10:15 PM, Sangeetha [EMAIL PROTECTED] wrote:
Gosh I wonder
why mysql does not support subqueries.. It just hangs in the "Copying to tmp
table" status. At least I know what's wrong... Thanks very much
It supports them, but won't optimize them well until 5.2. There's
some
On Thu, Oct 16, 2008 at 3:40 PM, Reyna.Sabina [EMAIL PROTECTED] wrote:
A test using mysql -e "LOAD table ..." was run to see
if 'LOAD' will give an error when loading a record with
the same primary key as an existing record in the table.
Do you mean LOAD DATA INFILE? It will give an error
On Wed, Oct 8, 2008 at 11:56 AM, Jim Lyons [EMAIL PROTECTED] wrote:
Indexes speed up joins. Foreign keys should be indexes themselves, so they
can also speed up joins. If the FK is not an index, it won't help. So,
index your FKs
If you add a FOREIGN KEY constraint in MySQL 5+ it adds an
On Tue, Sep 23, 2008 at 9:29 PM, kalin m [EMAIL PROTECTED] wrote:
count(if(a.Type = 'Signature Based Return', a.amount, '')) group by
order by
I think you're looking for sum().
- Perrin
On Tue, Sep 23, 2008 at 10:09 PM, kalin m [EMAIL PROTECTED] wrote:
i.e. i'd like to (assuming in the above example that _type_ and _amount_
are column names) sum all the amounts but count only the ones with a certain
type, all other amounts that are different type should not be part of the
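The rest of the answer is missing, but conditional aggregation does exactly this: sum everything, and count or sum only the matching type (names assumed from the question):
  SELECT SUM(amount)                                         AS total_amount,
         SUM(IF(type = 'Signature Based Return', 1, 0))      AS sig_count,
         SUM(IF(type = 'Signature Based Return', amount, 0)) AS sig_amount
  FROM payments;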
On Thu, Sep 4, 2008 at 12:15 AM, Ryan Schwartz [EMAIL PROTECTED] wrote:
We're seeing a huge surge in our qps and I'd like to make sure we're tuned
as well as we can be. I'm wondering if I've got some variables maybe set too
large (is that even possible?) ? We do have a fair bit of innodb, so
On Thu, Sep 4, 2008 at 3:23 PM, Ryan Schwartz [EMAIL PROTECTED] wrote:
I'll have to crack open my copy - haven't read through it in a while
If you have the first edition, I recommend getting the newer one. It
has a lot more tuning info.
- Perrin
On Thu, Sep 4, 2008 at 4:26 PM, Josh Miller [EMAIL PROTECTED] wrote:
We're seeing a significantly higher percentage of IO wait on the system,
averaging 20% now with the majority of that being user IO. The system is
not swapping at all.
O_DIRECT may not be the best setting for your hardware.
On Thu, Sep 4, 2008 at 6:43 PM, Josh Miller [EMAIL PROTECTED] wrote:
We'd like to prove InnoDB and move onto that storage engine for the
transaction support, MVCC, etc.. but we're finding that performance is poor.
Well, thousands of large InnoDB database users prove that the engine
itself has
On Fri, Aug 29, 2008 at 4:57 AM, walter harms [EMAIL PROTECTED] wrote:
Since disk space is plentiful, I am thinking about using the name directly. Does
anyone have any idea
what the performance penalty is?
On Wed, Aug 27, 2008 at 8:21 PM, Jose Estuardo Avila [EMAIL PROTECTED] wrote:
Hi, I've been trying to find information on how myisam handles locks. I
thought myisam had locking only on writes and not on reads.
No, readers block writers. This is true of any system that only has read
and write locks
On Thu, Aug 28, 2008 at 10:59 AM, Jose Estuardo Avila
[EMAIL PROTECTED] wrote:
I understand that reads are locked by writes but nowhere does it mention
that reads also block reads.
How could they not? You can't simultaneously read and write the same
data -- the read would get half-written
On Thu, Aug 28, 2008 at 1:14 PM, Jose Estuardo Avila [EMAIL PROTECTED] wrote:
My point is that on my process list there are no writes being done at that
time, only reads; and actually only one read, all other reads are locked as
well as writes.
Sure, that's because the reads are in line behind
On Wed, Aug 13, 2008 at 7:30 AM, Ananda Kumar [EMAIL PROTECTED] wrote:
Can you please guide me to any good books or URLs for mysql sql tuning..
http://www.highperfmysql.com/
- Perrin
On Sun, Aug 10, 2008 at 10:54 AM, Kevin Waterson [EMAIL PROTECTED] wrote:
I have 3 tables (schema below) with categories, questions and answers.
Each category can of course have many questions in it. Also, the
answers are multiple choice, so each question can have several
related answers. I am
On Sat, Aug 9, 2008 at 8:10 AM, John Smith [EMAIL PROTECTED] wrote:
Now here's the question: I need to lock certain rows, so that no other
client can read or write those rows (I want those clients to wait until the
transaction is committed or rolled back). I don't want to lock the entire
table,
On Fri, Aug 1, 2008 at 7:32 AM, Rudolf Lippan [EMAIL PROTECTED] wrote:
How would you get duplicated data? In one case you have an integer
and in the other you have the value, but you still have to store one token
of information with the row.
I meant in the case of storing the value
On Thu, Jul 31, 2008 at 9:46 AM, Jerry Schwartz
[EMAIL PROTECTED] wrote:
Other than the fact that an ENUM can have many more values than a SET, is
there any particular reason to choose one over the other?
The only use for ENUM is to make your data smaller. It offers no
protection against
On Thu, Jul 31, 2008 at 11:38 AM, Mr. Shawn H. Corey
[EMAIL PROTECTED] wrote:
I don't see how that can be; with ENUM the DB has to set aside enough
bytes for the longest identifier.
ENUMs are stored as integers.
The only advantage of ENUM is that
the data is in the same table; you don't have
On Thu, Jul 31, 2008 at 8:00 AM, Fish Kungfu [EMAIL PROTECTED] wrote:
Ideally, I was hoping COUNT() could work like this, BUT it doesn't of
course:
mysql> SELECT COUNT(SELECT aviName, MAX(dateTime), count(*) FROM
aviTrackerMain WHERE DATE(dateTime) LIKE CONCAT(DATE(NOW()),'%') GROUP
BY
On Thu, Jul 31, 2008 at 12:59 PM, Chris W [EMAIL PROTECTED] wrote:
So my question is, is doing that way better than making the query more
complex with all the joins?
If by "better" you mean "faster", then yes, it probably is marginally
faster. It would be simpler to just use the actual values you
On Mon, Jul 21, 2008 at 11:44 AM, mos [EMAIL PROTECTED] wrote:
I can't specify all of the columns in a Set statement in the
OnDuplicate clause because I don't know what the column names are and there
could be 100 columns.
Write code to do it. There is no way around specifying the
On Mon, Jul 21, 2008 at 2:43 PM, mos [EMAIL PROTECTED] wrote:
I thought if MySQL found a duplicate key on the insert, it would
automatically update the existing row that it found with the results from
table1 if I left out the column expressions in the update clause. But
apparently it doesn't
On Sun, Jul 20, 2008 at 12:12 AM, mos [EMAIL PROTECTED] wrote:
Is there a way to get Insert ... select ... On Duplicate Update to update
the row with the duplicate key?
That's what it does.
Why can't it do this?
What makes you think it can't?
- Perrin
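For reference, the update clause must name its columns; VALUES() refers to the value that would have been inserted (column names are examples only):
  INSERT INTO table2 (id, qty, price)
  SELECT id, qty, price FROM table1
  ON DUPLICATE KEY UPDATE qty = VALUES(qty), price = VALUES(price);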
On Wed, Jul 9, 2008 at 12:47 PM, Brian Dunning [EMAIL PROTECTED] wrote:
Can someone spot a way to improve the performance of this query? I see that
every time it runs, it's "Copying to tmp table" and then "Creating sort
index" and taking way too long.
You're sorting by a computed field. That's
On Sat, Jun 21, 2008 at 8:34 AM, Joe Pearl [EMAIL PROTECTED] wrote:
Thanks, but this is not the result I want.
I really think it is. You seem to be misunderstanding how GROUP BY
works. In any other database than MySQL, the SQL you wrote would
simply give an error. In MySQL, it gives you
On Fri, Jun 20, 2008 at 10:50 PM, Joe Pearl [EMAIL PROTECTED] wrote:
I want to get back only the most recent entry for each person and I don't
care about the order. I want the result to show Jim with the acq_date of
2008-01-03, Mary and Sally with the location and date for all of them.
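The reply isn't shown, but this is the classic groupwise-maximum problem; one standard solution joins against a per-person MAX() derived table (names assumed from the question):
  SELECT p.name, p.location, p.acq_date
  FROM people p
  JOIN (SELECT name, MAX(acq_date) AS acq_date
        FROM people GROUP BY name) latest
    ON latest.name = p.name AND latest.acq_date = p.acq_date;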
On Tue, May 20, 2008 at 7:05 AM, Wakan [EMAIL PROTECTED] wrote:
can someone explain where the problems are in this query:
EXPLAIN
SELECT ID
FROM ven_tes
WHERE ID IN (SELECT ID FROM ven_tes WHERE ID_ven=6573)
If that subselect only returns a single result, try using = instead of
IN. MySQL
On Thu, Apr 24, 2008 at 2:54 AM, Sebastian Mendel
[EMAIL PROTECTED] wrote:
IMHO not in this case, cause it is just a simple WHERE field IN ()
I'm pretty sure that just looks like a bunch of ORs to MySQL. If it
didn't use the index with OR, it won't use it with IN.
What usually works is to
On Wed, Apr 23, 2008 at 9:22 PM, D Hill [EMAIL PROTECTED] wrote:
Can anyone shed some light on whether I should index wite_desc to speed things up?
No, since you don't use that column at all. If you're not on MySQL 5,
upgrading to MySQL 5 will help. Otherwise, your best bet is to
rewrite the query
On Tue, Apr 22, 2008 at 11:41 AM, Bruno B. B. Magalhães
[EMAIL PROTECTED] wrote:
I think
the most problematic part of those queries is the date range part; should I
use a different index only for this column to keep the index small?
My experience with doing data warehousing in MySQL was
On Tue, Apr 15, 2008 at 4:21 AM, Sebastian Mendel
[EMAIL PROTECTED] wrote:
use UNION
You can't use UNION to add the results of two queries. It would
return two rows.
- Perrin
On Fri, Apr 11, 2008 at 4:01 PM, Victor Danilchenko
[EMAIL PROTECTED] wrote:
Oooh, this looks evil. It seems like such a simple thing. I guess
creating max(log_date) as a field, and then joining on it, is a solution --
but my actual query (not the abridged version) is already half a
On Mon, Apr 14, 2008 at 5:40 PM, Jonathan Mangin [EMAIL PROTECTED] wrote:
select
round(sum(my_menu.carb * units) + sum(simple.carb),2)
from itemized inner join simple using (uid)
left join my_menu on itemized.personal_id = my_menu.id;
Instead of 218.3 this returns 602, which is
(52.9
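The thread is cut off, but an inflated SUM() like this usually means the join is multiplying rows before aggregation; aggregating in a derived table first avoids the fan-out (a sketch reconstructed from the quoted query; the real schema may differ):
  SELECT ROUND(t.menu_carb + s.carb, 2)
  FROM (SELECT i.uid, SUM(m.carb * i.units) AS menu_carb
        FROM itemized i JOIN my_menu m ON i.personal_id = m.id
        GROUP BY i.uid) AS t
  JOIN simple s USING (uid);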
On Sat, Apr 5, 2008 at 9:28 PM, Jonathan Mangin [EMAIL PROTECTED] wrote:
select itemized.day_date as day_date,
round(sum(my_menu.carb * units) + simple.carb,2)
from itemized inner join simple using (uid) inner join my_menu on
itemized.personal_id = my_menu.id where itemized.uid = 'me' and
On Thu, Apr 3, 2008 at 2:28 PM, Dre [EMAIL PROTECTED] wrote:
1) Several sources seem to suggest MyISAM is a good choice for data
warehousing, but due to my lack of experience in a transaction-less world,
this makes me a little nervous.
MyISAM has the advantage of very fast loading. It's much
On Feb 12, 2008 12:28 PM, Frederic Belleudy [EMAIL PROTECTED] wrote:
select video_id from videos_innodb where state='QUEUE' limit 10 FOR UPDATE;
= it's waiting for the first session to commit, so I cannot get
other videos with the same state!!
commit;
= I get 10 video_id
On Feb 10, 2008 5:30 PM, Yves Goergen [EMAIL PROTECTED] wrote:
My problem is that the sub-select in line 7 (SELECT 1) takes a rather
long time. (When I remove it, it's much faster.)
This is a known issue with EXISTS/NOT EXISTS subqueries in MySQL,
which has some fixes slated for MySQL 6. In
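Until then, the usual workaround is rewriting the subquery as a join; for NOT EXISTS that is the LEFT JOIN / IS NULL pattern (generic names):
  SELECT a.id
  FROM parent a
  LEFT JOIN child b ON b.parent_id = a.id
  WHERE b.parent_id IS NULL;   -- parents with no matching child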
On Feb 11, 2008 4:46 PM, Yves Goergen [EMAIL PROTECTED] wrote:
Thank you for the link. Is there some way to get only the headlines and
a summary for all entries? Reading through the entire contents by month
and finding the misleading captions is hard work for such masses of
content. The search
On Feb 6, 2008 6:40 AM, Britske [EMAIL PROTECTED] wrote:
SELECT * FROM prices WHERE prices.productid IN (SELECT id FROM products
ORDER BY id LIMIT 0, 1000)
However, I'm getting an error-message stating that Limit is not allowed in a
subquery.
How would you approach this?
SELECT *
FROM
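The reply is truncated; LIMIT is disallowed in an IN subquery but fine in a derived table, so the usual rewrite is (table spelling corrected from the question):
  SELECT p.*
  FROM prices p
  JOIN (SELECT id FROM products ORDER BY id LIMIT 0, 1000) AS first_ids
    ON p.productid = first_ids.id;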
On Jan 30, 2008 5:25 PM, Jerry Schwartz [EMAIL PROTECTED] wrote:
The problem starts when I try to fetch the memos. I want one email address,
two or more customer_ids (so I can tell them apart), and all of the memos
for each customer_id (preferably using GROUP_CONCAT()). In other words, this
is
On Jan 31, 2008 1:30 PM, Jerry Schwartz [EMAIL PROTECTED] wrote:
That's rather what I feared. I'm trying to avoid code, mostly because this
should be a one-off run.
If it's a one-shot deal and you don't mind it taking a little time, I
think something like this would work:
SELECT
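The query is cut off at this point; presumably it was something along these lines (names, including memo_date, are assumed from the description):
  SELECT email, customer_id,
         GROUP_CONCAT(memo ORDER BY memo_date SEPARATOR '\n---\n') AS memos
  FROM customers JOIN memos USING (customer_id)
  GROUP BY email, customer_id;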
On Jan 22, 2008 7:18 PM, Mont McAdam [EMAIL PROTECTED] wrote:
In my opinion it should return every record from the table where `method` is
equal to NULL, ZERO or EMPTY STRING, as this table doesn't allow null, and
the field type is varchar, this would be EMPTY STRING's only.
Although it
On Jan 11, 2008 7:22 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
i can only find one source in the manual, where MySQL is using more than one
index:
http://dev.mysql.com/doc/refman/6.0/en/index-merge-optimization.html
Uh, how many sources do you need? It uses multiple indexes, just like
it
On Jan 10, 2008 5:40 PM, Eben [EMAIL PROTECTED] wrote:
I want to be able to do a count of the total records brought back by this
query... but since I am using the group by I can't use COUNT() in the
desired way.
Assuming what you want is all the data from your current query plus an
additional
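The reply is cut off, but counting the groups themselves just means wrapping the grouped query in a derived table:
  SELECT COUNT(*) AS total_groups
  FROM (SELECT some_col          -- the original query, GROUP BY and all
        FROM some_table
        GROUP BY some_col) AS grouped;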
On Jan 9, 2008 4:33 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
usually ... but i do not know of any index capable of having FULLTEXT and
'normal' fields in one index
Does that matter? It would have to be doing a full scan for
SQL_CALC_FOUND_ROWS to work out well.
- Perrin
On Jan 9, 2008 8:34 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
yes, as written in the mentioned article the test is only relevant with
correctly used indexes, but MySQL does not use more than one index, so this
query cannot all be done with indexes
Well, first of all, MySQL 5 does use more
On Jan 4, 2008 5:51 PM, Eben [EMAIL PROTECTED] wrote:
The resultset is paginated on the front end using the
SQL_CALC_FOUND_ROWS functionality...
Usually a bad idea:
http://www.mysqlperformanceblog.com/2007/08/28/to-sql_calc_found_rows-or-not-to-sql_calc_found_rows/
- Perrin
On Jan 3, 2008 4:23 PM, Tanner Postert [EMAIL PROTECTED] wrote:
When I try to add a force index (PRIMARY) after the media table to try and
make it use PRIMARY, rather than TYPE, the optimizer switches and uses no
key at all.
It usually knows better than you do about indexes.
I've tried to
On Dec 30, 2007 1:50 PM, donr2020 [EMAIL PROTECTED] wrote:
Our search engine does a master query INNER JOINed to a series of COUNT (*)
subqueries that return what the number of results would have been had the
user chosen different filters (or no filter at all). As an example:
Hmm. Why are you
On Dec 31, 2007 3:05 PM, donr2020 [EMAIL PROTECTED] wrote:
Sorry, I didn't type the subqueries quite correctly. They all have the same
WHERE part (in this case, WHERE Col1 = X) that essentially joins all the
queries.
It still doesn't make sense to me. Count queries don't return
anything to
On Nov 27, 2007 10:21 AM, mos [EMAIL PROTECTED] wrote:
At 05:57 PM 11/26/2007, you wrote:
The second query might be faster due to caching.
This can be verified by executing:
RESET QUERY CACHE
before executing the second query. This will clear the queries from the cache.
No need to blow
On Nov 13, 2007 4:53 AM, Yves Goergen [EMAIL PROTECTED] wrote:
From that page:
Sometimes it would be useful to lock further tables in the course of
a transaction. Unfortunately, LOCK TABLES in MySQL performs an
implicit COMMIT and UNLOCK TABLES. An InnoDB variant of LOCK TABLES
has been
On Nov 13, 2007 11:39 AM, Baron Schwartz [EMAIL PROTECTED] wrote:
InnoDB can also lock the gap, which will prevent new rows that would
have been returned by the SELECT. The manual has more info on this in
the section on consistent reads in InnoDB. FOR UPDATE will do what you
need.
On Nov 13, 2007 3:32 PM, Yves Goergen [EMAIL PROTECTED] wrote:
I found the Oracle reference and it says that locks can never lock
queries, so reading a table is possible in any case.
No, you just have to use FOR UPDATE and it will block.
- Perrin
On Nov 12, 2007 1:25 PM, Yves Goergen [EMAIL PROTECTED] wrote:
When I start a transaction, then find the maximum value of a column and
use that + 1 to write a new row into the table, how do transactions
protect me from somebody else doing the same thing so that we'd both end
up writing a new
On Nov 12, 2007 2:43 PM, Yves Goergen [EMAIL PROTECTED] wrote:
SELECT COUNT(*) FROM table WHERE name = ?
-- a short delay which is long enough for a concurrent request :(
UPDATE table SET name = ? WHERE id = ?
I think that even with SERIALIZABLE isolation level, this won't lock
anything if it
On Nov 12, 2007 5:24 PM, Yves Goergen [EMAIL PROTECTED] wrote:
Damn, I found out that I need table locking *and* transactions.
What makes you say that?
Maybe I'm really better off using a sequence (like the one PostgreSQL
offers and like it is available as an add-on for Perl [1]).
That Perl
On Nov 12, 2007 5:58 PM, Yves Goergen [EMAIL PROTECTED] wrote:
BEGIN TRANSACTION
SELECT MAX(id) FROM table
INSERT INTO table (id) VALUES (?)
INSERT INTO othertable (id) VALUES (?)
COMMIT
First I find a new id value, then I do several INSERTs that need to be
atomic, and especially roll back
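The rest of the thread is missing here; the classic MySQL substitute for a sequence is a counter table updated through LAST_INSERT_ID(expr), a pattern described in the MySQL manual (table name invented):
  UPDATE id_sequence SET id = LAST_INSERT_ID(id + 1);
  SELECT LAST_INSERT_ID();   -- the new value, private to this connection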
On 8/1/07, Jerry Schwartz [EMAIL PROTECTED] wrote:
Will SELECT SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS() do what you want?
Be careful. While it will give the correct number of rows that would
have been found if LIMIT had not been used, it makes the server
compute the whole result set,
On 8/1/07, Les Fletcher [EMAIL PROTECTED] wrote:
SQL_CALC_FOUND_ROWS isn't an option for us. Right now I am doing two
queries, but I am just trying to see if there is a better way to do the
count query than to just turn it into a derived table and count the
results i.e. SELECT COUNT(*) FROM
On 7/23/07, Sid Lane [EMAIL PROTECTED] wrote:
is either one significantly faster than the other?
Yes, LOAD DATA INFILE is much faster.
are there additional (faster) approaches I have not thought of?
Not that I've found. I think you'd have to write directly to the C
API to beat LOAD DATA
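A minimal example of the LOAD DATA INFILE form being recommended (path, delimiters, and names are placeholders):
  LOAD DATA INFILE '/tmp/data.csv'
  INTO TABLE target_table
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
  (col1, col2, col3);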