At 06:34 AM 7/18/2012, Johan De Meersman wrote:
Hey all,
Apologies for this mostly-off-topic mail, but I would like to draw
interested parties' attention to the yearly Linux Bier Wanderung -
the Linux Beer Hike - that I'm helping organise this year in
Diksmuide, Belgium :-)
In brief, it's m
At 02:04 AM 6/21/2012, you wrote:
Thank you a lot. The first indexing process finished after about 13
hours, so I think the problem is solved now.
I set myisam_sort_buffer_size to 10GB.
The "first indexing process"???
You should have created all of your indexes with one Alter statement.
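The point above can be sketched like this (table and index names are placeholders): one ALTER statement builds all the indexes in a single pass over the table, instead of one full rebuild per separate statement.

```sql
-- One ALTER = one table rebuild for all three indexes.
ALTER TABLE big_table
  ADD INDEX idx_a (col_a),
  ADD INDEX idx_b (col_b),
  ADD INDEX idx_ab (col_a, col_b);
```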
At 07:27 AM 6/13/2012, Manivannan S. wrote:
Hi all,
I am using MySQL 5.1. I have been inserting 5GB of
data over two days into my database. I am trying to generate a
report by processing this data in my database.
Our clients are planning to keep the record
At 02:45 AM 11/23/2011, you wrote:
Also,
since MySQL 5.1 MyISAM has an algorithm to detect if you are going to
delete a row without ever reading it,
so when you insert it, it will use the blackhole storage engine instead.
:O (NB: it is a joke)
Claudio
Claudio,
I have been using the bl
If you could use MyISAM tables then you could use Merge Tables and
create a table for each day (or whatever period you are collecting
data for). Then when it is time to get rid of the old data, drop the
oldest table (T2001 or T10 for 10 days ago) and create a new
empty table for the new day
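A sketch of that Merge-table layout, with hypothetical table names (MERGE works only over identically defined MyISAM tables):

```sql
-- One MyISAM table per day, plus a MERGE table that unions them.
CREATE TABLE d20100413 (id INT, payload VARCHAR(100)) ENGINE=MyISAM;
CREATE TABLE d20100414 (id INT, payload VARCHAR(100)) ENGINE=MyISAM;

CREATE TABLE all_days (id INT, payload VARCHAR(100))
  ENGINE=MERGE UNION=(d20100413, d20100414) INSERT_METHOD=LAST;

-- Expiring the oldest day is just a DROP plus a new UNION list.
ALTER TABLE all_days UNION=(d20100414);
DROP TABLE d20100413;
```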
At 03:32 AM 10/25/2011, you wrote:
On 25.10.2011 05:45, mos wrote:
> At 05:31 PM 10/24/2011, Reindl Harald wrote:
>
>
>> On 24.10.2011 23:31, mos wrote:
>> > At 11:32 AM 10/24/2011, Reindl Harald wrote:
>> >
>> >
>> >> On 24.10.2011 1
At 05:31 PM 10/24/2011, Reindl Harald wrote:
On 24.10.2011 23:31, mos wrote:
> At 11:32 AM 10/24/2011, Reindl Harald wrote:
>
>
>> On 24.10.2011 18:02, mos wrote:
>> > At 10:34 AM 10/24/2011, you wrote:
>> >> select id from table order by rand() limit
At 11:32 AM 10/24/2011, Reindl Harald wrote:
On 24.10.2011 18:02, mos wrote:
> At 10:34 AM 10/24/2011, you wrote:
>> select id from table order by rand() limit 1;
>> builds, for example, a temporary table the size of the whole table
>
> Because it has to sort the entire table
At 10:34 AM 10/24/2011, you wrote:
select id from table order by rand() limit 1;
builds, for example, a temporary table the size of the whole table
Because it has to sort the entire table, then it
returns the one row. This of course is extremely inefficient. :)
You need to choose a random row by u
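One common workaround (a sketch, assuming a reasonably dense AUTO_INCREMENT id column; table and column names are placeholders) picks a random id first so no full-table sort is needed:

```sql
-- Pick a random value in [MIN(id), MAX(id)] and return the nearest row
-- at or above it. Gaps in the id sequence slightly bias the selection.
SELECT t.*
FROM mytable t
JOIN (SELECT FLOOR(MIN(id) + RAND() * (MAX(id) - MIN(id))) AS rid
      FROM mytable) r
  ON t.id >= r.rid
ORDER BY t.id
LIMIT 1;
```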
At 05:49 AM 10/21/2011, you wrote:
How do I optimize MySQL in easy steps? I know indexing and mapping; is
there any other way?
--
Thanks & Regards,
P.Benaya Paul
Apart from enrolling in Hogwarts, you may be interested in the
book "High Performance MySQL 2nd Edition". It can be found in
se
At 01:58 PM 10/7/2011, you wrote:
Do you have any good documentation with regards creating indexes.
Also information for explain statement and what would be the desired
result of the explain statement?
This might help:
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
http://www.site
to retrieve this
autoinc value.
I hope this clears it up.
Mike
From: mos
To: mysql@lists.mysql.com
Sent: Monday, October 3, 2011 3:01 PM
Subject: Re: How MyISAM handle auto_increment
At 04:46 PM 10/3/2011, you wrote:
> Hi, Folks: I have questions regarding how MyISAM handles
auto_incr
At 04:46 PM 10/3/2011, you wrote:
Hi, Folks: I have questions regarding how MyISAM handles
auto_increment columns. 1. Is there an auto_increment counter for
MyISAM to assign a new value to auto_increment columns?
Yes
2. if MyISAM has the counter, is the counter stored in memory or
disk? Thn
Derek is right. The Sphinx search engine is much
faster than MyISAM's full text search engine. It
will work on InnoDB tables so you don't have to export the data to MyISAM.
Mike
At 01:43 PM 9/29/2011, you wrote:
But I could create an additional myisam table to
overcome my problem providing I
Doc,
When the user clicks on the Category on the left, it is not
changing the URL. If I change the url manually to "Path=2" etc. and
press enter, the page loads correctly with that category. So it looks
like your problem is the PHP code not executing the SQL query at all
or it is not req
At 01:26 PM 7/13/2011, you wrote:
I have a prime table
+-----+-----+---------+
| oid | pv  | descipt |
+-----+-----+---------+
|   1 |   2 | NULL    |
|   2 |   3 | NULL    |
|   3 |   5 | NULL    |
|   4 |   7 | NULL    |
|   5 |  11 | NULL    |
| ... | ... | ...     |
+-----+-----+---------+
Leonardo,
What happens when you use "force index(user_id)" ?
See http://dev.mysql.com/doc/refman/5.1/en/index-hints.html
Mike
At 09:19 AM 7/8/2011, you wrote:
Same as before, but with the new index listed in the possible keys:
++-+---+---+--
I have a table (MyISAM) with summarized Sales data:
Table: ProdSales
Columns:
Sales_Date Date,
Product_Code Char(10),
Amt_Sold Double
There are approx 5,000 products sold each day and there are 3 years worth
of data. I would like to create a table with the top 25 Amt_Sold products
for each
igner. Take a look at the screen
shots. http://www.webyog.com/en/screenshots.php
Mike
> -Original Message-
> From: mos [mailto:mo...@fastmail.fm]
> Sent: Thursday, April 07, 2011 1:36 PM
> To: mysql@lists.mysql.com
> Subject: Re: Any table visualization tools with wires c
At 02:17 PM 4/7/2011, Daevid Vincent wrote:
Does anyone have any suggestions on this? I've written to SQL Maestro twice
and they've not replied either.
Take a look at SqlYog from www.webyog.com. I use their community version
but their paid version has a schema designer. They are responsive to
At 11:20 AM 3/31/2011, you wrote:
At 07:29 AM 3/31/2011, you wrote:
Thanks for your insight! But I'm still worried about the performance of
IN ( big list of values ). Can you tell me how it is implemented?
Suppose I have SELECT a FROM b WHERE c IN (1, 4, 5, 117, 118, 119, ...,
387945)
1) If
At 07:29 AM 3/31/2011, you wrote:
Thanks for your insight! But I'm still worried about the performance of IN
( big list of values ). Can you tell me how it is implemented?
Suppose I have SELECT a FROM b WHERE c IN (1, 4, 5, 117, 118, 119, ...,
387945)
1) If I put 200 values there, does it do
hing or encrypting the id column will make the id's non-contiguous
and impossible to guess.
Mike
How do you fear your db would be violated?
On Mar 10, 2011 6:13 PM, "mos"
<<mailto:mo...@fastmail.fm>mo...@fastmail.fm> wrote:
> I want to bounce some ideas off of MySQL
If the optimizer chooses the wrong index, you can tell it what index to use.
SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI
from Appt_ a force index(id_patient)
LEFT JOIN patient_ p
ON a.IdPatient = p.IdPatient
WHERE a.ApptDate >= '2009-03-01';
See http://dev.mysql.com/doc/re
I want to bounce some ideas off of MySQL developers that use it for web
development. Maybe I'm a little paranoid, but when dealing with the
Internet, I want to make my web app as secure as possible. I'm hoping some
of you can offer me some ideas in this respect.
I am building a web application
At 03:36 PM 2/28/2011, Michael Dykman wrote:
One statement will do:
SELECT Min(prod_price) , Max(prod_price), Avg(prod_price) INTO pl, ph, pa
from products;
- michael dykman
Michael,
Brilliant! Thanks. :-)
Mike
On Mon, Feb 28, 2011 at 4:30 PM, mos wrote:
> I want to hav
I want to have a stored procedure that returns 5 columns from a table and
do some calculations on it. But when I try it, it complains there is a
syntax error on the 2nd "INTO" clause.
It appears I can have only 1 INTO clause per SQL statement. That means I
have to execute 5 different SQL stateme
At 05:13 AM 2/24/2011, you wrote:
Use a quote around the column name or explicitly specify the column as
tablename.columnname (e.g. mytable.group) in the query. For more
details refer to http://dev.mysql.com/doc/refman/5.5/en/reserved-words.html
Thanks
Aveek
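A minimal illustration of both workarounds for a reserved-word column (assuming a column literally named group on a hypothetical table mytable):

```sql
-- `group` is a reserved word; backticks or table qualification both work,
-- because a name that follows a period is always parsed as an identifier.
SELECT `group` FROM mytable;
SELECT mytable.group FROM mytable;
```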
Hmmm. Everyone has given me a great idea. I a
I want to use
select * into outfile "myfile.txt" from table1;
and have it export the data as tab delimited but with the column names. I
need the column names because the import utility will use that to create
the table in another (non-MySQL) database.
As it stands, I can't get the column nam
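One widely used workaround (a sketch; INTO OUTFILE itself has no header option, and the column names here are hypothetical) is to UNION a literal header row onto the data:

```sql
-- The first SELECT supplies the header as string literals, in the same
-- column order as the data SELECT. INTO OUTFILE goes on the final SELECT.
SELECT 'id', 'name', 'age'
UNION ALL
SELECT id, name, age
  FROM table1
  INTO OUTFILE 'myfile.txt'
  FIELDS TERMINATED BY '\t';
```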
At 05:39 PM 2/13/2011, Andre Polykanine wrote:
Hi all,
Hope this question is appropriate here :-).
I've got 4 queries:
$q1=mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='1' ORDER BY RAND() LIMIT 1");
$q2=mysql_query("SELECT * FROM `CandidateQuestions`
Have you tried:
select UserName, Sum(ColB) from Table group by UserName;
or
select UserName, Sum(ColB) from Table where UserName="Emily"
group by UserName;
Mike
At 11:43 AM 1/12/2011, Nicholas Moreno wrote:
My issue is actually in Excel. I'm hoping someone could help me...
I need to to
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com
>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Monday, January 03, 2011 3:25 PM
>To: Jerry Schwart
>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Sunday, January 02, 2011 11:42 PM
>To: mysql@lists.mysql.com
>Subject: Re: This just seems to sl
Jerry,
Use "Load Data Infile" when loading a lot of data. Whoever is giving
you the data should be able to dump it to a CSV file. Your imports will be
much faster.
Mike
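A minimal LOAD DATA sketch for a comma-delimited dump (the file path, table name, and delimiters are placeholders to adjust for the actual export):

```sql
LOAD DATA INFILE '/tmp/import.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip a header row, if the dump has one
```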
At 07:51 PM 1/2/2011, you wrote:
I'm trying to load data into a simple table, and it is taking many hours (and
still
At 12:56 AM 10/25/2010, you wrote:
Hello,
We are a company for gaming.
Our main db is mysql 5.1 installed on Linux.
Currently the hardware for mysql is 2*4 CPU, 16G memory, Raid 10 (four disks).
Now we have the plan to replace the disks with SSD for better performance.
Do you think is it right f
At 06:12 AM 10/24/2010, you wrote:
Regardless of that, it would be nice to know what the parameters are that
cause this slowdown - some people may be stuck with the default version -
companies with a support contract come to mind.
You didn't say whether the slowdown occurs when
1) adding new ro
At 08:55 PM 10/16/2010, you wrote:
After executing the SQL statement, the next Id inserted will be 1000.
Oops. I meant :
After executing the SQL statement, the next Id inserted will be 1001.
Mike
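In other words (a sketch with hypothetical table and values):

```sql
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, user_name VARCHAR(20));
INSERT INTO t (id, user_name) VALUES (1000, 'kenn');  -- explicit id
INSERT INTO t (user_name) VALUES ('next');            -- auto id is 1001
```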
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
At 08:05 PM 10/16/2010, you wrote:
Hello,
I have a table which has the ID key with auto_increment and which is a
primary key.
If I insert the table with the id value which is generated by the
program, for example,
insert table (id, user_name, age) values (1000, 'kenn', 30);
the value 1000 is
At 10:49 AM 9/29/2010, Steve Staples wrote:
Google has not been kind to me on this one, so I figured I would ask
here...
how can I select with NICE options, so that it doesn't KILL my server,
or any other queries...
Do you understand what I am asking?
Steve
Steve,
You might look at http://
At 04:23 PM 9/9/2010, Tompkins Neil wrote:
Hi all,
Needing some advice on my tables design.
Basically I am designing a soccer application, and have a table which
contains player_bids (the values of which a player costs to be transferred
between clubs). Can someone please offer some input on th
At 04:44 AM 9/3/2010, Alexandre Vieira wrote:
Hi Johnny,
mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
++-++---+---+-+-+---+--+---+
| id | select_type | table | type | possible_keys | key | key_
At 02:52 PM 8/18/2010, Xn Nooby wrote:
Below is a generic version of the code I am trying. It does copy the
rows very quickly, but I will have to test to see how quickly the
indices are built. Is the below code what you were suggesting? I had
a little trouble dropping and later adding the prim
At 10:34 AM 8/18/2010, Xn Nooby wrote:
minutes to dump the 70M rows. However, it takes the LOAD FILE command
13 hours to import the CSV file. My
understanding of LOAD FILE was that it was already optimized to load
the data, then build the indices afterwards. I
don't understand why it takes so
At 01:06 AM 8/10/2010, you wrote:
Hello Nunzio,
Instead of Dropping a index, you can disable the indexes and get the work
done and re-enable them.
Disabling keys will NOT disable Primary or Unique keys. They will still be
active.
Mike
If you are ok with this then run the below as a shel
Nunzio Daveri,
Joerg Bruehe gave you a lot of good tips to try and speed things up.
A few hundred queries per second seem to be a relatively small number to
cause the server to crawl. I don't have the rest of your thread, but can
you publish some of the slow queries (see Slow Query Log) an
Robert,
The titles should be sorted as they are stored. The "12" should come
before the other titles, just as they do with other online movie databases.
See http://www.imdb.com/find?s=all&q=angry
The only problem would be of course if someone searches on "Twelve", it
should also return movi
At 09:00 AM 7/15/2010, alba\.albetti wrote:
Browsing the Web I've seen that usually companies look for developers
working on MySQL and Php. Why are the two things linked? I mean I've not
found any requests for just a MySQL developer or DBA (as for example it
happens for Oracle), but it's always
At 10:36 AM 7/13/2010, Rob Wultsch wrote:
On Tue, Jul 13, 2010 at 8:22 AM, mos wrote:
> At 09:29 AM 7/13/2010, æ å¾  wrote:
>>
>> Hello,
>>
>> There are more than 2m records in the table -- fxrate.
>> I create patitions, indexes, but it still takes me
At 09:29 AM 7/13/2010, æå¾ wrote:
Hello,
There are more than 2m records in the table -- fxrate.
I create partitions and indexes, but it still takes me about 7 minutes to
execute the following query
SELECT COUNT(*)
FROM fxrate
WHERE MONTH(quoteDate) = 6
AND quoteHourMinSec BETWEEN "06:00:00" AND
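Part of the cost here is that MONTH(quoteDate) wraps the column in a function, so no index on quoteDate can be range-scanned. A sketch of an index-friendly rewrite (the year 2010 and the upper time bound are illustrative assumptions, since the original query is cut off):

```sql
-- Bare-column range predicates let an index on
-- (quoteDate, quoteHourMinSec) be used; MONTH(quoteDate) = 6 cannot.
SELECT COUNT(*)
FROM fxrate
WHERE quoteDate >= '2010-06-01' AND quoteDate < '2010-07-01'
  AND quoteHourMinSec BETWEEN '06:00:00' AND '07:00:00';
```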
At 12:43 AM 7/9/2010, you wrote:
On Fri, Jul 9, 2010 at 7:30 AM, Neil Tompkins
wrote:
>
> Should we be looking to create a index for all fields that we might be
> perform a select condition in a where clause for ?
Neil,
Keep in mind that the more indexes you add to a table, the slower
yo
Roam,
Why are you installing a client server database on an embedded
system? There are better databases for this task. In the past I've been
able to squeeze a Windows application and an SQL database so it could run
directly off of a floppy disk (remember those?). But it wasn't client
se
David,
You did not define a primary key for the table.
The "PRIMARY KEY ( ) " is expecting a column name inside the "( )"
Mike
At 11:21 AM 6/23/2010, David Stoltz wrote:
Hi Folks,
I use an online SQL Design tool to design my tables, etc. This generates
script code that I can run in phpM
See http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
Mike
At 12:37 PM 6/1/2010, durumdara wrote:
Hi!
I want to test my program that coded into PGSQL, and PySQLite. With these
DBs I have problem on many blob deletion (2 hours) and compact/vacuum (1
hours)...
So I'm trying to port my progr
Chris,
You are using Count when you should be using Sum. Here is a solution
you can try:
SELECT
SUM(IF(DATEDIFF(dateofinterview,dateofbirth)/365.25 BETWEEN 0 AND
18.999, 1,0)) AS "18 and Under",
SUM(IF(DATEDIFF(dateofinterview,dateofbirth)/365.25 BETWEEN 19 AND
65.999, 1
At 10:42 AM 5/21/2010, you wrote:
Dear sirs,
Accidentally, files in a database have been deleted (/var/lib/mysql
/"database"). When entering the mysql console it shows that the database is
created but does not show any table. Is there some method to recover the
information in this database?
Yo
Carlos,
Perfect! Thanks.
Mike
On 5/11/2010 11:06 AM, mos wrote:
At 10:53 AM 5/11/2010, Anirudh Sundar wrote:
Hi Mike,
Did you try this :-
Create table TableDef select * from TableX where 1 = 2;
If you give this statement you will create the new table "TableDef" with
the same
Double
The above data are actually "rows" from TableDef. TableDef only has 2
columns "ColName" and "ColType". The TableDef rows are of course the
columns making up TableX.
Mike
On Tue, May 11, 2010 at 9:06 PM, mos
<<mailto:mo...@fastmail.fm>mo..
I'd like to get the field names and data types of a table, say TableX, and
put it into TableDef using nothing but SQL. I know I can list the table
definition using "Describe Table" and then loop through the results and
insert the first two columns Field and Type into TableDef, but is there a
wa
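One pure-SQL route (a sketch; it assumes a MySQL version with INFORMATION_SCHEMA, i.e. 5.0+, and that TableDef has the ColName/ColType columns described above):

```sql
-- Copy each column's name and type for TableX straight from the
-- data dictionary, in column order.
INSERT INTO TableDef (ColName, ColType)
SELECT COLUMN_NAME, COLUMN_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'TableX'
ORDER BY ORDINAL_POSITION;
```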
At 03:58 PM 5/7/2010, you wrote:
At 12:04 PM 5/7/2010, Stephen Sunderlin wrote:
Can't get slow queries to log. Does this not work in MyISAM?
Sure it does. Have you tried:
slow_query_time = 1
Mike
Sorry, ignore that previous message. (Serves me right for trying to
remember it from the top
At 12:04 PM 5/7/2010, Stephen Sunderlin wrote:
Can't get slow queries to log. Does this not work in MyISAM?
Sure it does. Have you tried:
slow_query_time = 1
Mike
*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*
restarted mysqld - no log.
Creat
At 04:54 PM 4/30/2010, you wrote:
I have a table with over 8 million rows of lat / long data all currently in
decimal(12,8) format (actually some in float(12,8)).
First question - should I have these all in decimal or float? I can't
think of any reason why I would actually do any math via mysq
Yves,
What happens if you replace the "tk.UserId IN (22943, 10899)" with
just one argument " tk.UserId = 22943".
Does it run much faster? If so, the In() statement may not be using an
index. You could try using a Union instead of In() to see if that is any
faster.
I have also found tha
At 12:03 AM 4/22/2010, Aveek Misra wrote:
I have a InnoDB table which contains columns named 'cluster' and 'file'
('cluster' + 'file' is a primary key). I want to add a new column that
tracks the revision number of a file for a given cluster and a file. The
situation is tailor-made for a MyISAM
le, and since it's going to be fairly small, the impact of stuffing it
with data is not going to be great.
It's a bit of a hassle to set up, but once done you don't have to worry
about creating and deleting tables every day.
/ Carsten
mos skrev:
It looks like you only want to k
It looks like you only want to keep the current data, perhaps the current
day's worth, and delete the old data.
I would store the data in separate MyISAM tables; each table would
represent a date, like D20100413 and D20100414. Your program will decide
which table to insert the data into by cre
uffer scales much more
linearly than MyISAM but I don't know the details. We clustered our
data using a longer composite primary key and saw fairly good data
access performance.
I would caution against InnoDB if you foresee heavy random inserts.
Kyong
On Thu, Apr 8, 2010 at 8:21 AM, mos wrot
At 09:02 AM 4/8/2010, alba\.albetti wrote:
I've just installed MySQL on Windows 2000.
I've opened the MS-DOS window and I've
typed >C:\Programs\MySQL\...\bin\mysqladmin -u root -p password mysql2010
After pressing Enter the prompt says >Enter password:, I've entered it,
and I get
mysqladmin: con
At 09:10 PM 4/7/2010, you wrote:
Also depends on your data access pattern as well.
If you can take advantage of clustering by primary key for your
selects, then InnoDB could do it for you.
My suggestion would be to write some queries based on projected
workload, build 2 tables with lots and lots
ve used other databases with AES256 with
no noticeable reduction in speed.
In conclusion, I wouldn't store anything at an ISP that I wouldn't write on
the back of a postcard (unless it was encrypted). :-)
Mike
-Original Message-
From: mos
Sent: Sunday, March 21, 2010 3:
At 05:54 PM 3/20/2010, John Daisley wrote:
Jim
In the case of our encrypted data no user, application or script is given
access to the tables in question. Access is only granted via a couple of
stored procedures and to be honest if you didn't know which ones you would
have a hard job finding
At 04:18 AM 3/18/2010, Tompkins Neil wrote:
Hi
I'm currently looking to develop an on line web application - which is going
to be used by around 200+ concurrent users at any one time. Can
you recommend to me the best/preferred MySQL database engine to use for
example MyISAM ?
Also, in your exp
Keith,
You will need to reference the mysql_insert_id() function after
adding the row to Table_One. Then use the value returned for the rest of
the tables.
http://dev.mysql.com/doc/refman/5.1/en/getting-unique-id.html
Mike
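In SQL terms, the same idea can be sketched with LAST_INSERT_ID() (table and column names are hypothetical):

```sql
INSERT INTO Table_One (name) VALUES ('example');
-- LAST_INSERT_ID() returns the AUTO_INCREMENT value generated above.
-- It is tracked per connection, so concurrent inserts don't interfere.
INSERT INTO Table_Two (one_id, detail)
VALUES (LAST_INSERT_ID(), 'child row');
```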
At 08:48 AM 3/12/2010, you wrote:
I have two tables and I have
I am using MySQL 5.1.30 (MyISAM) and the query optimizer is quite bad at
choosing the proper index.
I often have to force it to use the correct index for Select statements,
otherwise takes 50x to 100x slower than it should be.
Has the query optimizer improved any in later MySQL versions?
TI
Something I didn't realize about full text searches, but they can be used
with multi-table views!
Maybe I'm the last one on the planet to discover this, but I think this is
really neat.
For example, I can create a view with all the fulltext index column of the
customer table (name, address, cu
At 08:59 PM 2/27/2010, you wrote:
Hello everyone,
How would I select a random row that changes daily?
Thanks
The common way would be to do:
select * from table order by rand() limit 1;
You can of course add a Where clause to select only those rows that were
added today.
select * from tab
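For a pick that stays fixed for the whole day, one approach (a sketch; the table name is a placeholder) is to seed RAND() from the current date, so every call that day sorts the rows the same way:

```sql
-- TO_DAYS(CURDATE()) is constant within a day, so the seeded "random"
-- order, and therefore the chosen row, only changes at midnight.
SELECT * FROM mytable
ORDER BY RAND(TO_DAYS(CURDATE()))
LIMIT 1;
```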
At 08:56 AM 2/26/2010, Elim PDT wrote:
Is there a way to create a user that has full control over a specified
database but cannot access any other database at all?
that user cannot delete the specified database, cannot create/delete other
database, but can do everything within the specified data
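The usual shape of such a grant (a sketch with placeholder names; note that fully allowing everything inside the database while forbidding dropping the database itself is awkward, because the DROP privilege covers both tables and the database):

```sql
-- Full control over one database only; no access elsewhere.
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'localhost';
-- Caveat: ALL includes DROP, which also permits DROP DATABASE mydb.
-- Grant an explicit privilege list without DROP to prevent that.
```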
At 03:28 PM 2/23/2010, you wrote:
Is there still no such thing anywhere for Mysql as an index analyser?
Many others have such a thing that will sit and monitor db activity over a
period of time and suggest the exact indexes on each table based on what
it has seen to improve performance
An
I am loading 35 million rows of data into an empty MyISAM table. This table
has 1 primary key (AutoInc) and 1 unique index and 2 non-unique indexes.
Is it going to be any faster if I remove the indexes from the table before
loading the data, load the data, then do an Alter Table .. add index ..
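A middle ground worth benchmarking (a sketch; table and file names are placeholders): keep the table definition and just disable the non-unique keys around the load. Note that the PRIMARY KEY and the unique index are still maintained row by row either way.

```sql
ALTER TABLE big_table DISABLE KEYS;  -- suspend non-unique index updates
LOAD DATA INFILE '/tmp/rows.txt' INTO TABLE big_table;
ALTER TABLE big_table ENABLE KEYS;   -- rebuild non-unique indexes in bulk
```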
At 10:51 AM 2/15/2010, Frank Becker wrote:
Hello,
I tried to response to a posting on the list named in the subject. But it
is rejected with the following message:
SMTP error from remote mail server after end of data:
host lists.mysql.com [213.136.52.31]: 552 spam score exceeded
thresho
At 09:27 AM 2/9/2010, andy knasinski wrote:
I've used the general and slow query log in the past, but I am trying
to track down some queries from a compiled app that never seem to be
hitting the DB server.
My guess is that the SQL syntax is bad and never get executed, but I
don't see any related
At 01:57 PM 1/29/2010, Chris W wrote:
fsb wrote:
as a relatively unsophisticated dbms user (just dynamic web site back end),
i thought it was very interesting to see the kinds of things oracle users do
that i'd never have imagined.
more than 61 joins in a query?! man, those guys are hardcore.
Get yourself a copy of the book High Performance MySQL 2nd Edition
Publisher: O'Reilly Media; 2 edition (June 18, 2008) Language: English
ISBN-10: 0596101716 ISBN-13: 978-0596101718
Here is a brief preview of the first edition:
http://books.google.ca/books?id=iaCCQ13_zMIC&printsec=frontcover&d
At 06:15 AM 1/21/2010, Krishna Chandra Prajapati wrote:
Hi Abhishek.
insert, select and delete are in the same proportion.
1. Inserted data into a table A by user.
2. Selecting data from table A inserting data to table B after applying some
rules(update).
3. Deleting data from table A.
4. Selec
At 04:55 PM 1/14/2010, Jacek Becla wrote:
Hello,
I need to convert a non-materialized MySQL view to
a MySQL table. Are there any tools to do that?
Thanks,
Jacek
Jacek,
Can't you just do a:
create table mytable select * from myview;
???
Mike
At 08:36 AM 1/13/2010, Intell! Soft wrote:
As I told, Backticks don't work.
But it is valid to use spaces in DB's because, mySQL Administrator can
handle it and my third party tool MySQL Maestro can also handle it. So,
when those two programs get the tables, then there has to be a solution
for it
At 07:00 AM 1/13/2010, F.A.I.Z.A.L wrote:
where (
ATX_Status.Status like '%running%' or ATX_Status.Status like '%queued%' or
ATX_Status.Status like '%migrating%' or ATX_Status.Status like '%loading%'
or
ATX_Status.Status like '%configuring%' or ATX_Status.Status like
'%activating%' )
It looks l
Some other info you can post to the list to help us zero in on the problem:
1) What is the SQL used to update the tables
2) What is the table structure? "Show Create Table mytablex"
3) How long does it take for how many rows?
4) How busy is the CPU when the updates are occurring?
5) Show Status -
At 09:56 AM 1/11/2010, Johnny Withers wrote:
Victor,
The temporary table solution is not a good one. Use a single table and store
each item put into a cart identified by the session ID of the user. A
process should clean out this table periodically since there are "shoppers"
that abandon carts f
At 08:25 AM 12/31/2009, you wrote:
OK, this problem (for me at least) is becoming a dead horse which I beat
daily. I was having problems, I thought, with a spatial query running
ridiculously slowly. Turns out the previous non-spatial index query I was
using is also running super slow for reason
Monty,
Good looking site. I just signed the petition and hope others will too.
Go get'em Monty!
Mike
Instead of forming a group hug and singing kumbaya to save MySQL, why
don't we take some action to prevent Oracle from keeping
(destroying/crippling) MySQL?
Who are the regulatory bodies that will decide whether Oracle can keep
MySQL? Both in Europe and North America? Asia?
What are their ema
At 03:59 AM 12/17/2009, you wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly large set of databases I'm backing up each Friday. The
dump takes about 12.5h to finish, generating a ~172 GB file. When I try
to load it though, *after* manually dumping the old databases, it takes
1.5~2 days
At 07:32 PM 12/14/2009, you wrote:
All,
I am using MySQL currently, but am starting to think that maybe I don't
really need to use an RDBMS. The data I am storing ends up getting
indexed with Sphinx because I have full-text indexes for about 40 million
records.
I have an "items" table that
At 01:17 PM 12/13/2009, bharani kumar wrote:
hi friend
I'm working in PHP and MySQL; I want to write a join query over the 4
tables. I'm very confused how to do it.
I'm not sure why you would want to join all of the tables together and then
export them. It will mean you will have rows repeated
At 12:13 AM 12/6/2009, Oscar wrote:
hi all,
Someone told me that UPDATE equals DELETE old row + INSERT new row.
Anybody clarify it?
Are you talking about InnoDB or MyISAM tables?
I'm only familiar with MyISAM tables. Perhaps he was referring to an Update
that added more text to a variable
I'm trying to explicitly specify the location of the my.ini file on Windows XP
I'm using:
u:\mysql5.1\bin\mysqld --install --defaults-file="u:\\mysql5.1\\my.ini"
and it complains "Failed to install the service (Couldn't create service)"
I've tried "/" and "\" and "\\" but nothing seems to fix i
7;s how many part numbers we have.)
> and for speed, does Transactions.PartNumber have an index?
Done... I didn't think that you could do that for columns where duplicates
were allowed.
mos wrote:
> Do the Left Join as the other person said and also replace the
> "Distinct"
At 01:21 PM 12/1/2009, David Shere wrote:
select distinct a.PartNumber as 'Part Number', (
select count(1)
from Transactions b
where b.PartNumber = a.PartNumber) as 'No. Sold'
from listings a
order by a.PartNumber
It currently takes 30 seconds to run. Transactions has 1200 records and
At 12:09 AM 11/28/2009, Krishna Chandra Prajapati wrote:
Hi Mos,
Of the two commands below, is 1 faster than 2?
1. LOAD DATA LOCAL INFILE '/path/pet.txt' INTO TABLE pet;
2. LOAD DATA LOCAL INFILE '/path/pet.txt' INTO TABLE pet LINES TERMINATED
BY '\r\n