Rhys Johnson wrote:
Hello
I am running an online web shop using a MySQL (v5.0.19) backend. The web
app runs really fast whilst the MySQL database is on the local machine
(2x PIII procs, 2GB mem). However, I tried moving the database onto a
dedicated database machine (2x P4 Xeon, 4GB mem, RAID 5) and it runs really
slow. I
[EMAIL PROTECTED] wrote:
Hi,
select count(*) is painfully slow with InnoDB when the number of
records is around 1 million. The select count(*) query on MyISAM takes
0.01 secs and the same query on InnoDB takes around 20.15 secs.
Can anybody suggest how to speed up this query?
Thanks
Prasad
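InnoDB does not keep a cached row count the way MyISAM does, so a COUNT(*) with no WHERE has to scan an index. A common workaround, sketched below with hypothetical table names, is to keep an exact count in a small summary table and maintain it in the same transaction as the data changes:

-- Hypothetical summary table; 'orders' stands in for the real million-row table.
CREATE TABLE row_counts (
    table_name VARCHAR(64) NOT NULL PRIMARY KEY,
    row_count  BIGINT UNSIGNED NOT NULL
) ENGINE=InnoDB;

-- Seed it once from the current data.
INSERT INTO row_counts (table_name, row_count)
SELECT 'orders', COUNT(*) FROM orders;

-- Then, inside the same transaction as each insert or delete on 'orders':
-- UPDATE row_counts SET row_count = row_count + 1 WHERE table_name = 'orders';

-- Reading the count becomes a single-row primary-key lookup.
SELECT row_count FROM row_counts WHERE table_name = 'orders';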
On Tue, 2004-01-06 at 11:24 +0330, mysql@lists.mysql.com wrote:
> abuse?
There will hopefully be some MySQL guru to confirm whether I am right or wrong. I
also did the "inner product" version instead of JOINs, but moved to JOINs
since they are more logical for the way I work with the tables I have.
/Peter
-Original Message-
From: Chris [mailto:[EMAIL PROTECTED]
Sent: W
Peter Lauri wrote:
[snip "Chris"]
The 'where' clause cuts that down to only matching records between the
tables. Without the where, you'd end up with lots of rows but with the
where it will be fine.
[/snip]
Yes, it cuts it down to that number of records in the end, so the final
result set wil
[snip "Chris"]
The 'where' clause cuts that down to only matching records between the
tables. Without the where, you'd end up with lots of rows but with the
where it will be fine.
[/snip]
Yes, it cuts it down to that number of records in the end, so the final
result set will just be a few rows t
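A small illustration of the point being made, with hypothetical tables t1 and t2 (neither is from the thread): the comma form relies entirely on the WHERE clause to avoid the cross product, while the explicit INNER JOIN states the same condition in its ON clause.

-- Comma ("inner product") form: without the WHERE this is the full cross product.
SELECT t1.name, t2.value
FROM t1, t2
WHERE t1.id = t2.t1_id;

-- Equivalent explicit INNER JOIN form.
SELECT t1.name, t2.value
FROM t1
INNER JOIN t2 ON t1.id = t2.t1_id;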
Peter Lauri wrote:
[snip "Chris"]
If you want multiple customers to be associated with each issue you need
3 tables:
create table customers (customerid int auto_increment primary key,
customername varchar(255));
create table issues (issueid int auto_increment primary key, issuetitle
varch
[snip "Chris"]
If you want multiple customers to be associated with each issue you need
3 tables:
create table customers (customerid int auto_increment primary key,
customername varchar(255));
create table issues (issueid int auto_increment primary key, issuetitle
varchar(255));
create table
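The quoted reply is cut off at its third CREATE TABLE. Presumably that is the link table mapping customers to issues many-to-many; a sketch of what it likely looks like, reusing the column names above (the original is not shown in the thread):

-- Presumed third (link) table: one row per customer/issue pair.
CREATE TABLE customer_issues (
    customerid INT NOT NULL,
    issueid    INT NOT NULL,
    PRIMARY KEY (customerid, issueid)
);

-- Example: all issues for customer 42 go through the link table.
SELECT i.issueid, i.issuetitle
FROM issues i
INNER JOIN customer_issues ci ON ci.issueid = i.issueid
WHERE ci.customerid = 42;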
Neil Tompkins wrote:
The problem appears to be retrieving the data which is slow.
Are you sure it's not the actual query that's slow?
If you run the query in mysql (ie through the command prompt) is it fast?
Brian E Boothe wrote:
How do I do a 1-to-many relationship? I have this SQL syntax ("I may need
to add many issues per customer"), so this is what I have.
Please help
SELECT *
FROM
mcsdata.customers
INNER JOIN mcsdata.issues ON (mcsdata.customers.id = mcsdata.issues.id)
WHERE
(mcsdata.cus
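For the strictly one-to-many case described here (many issues per customer), a common layout is a customer id column on the issues table; joining customers.id to issues.id as above would only match rows whose ids happen to be equal. A sketch with assumed column names:

-- Assumes issues.customer_id references customers.id.
SELECT c.*, i.*
FROM mcsdata.customers c
INNER JOIN mcsdata.issues i ON i.customer_id = c.id
WHERE c.id = 1;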
Yes of course you can and...you should
As the my.cnf says
# InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
# row data. The bigger you set this the less disk I/O is needed to
# access data in tables. On a dedicated database server you may set this
# parameter up to 80% of the
I'm currently running RHEL4.1 64-bit, MySQL 4.0.26 with 4GB RAM, and have my
innodb_buffer_pool_size set to 2GB.
My question is: can I increase my RAM to 8GB and set innodb_buffer_pool_size
to 4GB or even 6GB?
Thanks
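A minimal my.cnf sketch of the kind of change being discussed. The figure is only illustrative; the right value depends on what else runs on the machine, it needs a 64-bit build to exceed 4GB, and in this MySQL version it only takes effect after a server restart.

# Illustrative only: on a dedicated 64-bit box with 8GB of RAM, leave headroom
# for the OS, per-connection buffers and the key buffer (for any MyISAM tables).
[mysqld]
innodb_buffer_pool_size = 5120M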
hi guys:
I'm not the sharpest tack in the box. I'm having
trouble making a trigger work.
CREATE TRIGGER update_calldate BEFORE INSERT ON
contacts
FOR EACH ROW
UPDATE contacts SET calldate="2006-08-15" where
calldate NOT IN ('2006-08-15')
;
The trigger accepts in so far as it creates the
trigge
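MySQL does not let a trigger UPDATE the table it fires on, which is why this fails once rows are actually inserted. The usual alternative is to rewrite the incoming row via NEW in the BEFORE INSERT trigger; a sketch using the column and date from the message:

DELIMITER //
CREATE TRIGGER update_calldate BEFORE INSERT ON contacts
FOR EACH ROW
BEGIN
    -- Also catches NULL, which the original NOT IN ('2006-08-15') test misses.
    IF NEW.calldate IS NULL OR NEW.calldate <> '2006-08-15' THEN
        SET NEW.calldate = '2006-08-15';
    END IF;
END//
DELIMITER ;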
On 8/15/06, Dan Buettner <[EMAIL PROTECTED]> wrote:
It would be nice if MySQL would have a more generic 'TEMPORARY TABLES'
permission that would allow one to create, insert, delete from and
drop temporary tables without having to give up insert/update/delete
privileges on the real tables
Can't
When I created the user, I specified:
grant all on ecommerce.* to admin@'localhost' identified by '';
Doesn't that cover the file privilege? I noticed the "navicat" commercial
product doesn't list that privilege specifically. The mysql-administrator
just locks up when I go to manage users. I'
Hi Bruce
What Fred said is true. Only transactions will help you.
Regards
Visolve DB Team
- Original Message -
From: "Frederic Wenzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Visolve DB Team" <[EMAIL PROTECTED]>;
Sent: Tuesday, August 15, 2006 1:01 PM
Subject: Re: multiple t
Bruce,
why do you want to do that at all?
If you need to add values to several tables either at once or (in case
of an error) not at all, you should use transactions.
Fred
On 8/15/06, bruce <[EMAIL PROTECTED]> wrote:
- Original Message -
From: "bruce" <[EMAIL PROTECTED]>
To:
Sent: T
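A minimal sketch of what Fred describes, using the table and column names from the original question and assuming both tables are InnoDB (MyISAM tables would not roll back):

START TRANSACTION;
INSERT INTO cat (x) VALUES (1);
INSERT INTO dog (y) VALUES (2);
-- Either both inserts become visible at COMMIT, or a ROLLBACK undoes both.
COMMIT;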
The problem appears to be retrieving the data which is slow.
> Date: Tue, 15 Aug 2006 11:32:23 -0500> From: [EMAIL PROTECTED]> To: [EMAIL
> PROTECTED]> Subject: Re: Access mySQL database across Internet> CC:
> mysql@lists.mysql.com> > You could look into establishing persistent
> connections.
visolve
that's not the function/purpose of the mysql_insert_id... that function
still requires you to do multiple hits to the db...
but thanks for the attempt...
-Original Message-
From: Visolve DB Team [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 15, 2006 10:59 AM
To: [EMAIL PRO
Hello Brian.
The solution is simple.
Create two tables, a customer table and a product table, in the below format.
Definition of the Customer table
cust_id
Customername
Customer address
--
--
- etc
Productid
We are not sure about your exact business requirement S
hi...
you guys misunderstood my question...
I don't want to insert into one table from multiple tables;
I want to insert into multiple tables simultaneously using a single query...
-Original Message-
From: Visolve DB TEAM [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 15, 2006 10:55
obed wrote:
On 8/15/06, Visolve DB TEAM <[EMAIL PROTECTED]> wrote:
Hello William
Try the below Query to insert next maximum value of the field into same
table
INSERT INTO Sample(id) SELECT MAX(id)+1 FROM Sample
Wooww it works :D ... i didn't know that... great !
Wouldn't this cause a problem if
Hi Bruce
You can use the mysql_insert_id function to insert values into multiple tables.
Please refer to http://www.desilva.biz/mysql/insertid.html for more details.
It also illustrates with examples.
Thanks
Visolve DB Team
- Original Message -
From: "bruce" <[EMAIL PROTECTED]>
To:
Sen
Hello Bruce
Yes, here is the solution to insert the values from multiple tables into a
table. We can achieve this by using JOINs (INNER JOIN, LEFT JOIN, RIGHT
JOIN, JOIN).
Consider the tables Animal,Feed and Animalfeed
Structure of the animal table
Id
name
variety
Structure of the feed tabl
Thank you very much!
-Original Message-
From: Visolve DB TEAM [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 15, 2006 12:44 PM
To: [EMAIL PROTECTED]; mysql@lists.mysql.com
Subject: Re: Incrementing using Max(Field) ?
Hello William
Try the below Query to insert next maximum value of th
Hi all,
I want to be able to link tables by either id or Customer, and I also
want to have a separate table for software information on that customer,
so in one table it would be customer name, and another table would be
products, linked to product id,
so when I do a query it shows both t
Curtis, you might need to make sure 'admin'@'localhost' has the 'FILE'
privilege (it is a global privilege, not a per-database one) for LOAD DATA
INFILE. Note that 'admin'@'%' is not the same as 'admin'@'localhost'.
See http://dev.mysql.com/doc/refman/5.0/en/grant.html
Also here is a comment from the same page:
QUOTE
Posted
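For reference, because FILE is a global privilege, GRANT ALL ON ecommerce.* does not include it; it has to be granted at the *.* level. A sketch using the account from the thread:

GRANT FILE ON *.* TO 'admin'@'localhost';
FLUSH PRIVILEGES;

-- Alternatively, LOAD DATA LOCAL INFILE reads the file through the client and
-- does not need the server-side FILE privilege (provided LOCAL is enabled).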
here is what I get. I'm logged in as the user admin (really a regular user.)
mysql> create temporary table customer_tmp as select * from customer limit 0;
Query OK, 0 rows affected (0.00 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> load data infile '/home/bluetarp/ezauth/customers.txt' in
Hello William
Try the below Query to insert next maximum value of the field into same
table
INSERT INTO Sample(id) SELECT MAX(id)+1 FROM Sample
Thanks
Visolve DB Team
- Original Message -
From: "William DeMasi" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, August 15, 2006 12:34 AM
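The follow-up in the thread asks whether this causes problems under concurrent inserts; whether it does depends on the storage engine and locking, and the usual way to sidestep the question is to let the server assign the value. A sketch, assuming a simple definition for Sample since the real one is not shown:

-- Assumed definition; only the id column is known from the thread.
CREATE TABLE Sample (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
);

-- The server hands out the next id atomically, even under concurrent inserts.
INSERT INTO Sample (id) VALUES (NULL);
SELECT LAST_INSERT_ID();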
You could look into establishing persistent connections. I know it's
possible with mod_perl and I believe PHP. Not sure about ASP.
Basically, this leaves a connection always open, eliminating the
overhead of resolving host, connecting, authenticating, switching to
the proper database. Not really
The connection time to my mySQL database is OK from the ASP page. Are there
any ways to speed this up though ?
> From: [EMAIL PROTECTED]> To: My.SQL " "@ashcomp.net>
> Subject: RE: Access mySQL database across Internet> Date: Fri, 11 Aug 2006
> 12:20:19 -0400> > On Fri, 11 Aug 2006 15:02:12 +0
On Tue, 2006-08-15 at 07:56 -0700, bruce wrote:
> hi...
>
> the mysql docs don't seem to allow this.. but i'll ask.
>
> is there a way to accomplish:
>
> insert into cat,dog (cat.x,dog.y) values (1,2)
>
> allowing me to essentially insert items into multiple tables
> simultaneously...
If your
I like to run CHECK TABLEs every day ... I have a small perl program
that does this for me, which I keep meaning to post for download -
today's shaping up to be light, so while I'm thinking about it maybe
I'll just do it!
Anyway, I agree with Mike's suggestion: back up, then repair. You could
also fir
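For reference, the statements being discussed; the table name is only an example, and REPAIR TABLE applies to MyISAM tables:

-- Check first; repair only after taking a backup, as suggested above.
CHECK TABLE mytable;
REPAIR TABLE mytable;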
hi...
the mysql docs don't seem to allow this.. but i'll ask.
is there a way to accomplish:
insert into cat,dog (cat.x,dog.y) values (1,2)
allowing me to essentially insert items into multiple tables
simultaneously...
thanks
Or possibly that the mysql user on the box does not have access to the
data file in question. Can you post the error messages you get?
Dan
On 8/15/06, Curtis Maurand <[EMAIL PROTECTED]> wrote:
This may sound like a stupid question, but I have to ask. I've been
running a script that goes like
On 8/15/06, mos <[EMAIL PROTECTED]> wrote:
At 09:24 AM 8/15/2006, you wrote:
>Dan
>
>thanks for the help
>so i will remove the redundant keys
>below is the explain plans
>but here is the weird part.
>I exported the database and imported into a testDB to play with and
>the query that was giving me
This may sound like a stupid question, but I have to ask. I've been
running a script that goes like the following.
use ecommerce;
create temporary table customer_tmp as select * from customer limit 0;
load data infile '/home/bluetarp/ezauth/customers.txt' into table
customer_tmp;
at which po
At 09:24 AM 8/15/2006, you wrote:
Dan
thanks for the help
so i will remove the redundant keys
below is the explain plans
but here is the weird part.
I exported the database and imported into a testDB to play with and
the query that was giving me grief in the previous message returned
immediately
On 8/14/06, Dan Buettner <[EMAIL PROTECTED]> wrote:
Randy, it's possible your indexes are sadly out of date, or corrupted somehow.
Can you post the output of
EXPLAIN select * from gallery_object where dir = 'dirname';
as well as of
EXPLAIN select * from gallery_object
where parent_id = 1
and dir
It is likely you are also logging any queries not using an index (doing
full table scans). Check the configuration variable: log_long_format
Cheers,
Jay
On Tue, 2006-08-15 at 14:45 +0200, Dominik Klein wrote:
> I have specified
>
> log-slow-queries
> long-query-time=10
>
> in my.cnf and restar
I have specified
log-slow-queries
long-query-time=10
in my.cnf and restarted my server. After that I see statements logged
into the slow-log-file.
But it seems that mysql logs too much into that file.
When executing this statement:
mysql> show variables like "%tx%";
+---+---
[EMAIL PROTECTED] wrote:
How do I check two tables, is it this? Username and userpass are submitted through a
form and are unique
$sql = "SELECT username, userpass FROM mytable, mytable2 WHERE username = '$username' AND
userpass = '$userpass'";
This way you'll have an ambiguous error as username a
How do I check two tables, is it this? Username and userpass are submitted through a
form and are unique
$sql = "SELECT username, userpass FROM mytable, mytable2 WHERE username =
'$username' AND
userpass = '$userpass'";
Ta,
Ross
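One way to avoid the ambiguous column names (and the accidental cross join) is to query each table separately and combine the results, if the account may live in either table. A sketch using the names from the question, with placeholders instead of interpolated values:

-- The ? placeholders should be bound by the application rather than built by
-- string interpolation, which also avoids SQL injection.
SELECT username, userpass FROM mytable
WHERE username = ? AND userpass = ?
UNION
SELECT username, userpass FROM mytable2
WHERE username = ? AND userpass = ?;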
You are linking against the static compress library, which is intended for
static linking, not inclusion into a dynamic object. I would not rebuild
libz.a with fPIC. You should be linking against libz.so which I'd guess you
haven't got. Check you've got a libz.so and then make sure DBD::mysql picks i