Hello,
Recently a lot of MySQL clients try to get around the
host-based privilege system of MySQL by using a PHP tunneling
method.
In this method they call up a PHP file on the server,
and the PHP file executes a query and sends the data
in XML format.
I am using the C API and I was just wondering if
somebo
At 05:51 PM 7/9/2004, you wrote:
Thanks everyone for helping out. I took Michael's advice and made a
new table called ranking with two columns. It definitely cleared some
things up, but I am still having issues using the BETWEEN operator. I
just need to pull up everything BETWEEN 10 and 18 a
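The range test itself can be sketched in runnable form (SQLite stand-in with invented table and column names): BETWEEN is inclusive on both ends, so 10 and 18 themselves qualify.

```python
import sqlite3

# Minimal sketch of "everything BETWEEN 10 and 18"; names are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ranking (id INTEGER, rank INTEGER)")
con.executemany("INSERT INTO ranking VALUES (?,?)",
                [(1, 5), (2, 10), (3, 14), (4, 18), (5, 21)])
rows = con.execute(
    "SELECT id FROM ranking WHERE rank BETWEEN 10 AND 18 ORDER BY id"
).fetchall()
print(rows)  # both endpoints (10 and 18) are included in the range
```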
Peter,
My apologies on calling you Paul in my previous response instead of
Peter.
On Jul 9, 2004, at 10:30 PM, Peter Paul Sint wrote:
Bill, thank you for the prompt help.
This works.
I have just to find out how to get the Startup Item (or some
replacement) to open MySQL with --old_passwords (ju
Paul,
You can place --old-passwords (without the leading dashes) in the
my.cnf file under the option group [mysqld] instead of passing it on
the command line.
The my.cnf file probably isn't on your system by default, at least it
wasn't on mine until I created it. This file is generally placed
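For reference, a minimal sketch of what that option-file entry might look like (the path is an assumption; the file's location varies by platform, /etc/my.cnf being a common Unix default):

```ini
# my.cnf — location varies by platform (e.g. /etc/my.cnf on many Unix systems)
[mysqld]
old-passwords
```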
At 18:16 h -0400 2004.07.09, Bill Allaire wrote:
>Your problem may have to do with the difference in how 4.1.x+ does password hashing
>and that method is incompatible with older clients.
>
>You might find some help with this document:
>http://dev.mysql.com/doc/mysql/en/Old_client.html
>
>Specific
I'm curious if USING() works with more than one join. I can't seem to get it
to work.
http://dev.mysql.com/doc/mysql/en/JOIN.html
The USING (column_list) clause names a list of columns that must exist in
both tables.
The following two clauses are semantically identical:
a LEFT JOIN b USING (c
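One way to check: each USING() names the column shared by the two tables being joined, so the clauses can chain across several joins. A SQLite sketch with invented tables and columns:

```python
import sqlite3

# Hypothetical three-table chain to try USING() on more than one join.
# SQLite stand-in; the tables and columns are made up for the demo.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (c INTEGER, d INTEGER);
CREATE TABLE b (c INTEGER, e INTEGER);
CREATE TABLE f (e INTEGER, g TEXT);
INSERT INTO a VALUES (1, 10);
INSERT INTO b VALUES (1, 2);
INSERT INTO f VALUES (2, 'hit');
""")
rows = con.execute(
    "SELECT g FROM a JOIN b USING (c) JOIN f USING (e)"
).fetchall()
print(rows)  # each USING() names the column shared by that pair of tables
```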
For the information of someone who may need it in the future. I used
Jeffrey's idea for determining duplicates. Then I created a temporary
table, and used insert...select to put the id's of the duplicates in the
temporary table. Then it was a simple "delete from table where
temp.id=table.id".
T
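The temp-table approach described above can be sketched in runnable form (SQLite stand-in; MySQL's multi-table DELETE syntax differs, and all names are invented):

```python
import sqlite3

# Collect the ids of the later duplicates into a temp table, then delete
# by id — keeping the first occurrence of each email.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT);
INSERT INTO customers (email) VALUES
  ('a@x.com'), ('b@x.com'), ('a@x.com'), ('a@x.com');
CREATE TEMP TABLE dupes AS
  SELECT id FROM customers
  WHERE id NOT IN (SELECT MIN(id) FROM customers GROUP BY email);
DELETE FROM customers WHERE id IN (SELECT id FROM dupes);
""")
remaining = con.execute("SELECT email FROM customers ORDER BY id").fetchall()
print(remaining)  # one row per distinct email survives
```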
Using mysql v4.0.x on linux.
Given three tables...
CREATE TABLE Departments (
DeptID int(10) unsigned NOT NULL auto_increment,
DeptName char(30) default NULL,
PRIMARY KEY (DeptID)
)
CREATE TABLE UserDept (
CoreID int(10) unsigned NOT NULL default '0',
DeptID int(10) unsigned NOT NULL
Swany,
I do indeed have a host.frm file, and the timestamp is from 2000.
Unfortunately, I've had to start up 4.1.0 again and leave it up as folks
here have to work on the db. Since they won't be working tomorrow, I'll
try to remove the host.frm (also perhaps the host.ISD and host.ISM?) file
tomor
Do you have a hosts.MYD, or a hosts.frm file?
If you do, and there is no .MYI file, perhaps the
older version is just ignoring the table and not
making it available while the newer version errors
out.
If those files exist, try removing them from the data
directory (move them somewhere else) then
If you are using 4.1 you could try:
SELECT DISTINCT d, title
FROM
(select p.id, p.title
from product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and
s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
)
limit 10
otherwise
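The derived-table idea above, reduced to a runnable SQLite sketch (MySQL 4.1 requires an alias on the derived table, included here; the data is invented):

```python
import sqlite3

# Order inside the subquery, then take DISTINCT titles from it.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (id INTEGER, title TEXT, post_date INTEGER);
INSERT INTO product VALUES (1,'Alpha',3), (2,'Beta',2), (1,'Alpha',1);
""")
rows = con.execute("""
    SELECT DISTINCT title FROM
      (SELECT title FROM product ORDER BY post_date DESC) AS t
    LIMIT 10
""").fetchall()
print(rows)  # one row per distinct title
```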
Thanks everyone for helping out. I took Michael's advice and made a
new table called ranking with two columns. It definitely cleared some
things up, but I am still having issues using the BETWEEN operator. I
just need to pull up everything BETWEEN 10 and 18 and it keeps adding
additional ro
Group,
I have a project where I need to move a 4.x database to 3.x database. Are
there any issues with doing the following:
mysqldump 4.x
mysqladmin create 3.x database
mysql 3.x database < 4x.dump
Thanks.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To uns
--- mos <[EMAIL PROTECTED]> wrote: > At 12:23 PM
Thanks a lot, Mike.
It works.
I spent many hours with this problem.
I also tried the trick with a temporary table,
but used the "INSERT" command, which didn't help.
All the best to you, Harald
>
> Harald,
> 1) load the data into
Your problem may have to do with the difference in how 4.1.x+ does
password hashing and that method is incompatible with older clients.
You might find some help with this document:
http://dev.mysql.com/doc/mysql/en/Old_client.html
Specifically check out the information regarding resetting the pa
On Wed, Jul 07, 2004 at 06:23:45PM +0800, Linda wrote:
> Hi,
>
> My old mysql is 3.23.56 on RedHat9(Intel). After moving mySQL to
> Solaris 9 (Sun F280R/2GB Memory) and upgrading mySQL to 4.0.20, I
> got a lot of complaints about the performance for select and
> update. Can anyone tell me if
At 12:23 PM 7/9/2004, you wrote:
Hi,
I'd like to load "TEXT"-data of variable length
into an existing table with "CHAR"-columns.
The input file consists of about 4200 lines
with alphanumeric characters and newline only.
Instead of filling up the "TEXT"-column
next to the existing columns, Mysql
app
Yup. It's all there, everything's fine on directory structure. I can
start 4.1.0 back up without changing anything else and it starts up okay.
Very perplexing.
jf
On Fri, 9 Jul 2004, Victor Pendleton wrote:
> When you ls to /database/var do you see the mysql/ directory?
>
> -Original Messa
--- Victor Pendleton <[EMAIL PROTECTED]> wrote:
> What are the delimiters for this file?
I tried different variants:
No delimiters.
Delimiter ' (quote).
Delimiter " (doublequote).
All lines terminated by \n.
The file holds only a single column.
The column makes up 4200 rows.
Each row of diff
Yup. It's definitely there and definitely being accessed when I start up
mysql -- for fun, I tried removing it (temporarily, natch) and starting
mysql and got different errors.
jf
On Fri, 9 Jul 2004, Victor Pendleton wrote:
> When you ls to /database/var do you see the mysql/ directory?
>
> ---
When you ls to /database/var do you see the mysql/ directory?
-Original Message-
From: [EMAIL PROTECTED]
To: Victor Pendleton
Cc: '[EMAIL PROTECTED] '; ''[EMAIL PROTECTED] ' '
Sent: 7/9/04 4:15 PM
Subject: RE: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9
Okay, I changed t
Okay, I changed the datadir to /database/var, same error. The symlink is
definitely valid: when doing cd and ls and such I always use the symlink.
jf
On Fri, 9 Jul 2004, Victor Pendleton wrote:
> Is the symlink still valid? Can you point the data directory variable to
> this location and see if
CPAN is your friend. for example;
http://search.cpan.org/modlist/Security
uru
-Dave
Sarah Tanembaum wrote:
So, we can virtually use any database to do the job. It is really the
function of the program to encrypt(save) and decrypt(read) the sensitive
data.
Does anyone know of such a program that can
Is the symlink still valid? Can you point the data directory variable to
this location and see if the MySQL server starts up?
-Original Message-
From: [EMAIL PROTECTED]
To: Victor Pendleton
Cc: 'John Fink '; '[EMAIL PROTECTED] '
Sent: 7/9/04 4:08 PM
Subject: RE: problem upgrading from 4.1
Victor Pendleton wrote:
Have you tried using a group by clause? Group by title
same problem - the group by happens before the order by and you get
essentially random results.
-Original Message-
From: news
To: [EMAIL PROTECTED]
Sent: 7/9/04 3:08 PM
Subject: SELECT DISTINCT + ORDER BY conf
On Fri, 9 Jul 2004, Victor Pendleton wrote:
> What is the location of your data/mysql directory?
>
It's actually in /database/var. There's a symlink in /opt/mysql that
points it over. Could a symlink be the problem?
jf
What is the location of your data/mysql directory?
-Original Message-
From: John Fink
To: [EMAIL PROTECTED]
Sent: 7/9/04 3:49 PM
Subject: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9
Hey folks,
My mysql-fu is minimal to the point of nonexistent, so please forgive
any
va
What are the delimiters for this file?
-Original Message-
From: Jens Gerster
To: [EMAIL PROTECTED]
Sent: 7/9/04 12:23 PM
Subject: Loading data into "TEXT" column;
Hi,
I'd like to load "TEXT"-data of variable length
into an existing table with "CHAR"-columns.
The input file consists of
Hey folks,
My mysql-fu is minimal to the point of nonexistent, so please forgive any
vagaries that come across:
I've recently compiled 4.1.3 to replace 4.1.0 on a machine here where I
work. The compile and install went fine (as far as I can tell, anyway),
but when I try to start mysqld via the
Have you tried using a group by clause? Group by title
-Original Message-
From: news
To: [EMAIL PROTECTED]
Sent: 7/9/04 3:08 PM
Subject: SELECT DISTINCT + ORDER BY confusion
I've got a product & story setup where there can be multiple stories of
a given type for any product. I want to f
Thank you for your detailed response.
You might get better performance just from using the explicit INNER JOINS
but I make no assumptions.
I tried INNER JOINS and did not see any difference in speed.
You may also get better performance if you had
composite indexes (not just several individual field
Very good, gmail does not handle mailing lists properly...
Sorry for sending this off-list to you, Alec.
On Fri, 9 Jul 2004 17:09:45 -0300, João Paulo Vasconcellos
<[EMAIL PROTECTED]> wrote:
>
>
> On Fri, 9 Jul 2004 10:44:42 +0100, [EMAIL PROTECTED]
> <[EMAIL PROTECTED]> wrote:
> > "L. Yeung
I've got a product & story setup where there can be multiple stories of
a given type for any product. I want to find the names of the products
with the most-recently-posted stories of a certain type. This query
works well:
SELECT p.id,p.title
FROM product p
join e_prod ep on ep.product=p.id
j
Hi,
I have a table called Bookings which holds start times and end times for
appointments; these are held in Booking_Start_Date and Booking_End_Date. I
have a page on my site that runs a query to produce a grid showing
availability per day for the next ten days for each user of the system.
Use
On Fri, 9 Jul 2004 15:46:37 +0100 , Marvin Wright
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> Current Platform
> RH version is 7.3
> IBM Blade Server - 2 x Intel(R) Xeon(TM) CPU 3.20GHz
> 32 GB SCSI
> 4 GB Ram
>
> This is the platform we are moving to in a week or so
> RH Enterprise AS 2.1 or 3.0
>
I tried to install binary MySQL 4.0.20 on Mac OS X 10.2.8 Jaguar
The installer works but if I try to use the result I get messages including:
defaults undefined reference to _stpcpy expected to be defined in
/usr/lib/libSystem.B.dylib
As Marc Liyanage
http://www.entropy.ch/software/macosx/mysql/
On Fri, Jul 09, 2004 at 09:39:02AM -0500, Craig Hoffman wrote:
> Style: Traditional
> Area: Yosemite
> Rating: From: 5.5 To: 5.10c
...
> "SELECT * FROM routes, users WHERE area='$area' AND style='$style'
> BETWEEN rating='[$rating1]' AND rating='[$rating2]' GROUP BY route
> ORDER BY rating ASC
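A likely pitfall with that query, beyond the BETWEEN placement: climbing grades stored as strings compare lexically, so '5.10c' sorts before '5.9'. A small Python sketch (the grade-to-number mapping is invented for illustration):

```python
# String comparison puts '5.10c' before '5.9' because '1' < '9' character
# by character; a numeric sort key per grade fixes the ordering.
grades = ["5.5", "5.9", "5.10c"]
print(sorted(grades))  # lexical order: '5.10c' comes first

sort_key = {"5.5": 55, "5.9": 59, "5.10c": 103}  # assumed mapping
print(sorted(grades, key=sort_key.get))  # intended climbing order
```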
Well well...
That worked too!
Damn... this is starting to make life easier :)
Thanks again. Very much appreciated!!!
Aaron
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: July 9, 2004 2:00 PM
> To: Aaron Wolski
> Cc: [EMAIL PROTECTED]
> Subject: RE: any
Aaron,
That would be an INNER JOIN situation:
SELECT a.ID, a.First, a.Last, a.Email
FROM producta_customers a
INNER JOIN productb_customers b
ON a.email=b.email
Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
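Shawn's join can be run end-to-end with a SQLite stand-in (sample rows invented): only emails present in both tables come back.

```python
import sqlite3

# INNER JOIN on email = the intersection of the two customer lists.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE producta_customers (id INTEGER, email TEXT);
CREATE TABLE productb_customers (id INTEGER, email TEXT);
INSERT INTO producta_customers VALUES (1,'ann@x.com'), (2,'bob@x.com');
INSERT INTO productb_customers VALUES (7,'bob@x.com'), (8,'cat@x.com');
""")
rows = con.execute("""
    SELECT a.email FROM producta_customers a
    INNER JOIN productb_customers b ON a.email = b.email
""").fetchall()
print(rows)  # only the email that appears in both tables
```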
Hi,
I'd like to load "TEXT"-data of variable length
into an existing table with "CHAR"-columns.
The input file consists of about 4200 lines
with alphanumeric characters and newline only.
Instead of filling up the "TEXT"-column
next to the existing columns, Mysql
appends new rows, filling up the
Hi all,
First... I just want to thank everyone for their help and explanations
of how I was going wrong, and the measures to correct my logic!
Great, great advice.
Shawn's solution worked wonderfully for my needs.
My next question is how do I reverse the query so that I can get all of
> I am running Debian sarge with MySQL 4.0.20. My problem is I can
> connect from localhost, but when I try to connect from other host this
> error comes up:
> ERROR 2013: Lost connection to MySQL server during query
Sorry I found the answer. I have ALL:ALL in hosts.deny
Hi,
I am running Debian sarge with MySQL 4.0.20. My problem is I can connect
from localhost, but when I try to connect from other host this error
comes up:
ERROR 2013: Lost connection to MySQL server during query
I tried from many clients, included MySQL 4.0 and MySQL 3.23, but they
all got same
Lachlan,
I want to identify the entries in the table where the email addresses
are the same as another entry. Whatever else is in the record does not
matter to me.
However, a second requirement for the query is that it show me the last
duplicate instead of the first. This way I keep the first e
You have written a cross-product join. Here is what happened, but with a
much smaller example:
Assume you have two tables: Colors and Sizes
CREATE TABLE Colors (
id int auto_increment primary key
, name varchar(10)
);
CREATE TABLE Sizes (
id int auto_increment primary key
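The effect being described can be checked with a tiny runnable sketch (SQLite stand-in; the row counts are invented): joining with no join condition returns rows(Colors) × rows(Sizes) rows, which is how 2026 × 240 inputs balloon into roughly 486,000 results.

```python
import sqlite3

# A comma join with no WHERE condition is a cross product.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Colors (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Sizes  (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO Colors (name) VALUES ('red'), ('green'), ('blue');
INSERT INTO Sizes  (name) VALUES ('S'), ('M');
""")
(n,) = con.execute("SELECT COUNT(*) FROM Colors, Sizes").fetchone()
print(n)  # 3 colors x 2 sizes = 6 combinations
```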
Hi Luis,
I've made a small script to do this, it's in the 'backing up mysql
databases' on http://www.control-alt-del.org/code
Basically, you probably want to save the binary logs by archiving
them (in case of a database crash). On a binary log I get about
60-80% compression ratios, so it's worth
From the documentation
mysql> SELECT table1.* FROM table1
->LEFT JOIN table2 ON table1.id=table2.id
->WHERE table2.id IS NULL;
will normally give you the right answer.
and you should get : 2026 x 240 - 486,057 = 183 results
Aaron Wolski wrote:
Hi all,
Having a problem with
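The documentation pattern quoted above, as a self-contained SQLite check (data invented): the anti-join keeps only table1 rows with no match in table2.

```python
import sqlite3

# LEFT JOIN ... WHERE right-side IS NULL = rows with no matching id.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INTEGER);
CREATE TABLE table2 (id INTEGER);
INSERT INTO table1 VALUES (1), (2), (3);
INSERT INTO table2 VALUES (2), (3);
""")
rows = con.execute("""
    SELECT table1.id FROM table1
    LEFT JOIN table2 ON table1.id = table2.id
    WHERE table2.id IS NULL
""").fetchall()
print(rows)  # only id 1 has no match in table2
```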
On Fri, 09 Jul 2004 11:38:05 -0400, Keith Ivey <[EMAIL PROTECTED]>
wrote:
Craig Hoffman wrote:
This should pull up all the rock climbs that are in Yosemite, that are
traditional style and are between the rating 5.5 to 5.10c. Here is my
query:
"SELECT * FROM routes, users WHERE area='$area'
"Aaron Wolski" <[EMAIL PROTECTED]> wrote on 09/07/2004 16:33:27:
> Hi all,
>
> Having a problem with a query that's returning 486,057 results when it
> most definitely should NOT be doing that.
>
> I have two tables:
>
> 1 for a list of customers that purchase product A, another for customers
>
It sounds like a cartesian join. Have you run an explain plan on this query?
What are you joining the two tables on?
-Original Message-
From: Aaron Wolski
To: [EMAIL PROTECTED]
Sent: 7/9/04 10:33 AM
Subject: anyone help with this query? Returning too many results
Hi all,
Having a prob
Hi Aaron,
> Having a problem with a query that's returning 486,057 results when it
> most definitely should NOT be doing that.
>
> I have two tables:
>
> 1 for a list of customers that purchase product A, another for customers
> who purchased product B.
>
> Columns are:
>
> Id
> First
> Last
> Ema
gerald_clark wrote:
Hardware?
Celeron 1.3Ghz, IDE drive, 512Mb RAM
OS and version?
GNU/Linux, 2.4.20-8 kernel
MySql version?
4.0.17
Size of data file?
Size of index file?
postsearch.frm 8.7K
postsearch.MYD 3.5G
postsearch.MYI 1.0G
postsearch.TMD 3.5G
Filesystem type?
ext3
Sorry 'bout that!
Also,
On Fri, 9 Jul 2004 09:24:37 -0500, Darryl Hoar <[EMAIL PROTECTED]>
wrote:
Someone on a technical forum I participate in stated that
MySQL was converting to be a commercial application.
I always knew that MySQL had a commercial arm, but always
continued to have the Open Source arm as well.
For pr
I prefer to use the _explicit_ form of INNER JOIN rather than the
_implicit_ form of the comma-separated list of tables. I feel, with no
proof either way, that by specifying which conditions belong to which JOINs
I gain more detailed control over the query process.
Here is your same query (refor
Craig Hoffman wrote:
This should pull up all the rock climbs that are in Yosemite, that are
traditional style and are between the rating 5.5 to 5.10c. Here is my
query:
"SELECT * FROM routes, users WHERE area='$area' AND style='$style'
BETWEEN rating='[$rating1]' AND rating='[$rating2]' GROUP
Hi all,
Having a problem with a query that's returning 486,057 results when it
most definitely should NOT be doing that.
I have two tables:
1 for a list of customers that purchase product A, another for customers
who purchased product B.
Columns are:
Id
First
Last
Email
I am trying to compare
[EMAIL PROTECTED] wrote:
Just run the query:
SELECT * FROM table_name_goes_here;
To see all of the columns and all of the data in a particular table. I
don't use the control center but I have heard that if you do not change a
setting, it limits you by default to viewing only the first 1000 records
At 06:29 PM 7/8/2004, you wrote:
Hi David, the link you provided is quite interesting. Does such a
database (a translucent database) actually exist? Or is it just a concept?
Thanks
Sarah,
These databases do exist. Transparent (translucent) encryption has
been around for a while (at least on Windo
Again, this hasn't been an issue in the latest kernels for well over 3 years
(2.4.0-test7+), which use LFS, though the filesystem implementation is more
recent (but still a couple of years old).
Ext3 supports this if you are looking at the Enterprise
Linux kernels, and ReiserFS also suppor
Hi,
Current Platform
RH version is 7.3
IBM Blade Server - 2 x Intel(R) Xeon(TM) CPU 3.20GHz
32 GB SCSI
4 GB Ram
This is the platform we are moving to in a week or so
RH Enterprise AS 2.1 or 3.0
4 x Intel(R) Xeon(TM) MP CPU 2.70GHz
128 GB SCSI Raid
16 GB Ram
So with the new platform I'll
"Joshua J. Kugler" <[EMAIL PROTECTED]> wrote on 07/08/2004 04:24:41 PM:
> On Thursday 08 July 2004 02:35 pm, Chip Wiegand said something like:
> > I was sent an excel file from a remote office, and need to put the
data
> > into a mysql database to be displayed on our web site. I removed a few
> >
Hey Everyone,
I have a query where one selects a "style", then an "area", and finally a
"rating". When someone selects a rating they select a range of ratings.
For example:
Style: Traditional
Area: Yosemite
Rating: From: 5.5 To: 5.10c
This should pull up all the rock climbs that are in Yosemite, that
Just run the query:
SELECT * FROM table_name_goes_here;
To see all of the columns and all of the data in a particular table. I
don't use the control center but I have heard that if you do not change a
setting, it limits you by default to viewing only the first 1000 records of
any query. How to s
Hi Michael,
> >> > If you need more performance, throw more hardware at it -
> >> > a larger cache (settings -> memory), faster disks and a faster CPU.
> >>
> >> After adding a column for "one level up", adding indexes, optimizing
the
> >> query it took only a few hundreds of seconds.
> >
> > Of c
What version are you using?
What platform are you on?
How old is your hardware?
The 2Gb limit has long been addressed.
RH9, Fedora, RHES all support more than 2Gb Ram (assuming Ram) out of the
box... but its dependent on the kernel.
Newer 2.4 uses a 3G/1G split to address the 4Gb it could handle
Greetings,
Someone on a technical forum I participate in stated that
MySQL was converting to be a commercial application.
I always knew that MySQL had a commercial arm, but always
continued to have the Open Source arm as well.
For project planning purposes, will there continue to be an
open source
It appears as though your key column has exceeded its numerical limit. Do
you have that column typed large enough to contain the data you are putting
into it? Your database will accept the first out-of-range value and re-size
it to the MAX of that column's datatype (int?). When the second
out-of-r
In article <[EMAIL PROTECTED]>,
gerald_clark <[EMAIL PROTECTED]> writes:
> Asif Iqbal wrote:
>> Jack Coxen wrote:
>>
>>> If your database contains time-based data you could age out old records. I
>>> only need to keep data for 6 months so I run a nightly script to delete any
>>> records more tha
From: "Jochem van Dieten" <[EMAIL PROTECTED]>
> > After adding a column for "one level up", adding indexes, optimizing the
> > query it took only a few hundreds of seconds.
>
> Maybe I misunderstand the problem, but I get the impression you have
> the category computers>internet>providers>adsl and
I was working with ArcView, with the extension "OpenSVGMapServer 1.01"
(http://arcscripts.esri.com/scripts.asp?eLang=&eProd=&perPage=10&eQuery=mysq
l), which allows exporting a Shapefile (vector file) to MySQL,
generating the following Peru.sql (which I would gladly send to the address that
p
You may try increasing the sort buffer size variable since it appears
MySQL is resorting to sorting in a file.
On Jul 9, 2004, at 1:51 AM, Doug V wrote:
A query which is constantly being run takes about 3 seconds when not
cached, and I was wondering if there were any way to speed this up.
There
Hello Michael,
On Thursday 08 July 2004 23:58, Michael Johnson wrote:
> I just tried installing 4.1.3 on my development machine today. To my
> dismay, I couldn't get it to start properly. I was upgrading from 4.1.2,
> which I installed identically to the procedure below.
>
> On to the actual
On Fri, 9 Jul 2004 14:01:46 +0100, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
>
> Nearly always, but not absolutely always. I have a table with columns
> primary start
> primary finish
> secondary start
> secondary finish
>
> Since it is defined that the distance
On Fri, 9 Jul 2004 14:45:41 +0200, Jigal van Hemert <[EMAIL PROTECTED]> wrote:
> From: "Martijn Tonies" <[EMAIL PROTECTED]>
>> Design for understanding, logic and maintenance, not performance.
>>
>> If you need more performance, throw more hardware at it -
>> a larger cache (settings -> memory), fa
On Fri, 9 Jul 2004 14:55:40 +0200, Martijn Tonies <[EMAIL PROTECTED]>
wrote:
> If you need more performance, throw more hardware at it -
> a larger cache (settings -> memory), faster disks and a faster CPU.
After adding a column for "one level up", adding indexes, optimizing the
query it took o
Hardware?
OS and version?
MySql version?
Size of data file?
Size of index file?
Filesystem type?
Jim Nachlin wrote:
I have a table with several keys. When I try to delete anything from
this table, I get data corruption and have to repair it with
myisamchk. Selects, updates work fine.
Here's th
Rory McKinley wrote:
Hi Sarah
This is more of a PHP question than a MySQL question; to my mind,
while it is all possible, the bulk of the work would need to be done
on the PHP side. Assuming that you don't have the time to write all
the necessary code from scratch, you might want to look for a
Asif Iqbal wrote:
Jack Coxen wrote:
If your database contains time-based data you could age out old records. I
only need to keep data for 6 months so I run a nightly script to delete any
records more than 6 months old. And before anyone asks...yes, I also run
another script to ANALYZE/OPTIMIZE
Alec,
> > If you're de-normalizing
> > your design to get better performance, then there's something
> > wrong with the database engine (whatever engine that may be).
>
> Nearly always, but not absolutely always. I have a table with columns
> primary start
> primary finish
>
Hi,
Is there any workaround for this yet, where a process cannot allocate more
than 2GB?
Can I upgrade my Red Hat OS to any particular version?
Many Thanks.
Marvin Wright
Flights Developer
Lastminute.com
[EMAIL PROTECTED]
+44 (0) 207 802 4543
__
"Martijn Tonies" <[EMAIL PROTECTED]> wrote on 09/07/2004 13:55:40:
> If you're de-normalizing
> your design to get better performance, then there's something
> wrong with the database engine (whatever engine that may be).
Nearly always, but not absolutely always. I have a table with columns
> > Design for understanding, logic and maintenance, not performance.
> >
> > If you need more performance, throw more hardware at it -
> > a larger cache (settings -> memory), faster disks and a faster CPU.
>
> Sorry, but I can't agree with you. Years ago I had to put the DMOZ
> (http://www.dmoz.
From: "Martijn Tonies" <[EMAIL PROTECTED]>
> Design for understanding, logic and maintenance, not performance.
>
> If you need more performance, throw more hardware at it -
> a larger cache (settings -> memory), faster disks and a faster CPU.
Sorry, but I can't agree with you. Years ago I had to p
Have you tried running this from the mysql monitor?
-Original Message-
From: Andre MATOS
To: Victor Pendleton
Cc: '[EMAIL PROTECTED] '
Sent: 7/8/04 5:33 PM
Subject: RE: Scripts - ERROR
Hi,
I tried it, but it didn't work. Here is my script:
#
> "Martijn Tonies" >
> > Design for understanding, logic and maintenance, not performance.
> >
> > If you need more performance, throw more hardware at it -
> > a larger cache (settings -> memory), faster disks and a faster CPU.
>
> Or more indexes (which may require more hardware).
Oh yes, ind
"Martijn Tonies" <[EMAIL PROTECTED]> wrote on 09/07/2004 13:28:23:
>
> Design for understanding, logic and maintenance, not performance.
>
> If you need more performance, throw more hardware at it -
> a larger cache (settings -> memory), faster disks and a faster CPU.
Or more indexes (which may
Margaret MacDonald <[EMAIL PROTECTED]> wrote on 09/07/2004 12:07:54:
> Is there a generally-accepted rule of thumb for estimating the
> performance cost of joins? I can't find one even in Date, but
> intuitively it seems as though there must be one by now.
I don't think there is a general answe
Margaret,
> Is there a generally-accepted rule of thumb for estimating the
> performance cost of joins? I can't find one even in Date, but
> intuitively it seems as though there must be one by now.
Don't bother...
> I'm thinking of something like 'if not doing anything costs 0, and
> reading
From: "Margaret MacDonald" <[EMAIL PROTECTED]>
> Is there a generally-accepted rule of thumb for estimating the
> performance cost of joins? I can't find one even in Date, but
> intuitively it seems as though there must be one by now.
It's hard to estimate the cost of a join as such. The perfor
I have a question that may be similar to the one which Margaret asked recently
concerning the "Cost of Joins". I have a DB with numerous tables and have inserted
keys to relate one table to another. The method minimizes the data I store, but
results in me joining multiple tables, sometimes 10 at
Sarah Tanembaum wrote:
We have 10 computers (5 brothers, 4 sisters, and myself) plus 1 server which I
maintain. The data should be synchronized/replicated between those
computers.
Well, so far it is easy, isn't it?
Here's my question:
a) How can I make sure that it is secure so only an authorized person can
mod
Is there a generally-accepted rule of thumb for estimating the
performance cost of joins? I can't find one even in Date, but
intuitively it seems as though there must be one by now.
I'm thinking of something like 'if not doing anything costs 0, and
reading 1 table costs 100, then joining a secon
Hi,
mysqlcc has a nice option which is: show create, but is there a menu
where one can obtain something like: show insert (or a dump of the table)?
--
Philippe Poelvoorde
COS Trading Ltd.
"L. Yeung" <[EMAIL PROTECTED]> wrote on 09/07/2004 08:38:38:
> Hi! I wanted to set-up a master-slave replication on
> my win2k box. When my master server fails, my slave
> server will automatically become my new "master
> server". And when my master server is back online, any
> changes on my slav
Hi
I have installed MySQL 4.1 for experimental purposes. I took the source and
compiled it on Red Hat Linux AS 3.0.
I am getting these errors when I run the mysql_install_db script.
What is the problem? Please reply.
[EMAIL PROTECTED] bin]# mysql_install_db --user=mysql
Installing all prepare
Hi!
I am posting this separate note about this change in 4.1.3, because it is
unusual that a data conversion is needed in a MySQL server upgrade.
The default charset of MySQL was latin1 in 3.23 and in 4.0, and it is
latin1_swedish_ci from 4.1.2 on.
InnoDB users who have used a non-default charse
Hi,
I have 3 servers ( A B and C ) and every Server has one db ( DBA DBB DBC )
On the third Server I would like to query all three databases ( only selects
from DBA and DBB ).
I thought about setting 2 slave and 1 master Servers on Server C but then I
would need to open 3 connections to query all
AFAIK the term "translucent database" applies to a regular database that
has encrypted data in it. The main difference is in whether the
encryption is one-way only (i.e. storing an md5 hash of a name instead of
the actual name) or reversible (using 3DES to encrypt and decrypt the
name). a good
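The one-way variant can be sketched in a few lines (md5 because the post mentions it, though it is considered weak by modern standards; the sample name is invented):

```python
import hashlib

# Store a digest of the name instead of the name itself. Lookups hash the
# probe value and compare digests; the original string cannot be read
# back out of the stored column.
def digest(name: str) -> str:
    return hashlib.md5(name.encode("utf-8")).hexdigest()

stored = digest("Sarah")          # what goes into the database column
print(stored == digest("Sarah"))  # a lookup for the same name matches
print(stored == digest("sarah"))  # hashing is case-sensitive, so this differs
```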
Daniel,
I tested this with very small test tables, and it did not crash.
You should run CHECK TABLE on both tables. Maybe they are corrupt.
Can you make a repeatable test case that you can email or upload to ftp:
support.mysql.com:/pub/mysql/secret ?
Best regards,
Heikki
..
[EMAI
Hi! I wanted to set up master-slave replication on
my win2k box. When my master server fails, my slave
server will automatically become my new "master
server". And when my master server is back online, any
changes on my slave server are replicated back to my
master server.
Normal: A -> B