cascade
on delete cascade,
...
) engine=innodb;
PB
thx in advance.
Rajeev
From: Peter Brawley
To: Rajeev Prasad ; "mysql@lists.mysql.com"
Sent: Wednesday, August 15, 2012 4:01 PM
Subject: Re: suggestion needed for table design and relationship
On 2012-08-15 1:54 PM, Rajeev Prasad wrote:
I have to keep this data in MySQL, and I am not sure (as SQL/databases are not my
field) how to organise this into one or many tables. Right now I would
represent my info as follows:
device_name|HW_version|SW_version|IP_addr_pvt|IP_addr_pub|data_specific_to_device|associated_service
|associated_
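Peter Brawley's reply above sketches foreign keys with on delete cascade; a minimal sketch of how that column list might be split into related tables (all table and column names here are assumptions, not from the thread):

  create table device (
    device_id    int unsigned not null auto_increment primary key,
    device_name  varchar(64) not null,
    hw_version   varchar(32),
    sw_version   varchar(32),
    ip_addr_pvt  varchar(45),
    ip_addr_pub  varchar(45)
  ) engine=innodb;

  create table device_service (
    device_id     int unsigned not null,
    service_name  varchar(64) not null,
    primary key (device_id, service_name),
    foreign key (device_id) references device (device_id)
      on delete cascade
  ) engine=innodb;

Deleting a row from device then removes its service rows automatically, which is what the on delete cascade in the reply is for.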
Thanks, Ian.
W.
On 4/4/2012 4:02 AM, Ian wrote:
> On 04/04/2012 01:11, Wes Modes wrote:
>> On 4/3/2012 3:04 AM, Ian wrote:
>>> On 03/04/2012 00:47, Wes Modes wrote:
Thanks again for sharing your knowledge. I do believe the answers I've been
receiving, but since I have requirements that I ca
On 4/3/2012 3:04 AM, Ian wrote:
> On 03/04/2012 00:47, Wes Modes wrote:
>> Thanks again for sharing your knowledge. I do believe the answers I've been
>> receiving, but since I have requirements that I cannot easily alter, I'm
>> also gently pushing my expert advisers here to look beyond their own
>> p
Am I right in seeing that if you can split reads and writes without the
application having to be replication-aware, one does not need multiple
masters? One can simply have standard MySQL replication, yes?
For us, we were only interested in multiple masters so that we could
read or write from any
- Original Message -
> From: "Ian"
>
> with each master having any number of slaves. Set MySQL Proxy to
> send writes to the masters and reads to the slaves.
Yes, except when you have replication delays (asynchronous, remember?) like
someone else recently posted, your application write
On 03/04/2012 00:47, Wes Modes wrote:
> Thanks again for sharing your knowledge. I do believe the answers I've been
> receiving, but since I have requirements that I cannot easily alter, I'm
> also gently pushing my expert advisers here to look beyond their own
> preferences and direct experience.
>
>
ssage -
From: "Wes Modes"
To: mysql@lists.mysql.com
Sent: Monday, April 2, 2012 7:47:18 PM
Subject: Re: HA & Scalability w MySQL + SAN + VMWare: Architecture Suggestion
Wanted
Thanks again for sharing your knowledge. I do believe the answers I've been
receiving, but since I have requirements that I cannot easily alter, I'm
also gently pushing my expert advisers here to look beyond their own
preferences and direct experience.
RE: Shared storage. I can easily let go of the p
Hello Wes,
On 4/2/2012 4:05 PM, Wes Modes wrote:
Thanks Shawn and Karen, for the suggestions, even given my vague
requirements.
To clarify some of my requirements.
*Application: *We are using an open-source application called Omeka,
which is a "free, flexible, and open source web-publishing p
DRBD, SAN, etc. Sure, they are highly redundant. Sure they are
reliable. But they do not handle the building being in a
flood/earthquake/tornado/etc. If you want HA, you have to start with
having two (or more) copies of all the data sitting in geographically
distinct flood plains, etc. If
Thanks Shawn and Karen, for the suggestions, even given my vague
requirements.
To clarify some of my requirements.
*Application: *We are using an open-source application called Omeka,
which is a "free, flexible, and open source web-publishing platform for
the display of library, museum, archiv
Hello Wes,
On 3/29/2012 9:23 PM, Wes Modes wrote:
First, thank you in advance for good solid suggestions you can offer. I
suppose someone has already asked this, but perhaps you will view it as
a fun challenge to meet my many criteria with your suggested MySQL
architecture.
I am working at a Un
Hi,
First, it is kind of funny to advise on something that is unknown. The devil
of such systems is in the details. A small detail might cancel the whole big idea
of using, say, sharing, clustering, etc. So any discussion of this will be
quite general and can only be applied to your project
Caution: You are not going to like my answers.
> and VMWare
> shared storage
Why? Seems like scalability should plan on having dedicated hardware.
> replication
The best choice
> multi-mastering
Dual-Master gives good HA
> DRBD
Partially solves one subset of HA; don't bother with it; set your
First, thank you in advance for good solid suggestions you can offer. I
suppose someone has already asked this, but perhaps you will view it as
a fun challenge to meet my many criteria with your suggested MySQL
architecture.
I am working at a University on a high-profile database driven project
th
Hello everyone,
Recently, while installing MySQL, I ran into an error[1] which complained about
the hostname:
---
Neither host 'blahblah' nor 'localhost' could be looked up with
/usr/bin/resolveip
Please configure the 'hostname' command to return a correct
hostname.
If you want to solve t
The GUI tools are horrible, and I probably wouldn't recommend them to
my worst enemy :)
Take a look at workbench. It is getting better with every release,
especially now that they added SSH tunneling into it. It is still
beta-status though, but it might work for you:
http://dev.mysql.com/downloads/
http://dev.mysql.com/downloads/gui-tools/5.0.html
On Mon, Oct 12, 2009 at 7:23 PM, Marcelo de Assis wrote:
> Hi people!
>
> Can anyone suggest a query manager for a Linux environment, like HeidiSQL?
>
> I am using MySQL Navigator:
> http://www.bookofjesus.org/images/fl8ze90wpgyt87bkp5.png
>
> Thanks!
Hi people!
Can anyone suggest a query manager for a Linux environment, like HeidiSQL?
I am using MySQL Navigator:
http://www.bookofjesus.org/images/fl8ze90wpgyt87bkp5.png
Thanks!
--
Marcelo de Assis
Hi all,
I'm just creating my first partitioned table and have run into a bit of a
snag. The table primary key is a double and I want to create partitions based
on ranges of the key.
I have read the Partition Limitations section in the docs which states that
the partition key must be, or resolve
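One way around that limitation, sketched here under assumptions (the table and column names are made up, and this presumes the key can be stored as an exact numeric type rather than DOUBLE): range-partition on FLOOR() of the key, which MySQL only permits when the argument is an exact numeric type such as DECIMAL or an integer.

  CREATE TABLE measurements (
    id      DECIMAL(20,6) NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id)
  ) ENGINE=InnoDB
  PARTITION BY RANGE (FLOOR(id)) (
    PARTITION p0   VALUES LESS THAN (1000000),
    PARTITION p1   VALUES LESS THAN (2000000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
  );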
Hi Mike,
I don't understand the part about deleting the 'deleted' rows. Can you explain
more? And about the RAM, yes, we are going to upgrade it. The
application opens 1000 simultaneous connections to service the requests.
For the tables, we have 2 identical tables, the flow will be like this.
First when the
At 04:42 PM 7/19/2008, sangprabv wrote:
Hi Mike,
Thanks for the reply.
1. Currently the hardware is P4 2.8 on Slackware 12 with 1GB of DDR
Memory (we plan to upgrade it)
2. The table type is MyISAM
3. There is no slow query, because all of the queries are a simple type
4. The table's size is incr
Hi Mike,
Thanks for the reply.
1. Currently the hardware is P4 2.8 on Slackware 12 with 1GB of DDR
Memory (we plan to upgrade it)
2. The table type is MyISAM
3. There is no slow query, because all of the queries are a simple type
4. The table's size is increasing dynamically with at least 10 thousa
At 12:11 PM 7/19/2008, sangprabv wrote:
Hi,
I have a situation where a MySQL server processes about 10-20 thousand
requests per minute. I need suggestions from you for tuning up this
server to get an optimized configuration. TIA
Willy
Willy,
You will need post more information:
1) What type of ha
Hi,
I have a situation where a MySQL server processes about 10-20 thousand
requests per minute. I need suggestions from you for tuning up this
server to get an optimized configuration. TIA
Willy
, from small to
quite big (tables with about 2M rows). I've got a SAS disk array and I was
wondering what the best configuration could be:
1) raid 10
2) raid 5
3) a combination (e.g., raid10 for the data and raid 5 for the logs).
Any suggestion or link?
Thanks,
Luca
--
B. Keith M
Hi,
On Thu, June 14, 2007 18:16, Jake Peavy wrote:
> Hi all,
>
> Can someone suggest a good method or normalized schema for storing product
> information (id, description, price) which changes over time so that as a
> product is gradually discounted, an order will reflect the cost of that
> partic
Hi all,
Can someone suggest a good method or normalized schema for storing product
information (id, description, price) which changes over time so that as a
product is gradually discounted, an order will reflect the cost of that
particular product at that particular time?
--
-jp
At birth, Chuck
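A sketch of one common normalization for this, with assumed table and column names: keep the current price on the product row and copy the price onto each order line at purchase time, so old orders keep the price that applied when they were placed.

  CREATE TABLE product (
    product_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    description VARCHAR(255) NOT NULL,
    price       DECIMAL(10,2) NOT NULL       -- current price
  ) ENGINE=InnoDB;

  CREATE TABLE order_item (
    order_id   INT UNSIGNED NOT NULL,
    product_id INT UNSIGNED NOT NULL,
    unit_price DECIMAL(10,2) NOT NULL,       -- price captured at order time
    quantity   INT UNSIGNED NOT NULL DEFAULT 1,
    PRIMARY KEY (order_id, product_id),
    FOREIGN KEY (product_id) REFERENCES product (product_id)
  ) ENGINE=InnoDB;

A dated price-history table is the other common variant, if the requirement is to reconstruct prices for arbitrary past dates rather than just for recorded orders.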
"Sun, Jennifer" <[EMAIL PROTECTED]> wrote:
> l consuming all my RAM and swap and being killed with error
> 'VM: killing process mysql
> __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)'
>
> I would like to find a startup parameter either for client or server
> to limit per th
At 01:33 PM 9/2/2004, Sun, Jennifer wrote:
I did 'handler table_name read limit large_numbers'. Is there a way I can
use a lower number, but automatically loop through the number and display
all of the table records? Thanks.
If "large_numbers" is the number of rows in the table, then of course it
On Thu, 2 Sep 2004 15:19:44 -0400, Sun, Jennifer
<[EMAIL PROTECTED]> wrote:
> Thanks Marc,
>
> What version of myisam table you are talking about? We are on 4.0.20, when I ran the
> big table query, I tried to insert to it twice without any issues.
> The -q worked good for mysql client. Thanks.
]
Sent: Thursday, September 02, 2004 2:41 PM
To: Sun, Jennifer
Cc: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query
Due to the nature of myisam tables, when you are doing a query then
the table will be locked for writes. Reads will still be permitted
until another write request is
an use without locking the table?
>
>
>
>
> -Original Message-
> From: Marc Slemko [mailto:[EMAIL PROTECTED]
> Sent: Thursday, September 02, 2004 2:24 PM
> To: Sun, Jennifer
> Cc: [EMAIL PROTECTED]
> Subject: Re: tuning suggestion for large query
>
> On Wed, 1 Sep 20
0:37 AM
To: [EMAIL PROTECTED]
Subject: RE: tuning suggestion for large query
At 04:13 PM 9/1/2004, Sun, Jennifer wrote:
>Thanks Mike.
>Seems like even with handler, the big query process is still consuming all
>my RAM and swap and being killed with error
>'VM: killing process my
On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> We have a job that does 'select * from big-table' on a staging MySQL database, then
> dumps to a data warehouse. It is scheduled to run once a day, but may be run manually.
> Also we have several other small OLTP da
't get an exact snapshot if people are updating
the table as you are exporting it, but it will be very low on memory.
Mike
-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 01, 2004 4:39 PM
To: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large
r to limit per
thread memory usage.
-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 01, 2004 4:39 PM
To: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query
At 10:40 AM 9/1/2004, you wrote:
>Hi,
>
>We have a job that do 'selec
At 10:40 AM 9/1/2004, you wrote:
Hi,
We have a job that does 'select * from big-table' on a staging MySQL
database, then dumps to a data warehouse. It is scheduled to run once a day,
but may be run manually. Also, we have several other small OLTP databases on
the same server.
When the big job run, it w
Hi,
We have a job that does 'select * from big-table' on a staging MySQL database, then dumps
to a data warehouse. It is scheduled to run once a day, but may be run manually. Also we
have several other small OLTP databases on the same server.
When the big job runs, it would use all the physical mem and
I have several fields in which I will be storing text. Various categories,
and for each category, its related subcategories. Each subcategory then
contains various items.
My question is, for performance, would it be better to assign each
category/subcategory pair a unique ID number and then anyti
Hi list:
I would like to know some of the best practices for managing InnoDB tables. I
have an ibdata file whose size is 4.5GB, and it increases every day.
The max size of the hard disk is about 330GB. The question is: should I
split this ibdata file into several files in a way that I can rea
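One direction such questions usually go (an assumption here, not taken from the truncated replies): enable innodb_file_per_table so that new InnoDB tables get their own .ibd files instead of growing the shared ibdata file, which never shrinks on its own without a dump and reload.

  # my.cnf sketch; affects only tables created after the option is enabled
  [mysqld]
  innodb_file_per_table = 1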
I suppose datetime would be the better option here
-Original Message-
From: Victoria Reznichenko [mailto:[EMAIL PROTECTED]
Sent: Friday, July 18, 2003 10:00 AM
To: [EMAIL PROTECTED]
Subject: Re: mysqlbinlog suggestion
"Luc Foisy" <[EMAIL PROTECTED]> wrote:
>
>
"Luc Foisy" <[EMAIL PROTECTED]> wrote:
>
> allowing date range options to the command line would be really neat
Date or datetime option?
--
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
allowing date range options to the command line would be really neat
Luc
[EMAIL PROTECTED]
Subject: Re: Question / suggestion re: mysqlhotcopy
At 10:34 +1000 6/7/03, Murray Wells wrote:
>Hi All,
>
>Apologies if this has been done to death previously, but would it be
>sensible to indicate in the MySQL documentation that the Perl
>mysqlhotcopy script only appear
At 10:34 +1000 6/7/03, Murray Wells wrote:
Hi All,
Apologies if this has been done to death previously, but would it be
sensible to indicate in the MySQL documentation that the Perl
mysqlhotcopy script only appears to work on the Linux platform? (Or,
more accurately, doesn't work on the WinXP plat
Hi All,
Apologies if this has been done to death previously, but would it be
sensible to indicate in the MySQL documentation that the Perl
mysqlhotcopy script only appears to work on the Linux platform? (Or,
more accurately, doesn't work on the WinXP platform, I have no idea
about other platforms)
Hello,
We run a large on-line books database which is searched a lot. We are using
MySQL but we are seriously running into optimisation problems because the
searches are becoming slower and slower as the database grows.
Simplifying our situation, visitors need to search for author and/or title a
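A minimal sketch of the usual first step for author/title searches on MyISAM (the table and column names are assumptions): add a FULLTEXT index and query it with MATCH ... AGAINST instead of LIKE '%...%'.

  ALTER TABLE books ADD FULLTEXT ft_author_title (author, title);

  SELECT title, author
  FROM books
  WHERE MATCH (author, title) AGAINST ('requested author or title');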
Often, with MySQL in web services, there are many situations that leave a MySQL
connection idle.
For example: after PHP opens mysql_pconnect() and finishes a request, HTTP
starts keep-alive
or serves other pages/images which do not require a MySQL connection;
however, the connection is still there, counted as "connec
PROTECTED]>
Newsgroups: mailing.database.mysql
Sent: Friday, February 21, 2003 6:01 PM
Subject: RE: "SET FOREIGN_KEY_CHECKS = 0" Suggestion.
> Changing one local variable, IMO, shouldn't replicate.
>
> I would much rather have a REPLICATE command that I could place before =
>
ne, I can.
-J
-Original Message-
From: wertyu [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 21, 2003 3:15 AM
To: [EMAIL PROTECTED]
Subject: "SET FOREIGN_KEY_CHECKS = 0" Suggestion.
Hello, everyone.
I'm using MySQL replication(Version 4.0.10)
Master and slave have FOR
Hi,
I have got mysql-version.src.rpm downloaded, and when I was trying to install
it with
rpm --rebuild
*** I got the following error ***
*** Can someone drag me out of this error? ***
..
..
...
creating cache ./config.cache
checking host system type... i686-pc-linux-gnu
checking
on master.
But this command is not forwarded to the slave,
so the slave fails to insert the record.
So my suggestion is that "SET FOREIGN_KEY_CHECKS = 0;" be forwarded to the slave.
What do you think of this?
Have a nice weekend!
##
Heo, Jungsu Mr.
SimpleX Internet. http://www.s
you were to refer to it before generating it, what's to guarantee
that you would ever generate it?
This is useful when storing hierarchical data in a table by using char() to
store "ancestorid.parentid.objectid."
Suggestion: Document this behaviour.
?
Can't document t
+----+------+
| id | t    |
+----+------+
|  1 |    0 |
|  2 |    2 |
+----+------+
That is - when used in an insert statement the last_insert_id() returns the
value BEFORE the insert?
This is useful when storing hierarchical data in a table by using char() to
store "ancestorid.parentid.objec
In the last episode (Nov 19), Benjamin Pflugmann said:
> On Mon 2002-11-18 at 19:01:57 -0500, [EMAIL PROTECTED] wrote:
> [...]
> > A BITMAP index will work efficiently in this case, no matter what
> > the distribution of the keys is. The only requirement is that the
> > column be low-cardinality.
Hi.
On Mon 2002-11-18 at 19:01:57 -0500, [EMAIL PROTECTED] wrote:
[...]
> A BITMAP index will work efficiently in this case, no matter what the
> distribution of the keys is. The only requirement is that the column be
> low-cardinality.
>
> It basically works like this:
>
> true 01101
> f
On Mon, 2002-11-18 at 16:22, Michael T. Babcock wrote:
> Neulinger, Nathan wrote:
>
> >It's actually relatively speedy WITH the bad index, but maintaining that
> >index (given its lopsided nature) is very expensive.
> >
> >Yes, the point is to ONLY index the row if it matches the restriction.
>
Neulinger, Nathan wrote:
It's actually relatively speedy WITH the bad index, but maintaining that
index (given its lopsided nature) is very expensive.
Yes, the point is to ONLY index the row if it matches the restriction.
To clarify, if MySQL is indexing a binary value with lopsided
distr
On Tue, Nov 19, 2002 at 07:12:27AM +1100, Dean Harding wrote:
> Actually, it's a slightly different problem - a very uneven distribution
> of values on a column, not a small number of possible values like a
> bitmap index is for.
Exactly.
> In my opinion, this is a pretty useless feature, I mean
1
Computing Services Fax: (573) 341-4216
> -Original Message-
> From: Dean Harding [mailto:[EMAIL PROTECTED]]
> Sent: Monday, November 18, 2002 2:12 PM
> To: 'Daniel Koch'; [EMAIL PROTECTED]
> Cc: 'Egor Egorov'; Neulinger, Nat
]
> Sent: Tuesday, 19 November 2002 5:58 am
> To: [EMAIL PROTECTED]
> Cc: Egor Egorov; Neulinger, Nathan; Jeremy Zawodny
> Subject: Re: feature suggestion - indexes with "where" clause or
similar
>
> On Mon, 2002-11-18 at 12:29, Jeremy Zawodny wrote:
>
> > >
On Mon, 2002-11-18 at 12:29, Jeremy Zawodny wrote:
> > If I've got you right status can have values 0 or 1. In this case
> > you can just use " SELECT ... WHERE status=1 .." (index wil be used)
> > or "SELECT .. WHERE status=0 .." (index will not be used, because
> > scan the whole table will be f
On Mon, Nov 18, 2002 at 05:38:00PM +0200, Egor Egorov wrote:
> Neulinger,
> Friday, November 15, 2002, 7:25:27 PM, you wrote:
>
> NN> Assume I have a mysql table (myisam most likely) with a few hundred
> NN> thousand rows in it. One of the columns indicates success or failure.
> NN> 99.9% of the r
Neulinger,
Friday, November 15, 2002, 7:25:27 PM, you wrote:
NN> Assume I have a mysql table (myisam most likely) with a few hundred
NN> thousand rows in it. One of the columns indicates success or failure.
NN> 99.9% of the rows will have "0" in that column. But a small number will
NN> have 1. I n
Assume I have a mysql table (myisam most likely) with a few hundred
thousand rows in it. One of the columns indicates success or failure.
99.9% of the rows will have "0" in that column. But a small number will
have 1. I need to be able to fetch those rows quickly, without slowing
everything else do
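MySQL has no partial ("indexes with a WHERE clause") indexes, so one common workaround is sketched here under assumptions (the table names are made up, and this is not something the thread settles on): keep the ids of the rare rows in a small side table that the application maintains alongside the big one.

  CREATE TABLE results (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    status TINYINT NOT NULL,            -- 0 = success (99.9%), 1 = failure
    detail VARCHAR(255)
  );

  CREATE TABLE result_failures (
    id INT UNSIGNED NOT NULL PRIMARY KEY   -- same id as results.id
  );

  -- the application inserts into result_failures whenever status = 1;
  -- fetching the rare rows is then a join against a tiny table:
  SELECT r.*
  FROM result_failures f
  JOIN results r ON r.id = f.id;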
ugust 20, 2002 9:18 PM
Subject: Suggestion on New MySQL Function
> Through your MySQL documentation, I wasn't able to find a function that I
> think could be useful. If you DELETE a row with an INT AUTO_INCREMENT
field,
> it will still be able to add the next highest value to this
To: <[EMAIL PROTECTED]>
Sent: Tuesday, August 20, 2002 1:18 PM
Subject: Suggestion on New MySQL Function
| Through your MySQL documentation, I wasn't able to find a function that I
| think could be useful. If you DELETE a row with an INT AUTO_INCREMENT
field,
| it will still be able to ad
Hello.
On Tue 2002-08-20 at 13:18:04 -0500, [EMAIL PROTECTED] wrote:
> Through your MySQL documentation, I wasn't able to find a function that I
> think could be useful. If you DELETE a row with an INT AUTO_INCREMENT field,
> it will still be able to add the next highest value to this column on a
Through your MySQL documentation, I wasn't able to find a function that I
think could be useful. If you DELETE a row with an INT AUTO_INCREMENT field,
it will still be able to add the next highest value to this column on an
INSERT (ex. If after INSERTing rows, you DELETE a row with a value of five
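There is no built-in function for this, but a hedged sketch of the usual manual step (hypothetical table t with an AUTO_INCREMENT column id): after deleting the highest row, setting the counter to a value at or below the current maximum is silently clamped to MAX(id)+1, so the freed top value gets reused.

  -- run after the DELETE that removed the highest id
  ALTER TABLE t AUTO_INCREMENT = 1;   -- clamped up to current MAX(id)+1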
>>return just the first time row per each event_id.
>>
>>Thanks anyway. I may have to use second query... :-(
>>
>>
>>Mihail
>>
>>
>>- Original Message -
>>From: "Bhavin Vyas" <[EMAIL PROTECTED]>
>>To: &q
cond query... :-(
>
>
> Mihail
>
>
> - Original Message -
> From: "Bhavin Vyas" <[EMAIL PROTECTED]>
> To: "Mihail Manolov" <[EMAIL PROTECTED]>;
> <[EMAIL PROTECTED]>
> Sent: Thursday, July 11, 2002 10:51 PM
> Subject: Re: Help - qu
first time row per each event_id.
>
>Thanks anyway. I may have to use second query... :-(
>
>
>Mihail
>
>
>- Original Message -
>From: "Bhavin Vyas" <[EMAIL PROTECTED]>
>To: "Mihail Manolov" <[EMAIL PROTECTED]>; <[EMAIL PROTECTE
ROTECTED]>
To: "Mihail Manolov" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, July 11, 2002 10:51 PM
Subject: Re: Help - query suggestion needed - interesting case
> How about:
>
> SELECT
> event_id, time,
> count(DISTINCT time) AS Ran
PM
Subject: Help - query suggestion needed - interesting case
> Greetings,
>
> I am stuck with this problem:
>
> I have the following table:
>
> event_id   time
> 100        2000-10-23
> 100        2000-10-23
> 101        2000-10-24
> 101        2000-10-25
>
>
Greetings,
I am stuck with this problem:
I have the following table:
event_id   time
100        2000-10-23
100        2000-10-23
101        2000-10-24
101        2000-10-25
I need to know all event_id's that have multiple times + time columns. Is it
possible to get that result in just one quer
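A minimal sketch of the grouped query the replies point toward (the table name events is an assumption); it lists the event_ids that occur with more than one distinct time, though pulling the matching time values back out still needs a join or a second query on pre-4.1 MySQL:

  SELECT event_id, COUNT(DISTINCT time) AS distinct_times
  FROM events
  GROUP BY event_id
  HAVING COUNT(DISTINCT time) > 1;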
I just installed PHPMyAdmin and it looks terrific. The only thing that it's
missing is a Check Table on the selected tables. People could check to see
if their tables are in good working order or are corrupted. They could
specify what type of Check Table to perform (Quick, Extended etc.) By
Hello,
The documentation often mentions that foreign key support in mysql is
not implemented because it slows down the database server. The argument
is that since the applications built on top of mysql have to check for
database integrity constraints (ICs) it is redundant for the database
to pe
Hi!
> "Steve" == Steve Edberg <[EMAIL PROTECTED]> writes:
Steve> ...perhaps a NEAR function could be added; as a config file or compile-time
Steve> option, you could define an accuracy range. Say,
Steve> ./config --with-epsilon=0.0001
Steve> (if memory of my numerical analysis c
with high load this problem could happen...
My suggestion to you is to use
ps acx | grep mysqld | grep -v grep
instead.
--
Yours,
Michael
[ database mysql query ]
Sinisa Milivojevic wrote:
>On Fri, 01 Feb 2002 15:21:07 -0800
>Steve Edberg <[EMAIL PROTECTED]> wrote:
>
>>...perhaps a NEAR function could be added; as a config file or
>>compile-time option, you could define an accuracy range. Say,
>> ./config --with-epsilon=0.0001
>
On Fri, 01 Feb 2002 15:21:07 -0800
Steve Edberg <[EMAIL PROTECTED]> wrote:
> ...perhaps a NEAR function could be added; as a config file or
> compile-time option, you could define an accuracy range. Say,
>
> ./config --with-epsilon=0.0001
>
> (if memory of my numerical analysis classes
...perhaps a NEAR function could be added; as a config file or compile-time
option, you could define an accuracy range. Say,
./config --with-epsilon=0.0001
(if memory of my numerical analysis classes serves, the 'fudge factor' was
conventionally symbolized by epsilon; I suppose you co
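A sketch of the same epsilon idea expressed in plain SQL, with an assumed table and column: compare floating-point values within a tolerance instead of with =.

  SELECT *
  FROM measurements
  WHERE ABS(reading - 3.14159) < 0.0001;   -- 'near' comparison instead of equality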
In message <[EMAIL PROTECTED]>,
Thomas Spahni <[EMAIL PROTECTED]> writes
>On Mon, 26 Nov 2001, Brent wrote:
>
>> A Select statement will display the TmeStamp as 2005095105. Now I ask
>> you, how many users will understand this format?
>> Why not display TimeStamp in the same format as DateTim
On Mon, 26 Nov 2001, Brent wrote:
> A Select statement will display the TimeStamp as 2005095105. Now I ask
> you, how many users will understand this format?
> Why not display TimeStamp in the same format as DateTime? At least then the
> TimeStamp is in a meaningful representation that users
Displaying timestamp columns is a PIK (pain in keester) if you ask me.
A Select statement will display the TimeStamp as 2005095105. Now I ask
you, how many users will understand this format?
Why not display TimeStamp in the same format as DateTime? At least then the
TimeStamp is in a meanin
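A hedged sketch of the usual workaround (the table and column names are assumptions): format the column on the way out rather than relying on the raw TIMESTAMP display.

  SELECT DATE_FORMAT(ts, '%Y-%m-%d %H:%i:%s') AS ts_readable
  FROM t;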
See V 4
-Original Message-
From: Alexander Barkov [mailto:[EMAIL PROTECTED]]
Sent: 28 August 2001 14:13
To: [EMAIL PROTECTED]
Subject: Feature suggestion
Hello!
Currently replication is implemented between
two mysqld. I would like to suggest new feature.
Make it possible for two
Hello!
Currently, replication is implemented between
two mysqld servers. I would like to suggest a new feature:
make it possible for two mysqld servers to exchange data
with each other. For example, MERGE tables
distributed between two machines.
A question to the authors: what do you think, will
it ever be implemen
Bruce Stewart writes:
> Hi Monty and all,
>
> I believe that you are planning to introduce foreign key support in a future
> version of MySQL (I think version 4.0).
>
> Could you consider offering an option to the definition of a foreign key
> which specifies the error message to return if the F
Hello,
I just reinstalled Red Hat 6.2 and reinstalled MySQL 3.23.38 and got the
following message as usual:
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
This is done with:
/usr/bin/mysqladmin -u root -p password 'new-password'
/usr/bin/mysqladmin -u root -h tulip.math.ualberta.ca
nobody
and group nobody. So, what suggestion would you give me in order to count
the bytes each database is consuming, using Apache + PHP?
Should I change Apache's user and group to mysql, or should I make the
MySQL database world readable?
Any suggestions are helpful.
Thanks in Ad
If you're using "R"edhat, use "R"pm. Otherwise, use the tarball.
David Loszewski wrote:
>
> is it recommended to install mysql by rpm or tar?
>
> dave
>
-ravi.
-Original Message-
From: David Loszewski [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 09, 2001 4:40 PM
To: [EMAIL PROTECTED]
Subject: Someone give me suggestion please?
is it recommended to install mysql by rpm or tar?