On 10/9/2017 3:27 AM, Xiaoyu Wang wrote:
Hello, I reported a bug at https://bugs.mysql.com/bug.php?id=87637, along with
a patch. Bogdan, the bug hunter, told me this patch would show up on the
dev contribution report. So, could anyone please tell me how to contact the dev
team, or how can I
On 08.01.2015 at 16:01, bruce wrote:
hey.
Within PHP (or any other language), is there a way to create the MySQL SQL
and execute it, where the process can wait until the network connection for
the MySQL command/process is actually valid?
i.e. (PHP-esque):
$pdo = new PDO();
$sql = 'SELECT *
The only way I could see this working would be to write the forms to a temporary
text-file array, then use a cron job to update the database.
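One common answer to bruce's question is a retry loop: keep attempting the connection until the server is reachable, then run the SQL. A minimal sketch follows; the `flaky_connect` stub is hypothetical and stands in for whatever driver call you actually use (e.g. constructing a PDO or mysql.connector connection).

```python
import time

def wait_for_connection(connect, attempts=10, delay=1.0):
    """Call `connect` until it succeeds or attempts run out.

    `connect` is any zero-argument callable that raises on failure
    and returns a connection object on success.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # real code should catch the driver's error class
            last_error = exc
            time.sleep(delay)
    raise last_error

# Stub standing in for a database that comes up after a few tries:
state = {"calls": 0}
def flaky_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("server not ready")
    return "connection-object"

conn = wait_for_connection(flaky_connect, attempts=5, delay=0)
print(conn)  # connection-object, after two failed attempts
```

Once `wait_for_connection` returns, the SQL can be executed normally; the same shape works in PHP with a try/catch around `new PDO(...)`.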
On Thu, January 8, 2015 10:01 am, bruce wrote:
hey.
Within PHP (or any other language), is there a way to create the MySQL SQL
and execute it, where
I have to update the query every time.
Therein lies the difficulty with the schema design.
You could write a stored procedure to locate all the tables (use
information_schema.TABLES, etc.), build the UNION, and finally execute it.
The SP would contain something very remotely like a foreach
2013/07/30 14:12 -0400, Sukhjinder K. Narula
I have several databases (all with the same structure) which I want to query. For
instance:
db1, db2, db3 - all have table tb1 with fields a, b and table tb2 with
fields flag1, flag2.
So I want to query and get field a from tb1 for all DBs. One way to do it is
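The UNION the reply above describes can be generated mechanically. This sketch builds the statement text from a hard-coded schema list; in MySQL proper you would pull the list from information_schema.SCHEMATA (or information_schema.TABLES) inside the stored procedure instead.

```python
def build_union(databases, table, columns, tag_source=True):
    """Build one UNION ALL query over the same table in several schemas.

    tag_source adds a literal column identifying which database each
    row came from, which is usually wanted when merging identical tables.
    """
    col_list = ", ".join(columns)
    parts = []
    for db in databases:
        source = f", '{db}' AS source_db" if tag_source else ""
        parts.append(f"SELECT {col_list}{source} FROM {db}.{table}")
    return "\nUNION ALL\n".join(parts)

sql = build_union(["db1", "db2", "db3"], "tb1", ["a", "b"])
print(sql)
```

The generated string can then be PREPAREd and EXECUTEd from the stored procedure, which is exactly the foreach-and-concatenate shape suggested above.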
Metadata about the tables is stored in ibdata1. Hence it is not possible to
copy just the .ibd file to another database or machine. 5.6.x will remedy this
with some export/import commands that do not involve reading/writing the rows
individually. (Ditto for moving partitions.)
(Sorry, I
On 25.06.2012 06:17, Sabika Makhdoom wrote:
I want to test our MySQL binaries to see if ours are among those impacted
by the recent security breach (the memcmp() issue). How do I test it?
Why do you not simply update?
By recent security breach, do you mean the issue with passwords? If so:
http://www.dbasquare.com/2012/06/11/a-security-flaw-in-mysql-authentication-is-your-system-vulnerable/
for i in `seq 1 2000`; do mysql -u USERNAME --password=INCORRECTPASSWORD -h HOSTNAME; done
If you get in using that,
Does drop table use the undo log (rollback segment) to temporarily store
records to be purged later, the way delete from table does?
As 'DROP TABLE' causes an implicit commit
(http://dev.mysql.com/doc/refman/5.5/en/implicit-commit.html), I would strongly
suspect that it doesn't. You cannot roll
On 25.11.2011 14:20, Machiel Richards - Gmail wrote:
Just a quick question relating to the use of transactions on InnoDB tables.
We are doing some archiving on some InnoDB tables; however, there seem to be
some issues somewhere in the
process with data not being updated accordingly.
I don't think there is anything other than SHOW FULL PROCESSLIST, which shows
the state of each query and whether it is locked or not.
I/O-related things you check at the OS level.
On Thu, Sep 22, 2011 at 11:07 PM, jiangwen jiang jiangwen...@gmail.com wrote:
Hi,
Is there any performance tool for UPDATE/INSERT queries?
I want to
The server hosting Bacula and the database only has one kind of disk: SATA;
maybe I should buy a couple of SSDs for MySQL.
I have read all your mails, and I am still not sure if I should enable InnoDB
compression. My ibfile is 50 GB, though.
Regards
Maria
Questions:
1) Why are you putting
On 14.09.2011 09:50, Maria Arrea wrote:
I have read all your mails, and still not sure if I should enable innodb
compression
If you have enough free CPU resources and I/O is your problem, then simply yes,
because the transfer from/to disk will not be as high as uncompressed.
I am still benchmarking, but I see a 15-20% performance gain after enabling
compression using bacula gui (bat).
Regards
Maria
- Original Message -
From: Maria Arrea
Sent: 09/14/11 09:50 AM
To: mysql@lists.mysql.com
Subject: Re: Question about slow storage and InnoDB
On 14.09.2011 14:50, Maria Arrea wrote:
I have finally enabled compression:
I am still benchmarking, but I see a 15-20% performance gain after enabling
compression using bacula gui
As expected, if disk I/O is the only bottleneck;
the same with NTFS compression inside a VMware machine on
The server hosting bacula and the database only has one kind of disk:
SATA, maybe I
I would recommend going for a 15K rpm SSD RAID-10 to keep the MySQL data, and
adding the Barracuda file format with innodb_file_per_table enabled, with 3 to 4 GB
of InnoDB buffer pool depending on the ratio of MyISAM vs. InnoDB in your DB.
Check the current stats and reduce the tmp and heap table size to a
Thanks for correcting me on the disk stats, Singer. A typo of SSD
instead of SAS 15k rpm.
Compression may not increase the memory requirements:
To minimize I/O and to reduce the need to uncompress a page, at times the
buffer pool contains both the compressed and uncompressed form of a
Subject: Re: Question about Backup
Forget mysqldump, because of TABLE LOCKS for such huge databases.
I would set up a replication slave, because you can stop
the slave and make a filesystem backup of the whole db folder
while the production server is online; we do this with our
dbmail server since
that the database is one table of 5,000 gigabytes, and not
5,000 tables of one gigabyte; and that the backup needs to be consistent :-p
- Original Message -
From: Reindl Harald h.rei...@thelounge.net
To: mysql@lists.mysql.com
Sent: Monday, 21 March, 2011 12:44:08 PM
Subject: Re: Question about
Forget mysqldump, because of TABLE LOCKS for such huge databases.
I would set up a replication slave, because you can stop
the slave and make a filesystem backup of the whole db folder
while the production server is online; we do this with our
dbmail server since 2009.
On 21.03.2011 12:23, Pedro wrote:
Hi,
A statement like 'I need to back up a 5T database' is not a backup strategy;
it is an intention. There are some specifics that have to be determined to work
out a strategy. Going from there, the backup solution can be chosen. The
examples of questions one typically asks when
That would be the last question :-) Suppose we worked out a strategy, lined up
the solutions along with their costs, and then compared them with our budget.
Then it would be easy to find the one we can afford, and we would know what we
could dream about :-).
On Mar 21, 2011, at 11:28 AM, Singer
Or you can interrupt the query instead, although I've seen it fail
on occasion: KILL QUERY id;
From the mysql console: SHOW PROCESSLIST;
this will show you the IDs of all active connections, even the dead ones.
Then, again from the console: KILL processid;
On Thu, Feb 17, 2011 at 3:52 PM, Rafael Valenzuela rav...@gmail.com wrote:
Hi all;
I wonder if there is any tool for performance tuning.
Hi Michael:
Yeah, I think I'll do a shell script, something like that.
require 'mysql'
mysql = Mysql.new(ip, user, pass)
processlist = mysql.query("SHOW FULL PROCESSLIST")
killed = 0
processlist.each { |process|
  mysql.query("KILL #{process[0].to_i}")
  killed += 1
}
puts killed
Rafael,
You realize that script will kill perfectly well-behaved queries in
mid-flight? If you have so many dead connections that they interfere
with operation, you have another problem elsewhere.
- md
On Thu, Feb 17, 2011 at 4:16 PM, Rafael Valenzuela rav...@gmail.com wrote:
Hi
I have been working with MySQL for many years and I have never
found a reason to kill braindead connections - what
benefit do you think you get from such actions, instead of
looking into why there are hanging ones?
Kill a connection from postfix and some user gets a
temporary lookup error; PHP scripts are closing
Are you using the strict SQL mode? Check your my.cnf file.
Peter
Date: Fri, 4 Feb 2011 14:08:01 -0800
From: awall...@ihouseweb.com
To: mysql@lists.mysql.com
Subject: Question about database value checking
So, a problem popped up today that has caused us no end of hair-pulling, and
it
Thanks Peter, exactly what I was hoping for!
andy
On 2/4/11 3:11 PM, Peter He wrote:
Are you using the strict SQL mode? Check your my.cnf file.
Peter
Date: Fri, 4 Feb 2011 14:08:01 -0800
From: awall...@ihouseweb.com
To: mysql@lists.mysql.com
Subject: Question about database value checking
From the OP:
I have a copy of the INNODB files for these two tables - is there a way
to extract the table contents from these files short of a full import?
I have to agree, that's quite ambiguous. Andy, is it a copy of the innoDB
datafiles, or a database dump that you have ?
In the latter
If you just need specific records, you can use the -w option of mysqldump to
extract only the specific records.
Then you can run the dump file into another db.
regards
anandkl
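For reference, -w is the short form of mysqldump's --where option. A sketch of the command shape follows; the database, table, and predicate are placeholders, and the command is only echoed here since no server is involved.

```shell
# Sketch only: DB, TABLE and the predicate are placeholders.
# mysqldump's -w / --where option restricts the dump to matching rows;
# the resulting file can then be replayed into another database.
DB=mydb
TABLE=customers
WHERE="id BETWEEN 100 AND 200"

CMD="mysqldump --single-transaction --where=\"$WHERE\" $DB $TABLE"
echo "$CMD"   # shown instead of executed in this sketch
```

The output file is ordinary SQL, so loading it elsewhere is just `mysql otherdb < dumpfile.sql`.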
On Fri, Nov 12, 2010 at 2:35 PM, Johan De Meersman vegiv...@tuxera.be wrote:
From the OP:
I have a copy of the INNODB files
Thanks, guys. I have copies of the InnoDB files. The boss went whole hog on
using ZFS for everything, so backups of files are readily available. Looks
like I'll be having the db reconstituted...
thanks again
On 11/12/10 1:05 AM, Johan De Meersman wrote:
From the OP:
I have a copy of the
Cc: Gavin Towey; Andy Wallace; mysql list
Subject: Re: question about restoring...
On Tue, Nov 9, 2010 at 11:39 PM, Jerry Schwartz je...@gii.co.jp wrote:
Then I guess it's a matter of preference. I'd rather edit a text file than
build a new instance of MySQL.
The way I parse that, you're
No, you should import the data into another instance of mysql to extract the
records.
Regards,
Gavin Towey
-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Tuesday, November 09, 2010 10:34 AM
To: mysql list
Subject: question about restoring...
So, I got a
Not if he has the raw innodb files.
-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Tuesday, November 09, 2010 11:05 AM
To: Gavin Towey; 'Andy Wallace'; 'mysql list'
Subject: RE: question about restoring...
That's overkill.
You should be able to import the data
-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Tuesday, November 09, 2010 3:22 PM
To: Jerry Schwartz; 'Andy Wallace'; 'mysql list'
Subject: RE: question
On Tue, Nov 9, 2010 at 11:39 PM, Jerry Schwartz je...@gii.co.jp wrote:
Then I guess it's a matter of preference. I'd rather edit a text file than
build a new instance of MySQL.
The way I parse that, you're saying that there is a way to reattach ibd
files to another database ?
On 03/09/2010 9:27 p, Hank wrote:
On 02/09/2010 8:30 p, Hank wrote:
Simple question about views:
Hank,
Have you tried running away from the problem :-) by doing...
CREATE PROCEDURE `combo`(theid INT)
BEGIN
(SELECT * FROM table1 WHERE id = theid)
UNION
(SELECT * FROM
On 03/09/2010 9:26 p, Hank wrote:
On Fri, Sep 3, 2010 at 6:23 AM, Jangitajang...@jangita.com wrote:
On 02/09/2010 8:30 p, Hank wrote:
Simple question about views:
Hank,
Have you tried running away from the problem :-) by doing...
CREATE PROCEDURE `combo`(theid INT)
BEGIN
(SELECT
On 02/09/2010 8:30 p, Hank wrote:
Simple question about views:
I have a view such as:
create view combo as
select * from table1
union
select * from table2;
Where table1 and table2 are very large and identical and have a
non-unique key on field id.
On 9/3/2010 6:23 AM, Jangita wrote:
On 02/09/2010 8:30 p, Hank wrote:
Simple question about views:
I have a view such as:
create view combo as
select * from table1
union
select * from table2;
...
(I've also tried UNION ALL with the same results).
...
On 02/09/2010 8:30 p, Hank wrote:
Simple question about views:
Hank,
Have you tried running away from the problem :-) by doing...
CREATE PROCEDURE `combo`(theid INT)
BEGIN
(SELECT * FROM table1 WHERE id = theid)
UNION
(SELECT * FROM table2 WHERE id = theid);
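The point of the stored procedure above is to push the WHERE clause into each UNION branch instead of filtering the unioned view. A minimal reproduction of the query shapes, using Python's built-in sqlite3 as a stand-in for MySQL (table names and data are illustrative; this shows the equivalence of the result sets, not MySQL's optimizer behaviour):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, val TEXT);
    CREATE TABLE table2 (id INTEGER, val TEXT);
    CREATE INDEX i1 ON table1(id);
    CREATE INDEX i2 ON table2(id);
    INSERT INTO table1 VALUES (1, 'a'), (2, 'b');
    INSERT INTO table2 VALUES (2, 'b'), (3, 'c');
    CREATE VIEW combo AS SELECT * FROM table1 UNION SELECT * FROM table2;
""")

the_id = 2
# Filtering the view (what Hank's query does):
via_view = conn.execute("SELECT * FROM combo WHERE id = ?", (the_id,)).fetchall()
# Pushing the predicate into each branch (what the stored procedure does):
pushed = conn.execute("""
    SELECT * FROM table1 WHERE id = ?
    UNION
    SELECT * FROM table2 WHERE id = ?
""", (the_id, the_id)).fetchall()
print(via_view, pushed)  # identical result sets
```

In MySQL, a UNION view is processed with the TEMPTABLE algorithm, so the pushed-down form is what lets each branch use the index on id.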
MySQL is a traditional relational database system. It underlies
something like 80% (somebody correct me if I'm out of date here) of
the HTTP applications populating the internet. While some RDBMSs
offer extensions for REST-like HTTP implementations, MySQL does not
support this directly. It can be
Given that the OP is talking about a single DELETE statement, I'm gonna be very
surprised if he manages to squeeze an intermediate commit in there :-)
For a single-statement delete on a single table, the indexes will be rebuilt
only once. I'm not entirely sure what happens to cascaded deletes,
] On Behalf Of Johan De
Meersman
Sent: Thursday, March 18, 2010 6:48 AM
To: Ananda Kumar
Cc: Price, Randall; [MySQL]
Subject: Re: Question about DELETE
Given that OP is talking about a single delete statement, I'm gonna be very
surprised if he manages to squeeze an intermediate commit
is happening multiple times?
Thanks,
-Randall Price
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Thursday, March 18, 2010 6:48 AM
To: Ananda Kumar
Cc: Price, Randall; [MySQL]
Subject: Re: Question about DELETE
Given that OP is talking about
: Thursday, March 18, 2010 10:11 AM
To: Price, Randall
Cc: Johan De Meersman; Ananda Kumar; [MySQL]
Subject: RE: Question about DELETE
Hi Randall,
If you're talking about processes that are taking that long, then
running SHOW PROCESSLIST several times during the operation should give
you a rough idea
-Original Message-
From: Ian Simpson [mailto:i...@it.myjobgroup.co.uk]
Sent: Thursday, March 18, 2010 10:11 AM
To: Price, Randall
Cc: Johan De Meersman; Ananda Kumar; [MySQL]
Subject: RE: Question about DELETE
Hi Randall,
If you're talking about processes that are taking that long
: Thursday, March 18, 2010 11:15 AM
To: Price, Randall
Cc: Ian Simpson; Johan De Meersman; [MySQL]
Subject: Re: Question about DELETE
DELETE will also cause the undo (before image) to be generated, in case you want
to roll back. This will also add to the delete completion time.
After each mass delete
Hi,
It depends how frequently you are doing a commit.
If you have written a PL/SQL-style loop and you commit after each row is
deleted, then it commits for each row. Else, if you commit at the end of the
loop, it commits only once for all the rows deleted.
regards
anandkl
On Thu, Mar 18, 2010 at 1:21
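A middle ground between committing per row and one giant transaction is committing per batch, which keeps the undo information bounded. A sketch using Python's sqlite3 as a stand-in (MySQL supports DELETE ... LIMIT directly, so the IN-subquery here is only needed for SQLite; table and batch size are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])
conn.commit()

# Batched delete: commit every `batch` rows so each transaction (and its
# undo/rollback information) stays small, at the cost of more commits.
batch = 1_000
while True:
    cur = conn.execute(
        "DELETE FROM t WHERE id IN "
        "(SELECT id FROM t WHERE id < 5000 LIMIT ?)", (batch,))
    conn.commit()                      # commit per batch, not per row
    if cur.rowcount == 0:
        break

remaining = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(remaining)  # 5000
```

Each iteration is its own transaction, so an interrupted run loses at most one batch of work rather than the whole delete.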
Is mysql the owner of the directories?
- Original Message
From: Manasi Save manasi.s...@artificialmachines.com
To: Johan De Meersman vegiv...@tuxera.be
Cc: Waynn Lue waynn...@gmail.com; mysql mysql@lists.mysql.com
Sent: Wed, November 25, 2009 8:12:25 PM
Subject: Re: question regarding
Hi Johan,
It worked perfectly. Thank you so much for this explanation.
I am really grateful.
--
Best Regards,
Manasi Save
Artificial Machines Pvt Ltd.
On Wed, Nov 25, 2009 at 3:42 PM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Dear Johan,
Need your help again in
...@tuxera.be
Cc: Waynn Lue waynn...@gmail.com; mysql mysql@lists.mysql.com
Sent: Wed, November 25, 2009 8:12:25 PM
Subject: Re: question regarding mysql database location
Dear Johan,
Need your help again in understanding how MySQL reads symlinks.
As you said below, I have created symlinks
I fixed this by using symlinks for the directories for the underlying
databases. The limit for files is significantly higher than for
directories.
Waynn
On 11/24/09, Manasi Save manasi.s...@artificialmachines.com wrote:
Hi All,
I have asked this question before, but I think I am not able to
Thanks Waynn,
I could not get your point about using symlinks, because as far as I know a
symlink points to the same data that is in the original directory.
And what do you mean by 'The limit for files is significantly higher than
directories'?
Can you elaborate on it more?
Thanks in advance.
On Wed, Nov 25, 2009 at 12:53 AM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Thanks Waynn,
I could not get your point about using symlinks, because as far as I know a
symlink points to the same data that is in the original directory.
And what do you mean by 'The limit for files is
Well Waynn,
In this case I would need to move all the existing databases to the new
location, right? Which I don't want to do. Is it possible that I create a
symlink between the two and use both?
--
Thanks and Regards,
Manasi Save
Artificial Machines Pvt Ltd.
On Wed, Nov 25, 2009 at 12:53 AM, Manasi Save
You don't need to move any databases. Look at this structure:
/data/disk1/mysql/db1 (directory)
/db2 (directory)
/db3 (directory)
/db4 (symlink to /data/disk2/mysql/db4)
/db5 (symlink to /data/disk2/mysql/db5)
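The layout above can be recreated literally with mkdir and ln -s. This sketch builds it in a scratch directory (the paths are placeholders; a real setup points at the MySQL datadir, and needs matching ownership/permissions plus symlink support enabled in the server):

```shell
# Build the example layout in a throwaway directory.
base=$(mktemp -d)
mkdir -p "$base/data/disk1/mysql/db1" \
         "$base/data/disk1/mysql/db2" \
         "$base/data/disk1/mysql/db3"
mkdir -p "$base/data/disk2/mysql/db4" "$base/data/disk2/mysql/db5"

# db4 and db5 live on disk2 but appear under disk1's mysql directory:
ln -s "$base/data/disk2/mysql/db4" "$base/data/disk1/mysql/db4"
ln -s "$base/data/disk2/mysql/db5" "$base/data/disk1/mysql/db5"

ls -l "$base/data/disk1/mysql"   # db4 and db5 show as symlinks to disk2
```

To the server, db4 and db5 look like ordinary schema directories under the one datadir, which is the whole point of the trick.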
Hi Johan,
I am sorry if I have complicated the scenario, but this still does not serve my
purpose.
What I want is - from your example:
/data/disk1/mysql/db1 (directory)
/db2 (directory)
/db3 (directory)
/db4 (symlink to /data/disk2/mysql/db4)
Hi Manasi,
At a time, MySQL can point to only one data directory. For your task you can
have any number of MySQL installations with different data directories. After
that you can use the FEDERATED storage engine to perform your task.
Thanks,
Krishna Ch. Prajapati
On Wed, Nov 25, 2009 at 12:19 PM, Manasi Save
On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Hi Johan,
I am sorry if I have complicated the scenario, but this still does not serve my
purpose.
What I want is - from your example:
/data/disk1/mysql/db1 (directory)
/db2 (directory)
On Wed, Nov 25, 2009 at 12:05 PM, Krishna Chandra Prajapati
prajapat...@gmail.com wrote:
At a time, MySQL can point to only one data directory. For your task you can
have any number of MySQL installations with different data directories. After
that you
can use the federated storage engine to perform your
Thanks Johan,
It was really a great help. I'll try to implement it. I don't want to opt
for the multiple-MySQL-instances option, as that's not feasible.
I'll get back to you all if it works fine.
Thanks again.
--
Best Regards,
Manasi Save
Artificial Machines Pvt Ltd.
On Wed, Nov 25, 2009 at 11:55
Dear Johan,
Need your help again in understanding how MySQL reads symlinks.
As you said below, I have created symlinks in the default MySQL directory
and tried to read a symlinked directory as a database. But MySQL is not reading
it as a database. Are there any settings I need to change?
Thanks
On Wed, Nov 25, 2009 at 3:42 PM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Dear Johan,
Need your help again in understanding how MySQL reads symlinks.
As you said below, I have created symlinks in the default MySQL directory
and tried to read a symlinked directory as a database. But
Also, I forgot to mention that I have gone through the InnoDB option
innodb_data_file_path, but I can just specify it as:
innodb_data_file_path=ibdata1:2048M:autoextend:max:1024M;ibdata1:2048M:autoextend:max:1024M;
But not as:
Regards,
Gavin Towey
-Original Message-
From: Banyan He [mailto:ban...@rootong.com]
Sent: Friday, August 07, 2009 11:12 AM
To: Gavin Towey; joerg.bru...@sun.com; Peter Chacko
Cc: mysql
Subject: Re: Question about MySQL
Hi Gavin,
I am interested in the things you made for the optimization. Can
Hi all!
First of all, please excuse the typo I made in my posting.
I had written
There may be some merit to this in a specialized setup (NAS systems -
I'm not convinced of them, but don't claim expert knowledge about them),
and of course meant SAN, not NAS systems.
As regards NFS:
Peter
Hi Peter, all,
let me just concentrate on the NFS aspect:
Peter Chacko wrote:
[[...]]
Another question: what is the general experience of running MySQL
servers on NFS shares?
I would *never* use NFS storage for any DBMS (except for some testing):
NFS access is slower than local disk
Hi Jorg,
I really appreciate your help sharing your experience/thoughts.
Yes, I fully concur with you: NFS is not designed for databases. But,
you know, there are distributed SAN file systems (that use direct I/O to
the SAN) serving databases like DB2 in many installations for
shared
over a few hops, then
it's really not slower than local disks.
Remember: benchmark and test your assumptions!
Regards,
Gavin Towey
-Original Message-
From: joerg.bru...@sun.com [mailto:joerg.bru...@sun.com]
Sent: Friday, August 07, 2009 1:19 AM
To: Peter Chacko
Cc: mysql
Subject: Re: Question
From: Gavin Towey gto...@ffn.com
Date: Fri, 7 Aug 2009 11:07:19 -0700
To: joerg.bru...@sun.com joerg.bru...@sun.com, Peter Chacko
peterchack...@gmail.com
Cc: mysql mysql@lists.mysql.com
Subject: RE: Question about MySQL
I always accepted that NFS was unacceptably slow
-Original Message-
From: joerg.bru...@sun.com [mailto:joerg.bru...@sun.com]
Sent: Friday, August 07, 2009 1:19 AM
To: Peter Chacko
Cc: mysql
Subject: Re: Question about MySQL
Hi Peter, all,
let me just concentrate on the NFS aspect:
Peter Chacko wrote:
[[...]]
Another
On Tue, Jun 2, 2009 at 11:52 AM, Ray r...@stilltech.net wrote:
Hello,
I've tried the manual and google, but I am not even sure what to call what I
want to do.
simplified data example:
I have a table of start and end times for an event, and an id for that event.
Each event may
Ray,
I want a query that will provide one record per event with all times included.
feel free to answer RTFM or STFW as long as you provide the manual section or
key words. ;)
Can be done with a pivot table. Examples under Pivot tables at
On June 2, 2009 10:44:48 am Peter Brawley wrote:
Ray,
I want a query that will provide one record per event with all times
included. feel free to answer RTFM or STFW as long as you provide the
manual section or key words. ;)
Can be done with a pivot table. Examples under Pivot tables at
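The pivot Peter is pointing at is conditional aggregation: one output row per event, with the several time rows folded into columns. A small runnable sketch using Python's sqlite3 as a stand-in for MySQL (the table, the 'start'/'end' kinds, and the sample times are all illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE event_times (event_id INTEGER, kind TEXT, t TEXT);
    INSERT INTO event_times VALUES
        (1, 'start', '09:00'), (1, 'end', '10:00'),
        (2, 'start', '11:30'), (2, 'end', '12:15');
""")

# Classic pivot: GROUP BY the event, and pick each time out of its rows
# with MAX(CASE ...) - MAX ignores the NULLs from non-matching rows.
rows = conn.execute("""
    SELECT event_id,
           MAX(CASE WHEN kind = 'start' THEN t END) AS start_time,
           MAX(CASE WHEN kind = 'end'   THEN t END) AS end_time
    FROM event_times
    GROUP BY event_id
    ORDER BY event_id
""").fetchall()
print(rows)  # [(1, '09:00', '10:00'), (2, '11:30', '12:15')]
```

In MySQL the same query works as-is (IF(kind='start', t, NULL) is the more idiomatic MySQL spelling of the CASE).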
On June 2, 2009 03:14:36 pm Ray wrote:
On June 2, 2009 10:44:48 am Peter Brawley wrote:
Ray,
I want a query that will provide one record per event with all times
included. feel free to answer RTFM or STFW as long as you provide the
manual section or key words. ;)
Can be done
4:58 PM
To: mysql@lists.mysql.com
Subject: Re: Question about query - can this be done?
On June 2, 2009 03:14:36 pm Ray wrote:
On June 2, 2009 10:44:48 am Peter Brawley wrote:
Ray,
I want a query that will provide one record per event with all times
included. feel free to answer RTFM
that completes the picture.
Just what I was looking for.
Ray
-Original Message-
From: Ray [mailto:r...@stilltech.net]
Sent: Tuesday, June 02, 2009 4:58 PM
To: mysql@lists.mysql.com
Subject: Re: Question about query - can this be done?
On June 2, 2009 03:14:36 pm Ray wrote:
On June 2, 2009 10
Only if you are using the InnoDB transactional storage engine.
MySQL uses table-level locking for MyISAM,
MEMORY and MERGE tables,
page-level locking for BDB tables, and
row-level locking for InnoDB tables.
Dominik Klein wrote:
Hi.
I have a question regarding mysql replication and mysqldump.
I have a master (A). All my clients insert/update/delete only to this
master. Then I have a Slave (B). This slave only replicates the master.
There are no other processes changing/inserting data into the
Dual master replication can be either dual master dual write or dual
master single writer. The latter is preferred. In this configuration
replication is connected in both directions but clients only ever
connect to one master at a time. It's just as safe as master - slave
replication if you handle
I think what's really being sought after here is clustering.
--C
Eric Bergen wrote:
Dual master replication can be either dual master dual write or dual
master single writer. The latter is preferred. In this configuration
replication is connected in both directions but clients only ever
Hi there,
I would only like to stress that the only supported (and recommended)
replication solution in MySQL is
Master---Slave replication.
In this scenario you can have ONLY one master and (virtually) any number
of slaves.
There is NO other safe replication solution.
The terms you mention
Hi!
Jarikre == Jarikre Efemena jefem...@yahoo.com writes:
Jarikre Dear sir,
Jarikre
Jarikre I am young web developer using PHP Script in designing interactive
website. I desire to include Mysql database on my websites.
Jarikre
Jarikre Please, how do I import, upload/export Mysql database
On Tue, Mar 31, 2009 at 1:30 AM, Jarikre Efemena jefem...@yahoo.com wrote:
Dear sir,
I am a young web developer using PHP scripts to design interactive websites.
I desire to include a MySQL database on my websites.
Please, how do I import, upload/export a MySQL database to a website server
Read the online Manual.
-Original Message-
From: Jarikre Efemena [mailto:jefem...@yahoo.com]
Sent: Monday, March 30, 2009 11:30 PM
To: mysql@lists.mysql.com
Subject: Question!
Dear sir,
I am a young web developer using PHP scripts to design interactive websites. I
desire to include
Send the value of @@server_id in the message, and make sure each
server has a unique value for that. Compare the value in the received
message to the value in the server and see whether you should stop the
loop.
On Mon, Feb 2, 2009 at 4:38 AM, Tobias Stocker
tobias.stoc...@ch.netstream.com
The natural join will JOIN on *all* the fields whose names match, not just the
ones you want it to.
In particular, the JOIN is matching up .expires and .expires with =
You then use WHERE to get only the ones with
This is a tautology: There are NO records both = and on the field
Thank you.
On Wed, 21 Jan 2009, c...@l-i-e.com wrote:
The natural join will JOIN on *all* the fields whose names match, not just the
ones you want it to.
In particular, the JOIN is matching up .expires and .expires with =
You then use WHERE to get only the ones with
This is a tautology:
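The pitfall described above (NATURAL JOIN matching on *every* shared column name, including `expires`) can be reproduced directly. A minimal sketch using Python's sqlite3, which implements the same NATURAL JOIN semantics; the two-table schema and values are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, expires TEXT);
    CREATE TABLE b (id INTEGER, expires TEXT);
    INSERT INTO a VALUES (1, '2009-01-01');
    INSERT INTO b VALUES (1, '2009-06-30');
""")

# NATURAL JOIN equates id AND expires, so differing expires kills the match:
natural = conn.execute("SELECT * FROM a NATURAL JOIN b").fetchall()

# Joining explicitly on the intended column only:
explicit = conn.execute("SELECT a.id, a.expires, b.expires "
                        "FROM a JOIN b ON a.id = b.id").fetchall()
print(natural)   # [] - no rows survive the implicit expires = expires test
print(explicit)  # [(1, '2009-01-01', '2009-06-30')]
```

Spelling out the ON clause (or USING (id)) is the usual fix: the join condition then includes only the columns you actually mean.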
On Tue, Jan 13, 2009 at 12:32 PM, Frank Becker
computersac...@beckerwelt.de wrote:
Hello all,
I have successfully set up master-master replication between two
servers. My question is: is it possible to set up such a replication
between three (or more) servers? Like this:
Master3 ---
In the topology you just illustrated, you need to be specific about your scheme
using arrows. Here are some examples:
==
Example 1: This is multi-master replication among 4 servers
Master1 --- Master2
Hello Baron, thanks for your response.
These types of questions can always be answered by asking: does my
proposed setup require any server to have more than one master? If
so, it's currently not possible.
What I want to do is the following:
eGroupware is an enterprise groupware solution. I
-Original Message-
From: blue.trapez...@gmail.com [mailto:blue.trapez...@gmail.com] On
Behalf Of Vikram Vaswani
Sent: Thursday, December 25, 2008 5:47 AM
To: mysql@lists.mysql.com
Subject: Question on default database for stored functions
Hi
According to the MySQL manual, By default, a
select get_area(11);
ERROR 1305 (42000): FUNCTION test2.get_area does not exist
Can someone tell me what I'm doing wrong? Thanks.
SELECT dbWhereFunctionWasCreated.get_area(11);
PB
-
Jerry Schwartz wrote:
-Original Message-
From: blue.trapez...@gmail.com
Eric,
I'd replace
(avg(IF(avgTest.Q17,avgTest.Q1,Null))
+avg(IF(avgTest.Q27,avgTest.Q2,Null))
+avg(IF(avgTest.Q37,avgTest.Q3,Null))
+avg(IF(avgTest.Q47,avgTest.Q4,Null))
+avg(IF(avgTest.Q57,avgTest.Q5,Null)))/5 as overallAvg from avgTest
group by course;
with ...
From: Peter Brawley [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 04, 2008 1:14 PM
To: Eric Lommatsch
Cc: mysql@lists.mysql.com
Subject: Re: Question about Averaging IF() function results
Eric,
I'd replace
Usually, you'd have 3 tables: USER, FRIEND, and a third table named
something like USER_FRIEND. They'd be set up like:
USER:
emailID (PK)
userName
Password
Address
Etc
FRIEND:
emailID (PK)
USER_FRIEND
user_emailID (PK)
friend_emailID (PK)
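The three-table layout above is a standard many-to-many junction. A runnable sketch using Python's sqlite3 (names follow the mail; the sample addresses and the query are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (
        email_id  TEXT PRIMARY KEY,
        user_name TEXT
    );
    CREATE TABLE friend (
        email_id TEXT PRIMARY KEY
    );
    -- junction table: one row per (user, friend) pair
    CREATE TABLE user_friend (
        user_email_id   TEXT REFERENCES user(email_id),
        friend_email_id TEXT REFERENCES friend(email_id),
        PRIMARY KEY (user_email_id, friend_email_id)
    );
""")
conn.executemany("INSERT INTO user VALUES (?, ?)",
                 [("a@x.com", "Alice"), ("b@x.com", "Bob")])
conn.execute("INSERT INTO friend VALUES ('c@x.com')")
conn.executemany("INSERT INTO user_friend VALUES (?, ?)",
                 [("a@x.com", "c@x.com"), ("b@x.com", "c@x.com")])

# All users who list c@x.com as a friend:
who = conn.execute("""
    SELECT u.user_name FROM user u
    JOIN user_friend uf ON uf.user_email_id = u.email_id
    WHERE uf.friend_email_id = 'c@x.com'
    ORDER BY u.user_name
""").fetchall()
print(who)  # [('Alice',), ('Bob',)]
```

The composite primary key on the junction table prevents duplicate friendships, and each side can grow independently of the other.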