Re: Question about contributing a patch

2017-10-09 Thread shawn l.green



On 10/9/2017 3:27 AM, Xiaoyu Wang wrote:

Hello, I reported a bug at https://bugs.mysql.com/bug.php?id=87637, along with 
a patch. Bogdan, the bug hunter, told me this patch would show up on the dev 
contribution report. So, could anyone please tell me how to contact the dev 
team, or how I can follow the progress of integrating the patch? By the way, I 
have signed the Oracle Contributor Agreement.
Any reply would be a great help.
Thanks, sincerely,
Xiaoyu



Hello Xiaoyu,

Your interaction with the developers will happen through your bug report, 
just as it did with our bug report handling team. If they need any 
details, or if they need to engage with you again, that is where they will 
contact you.


As to the integration of your fix into our code... that gets more 
complicated. There may be edge cases or use cases that require us to modify 
your code. Sometimes these are found as the developer applies your patch 
to our code, sometimes during post-build unit testing, and sometimes only 
after full integration testing.


And when that work starts depends on when a developer is scheduled to 
work on the specific bug you designed the patch for. So it could be a 
while.


Thank you very much for helping MySQL to become a better product!

Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: question?

2015-01-08 Thread Reindl Harald



On 08.01.2015 at 16:01, bruce wrote:

hey.

within php (or any other language)

is there a way to create the mysql sql, and execute the sql, where the
process can wait until the network connection for the mysql
command/process is actually valid?

IE (phpesque)
$pdo = new PDO();
$sql = 'SELECT * FROM foo WHERE a = :a';
$s = $pdo->prepare($sql);
$s->bindValue(':a', $one);
$s->execute();

The issue we're seeing is that the network (it's all wifi) is really
bad, and the app machine might not have a connection to the db box.

We're wondering if there's a way to simply have mysql/php detect when
a valid connection is there, and then proceed.

We've thought about the try/catch approach; any others?


Snippets from our db layer to give an idea:

* every time an error happens, the reconnect method is triggered
* reconnect tries 300 times, with a small sleep between attempts
* after it returns, the last query is fired again

It has worked that way for around 8 years now, and we can restart mysqld at 
any random moment without losing a single web request




if(!in_array(@mysqli_errno($this->conn), array_keys($this->ignored_errors)))
{
 $this->reconnect();
 $fehler = 0;
 switch($unbuffered)
 {
  case true: $result = @mysqli_query($this->conn, $sql, MYSQLI_USE_RESULT) or $fehler = 1; break;
  default:   $result = @mysqli_query($this->conn, $sql, MYSQLI_STORE_RESULT) or $fehler = 1; break;
 }
}


  public function reconnect()
  {
   $this->disconnect();
   $this->connect($this->persistent);
  }


  if(!$rw)
  {
   for($retry = 1; $retry <= 300; $retry++)
   {
    /** Initialize */
    $this->conn = @mysqli_init();
    /** SSL encryption if enabled and using TCP */
    if($this->ssl && $this->host != 'localhost')
    {
     /** If no certificate is configured, set a dummy entry for the
         default handshake, otherwise no encryption gets established */
     if($this->ssl_crt === '')
     {
      $this->ssl_crt = 'dummy.crt';
     }
     /** Enable SSL */
     $this->conn->ssl_set($this->ssl_key, $this->ssl_crt, $this->ssl_ca, NULL,
      'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:RSA-AES256-SHA');
    }
    /** Establish the connection */
    switch($persistent)
    {
     case 1:  $rw = @mysqli_real_connect($this->conn, 'p:' . $this->host, $this->user, $this->pwd, $this->db, $this->port, '', $flags); break;
     default: $rw = @mysqli_real_connect($this->conn, $this->host, $this->user, $this->pwd, $this->db, $this->port, '', $flags); break;
    }
    /** If the server is reachable again, tear the connection down and
        build it up once more, to avoid errors while the service is
        still initializing */
    if($rw)
    {
     /** Initialize */
     $this->conn = @mysqli_init();
     /** SSL encryption if enabled and using TCP */
     if($this->ssl && $this->host != 'localhost')
     {
      /** If no certificate is configured, set a dummy entry for the
          default handshake, otherwise no encryption gets established */
      if($this->ssl_crt === '')
      {
       $this->ssl_crt = 'dummy.crt';
      }
      /** Enable SSL */
      $this->conn->ssl_set($this->ssl_key, $this->ssl_crt, $this->ssl_ca, NULL,
       'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:RSA-AES256-SHA');
     }
     /** Establish the connection */
     switch($persistent)
     {
      case 1:  $rw = @mysqli_real_connect($this->conn, 'p:' . $this->host, $this->user, $this->pwd, $this->db, $this->port, '', $flags); break;
      default: $rw = @mysqli_real_connect($this->conn, $this->host, $this->user, $this->pwd, $this->db, $this->port, '', $flags); break;
     }
     break;
    }
    usleep(5);
   }
   /** Despite repeated attempts no connection could be established */
   if(!$rw)
   {
    $host_resolved = @gethostbyaddr($this->host);
    if(!$host_resolved)
    {
     $host_resolved = $this->host;
    }
    $this->conn = 0;
    $this->error('Connection to database server <span style="white-space:nowrap">&quot;' . trim($host_resolved . ':' . $this->port . ' ' . $this->db) . '&quot;</span> could not be established.<br /><br />' . mysqli_connect_error());
   }
  }



signature.asc
Description: OpenPGP digital signature


Re: question?

2015-01-08 Thread Ron Piggott
The only way I could see this working would be to write the form data to a
temporary text file (a queue), then use a cron job to update the database.

On Thu, January 8, 2015 10:01 am, bruce wrote:
 hey.

 within php (or any other language)

 is there a way to create the mysql sql, and execute the sql, where the
 process can wait until the network connection for the mysql
 command/process is actually valid?

 IE (phpesque)
 $pdo = new PDO();
 $sql = 'SELECT * FROM foo WHERE a = :a';
 $s = $pdo->prepare($sql);
 $s->bindValue(':a', $one);
 $s->execute();


 The issue we're seeing, is that the network (it's all wifi) is really
 bad.. and the app machine, might not have a connection to the db box.

 We're wondering if there's a way to simply have mysql/php detect when
 a valid connection is there, and then proceed.

 We've thought about the try/catch process, any others?


 Thanks


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Question regarding creating a query

2013-07-30 Thread Rick James
 I have to update the query every time.
Therein lies the difficulty with the schema design.

You could write a stored procedure to locate all the tables (use 
information_schema.TABLES, etc) and build the UNION, and finally execute it.  
The SP would have something very remotely like the foreach you suggested.
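A rough sketch of such a stored procedure (illustrative only; it assumes the common.dbnames table and the tb1/tb2 layout described in the quoted question, and the names are the questioner's, not a tested implementation):

```sql
-- Sketch: iterate over common.dbnames, build one big UNION,
-- then run it through a prepared statement.
DELIMITER $$
CREATE PROCEDURE query_all_dbs()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE dbname VARCHAR(64);
  DECLARE cur CURSOR FOR SELECT name FROM common.dbnames;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  SET @sql = '';
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO dbname;
    IF done THEN LEAVE read_loop; END IF;
    IF @sql <> '' THEN SET @sql = CONCAT(@sql, ' UNION '); END IF;
    -- Each branch yields rows like 'db1-<a>'
    SET @sql = CONCAT(@sql,
      'SELECT CONCAT(''', dbname, '-'', a) AS a FROM `', dbname,
      '`.tb1 WHERE (SELECT flag1 FROM `', dbname, '`.tb2) = ''y''');
  END LOOP;
  CLOSE cur;

  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
```

Adding a new database then only requires a new row in common.dbnames; the procedure picks it up on the next call.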

 -Original Message-
 From: Sukhjinder K. Narula [mailto:narula...@gmail.com]
 Sent: Tuesday, July 30, 2013 11:13 AM
 To: mysql@lists.mysql.com
 Subject: Question regarding creating a query
 
 Hello,
 
 I have a question regarding creating a query as follows:
 
 I have several databases (all with the same structure), which I want to
 query. For instance:
 
 db1, db2, db3 - all have table tb1 with field a, b and table tb2 with
 fields flag1, flag2
 
 So I want to query and get field a from tb1 for all db's. One way to do it
 is a union, i.e.
 
 SELECT a FROM db1.tb1 WHERE (SELECT flag1 FROM db1.tb2) = 'y'
 UNION
 SELECT a FROM db2.tb1 WHERE (SELECT flag1 FROM db2.tb2) = 'y'
 UNION
 SELECT a FROM db3.tb1 WHERE (SELECT flag1 FROM db3.tb2) = 'y'
 
 But the problem here is that if I add more db's, I have to update the
 query every time.
 
 In addition to above, I also have a database e.g. common, which has a
 table called dbnames with field name, that keeps the name of all the
 databases I have (db1, db2, db3).
 
 So, what I would like to do is query the common db to get the names of the
 db's and then run the select query on each db.
 
 So here is the pseudocode of what I want to do:
 
 
 for each (SELECT name AS DbName FROM common.dbnames)
 
 (SELECT a FROM DbName.tb1 WHERE (SELECT flag1 FROM DbName.tb2) = 'y')  AS
 CONCAT(DbName, '-', a)
 
 
 Could you please advise whether this is possible and, if yes, how it can
 be achieved.
 
 Many Thanks,
 
 SK

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Question regarding creating a query

2013-07-30 Thread hsv
 2013/07/30 14:12 -0400, Sukhjinder K. Narula 
I have several databases (all with the same structure), which I want to
query. For instance:

db1, db2, db3 - all have table tb1 with field a, b and table tb2 with
fields flag1, flag2

So I want to query and get field a from tb1 for all db's. One way to do it
is a union, i.e.

SELECT a FROM db1.tb1 WHERE (SELECT flag1 FROM db1.tb2) = 'y'
UNION
SELECT a FROM db2.tb1 WHERE (SELECT flag1 FROM db2.tb2) = 'y'
UNION
SELECT a FROM db3.tb1 WHERE (SELECT flag1 FROM db3.tb2) = 'y'

But the problem here is that if I add more db's, I have to update the query
every time.

In addition to above, I also have a database e.g. common, which has a table
called dbnames with field name, that keeps the name of all the databases I
have (db1, db2, db3).

So, what I would like to do is query the common db to get the names of the
db's and then run the select query on each db.

So here is the pseudocode of what I want to do:


for each (SELECT name AS DbName FROM common.dbnames)

(SELECT a FROM DbName.tb1 WHERE (SELECT flag1 FROM DbName.tb2) = 'y')  AS
CONCAT(DbName, '-', a)

Well, you could build up the united query in a string and pass it to PREPARE.
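A sketch of that approach (it assumes the common.dbnames table from the question; note that GROUP_CONCAT output is capped by group_concat_max_len, 1024 bytes by default, so raise that for many databases):

```sql
-- Build the whole UNION in one string, then PREPARE/EXECUTE it.
SELECT GROUP_CONCAT(
         CONCAT('SELECT a FROM `', name, '`.tb1 ',
                'WHERE (SELECT flag1 FROM `', name, '`.tb2) = ''y''')
         SEPARATOR ' UNION ')
  INTO @sql
  FROM common.dbnames;

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```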


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Question about Innodb

2013-02-04 Thread Rick James
Meta info about the tables is stored in ibdata1.  Hence, it is not possible to 
copy just the .ibd file to another database or machine.  5.6.x will remedy this 
with some export/import commands that do not involve reading/writing the rows 
individually.  (Ditto for moving partitions.)

(Sorry, I don't know the exact fields in ibdata1.) 
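For reference, a sketch of how the 5.6 transportable-tablespace workflow looks (table name illustrative; not necessarily the exact commands Rick refers to):

```sql
-- On the source server: quiesce the table so its .ibd is copyable.
FLUSH TABLES t FOR EXPORT;
-- (now copy t.ibd and t.cfg from the source datadir)
UNLOCK TABLES;

-- On the destination server, with an identically defined table t:
ALTER TABLE t DISCARD TABLESPACE;
-- (place the copied t.ibd and t.cfg into the destination datadir)
ALTER TABLE t IMPORT TABLESPACE;
```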

 -Original Message-
 From: Wayne Leutwyler [mailto:wleut...@columbus.rr.com]
 Sent: Monday, February 04, 2013 11:47 AM
 To: mysql@lists.mysql.com
 Subject: Question about Innodb
 
 Question about InnoDB tables and tablespaces.
 
 I have one file per table turned on. It's my understanding that even
 with one file per table turned on, data is still updated in the default
 system ibdata files. What type of data is stored in the ibdata files?
 
 Thanks,
 
 Walter Wayne Leutwyler, RHCT
 Sr. MySQL Database Administrator
 Mobile: 614 519 5672
 Office: 614 889 4956
 E-mail: wayne.leutwy...@gmail.com
 E-mail: wleut...@columbus.rr.com
 Website: http://penguin-workshop.dyndns.org
 
 Courage is being scared to death, but saddling up anyway. --John
 Wayne


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Question about testing memcmp()

2012-06-25 Thread Reindl Harald


On 25.06.2012 at 06:17, Sabika Makhdoom wrote:
 I want to test our memcmp() binaries to see if we have the mysql binaries 
 that are impacted by the recent security breach. How do I test it?

why do you not simply update?



signature.asc
Description: OpenPGP digital signature


RE: Question about testing memcmp()

2012-06-25 Thread Stillman, Benjamin
By recent security breach, do you mean the issue with passwords? If so: 
http://www.dbasquare.com/2012/06/11/a-security-flaw-in-mysql-authentication-is-your-system-vulnerable/

for i in `seq 1 2000`; do mysql -u USERNAME --password=INCORRECTPASSWORD -h HOSTNAME ; done

If you get in using that, you're vulnerable.

But as Harald said, why not just update? Always better to be patched and 
up-to-date.


Ben Stillman
bstill...@limitedbrands.com


-Original Message-
From: Sabika Makhdoom [mailto:sabika.makhd...@gmail.com]
Sent: Monday, June 25, 2012 12:17 AM
To: mysql@lists.mysql.com
Subject: Question about testing memcmp()

I want to test our memcmp() binaries to see if we have the mysql binaries that 
are impacted by the recent security breach. How do I test it?

Thanks


Life was meant to be mobile
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql




Notice: This communication may contain privileged and/or confidential 
information. If you are not the intended recipient, please notify the sender by 
email, and immediately delete the message and any attachments without copying 
or disclosing them. LBI may, for any reason, intercept, access, use, and 
disclose any information that is communicated by or through, or which is stored 
on, its networks, applications, services, and devices.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Question about deleting data and the undo log

2011-12-02 Thread Rik Wasmus
 Does drop table use the undo log (rollback segment) to temporarily store
 records to be purged later, the way delete from table does?

As 'DROP TABLE' causes an implicit commit 
(http://dev.mysql.com/doc/refman/5.5/en/implicit-commit.html), I would highly 
suspect that it doesn't. You cannot roll it back...
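A quick sketch that shows the effect (table name illustrative):

```sql
-- DROP TABLE commits the transaction implicitly before running,
-- so the subsequent ROLLBACK has nothing to undo.
START TRANSACTION;
DROP TABLE t;   -- implicit COMMIT happens here
ROLLBACK;       -- too late: t stays dropped
```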
-- 
Rik Wasmus

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Question relating to transactions on innodb tables.

2011-11-25 Thread Reindl Harald


On 25.11.2011 at 14:20, Machiel Richards - Gmail wrote:
 Just a quick question relating to the use of transactions on innodb tables.
 
 We are doing some archiving on some innodb tables; however, there seem to
 be some issues somewhere in the process, with data not being updated
 accordingly.
 
 We would like to make use of transactions for this, in order to allow us
 to roll back the changes if the after-checks do not correspond.
 
 What I am trying to find out is whether a transaction will cause the locks
 to be handled as table locks or as row-level locks?

which locks are you speaking about?

a transaction is a transaction
a lock is a lock



signature.asc
Description: OpenPGP digital signature


Re: [question]any performance tools about UPDATE

2011-09-23 Thread Prabhat Kumar
I don't think there is anything other than SHOW FULL PROCESSLIST, which shows
what state a query is in (e.g. whether it is locked).
I/O-related things you can check at the OS level.

On Thu, Sep 22, 2011 at 11:07 PM, jiangwen jiang jiangwen...@gmail.com wrote:

 Hi,

 Are there any performance tools for UPDATE/INSERT queries?
 I want to monitor UPDATE/INSERT performance and check whether there's any
 performance bottleneck, for example:
 slow INSERT/UPDATE
 more I/O when executing INSERT

 Regards

 Thanks
 J.W




-- 
Best Regards,

Prabhat Kumar
MySQL DBA

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat


Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Maria Arrea
The server hosting bacula and the database only has one kind of disk: SATA;
maybe I should buy a couple of SSDs for mysql.

 I have read all your mails, and I am still not sure if I should enable
innodb compression. My ibfile is 50 GB, though.

 Regards

 Maria


 Questions:
 1) Why are you putting your MySQL data on the same volume as your Bacula 
backups? Bacula does large sequential I/O, while MySQL will do random I/O 
based on the structure.





 What you want to do is:

 1) You have 5MB InnoDB log files; that's a major bottleneck. I would use at 
least 256MB or 512MB x 2 InnoDB log files.
 2) Dump and import the database using innodb_file_per_table so that 
optimization will free up space.
 3) Are you running Bacula on the server as well? If so, decrease the buffer 
pool to 1-2GB; if not, bump it up to 3GB, as you need some memory for bacula.

 And 4, this is the most important one:
 How big is your MySQL data? It's not that big, I figure in the 80-100GB 
range. Get yourself a pair of 240GB SSDs and mount them locally for MySQL.

 S



 On Tue, Sep 13, 2011 at 21:19, Suresh Kuna  sureshkumar...@gmail.com  wrote:
 I would recommend going for a 15K rpm SSD RAID-10 to keep the mysql data,
 adding the Barracuda file format with innodb-file-per-table settings, and 3
 to 4 GB of innodb buffer pool depending on the ratio of myisam vs. innodb in
 your db. Check the current stats and reduce the tmp and heap table sizes to
 a lower value, and reduce the remaining buffers and caches as well.

 On Tue, Sep 13, 2011 at 9:06 PM, Maria Arrea  maria_ar...@gmx.com  wrote:

  Hello
 
  I have upgraded our backup server from mysql 5.0.77 to mysql 5.5.15. We
  are using bacula as backup software, and all the info from backups is stored
  in a mysql database. Today I have upgraded from mysql 5.0 to 5.5 using IUS
  repository RPMS and with mysql_upgrade procedure, no problem so far. This
  backup systems hold the bacula daemon, the mysql server and the backup of
  other 100 systems (Solaris/Linux/Windows)
 
  Our server has 6 GB of ram, 1 quad Intel Xeon E5520 and 46 TB of raid-6
  SATA disks (7200 rpm) connected to a Smart Array P812 controller  Red Hat
  Enterprise Linux 5.7 x64. Our mysql has dozens of millions of lines, and we
  are using InnoDB as the storage engine for bacula internal data. We add
  hundreds of thousands of lines/day to our mysql (files are incrementally
  backed up daily from our 100 servers). So, we have 7-8 concurrent writes
  (in different lines, of course), and theoretically we only read from mysql
  when we restore from backup.
 
  Daily we launch a cron job that executes an optimize table in each table
  of our database to compact the database. It takes almost an hour. We are
  going to increase the memory of the server from 6 to 12 GB in a couple of
  weeks, and I will change my.cnf to reflect more memory. My actual my.cnf is
  attached below:
 
 
  These are my questions:
 
 
  - We have real slow storage (raid 6 SATA), but plenty CPU and ram . Should
  I enable innodb compression to make this mysql faster?
  - This system is IOPS-constrained for mysql (fine for backup, though).
  Should I add a SSD only to hold mysql data?
  - Any additional setting I should use to tune this mysql server?
 
 
 
  my.cnf content:
 
  [client]
  port = 3306
  socket = /var/lib/mysql/mysql.sock
 
 
  [mysqld]
  innodb_flush_method=O_DIRECT
  max_connections = 15
  wait_timeout = 86400
  port = 3306
  socket = /var/lib/mysql/mysql.sock
  key_buffer = 100M
  max_allowed_packet = 2M
  table_cache = 2048
  sort_buffer_size = 16M
  read_buffer_size = 16M
  read_rnd_buffer_size = 12M
  myisam_sort_buffer_size = 384M
  query_cache_type=1
  query_cache_size=32M
  thread_cache_size = 16
  query_cache_size = 250M
  thread_concurrency = 6
  tmp_table_size = 1024M
  max_heap_table_size = 1024M
 
 
  skip-federated
  innodb_buffer_pool_size= 2500M
  innodb_additional_mem_pool_size = 32M
 
  [mysqldump]
  max_allowed_packet = 16M
 
  [mysql]
  no-auto-rehash
 
  [isamchk]
  key_buffer = 1250M
  sort_buffer_size = 384M
  read_buffer = 8M
  write_buffer = 8M
 
  [myisamchk]
  key_buffer = 1250M
  sort_buffer_size = 384M
  read_buffer = 8M
  write_buffer = 8M
 
  [mysqlhotcopy]
  interactive-timeout
 
 
  Regards
 
  Maria
 


--
 Thanks
 Suresh Kuna
 MySQL DBA
 -- The best compliment you could give Pythian for our service is a referral.


Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Reindl Harald


On 14.09.2011 at 09:50, Maria Arrea wrote:
  I have read all your mails, and still not sure if I should enable innodb 
 compression

if you have enough free CPU resources and I/O is your problem: simply yes,
because much less data is transferred from/to disk than uncompressed
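A sketch of what enabling compression involves on MySQL 5.5 (the table name is illustrative; Barracuda and file-per-table must be on before the ALTER):

```sql
-- Compression requires the Barracuda file format and
-- file-per-table tablespaces (both dynamic settings in 5.5).
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = ON;

-- Rebuild a table as compressed; KEY_BLOCK_SIZE=8 targets
-- half of the default 16K page size.
ALTER TABLE File ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```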







signature.asc
Description: OpenPGP digital signature


Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Maria Arrea
 [garbled SHOW TABLE STATUS output trimmed; the tables now show 
row_format=COMPRESSED KEY_BLOCK_SIZE=16]


 I am still benchmarking, but I see a 15-20% performance gain after enabling 
compression using bacula gui (bat).

 Regards

 Maria

- Original Message -
From: Maria Arrea
Sent: 09/14/11 09:50 AM
To: mysql@lists.mysql.com
Subject: Re: Question about slow storage and InnoDB compression

 [quoted text of the earlier messages in this thread trimmed]


Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Reindl Harald


On 14.09.2011 at 14:50, Maria Arrea wrote:
 I have finally enabled compression:
 I am still benchmarking, but I see a 15-20% performance gain after enabling 
 compression using bacula gui
as expected, if disk I/O is the only bottleneck;
the same holds for NTFS compression inside a VMware machine on modern hardware



signature.asc
Description: OpenPGP digital signature


Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Suresh Kuna
 [garbled SHOW TABLE STATUS output trimmed; the Storage, UnsavedFiles and
 Version tables now show row_format=COMPRESSED KEY_BLOCK_SIZE=16]


  I am still benchmarking, but I see a 15-20% performance gain after
 enabling compression using bacula gui (bat).

  Regards

  Maria

 - Original Message -
 From: Maria Arrea
 Sent: 09/14/11 09:50 AM
 To: mysql@lists.mysql.com
 Subject: Re: Question about slow storage and InnoDB compression

  [quoted text of the earlier messages in this thread trimmed]

Re: Question about slow storage and InnoDB compression

2011-09-13 Thread Suresh Kuna
I would recommend going for a 15K rpm SSD RAID-10 to keep the mysql data,
adding the Barracuda file format with innodb-file-per-table settings, and 3
to 4 GB of innodb buffer pool depending on the ratio of myisam vs. innodb in
your db. Check the current stats and reduce the tmp and heap table sizes to
a lower value, and reduce the remaining buffers and caches as well.

On Tue, Sep 13, 2011 at 9:06 PM, Maria Arrea maria_ar...@gmx.com wrote:

 Hello

  I have upgraded our backup server from mysql 5.0.77 to mysql 5.5.15. We
 are using bacula as backup software, and all the info from backups is stored
 in a mysql database. Today I have upgraded from mysql 5.0 to 5.5 using IUS
 repository RPMS and with mysql_upgrade procedure, no problem so far. This
 backup systems hold the bacula daemon, the mysql server and the backup of
 other 100 systems (Solaris/Linux/Windows)

  Our server has 6 GB of ram, 1 quad Intel Xeon E5520 and 46 TB of raid-6
 SATA disks (7200 rpm) connected to a Smart Array P812 controller  Red Hat
 Enterprise Linux 5.7 x64. Our mysql has dozens of millions of lines, and we
 are using InnoDB as the storage engine for bacula internal data. We add
 hundreds of thousands of lines/day to our mysql (files are incrementally
 backed up daily from our 100 servers). So, we have 7-8 concurrent writes
 (in different lines, of course), and theoretically we only read from mysql
 when we restore from backup.

  Daily we launch a cron job that executes an optimize table in each table
 of our database to compact the database. It takes almost an hour. We are
 going to increase the memory of the server from 6 to 12 GB in a couple of
 weeks, and I will change my.cnf to reflect more memory. My actual my.cnf is
 attached below:


  These are my questions:


  - We have really slow storage (raid-6 SATA) but plenty of CPU and ram. Should
 I enable innodb compression to make this mysql faster?
  - This system is IOPS-constrained for mysql (fine for backup, though).
 Should I add an SSD just to hold the mysql data?
  - Are there any additional settings I should use to tune this mysql server?



  my.cnf content:

  [client]
  port = 3306
  socket = /var/lib/mysql/mysql.sock


  [mysqld]
  innodb_flush_method=O_DIRECT
  max_connections = 15
  wait_timeout = 86400
  port = 3306
  socket = /var/lib/mysql/mysql.sock
  key_buffer = 100M
  max_allowed_packet = 2M
  table_cache = 2048
  sort_buffer_size = 16M
  read_buffer_size = 16M
  read_rnd_buffer_size = 12M
  myisam_sort_buffer_size = 384M
  query_cache_type=1
  query_cache_size=32M
  thread_cache_size = 16
  query_cache_size = 250M
  thread_concurrency = 6
  tmp_table_size = 1024M
  max_heap_table = 1024M


  skip-federated
  innodb_buffer_pool_size= 2500M
  innodb_additional_mem_pool_size = 32M

  [mysqldump]
  max_allowed_packet = 16M

  [mysql]
  no-auto-rehash

  [isamchk]
  key_buffer = 1250M
  sort_buffer_size = 384M
  read_buffer = 8M
  write_buffer = 8M

  [myisamchk]
  key_buffer = 1250M
  sort_buffer_size = 384M
  read_buffer = 8M
  write_buffer = 8M

  [mysqlhotcopy]
  interactive-timeout


  Regards

  Maria




-- 
Thanks
Suresh Kuna
MySQL DBA


Re: Question about slow storage and InnoDB compression

2011-09-13 Thread Suresh Kuna
Thanks for correcting me on the disk stats, Singer. That was a typo: SSD
instead of SAS 15k rpm.

Compression may not increase the memory requirements:
To minimize I/O and to reduce the need to uncompress a page, at times the
buffer pool contains both the compressed and uncompressed form of a database
page. To make room for other required database pages, InnoDB may “evict”
from the buffer pool an uncompressed page, while leaving the compressed page
in memory. Or, if a page has not been accessed in a while, the compressed
form of the page may be written to disk, to free space for other data. Thus,
at any given time, the buffer pool may contain both the compressed and
uncompressed forms of the page, or only the compressed form of the page, or
neither.
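For reference, on MySQL 5.5 compression is enabled per table once the Barracuda prerequisites are set. A minimal sketch; Bacula's File table and the 8K block size are illustrative assumptions, not taken from the thread:

```sql
-- Prerequisites for compressed tables (MySQL 5.5 / InnoDB plugin):
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = ON;

-- Then compression is chosen per table; e.g. for Bacula's largest table:
ALTER TABLE File ROW_FORMAT = COMPRESSED KEY_BLOCK_SIZE = 8;
```

Tables created before innodb_file_per_table was enabled live in the shared tablespace and would need to be rebuilt to pick up the new row format.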

More details on the benefits of the Barracuda file format can be found at
the URLs below, which help explain the pros and cons of each file format:

http://dev.mysql.com/doc/refman/5.6/en/glossary.html#glos_antelope
http://dev.mysql.com/doc/innodb/1.1/en/glossary.html#glos_barracuda
http://www.mysqlperformanceblog.com/2008/04/23/real-life-use-case-for-barracuda-innodb-file-format/
http://dev.mysql.com/doc/innodb/1.1/en/innodb-other-changes-file-formats.html

I would go with Singer's suggestions in the "What you want to do" part.

Thanks
Suresh Kuna


On Wed, Sep 14, 2011 at 7:21 AM, Singer X.J. Wang w...@singerwang.com wrote:

 Comments:
 1) There is no such thing as 15K RPM SSDs... SSDs are NON-ROTATIONAL
 STORAGE, therefore RPMs make no sense.
 2) Upgrading to the Barracuda file format isn't really worth it in this case;
 you're not going to get any real benefit. In your scenario I doubt InnoDB
 table compression will help, as it will significantly increase your memory
 requirements: it has to keep uncompressed and compressed copies in RAM.

 Questions:
 1) Why are you putting your MySQL data on the same volume as your Bacula
 backups? Bacula does large sequential I/O, while MySQL will do random I/O
 based on the structure.

 What you want to do is:

 1) You have 5MB InnoDB log files; that's a major bottleneck. I would use
 256MB or 512MB x 2 InnoDB log files.
 2) Dump and import the database using innodb_file_per_table, so that
 optimization will actually free up space.
 3) Are you running Bacula on the server as well? If so, decrease the buffer
 pool to 1-2GB, since you need some memory for Bacula; if not, bump it up
 to 3GB.

 And 4, this is the most important one:
 How big is your MySQL data? It's not that big; I figure in the 80-100GB
 range.  Get yourself a pair of 240GB SSDs and mount them locally for MySQL.
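Points 1 and 2 above translate into my.cnf roughly as follows. This is a sketch under stated assumptions, not a drop-in config; note that on 5.5 innodb_log_file_size can only be changed after a clean shutdown and removal of the old ib_logfile* files:

```ini
[mysqld]
innodb_file_per_table     = 1      # so OPTIMIZE TABLE actually returns space
innodb_log_file_size      = 256M   # x2 files; the 5MB default is the bottleneck
innodb_log_files_in_group = 2
innodb_buffer_pool_size   = 2G     # 1-2G if Bacula shares the box, ~3G if not
```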

 S





 On Tue, Sep 13, 2011 at 21:19, Suresh Kuna sureshkumar...@gmail.com wrote:

 I would recommend to go for a 15K rpm SSD raid-10 to keep the mysql data
 and
 add the Barracuda file format with innodb file per table settings, 3 to 4
 GB
 of innodb buffer pool depending the ratio of myisam v/s innodb in your db.
 Check the current stats and reduce the tmp and heap table size to a lower
 value, and reduce the remaining buffer's and cache as well.



 On Tue, Sep 13, 2011 at 9:06 PM, Maria Arrea maria_ar...@gmx.com wrote:

  Hello
 
   I have upgraded our backup server from mysql 5.0.77 to mysql 5.5.15. We
  are using bacula as backup software, and all the info from backups is
 stored
  in a mysql database. Today I have upgraded from mysql 5.0 to 5.5 using
 IUS
  repository RPMS and with mysql_upgrade procedure, no problem so far.
 This
  backup systems hold the bacula daemon, the mysql server and the backup
 of
  other 100 systems (Solaris/Linux/Windows)
 
   Our server has 6 GB of ram, 1 quad Intel Xeon E5520 and 46 TB of raid-6
  SATA disks (7200 rpm) connected to a Smart Array P812 controller  Red
 Hat
  Enterprise Linux 5.7 x64. Our mysql has dozens of millions of lines, and
 we
  are using InnoDB as storage engine for bacula internal data. We add
 hundred
  of thousands lines /day to our mysql (files are incrementally backed up
  daily from our 100 servers). So, we have a 7-8 concurrent writes (in
  different lines, of course) , and theorically we only read from mysql
 when
  we restore from backup.
 
   Daily we launch a cron job that executes an optimize table in each
 table
  of our database to compact the database. It takes almost an hour. We are
  going to increase the memory of the server from 6 to 12 GB in a couple
 of
  weeks, and I will change my.cnf to reflect more memory. My actual my.cnf
 is
  attached below:
 
 
   These are my questions:
 
 
   - We have real slow storage (raid 6 SATA), but plenty CPU and ram .
 Should
  I enable innodb compression to make this mysql faster?
   - This system is IOPS-constrained for mysql (fine for backup, though).
  Should I add a SSD only to hold mysql data?
   - Any additional setting I should use to tune this mysql server?
 
 
 
   my.cnf content:
 
   [client]
   port = 3306
   socket = /var/lib/mysql/mysql.sock
 
 
   [mysqld]
   innodb_flush_method=O_DIRECT
   max_connections = 15
   wait_timeout = 86400
   port = 

Re: Question about Backup

2011-03-22 Thread Johan De Meersman

You are assuming that the database is one table of 5.000 gigabyte, and not 
5.000 tables of one gigabyte; and that the backup needs to be consistent :-p


- Original Message -
 From: Reindl Harald h.rei...@thelounge.net
 To: mysql@lists.mysql.com
 Sent: Monday, 21 March, 2011 12:44:08 PM
 Subject: Re: Question about Backup
 
 Forget mysqldump because of TABLE LOCKS for such huge databases.
 I would set up a replication slave, because you can stop
 the slave and make a filesystem backup of the whole db folder
 while the production server is online; we have done this with our
 dbmail server since 2009.
 

-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question about Backup

2011-03-22 Thread Karen Abgarian
Why? If they shut down the slave, it will be quite consistent.  It's just that
this technique is not exactly 21st-century; it is more like 30 years old.
Placing locks is about the same as shutting it down.


On Mar 22, 2011, at 6:01 AM, Johan De Meersman wrote:

 
 You are assuming that the database is one table of 5.000 gigabyte, and not 
 5.000 tables of one gigabyte; and that the backup needs to be consistent :-p
 
 
 - Original Message -
 From: Reindl Harald h.rei...@thelounge.net
 To: mysql@lists.mysql.com
 Sent: Monday, 21 March, 2011 12:44:08 PM
 Subject: Re: Question about Backup
 
  Forget mysqldump because of TABLE LOCKS for such huge databases.
  I would set up a replication slave, because you can stop
  the slave and make a filesystem backup of the whole db folder
  while the production server is online; we have done this with our
  dbmail server since 2009.
 
 
 -- 
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel
 
 





Re: Question about Backup

2011-03-22 Thread Wm Mussatto
On Tue, March 22, 2011 10:32, Karen Abgarian wrote:
 Why? If they shut down the slave, it will be quite consistent.  It's just
 that this technique is not exactly 21st-century; it is more like 30 years
 old.
 Placing locks is about the same as shutting it down.

Ah, but if you have the dump function do the locking, it will also remember
to restart it when it's done.
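For InnoDB tables there is also a middle ground: mysqldump can take a consistent snapshot without holding table locks at all. A hedged sketch; the database name is a placeholder, not taken from the thread:

```shell
# --single-transaction takes a consistent InnoDB snapshot with no table locks;
# --master-data=2 records the binlog position as a comment, useful for
# point-in-time recovery later. Build the command, review it, then run it.
DB="bacula"   # placeholder database name
CMD="mysqldump --single-transaction --master-data=2 $DB"
echo "$CMD"   # review, then run: $CMD > ${DB}_dump.sql
```

This only gives a consistent view of InnoDB tables; mixed MyISAM/InnoDB schemas still need locking or a stopped slave.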

 On Mar 22, 2011, at 6:01 AM, Johan De Meersman wrote:


 You are assuming that the database is one table of 5.000 gigabyte, and
 not 5.000 tables of one gigabyte; and that the backup needs to be
 consistent :-p


 - Original Message -
 From: Reindl Harald h.rei...@thelounge.net
 To: mysql@lists.mysql.com
 Sent: Monday, 21 March, 2011 12:44:08 PM
 Subject: Re: Question about Backup

  Forget mysqldump because of TABLE LOCKS for such huge databases.
  I would set up a replication slave, because you can stop
  the slave and make a filesystem backup of the whole db folder
  while the production server is online; we have done this with our
  dbmail server since 2009.


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel












Re: Question about Backup

2011-03-21 Thread Reindl Harald
Forget mysqldump because of TABLE LOCKS for such huge databases.
I would set up a replication slave, because you can stop
the slave and make a filesystem backup of the whole db folder
while the production server is online; we have done this with our
dbmail server since 2009.

On 21.03.2011 12:23, Pedro Nuñez wrote:
 Hi
 
 I need to set up a backup strategy for a mysql database on an ubuntu server;
 the database will grow up to 5TB.
 What would be the best option? Maybe a script that uses mysqldump? Is there
 a better way to do this?
 
 Thanks in advance to all
 Pedro.

-- 

Mit besten Grüßen, Reindl Harald
the lounge interactive design GmbH
A-1060 Vienna, Hofmühlgasse 17
CTO / software-development / cms-solutions
p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
icq: 154546673, http://www.thelounge.net/





Re: Question about Backup

2011-03-21 Thread Karen Abgarian
Hi,  

A statement like 'I need to back up a 5T database' is not a backup strategy.
It is an intention.  There are some specifics that have to be determined to
work out a strategy.  Going from there, the backup solution can be chosen.
Examples of the questions one typically asks when determining a strategy are
as follows:

1.  What is my allowed data loss in time units, e.g. hours and minutes?
2.  What are my availability requirements for the data to be brought back up,
that is, how quickly do I want to recover from backup?
3.  Do I need to be able to recover as of discrete points in time, or as of
any continuous time range?
4.  What are my availability requirements during backups?


Tx
Karen.


On Mar 21, 2011, at 4:44 AM, Reindl Harald wrote:

 Forget mysqldump because of TABLE LOCKS for such huge databases.
 I would set up a replication slave, because you can stop
 the slave and make a filesystem backup of the whole db folder
 while the production server is online; we have done this with our
 dbmail server since 2009.
 
 On 21.03.2011 12:23, Pedro Nuñez wrote:
 Hi
 
  I need to set up a backup strategy for a mysql database on an ubuntu server;
  the database will grow up to 5TB.
  What would be the best option? Maybe a script that uses mysqldump? Is there
  a better way to do this?
 
 Thanks in advance to all
 Pedro.
 
 -- 
 
 Mit besten Grüßen, Reindl Harald
 the lounge interactive design GmbH
 A-1060 Vienna, Hofmühlgasse 17
 CTO / software-development / cms-solutions
 p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
 icq: 154546673, http://www.thelounge.net/
 





Re: Question about Backup

2011-03-21 Thread Karen Abgarian
That would be the last question :-)  Suppose we worked out the strategy, lined
up the solutions along with their costs, and then compared them with our
budget.  It would be easy to find the one we can afford, and we would know
what we could only dream about :-).

On Mar 21, 2011, at 11:28 AM, Singer X.J. Wang wrote:

 Also, very important but often not asked:
 
 1) What's my budget?
 
 
 On Mon, Mar 21, 2011 at 14:24, Karen Abgarian a...@apple.com wrote:
 Hi,
 
 A statement like 'I need to back up a 5T database' is not a backup
 strategy.  It is an intention.  There are some specifics that have to be
 determined to work out a strategy.  Going from there, the backup solution
 can be chosen.  Examples of the questions one typically asks when
 determining a strategy are as follows:
 
 1.  What is my allowed data loss in time units, e.g. hours and minutes?
 2.  What are my availability requirements for the data to be brought back
 up, that is, how quickly do I want to recover from backup?
 3.  Do I need to be able to recover as of discrete points in time, or as of
 any continuous time range?
 4.  What are my availability requirements during backups?
 
 
 Tx
 Karen.
 
 
 On Mar 21, 2011, at 4:44 AM, Reindl Harald wrote:
 
   Forget mysqldump because of TABLE LOCKS for such huge databases.
   I would set up a replication slave, because you can stop
   the slave and make a filesystem backup of the whole db folder
   while the production server is online; we have done this with our
   dbmail server since 2009.
 
  On 21.03.2011 12:23, Pedro Nuñez wrote:
  Hi
 
   I need to set up a backup strategy for a mysql database on an ubuntu server;
   the database will grow up to 5TB.
   What would be the best option? Maybe a script that uses mysqldump? Is there
   a better way to do this?
 
  Thanks in advance to all
  Pedro.
 
  --
 
  Mit besten Grüßen, Reindl Harald
  the lounge interactive design GmbH
  A-1060 Vienna, Hofmühlgasse 17
  CTO / software-development / cms-solutions
  p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
  icq: 154546673, http://www.thelounge.net/
 
 
 
 
 
 --
 The best compliment you could give Pythian for our service is a referral.
 



Re: Question about performance

2011-03-01 Thread András Lukács
Or you can interrupt the query instead, although I've seen it fail to
work on occasion: KILL QUERY id;





Re: Question about performance

2011-02-17 Thread Michael Dykman
from the mysql console:   show processlist
this will show you the ids of all active connections, even the dead ones

then, again from the console:   kill <processid>

On Thu, Feb 17, 2011 at 3:52 PM, Rafael Valenzuela rav...@gmail.com wrote:
 Hi all;
 I wonder if there is any tool for performance-tuning queries. I would also
 like to know if there is any way to kill connections that have been dead for
 x hours (for example, 1 hour).

 --
 Mit forever
 My Blog http://www.redcloverbi.wordpress.com
My Favorite Web
http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/index.htm
 http://www.technologyreview.com/




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.




Re: Question about performance

2011-02-17 Thread Rafael Valenzuela
Hi Michael:

Yeah, I think I'll do a shell script, something like this:

require 'mysql'

mysql = Mysql.new(ip, user, pass)
processlist = mysql.query("SHOW FULL PROCESSLIST")
killed = 0
processlist.each { |process|
  mysql.query("KILL #{process[0].to_i}")
  killed += 1
}
puts "#{Time.new} -- killed: #{killed} connections"



2011/2/17 Michael Dykman mdyk...@gmail.com

 from the mysql console:   show processlist
 this will show you ids of all active connections, even the dead ones

 then, again from the console   kill processid

 On Thu, Feb 17, 2011 at 3:52 PM, Rafael Valenzuela rav...@gmail.com
 wrote:
  Hi all;
  I wonder if there is any tool for performance-tuning queries. I would also
  like to know if there is any way to kill connections that have been dead
  for x hours (for example, 1 hour).
 
  --
  Mit forever
  My Blog http://www.redcloverbi.wordpress.com
  My Faborite Web
 http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/index.htm
 
  http://www.technologyreview.com/
 



 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.




-- 
Mit forever
My Blog http://www.redcloverbi.wordpress.com
My Faborite 
Webhttp://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/index.htm
http://www.technologyreview.com/


Re: Question about performance

2011-02-17 Thread Michael Dykman
Rafael,

You realize that script will kill perfectly well-behaved queries in
mid-flight?  If you have so many dead connections that it is interfering
with operation, you have another problem elsewhere..
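If automated killing is really wanted, the selection could at least filter on idle time rather than killing everything. A hedged Ruby sketch of the selection logic only; the row layout mirrors SHOW FULL PROCESSLIST (Id, User, Host, db, Command, Time, State, Info) and the sample rows are invented:

```ruby
MAX_IDLE = 3600  # example threshold: one hour

# Return the connection ids that have been idle ("Sleep") longer than
# max_idle seconds; active queries and recently idle connections survive.
def ids_to_kill(processlist, max_idle = MAX_IDLE)
  processlist.select { |row| row[4] == "Sleep" && row[5] > max_idle }
             .map { |row| row[0] }
end

ROWS = [
  [1, "app", "10.0.0.5", "mydb", "Query", 3,    "Sending data", "SELECT 1"],
  [2, "app", "10.0.0.6", "mydb", "Sleep", 7200, "",             nil],
  [3, "app", "10.0.0.7", "mydb", "Sleep", 30,   "",             nil],
]
puts ids_to_kill(ROWS).inspect  # prints [2]
```

Each returned id would then be fed to KILL, as in Rafael's script.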

 - md

On Thu, Feb 17, 2011 at 4:16 PM, Rafael Valenzuela rav...@gmail.com wrote:

 Hi Michael:

 Yeah , i think that i do a shell script.something like that.

 require 'mysql'

  mysql = Mysql.new(ip, user, pass)
  processlist = mysql.query("SHOW FULL PROCESSLIST")
  killed = 0
  processlist.each { |process|
    mysql.query("KILL #{process[0].to_i}")
    killed += 1
  }
  puts "#{Time.new} -- killed: #{killed} connections"



 2011/2/17 Michael Dykman mdyk...@gmail.com

 from the mysql console:   show processlist
 this will show you ids of all active connections, even the dead ones

  then, again from the console   kill processid

 On Thu, Feb 17, 2011 at 3:52 PM, Rafael Valenzuela rav...@gmail.com
 wrote:
  Hi all;
  I wonder if there is any tool for performance-tuning queries. I would also
  like to know if there is any way to kill connections that have been dead
  for x hours (for example, 1 hour).
 
  --
  Mit forever
  My Blog http://www.redcloverbi.wordpress.com
  My Faborite Web
 http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/index.htm
 
  http://www.technologyreview.com/
 



 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.




 --
 Mit forever
 My Blog http://www.redcloverbi.wordpress.com
 My Faborite 
 Webhttp://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/index.htm
 http://www.technologyreview.com/






-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Question about performance

2011-02-17 Thread Reindl Harald
I have been working with mysql for many years and I have never
found a reason to kill braindead connections - what
benefit do you think you get from such actions, instead of
looking into why there are hanging ones?

Kill a connection of postfix and some user gets a
temporary lookup error; php scripts are closing
connections after the request.

If you have long-living connections from php,
you have persistent connections active - so
why do you not disable that feature if you do
not like it, instead of writing dirty scripts?

On 17.02.2011 22:16, Rafael Valenzuela wrote:
 Hi Michael:
 
 Yeah , i think that i do a shell script.something like that.
 
 require 'mysql'
 
  mysql = Mysql.new(ip, user, pass)
  processlist = mysql.query("SHOW FULL PROCESSLIST")
  killed = 0
  processlist.each { |process|
    mysql.query("KILL #{process[0].to_i}")
    killed += 1
  }
  puts "#{Time.new} -- killed: #{killed} connections"





RE: Question about database value checking

2011-02-04 Thread Peter He

Are you using the strict SQL mode?  Check your my.cnf file.
 
Peter
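Concretely, the behaviours Andy describes are the non-strict coercions. A sketch of what changes under strict mode; the exact error numbers are from memory and should be verified against your server version:

```sql
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

UPDATE tester_table SET card_num = NULL WHERE acnt = 'test1';
-- ERROR 1048: Column 'card_num' cannot be null

INSERT INTO tester_table2 (acnt, required) VALUES ('A123456', 'THIS IS NOT REAL');
-- ERROR 1265: Data truncated for column 'required' at row 1
```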
 
 Date: Fri, 4 Feb 2011 14:08:01 -0800
 From: awall...@ihouseweb.com
 To: mysql@lists.mysql.com
 Subject: Question about database value checking
 
 So, a problem popped up today that has caused us no end of hair-pulling, and
 it brought to mind a similar issue that I found very, well, wrong.
 
 If you have a table defined:
 
 CREATE TABLE `tester_table` (
 `acnt` varchar(20) NOT NULL DEFAULT '',
 `method` varchar(10) NOT NULL DEFAULT '',
 `card_num` varchar(100) NOT NULL DEFAULT '',
 PRIMARY KEY (`acnt`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1
 
 And try this:
 
 INSERT INTO tester_table (acnt, method, card_num)
 VALUES ('test1', 'push', NULL);
 
 That fails and gives a nice error. But:
 
 INSERT INTO tester_table (acnt, method, card_num)
 VALUES ('test1', 'push', 'A12345');
 
 UPDATE tester_table set card_num = NULL WHERE acnt = 'test1';
 
 That succeeds, but it puts an empty string into the card_num column. I
 would have thought (hoped) that an error would be thrown in that case as
 well. On a similar note, the following table:
 
 CREATE TABLE `tester_table2` (
 `acnt` varchar(20) NOT NULL,
 `required` enum('NO','DETAILS','RESULTS') NOT NULL,
 PRIMARY KEY (`acnt`)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8
 
 Lets you insert:
 
 INSERT INTO tester_table2 (acnt, required) VALUES ('A123456', 'THIS IS NOT 
 REAL');
 
 Though it just puts an empty string into the required column.
 
 Is there a setting for mysql to return errors in these cases? It seems silly 
 to set
 up an enum column, or a not null column, and not have the possible values 
 enforced?
 
 thanks,
 andy
 
 
 -- 
 Andy Wallace
 iHOUSEweb, Inc.
 awall...@ihouseweb.com
 (866) 645-7700 ext 219
 --
 There are two ways to build software:
 Make it so simple that there are obviously no bugs,
 or make it so complex that there are no obvious bugs.
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql?unsub=phe1...@hotmail.com
 
  

Re: Question about database value checking

2011-02-04 Thread Andy Wallace

Thanks Peter, exactly what I was hoping for!
andy

On 2/4/11 3:11 PM, Peter He wrote:


Are you using the strict SQL mode?  Check your my.cnf file.

Peter


Date: Fri, 4 Feb 2011 14:08:01 -0800
From: awall...@ihouseweb.com
To: mysql@lists.mysql.com
Subject: Question about database value checking

So, a problem popped up today that has caused us no end of hair-pulling, and
it brought to mind a similar issue that I found very, well, wrong.

If you have a table defined:

CREATE TABLE `tester_table` (
`acnt` varchar(20) NOT NULL DEFAULT '',
`method` varchar(10) NOT NULL DEFAULT '',
`card_num` varchar(100) NOT NULL DEFAULT '',
PRIMARY KEY (`acnt`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

And try this:

INSERT INTO tester_table (acnt, method, card_num)
VALUES ('test1', 'push', NULL);

That fails and gives a nice error. But:

INSERT INTO tester_table (acnt, method, card_num)
VALUES ('test1', 'push', 'A12345');

UPDATE tester_table set card_num = NULL WHERE acnt = 'test1';

That succeeds, but it puts an empty string into the card_num column. I
would have thought (hoped) that an error would be thrown in that case as
well. On a similar note, the following table:

CREATE TABLE `tester_table2` (
`acnt` varchar(20) NOT NULL,
`required` enum('NO','DETAILS','RESULTS') NOT NULL,
PRIMARY KEY (`acnt`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Lets you insert:

INSERT INTO tester_table2 (acnt, required) VALUES ('A123456', 'THIS IS NOT 
REAL');

Though it just puts an empty string into the required column.

Is there a setting for mysql to return errors in these cases? It seems silly to 
set
up an enum column, or a not null column, and not have the possible values 
enforced?

thanks,
andy


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.






--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.




Re: question about restoring...

2010-11-12 Thread Johan De Meersman
From the OP:

 I have a copy of the INNODB files for these two tables - is there a way
 to extract the table contents from these files short of a full import?


I have to agree, that's quite ambiguous. Andy, is it a copy of the innoDB
datafiles, or a database dump that you have ?

In the latter case, it's reasonably simple to extract what you need; in the
former case you're gonna have to try attaching them to a new instance - good
luck with that.



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: question about restoring...

2010-11-12 Thread Ananda Kumar
If you just need specific records, you can use the -w (--where) option of
mysqldump to extract only the specific records.
Then you can run the dump file into another db.

regards
anandkl
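A hedged sketch of that option; the table name and WHERE condition are placeholders, not taken from the thread:

```shell
# mysqldump --where (-w) dumps only rows matching the condition; the resulting
# file can then be replayed into a scratch database to pull out the records.
WHERE="JobId = 1234"                      # placeholder condition
CMD="mysqldump --where=\"$WHERE\" bacula File"
echo "$CMD"                               # review, then: eval "$CMD" > rows.sql
```

Note this only helps if the tables are attached to a running server; raw copied .ibd files would first have to be brought up in an instance.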

On Fri, Nov 12, 2010 at 2:35 PM, Johan De Meersman vegiv...@tuxera.be wrote:

 From the OP:

  I have a copy of the INNODB files for these two tables - is there a way
  to extract the table contents from these files short of a full import?
 

 I have to agree, that's quite ambiguous. Andy, is it a copy of the innoDB
 datafiles, or a database dump that you have ?

 In the latter case, it's reasonably simple to extract what you need; in the
 former case you're gonna have to try attaching them to a new instance -
 good
 luck with that.



 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: question about restoring...

2010-11-12 Thread Andy Wallace

Thanks, guys. I have copies of the innodb files. The boss went whole hog on
using zfs for everything, so backups of files are readily available. Looks
like I'll be having the db reconstituted...

thanks again

On 11/12/10 1:05 AM, Johan De Meersman wrote:

 From the OP:

I have a copy of the INNODB files for these two tables - is there a way
to extract the table contents from these files short of a full import?


I have to agree, that's quite ambiguous. Andy, is it a copy of the innoDB 
datafiles, or a database dump that you have ?

In the latter case, it's reasonably simple to extract what you need; in the 
former case you're gonna have to try attaching them to a new
instance - good luck with that.



--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.




RE: question about restoring...

2010-11-10 Thread Jerry Schwartz
I must have missed something way back in the beginning. I thought the original 
poster had a dump of his database.

 

I have no idea if you can attach the innodb files to another database.

 

I stand corrected.

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail: je...@gii.co.jp 

Web site: www.the-infoshop.com http://www.the-infoshop.com/ 

 

From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Wednesday, November 10, 2010 2:25 AM
To: Jerry Schwartz
Cc: Gavin Towey; Andy Wallace; mysql list
Subject: Re: question about restoring...

 

On Tue, Nov 9, 2010 at 11:39 PM, Jerry Schwartz je...@gii.co.jp wrote:

Then I guess it's a matter of preference. I'd rather edit a text file than
build a new instance of MySQL.


The way I parse that, you're saying that there is a way to reattach ibd files
to another database?



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel



RE: question about restoring...

2010-11-09 Thread Gavin Towey
No, you should import the data into another instance of mysql to extract the 
records.

Regards,
Gavin Towey


-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Tuesday, November 09, 2010 10:34 AM
To: mysql list
Subject: question about restoring...

So, I got a request this morning to recover some specific records for
a client. I just want a handful of records from a couple of tables here.
I have a copy of the INNODB files for these two tables - is there a way
to extract the table contents from these files short of a full import?

thanks,
andy


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.



This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humboldt Court, Sunnyvale, CA 94089, USA, FriendFinder.com




RE: question about restoring...

2010-11-09 Thread Gavin Towey
Not if he has the raw innodb files.


-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Tuesday, November 09, 2010 11:05 AM
To: Gavin Towey; 'Andy Wallace'; 'mysql list'
Subject: RE: question about restoring...

That's overkill.

You should be able to import the data into another database within the same
instance, unless the file is too big to handle.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Tuesday, November 09, 2010 1:50 PM
To: Andy Wallace; mysql list
Subject: RE: question about restoring...

No, you should import the data into another instance of mysql to extract the
records.

Regards,
Gavin Towey
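For reference, the "another instance" route generally means attaching the saved files to a server that can read them. A minimal sketch, assuming the server runs with innodb_file_per_table and you still have the exact original CREATE TABLE definition (the table name and columns below are placeholders, not from the thread):

```sql
-- Recovery instance: recreate the table with its exact original definition
-- (schema here is a placeholder -- use the real one).
CREATE TABLE client_records (
  id INT NOT NULL PRIMARY KEY,
  payload TEXT
) ENGINE=InnoDB;

-- Detach the empty tablespace, copy the saved .ibd file into the
-- database directory at the OS level, then attach the copied file.
ALTER TABLE client_records DISCARD TABLESPACE;
ALTER TABLE client_records IMPORT TABLESPACE;

-- Pull out just the rows that are needed.
SELECT * FROM client_records WHERE id BETWEEN 1 AND 100;
```

Caveat: on servers of this era, IMPORT TABLESPACE can refuse the file if the internal table ID does not match, which is why restoring the entire data directory (including ibdata1) into a scratch instance, as Gavin suggests, is the more reliable route.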


-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Tuesday, November 09, 2010 10:34 AM
To: mysql list
Subject: question about restoring...

So, I got a request this morning to recover some specific records for
a client. I just want a handful of records from a couple of tables here.
I have a copy of the INNODB files for these two tables - is there a way
to extract the table contents from these files short of a full import?

thanks,
Andy


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.













RE: question about restoring...

2010-11-09 Thread Jerry Schwartz
Then I guess it's a matter of preference. I'd rather edit a text file than 
build a new instance of MySQL.

On the other hand, if he has a development environment (as we all, of course, 
do) then he could futz around in there.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Tuesday, November 09, 2010 3:22 PM
To: Jerry Schwartz; 'Andy Wallace'; 'mysql list'
Subject: RE: question about restoring...

Not if he has the raw innodb files.


-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Tuesday, November 09, 2010 11:05 AM
To: Gavin Towey; 'Andy Wallace'; 'mysql list'
Subject: RE: question about restoring...

That's overkill.

You should be able to import the data into another database within the same
instance, unless the file is too big to handle.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Tuesday, November 09, 2010 1:50 PM
To: Andy Wallace; mysql list
Subject: RE: question about restoring...

No, you should import the data into another instance of mysql to extract the
records.

Regards,
Gavin Towey


-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Tuesday, November 09, 2010 10:34 AM
To: mysql list
Subject: question about restoring...

So, I got a request this morning to recover some specific records for
a client. I just want a handful of records from a couple of tables here.
I have a copy of the INNODB files for these two tables - is there a way
to extract the table contents from these files short of a full import?

thanks,
Andy


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.
















Re: question about restoring...

2010-11-09 Thread Johan De Meersman
On Tue, Nov 9, 2010 at 11:39 PM, Jerry Schwartz je...@gii.co.jp wrote:

 Then I guess it's a matter of preference. I'd rather edit a text file than
 build a new instance of MySQL.


The way I parse that, you're saying that there is a way to reattach .ibd
files to another database?


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: question about VIEWS in 5.1.x

2010-09-06 Thread Jangita

On 03/09/2010 9:27 p, Hank wrote:

On 02/09/2010 8:30 p, Hank wrote:


Simple question about views:



Hank,
Have you tried running away from the problem :-) by doing...

CREATE PROCEDURE `combo`(theid INT)
BEGIN
(SELECT * FROM table1 WHERE id = theid)
UNION
(SELECT * FROM table2 WHERE id = theid);
END$$

then calling it using

call combo(value);



Wow - thanks. This works perfectly.   I'm assuming I can use call
combo(value) in PHP and it returns the result set as if it were a
proper table?

-Hank


Yes it should work just like a query/table.

--
Jangita | +256 76 91 8383 | Y!  MSN: jang...@yahoo.com
Skype: jangita | GTalk: jangita.nyag...@gmail.com




Re: question about VIEWS in 5.1.x

2010-09-06 Thread Jangita

On 03/09/2010 9:26 p, Hank wrote:

On Fri, Sep 3, 2010 at 6:23 AM, Jangitajang...@jangita.com  wrote:

On 02/09/2010 8:30 p, Hank wrote:


Simple question about views:



Hank,
Have you tried running away from the problem :-) by doing...

CREATE PROCEDURE `combo`(theid INT)
BEGIN
(SELECT * FROM table1 WHERE id = theid)
UNION
(SELECT * FROM table2 WHERE id = theid);
END$$

then calling it using

call combo(value);



Wow - thanks. This works perfectly.   I'm assuming I can use call
combo(value) in PHP and it returns the result set as if it were a
proper table?

-Hank


Yes! it should work just like a table.

--
Jangita | +256 76 91 8383 | Y!  MSN: jang...@yahoo.com
Skype: jangita | GTalk: jangita.nyag...@gmail.com




Re: question about VIEWS in 5.1.x

2010-09-03 Thread Jangita

On 02/09/2010 8:30 p, Hank wrote:

Simple question about views:

I have a view such as:

  create view combo as
  select * from table1
  union
  select * from table2;

Where table1 and table2 are very large and identical and have a
non-unique key on field id.

when I do a:

select * from combo where id='value'  ;

the system seems to be doing a table scan of one or both tables.. I
can't even do an:

explain select * from combo where field='value' ;

the system seems to hang on the explain. SHOW PROCESSLIST says the
explain is in the "Sending data" state.

Issuing either one of the view components with the where clause
returns results in a fraction of a second (pretty much a full indexed
lookup)

I know when I used to use Oracle, the where clause would be applied to
all parts of the view, but in this case, I can't even figure out what
MySQL is trying to do.

(I've also tried UNION ALL with the same results).

Any suggestions on how to query both tables using the indexed and the
view at the same time?  That was my intention.

-Hank


Hank,
I think mysql is selecting ALL the records from both tables then 
applying the where clause to all the data from table 1 and table 2 (I 
think - guys correct me if I'm wrong)


Have you tried running away from the problem :-) by doing...

CREATE PROCEDURE `combo`(theid INT)
BEGIN
(SELECT * FROM table1 WHERE id = theid)
UNION
(SELECT * FROM table2 WHERE id = theid);
END$$

then calling it using

call combo(value);

?
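One practical note on entering the procedure: the mysql client would cut the body off at the first `;` unless the statement delimiter is changed around the CREATE. A version that can be pasted in as-is (table and column names follow the example above; the argument to CALL is arbitrary):

```sql
DELIMITER $$

CREATE PROCEDURE `combo`(theid INT)
BEGIN
  (SELECT * FROM table1 WHERE id = theid)
  UNION
  (SELECT * FROM table2 WHERE id = theid);
END$$

DELIMITER ;

CALL combo(42);
```

The point of the workaround is that the WHERE clause sits inside each branch, so both sides run as indexed lookups before the UNION combines them.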

--
Jangita | +256 76 91 8383 | Y!  MSN: jang...@yahoo.com
Skype: jangita | GTalk: jangita.nyag...@gmail.com




Re: question about VIEWS in 5.1.x

2010-09-03 Thread Shawn Green (MySQL)

On 9/3/2010 6:23 AM, Jangita wrote:

On 02/09/2010 8:30 p, Hank wrote:

Simple question about views:

I have a view such as:

  create view combo as
  select * from table1
  union
  select * from table2;

...

(I've also tried UNION ALL with the same results).
...


Hank,
I think mysql is selecting ALL the records from both tables then 
applying the where clause to all the data from table 1 and table 2 (I 
think - guys correct me if I'm wrong)

...



Jangita is correct. Read the bottom of
http://dev.mysql.com/doc/refman/5.1/en/view-algorithms.html


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN
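In short, the manual page explains the behavior Hank saw: a view containing UNION cannot use the MERGE algorithm, so MySQL falls back to TEMPTABLE -- the whole union is materialized into a temporary table first, and only then is the outer WHERE applied. A quick way to confirm this on your own server (names follow Hank's example):

```sql
-- UNION forces the TEMPTABLE algorithm even when MERGE is requested:
CREATE ALGORITHM = MERGE VIEW combo AS
  SELECT * FROM table1
  UNION ALL
  SELECT * FROM table2;

SHOW WARNINGS;  -- warns that the merge algorithm can't be used here

-- The outer WHERE is applied to the materialized result,
-- not pushed down into table1/table2:
EXPLAIN SELECT * FROM combo WHERE id = 1;
```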




Re: question about VIEWS in 5.1.x

2010-09-03 Thread Hank
On 02/09/2010 8:30 p, Hank wrote:

 Simple question about views:


 Hank,
 Have you tried running away from the problem :-) by doing...

 CREATE PROCEDURE `combo`(theid INT)
 BEGIN
(SELECT * FROM table1 WHERE id = theid)
UNION
(SELECT * FROM table2 WHERE id = theid);
END$$

 then calling it using

 call combo(value);


Wow - thanks. This works perfectly.   I'm assuming I can use call
combo(value) in PHP and it returns the result set as if it were a
proper table?

-Hank




Re: Question on http query

2010-06-02 Thread Michael Dykman
MySQL is a traditional relational database system.  It underlies
something like 80% (somebody correct me if I'm out of date here) of
the web applications populating the internet.  While some RDBMSs
offer extensions for REST-like HTTP interfaces, MySQL does not
support this directly.  It can be used in conjunction with a multitude
of languages and frameworks.

If you are just getting started on this path, you might want to look
at something like python or ruby or PHP, they all can handle HTTP
requests very efficiently and have nice interfaces to MySQL.

Best of Luck.

 - michael dykman

On Wed, Jun 2, 2010 at 7:15 PM, Kandy Wong kan...@triumf.ca wrote:
 Hi,

 I'd like to know if MySQL server supports http queries?
 Thanks.

 Kandy






-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.




Re: Question about DELETE

2010-03-18 Thread Johan De Meersman
Given that OP is talking about a single delete statement, I'm gonna be very
surprised if he manages to squeeze an intermediate commit in there :-)

For a single-statement delete on a single table, the indexes will be rebuilt
only once. I'm not entirely sure what happens to cascaded deletes, though.


On Thu, Mar 18, 2010 at 6:05 AM, Ananda Kumar anan...@gmail.com wrote:

 Hi,
 It depends on how frequently you're committing.
 If you have written a stored procedure with a loop and you commit after each row is
 deleted, then the indexes get updated for each row. Else, if you commit at the end of
 the loop, it commits only once for all the rows deleted.

 regards
 anandkl
 On Thu, Mar 18, 2010 at 1:21 AM, Price, Randall randall.pr...@vt.edu
 wrote:

  Hello,
 
  I have a simple question about deleting records from INNODB tables.  I
 have
  a master table with a few child tables linked via Foreign Key
 constraints.
   Each table has several indexes as well.
 
  My question is:  if I delete many records in a single delete statement
  (i.e., DELETE FROM table WHERE id = 1 AND id = 5 ... AND ID = 100) how
 many
  times are the foreign keys/indexes updated?
 
  Once for the entire DELETE operation or one time for each record that is
  deleted?
 
  Thanks,
 
  Randall Price
 
 




-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: Question about DELETE

2010-03-18 Thread Price, Randall
Thanks for your responses on this.

However, I suspect that the indexes are being rebuilt over and over during the 
mass delete operation.

If I delete a small number of records (i.e., DELETE FROM table WHERE id BETWEEN 
1 AND 5) it may only take a minute or so.

If I delete a large number of records (i.e., DELETE FROM table WHERE id BETWEEN 
1 AND 500) it may take upwards of an hour or more.

So what would cause this increased slowness the more records you delete, unless 
the indexing is happening multiple times?

Thanks,

-Randall Price


From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Thursday, March 18, 2010 6:48 AM
To: Ananda Kumar
Cc: Price, Randall; [MySQL]
Subject: Re: Question about DELETE

Given that OP is talking about a single delete statement, I'm gonna be very 
surprised if he manages to squeeze an intermediate commit in there :-)

For a single-statement delete on a single table, the indexes will be rebuilt 
only once. I'm not entirely sure what happens to cascaded deletes, though.

On Thu, Mar 18, 2010 at 6:05 AM, Ananda Kumar 
anan...@gmail.commailto:anan...@gmail.com wrote:
Hi,
It depends how frequently ur doing a commit.
If you have written a plsql, with loop and if you commit after each row is
deleted, then it get update for each row. Else if you commit at the end the
loop, it commits only once for all the rows deleted.

regards
anandkl
On Thu, Mar 18, 2010 at 1:21 AM, Price, Randall 
randall.pr...@vt.edumailto:randall.pr...@vt.eduwrote:

 Hello,

 I have a simple question about deleting records from INNODB tables.  I have
 a master table with a few child tables linked via Foreign Key constraints.
  Each table has several indexes as well.

 My question is:  if I delete many records in a single delete statement
 (i.e., DELETE FROM table WHERE id = 1 AND id = 5 ... AND ID = 100) how many
 times are the foreign keys/indexes updated?

 Once for the entire DELETE operation or one time for each record that is
 deleted?

 Thanks,

 Randall Price





--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: Question about DELETE

2010-03-18 Thread Ian Simpson
Hi Randall,

If you're talking about processes that are taking that long, then
running SHOW PROCESSLIST several times during the operation should give
you a rough idea what it is doing at each stage.

Also, do you have an index on the id column? It could just be taking a
long time to identify all the rows it needs to delete.
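Both of Ian's suggestions can be checked directly from a second connection (names below are placeholders; substitute the real table from the DELETE):

```sql
-- Run this several times while the DELETE is executing to watch its state:
SHOW PROCESSLIST;

-- Confirm whether the range condition can use the index on id:
EXPLAIN SELECT COUNT(*) FROM mytable WHERE id BETWEEN 1 AND 500;
```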

On Thu, 2010-03-18 at 10:03 -0400, Price, Randall wrote:
 Thanks for your responses on this.
 
 However, I suspect that the indexes are being rebuilt over and over during 
 the mass delete operation.
 
 If I delete a small number of records (i.e., DELETE FROM table WHERE id 
 BETWEEN 1 AND 5) it may only take a minute or so.
 
 If I delete a large number of records (i.e., DELETE FROM table WHERE id 
 BETWEEN 1 AND 500) it may take upwards of an hour or more.
 
 So what would cause this increased slowness the more records you delete, 
 unless the indexing is happening multiple times?
 
 Thanks,
 
 -Randall Price
 
 
 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
 Meersman
 Sent: Thursday, March 18, 2010 6:48 AM
 To: Ananda Kumar
 Cc: Price, Randall; [MySQL]
 Subject: Re: Question about DELETE
 
 Given that OP is talking about a single delete statement, I'm gonna be very 
 surprised if he manages to squeeze an intermediate commit in there :-)
 
 For a single-statement delete on a single table, the indexes will be rebuilt 
 only once. I'm not entirely sure what happens to cascaded deletes, though.
 
 On Thu, Mar 18, 2010 at 6:05 AM, Ananda Kumar 
 anan...@gmail.commailto:anan...@gmail.com wrote:
 Hi,
 It depends how frequently ur doing a commit.
 If you have written a plsql, with loop and if you commit after each row is
 deleted, then it get update for each row. Else if you commit at the end the
 loop, it commits only once for all the rows deleted.
 
 regards
 anandkl
 On Thu, Mar 18, 2010 at 1:21 AM, Price, Randall 
 randall.pr...@vt.edumailto:randall.pr...@vt.eduwrote:
 
  Hello,
 
  I have a simple question about deleting records from INNODB tables.  I have
  a master table with a few child tables linked via Foreign Key constraints.
   Each table has several indexes as well.
 
  My question is:  if I delete many records in a single delete statement
  (i.e., DELETE FROM table WHERE id = 1 AND id = 5 ... AND ID = 100) how many
  times are the foreign keys/indexes updated?
 
  Once for the entire DELETE operation or one time for each record that is
  deleted?
 
  Thanks,
 
  Randall Price
 
 
 
 
 
 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel


-- 
Ian Simpson
System Administrator
MyJobGroup





RE: Question about DELETE

2010-03-18 Thread Price, Randall
I have the MySQL Administrator running and on the Server Connections menu on 
the Threads tab I can see the thread running (i.e., DELETE FROM table WHERE 
...).  I refresh this tab periodically to see what stage the process is in.  It 
does not display any information about rebuilding indexes, just that is running 
the DELETE query.

If I turn the DELETE FROM into a SELECT to see if it takes a long time to 
select the records to delete, it returns almost instantly so MySQL seems to be 
able to find the records to delete pretty fast.  I also assume that turning the 
DELETE FROM into a SELECT is a reasonable way to determine this.

When I do a mass delete on the parent table (i.e., DELETE FROM table WHERE id 
BETWEEN 1 AND 500) all I can see in the process list is the DELETE running.  
The parent records are deleted and the CASCADING DELETES then deletes the child 
records in the other two child tables.  The process list does not show any 
information about deleting the child records through the CASCADING DELETES, 
just that it is deleting the parent records.

The parent and both child tables have multiple indexes on them, plus the 
FOREIGN KEY CONSTRAINTS.  So all the indexes on the parent table must be 
rebuilt, all the indexes on the two child tables must be rebuilt, and (I 
suspect) the foreign key constraints must be rebuilt (not sure about this).

I have tried dropping the foreign keys and indexes, performing the deletes, 
then rebuilding the indexes and foreign keys.  However, this process is equally 
as long (and maybe even longer) because dropping the foreign keys and indexes 
takes a long time, the delete seems to go pretty fast, and then rebuilding the 
indexes and foreign keys then takes a long time.  This technique may be alright 
for deleting a large number of records, but for a small number it still takes a 
long time to drop and rebuild.

I have tried deleting from the bottom up (i.e., deleting the child records 
first, then the parent records) to see if that would maybe bypass the FOREIGN 
KEY rebuild (if there is actually a rebuild for this, not sure) and speed up 
the process but it does not.  It still takes a long time on a large number of 
deletes.

So I am at a quandary as to how to make this delete process perform better.

Thanks,

-Randall Price
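One mitigation worth trying here, offered as a suggestion rather than something the thread verified: delete in small batches, so that each statement holds locks and generates undo for only a limited number of rows, and the cascading deletes fire in correspondingly small chunks. A sketch, assuming the parent table is `parent` with an indexed `id` column:

```sql
-- Repeat this statement until ROW_COUNT() reports 0 rows affected.
-- Cascading deletes on the child tables still fire for each batch.
DELETE FROM parent
 WHERE id BETWEEN 1 AND 500
 LIMIT 1000;

SELECT ROW_COUNT();  -- 0 means the range is fully deleted
```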


-Original Message-
From: Ian Simpson [mailto:i...@it.myjobgroup.co.uk] 
Sent: Thursday, March 18, 2010 10:11 AM
To: Price, Randall
Cc: Johan De Meersman; Ananda Kumar; [MySQL]
Subject: RE: Question about DELETE

Hi Randall,

If you're talking about processes that are taking that long, then
running SHOW PROCESSLIST several times during the operation should give
you a rough idea what it is doing at each stage.

Also, do you have an index on the id column? It could just be taking a
long time to identify all the rows it needs to delete.

On Thu, 2010-03-18 at 10:03 -0400, Price, Randall wrote:
 Thanks for your responses on this.
 
 However, I suspect that the indexes are being rebuilt over and over during 
 the mass delete operation.
 
 If I delete a small number of records (i.e., DELETE FROM table WHERE id 
 BETWEEN 1 AND 5) it may only take a minute or so.
 
 If I delete a large number of records (i.e., DELETE FROM table WHERE id 
 BETWEEN 1 AND 500) it may take upwards of an hour or more.
 
 So what would cause this increased slowness the more records you delete, 
 unless the indexing is happening multiple times?
 
 Thanks,
 
 -Randall Price
 
 
 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
 Meersman
 Sent: Thursday, March 18, 2010 6:48 AM
 To: Ananda Kumar
 Cc: Price, Randall; [MySQL]
 Subject: Re: Question about DELETE
 
 Given that OP is talking about a single delete statement, I'm gonna be very 
 surprised if he manages to squeeze an intermediate commit in there :-)
 
 For a single-statement delete on a single table, the indexes will be rebuilt 
 only once. I'm not entirely sure what happens to cascaded deletes, though.
 
 On Thu, Mar 18, 2010 at 6:05 AM, Ananda Kumar 
 anan...@gmail.commailto:anan...@gmail.com wrote:
 Hi,
 It depends how frequently ur doing a commit.
 If you have written a plsql, with loop and if you commit after each row is
 deleted, then it get update for each row. Else if you commit at the end the
 loop, it commits only once for all the rows deleted.
 
 regards
 anandkl
 On Thu, Mar 18, 2010 at 1:21 AM, Price, Randall 
 randall.pr...@vt.edumailto:randall.pr...@vt.eduwrote:
 
  Hello,
 
  I have a simple question about deleting records from INNODB tables.  I have
  a master table with a few child tables linked via Foreign Key constraints.
   Each table has several indexes as well.
 
  My question is:  if I delete many records in a single delete statement
  (i.e., DELETE FROM table WHERE id = 1 AND id = 5 ... AND ID = 100) how many
  times are the foreign keys/indexes updated?
 
  Once for the entire DELETE operation or one time for each record that is
  deleted?
 
  Thanks

Re: Question about DELETE

2010-03-18 Thread Ananda Kumar
A delete will also cause undo (the before-image) to be generated, in case you
want to roll back. This also adds to the delete completion time.

After each mass delete, rebuild the indexes to remove gaps (fragmentation) in
them. This will improve the next delete or select.

regards
anandkl
On Thu, Mar 18, 2010 at 8:22 PM, Price, Randall randall.pr...@vt.eduwrote:

 I have the MySQL Administrator running and on the Server Connections menu
 on the Threads tab I can see the thread running (i.e., DELETE FROM table
 WHERE ...).  I refresh this tab periodically to see what stage the process
 is in.  It does not display any information about rebuilding indexes, just
 that is running the DELETE query.

 If I turn the DELETE FROM into a SELECT to see if it takes a long time to
 select the records to delete, it returns almost instantly so MySQL seems to
 be able to find the records to delete pretty fast.  I also assume that
 turning the DELETE FROM into a SELECT is a reasonable way to determine this.

 When I do a mass delete on the parent table (i.e., DELETE FROM table WHERE
 id BETWEEN 1 AND 500) all I can see in the process list is the DELETE
 running.  The parent records are deleted and the CASCADING DELETES then
 deletes the child records in the other two child tables.  The process list
 does not show any information about deleting the child records through the
 CASCADING DELETES, just that it is deleting the parent records.

 The parent and both child tables have multiple indexes on them, plus the
 FOREIGN KEY CONSTRAINTS.  So all the indexes on the parent table must be
 rebuilt, all the indexes on the two child tables must be rebuilt, and (I
 suspect) the foreign key constraints must be rebuilt (not sure about this).

 I have tried dropping the foreign keys and indexes, performing the deletes,
 then rebuilding the indexes and foreign keys.  However, this process is
 equally as long (and maybe even longer) because dropping the foreign keys
 and indexes takes a long time, the delete seems to go pretty fast, and then
 rebuilding the indexes and foreign keys then takes a long time.  This
 technique may be alright for deleting a large number of records, but for a
 small number it still takes a long time to drop and rebuild.

 I have tried deleting from the bottom up (i.e., deleting the child records
 first, then the parent records) to see if that would maybe bypass the
 FOREIGN KEY rebuild (if there is actually a rebuild for this, not sure) and
 speed up the process but it does not.  It still takes a long time on a large
 number of deletes.

 So I am at a quandary as to how to make this delete process perform better.

 Thanks,

 -Randall Price


 -Original Message-
 From: Ian Simpson [mailto:i...@it.myjobgroup.co.uk]
 Sent: Thursday, March 18, 2010 10:11 AM
 To: Price, Randall
 Cc: Johan De Meersman; Ananda Kumar; [MySQL]
 Subject: RE: Question about DELETE

 Hi Randall,

 If you're talking about processes that are taking that long, then
 running SHOW PROCESSLIST several times during the operation should give
 you a rough idea what it is doing at each stage.

 Also, do you have an index on the id column? It could just be taking a
 long time to identify all the rows it needs to delete.

 On Thu, 2010-03-18 at 10:03 -0400, Price, Randall wrote:
  Thanks for your responses on this.
 
  However, I suspect that the indexes are being rebuilt over and over
 during the mass delete operation.
 
  If I delete a small number of records (i.e., DELETE FROM table WHERE id
 BETWEEN 1 AND 5) it may only take a minute or so.
 
  If I delete a large number of records (i.e., DELETE FROM table WHERE id
 BETWEEN 1 AND 500) it may take upwards of an hour or more.
 
  So what would cause this increased slowness the more records you delete,
 unless the indexing is happening multiple times?
 
  Thanks,
 
  -Randall Price
 
 
  From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan
 De Meersman
  Sent: Thursday, March 18, 2010 6:48 AM
  To: Ananda Kumar
  Cc: Price, Randall; [MySQL]
  Subject: Re: Question about DELETE
 
  Given that OP is talking about a single delete statement, I'm gonna be
 very surprised if he manages to squeeze an intermediate commit in there :-)
 
  For a single-statement delete on a single table, the indexes will be
 rebuilt only once. I'm not entirely sure what happens to cascaded deletes,
 though.
 
  On Thu, Mar 18, 2010 at 6:05 AM, Ananda Kumar anan...@gmail.commailto:
 anan...@gmail.com wrote:
  Hi,
  It depends how frequently ur doing a commit.
  If you have written a plsql, with loop and if you commit after each row
 is
  deleted, then it get update for each row. Else if you commit at the end
 the
  loop, it commits only once for all the rows deleted.
 
  regards
  anandkl
  On Thu, Mar 18, 2010 at 1:21 AM, Price, Randall randall.pr...@vt.edu
 mailto:randall.pr...@vt.eduwrote:
 
   Hello,
  
   I have a simple question about deleting records from INNODB tables.  I
 have

RE: Question about DELETE

2010-03-18 Thread Price, Randall
Would wrapping the DELETE in a TRANSACTION improve the performance any?

Also, when you say to "after each mass delete, rebuild indexes"... would
running OPTIMIZE TABLE tablename; be the way to do this, or how?

Thanks,

-Randall Price
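To answer the OPTIMIZE question concretely: for an InnoDB table, OPTIMIZE TABLE is mapped to a full table rebuild (the server reports that it is doing a recreate plus analyze instead), so either statement below defragments the data and rebuilds all of the indexes (`tablename` is a placeholder):

```sql
OPTIMIZE TABLE tablename;

-- equivalent rebuild for an InnoDB table:
ALTER TABLE tablename ENGINE=InnoDB;
```

Note that either form rewrites the whole table, so on a large table it is itself a long operation; it pays off when run after the mass delete, not before.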


From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Thursday, March 18, 2010 11:15 AM
To: Price, Randall
Cc: Ian Simpson; Johan De Meersman; [MySQL]
Subject: Re: Question about DELETE

delete will also cause the undo(before image) to be generated, in case u want 
to rollback. This will also add up to the delete completion time.

After each mass delete, rebuild indexes to remove gaps in indexes(remove 
fragmentatio in the index). This will improve next delete or select.

regards
anandkl
On Thu, Mar 18, 2010 at 8:22 PM, Price, Randall 
randall.pr...@vt.edumailto:randall.pr...@vt.edu wrote:
I have the MySQL Administrator running and on the Server Connections menu on 
the Threads tab I can see the thread running (i.e., DELETE FROM table WHERE 
...).  I refresh this tab periodically to see what stage the process is in.  It 
does not display any information about rebuilding indexes, just that is running 
the DELETE query.

If I turn the DELETE FROM into a SELECT to see if it takes a long time to 
select the records to delete, it returns almost instantly so MySQL seems to be 
able to find the records to delete pretty fast.  I also assume that turning the 
DELETE FROM into a SELECT is a reasonable way to determine this.

When I do a mass delete on the parent table (i.e., DELETE FROM table WHERE id 
BETWEEN 1 AND 500) all I can see in the process list is the DELETE running.  
The parent records are deleted and the CASCADING DELETES then deletes the child 
records in the other two child tables.  The process list does not show any 
information about deleting the child records through the CASCADING DELETES, 
just that it is deleting the parent records.

The parent and both child tables have multiple indexes on them, plus the 
FOREIGN KEY CONSTRAINTS.  So all the indexes on the parent table must be 
rebuilt, all the indexes on the two child tables must be rebuilt, and (I 
suspect) the foreign key constraints must be rebuilt (not sure about this).

I have tried dropping the foreign keys and indexes, performing the deletes, 
then rebuilding the indexes and foreign keys.  However, this process is equally 
as long (and maybe even longer) because dropping the foreign keys and indexes 
takes a long time, the delete seems to go pretty fast, and then rebuilding the 
indexes and foreign keys then takes a long time.  This technique may be alright 
for deleting a large number of records, but for a small number it still takes a 
long time to drop and rebuild.

I have tried deleting from the bottom up (i.e., deleting the child records 
first, then the parent records) to see if that would maybe bypass the FOREIGN 
KEY rebuild (if there is actually a rebuild for this, not sure) and speed up 
the process but it does not.  It still takes a long time on a large number of 
deletes.

So I am at a quandary as to how to make this delete process perform better.

Thanks,

-Randall Price


-Original Message-
From: Ian Simpson [mailto:i...@it.myjobgroup.co.uk]
Sent: Thursday, March 18, 2010 10:11 AM
To: Price, Randall
Cc: Johan De Meersman; Ananda Kumar; [MySQL]
Subject: RE: Question about DELETE

Hi Randall,

If you're talking about processes that are taking that long, then
running SHOW PROCESSLIST several times during the operation should give
you a rough idea what it is doing at each stage.

Also, do you have an index on the id column? It could just be taking a
long time to identify all the rows it needs to delete.

On Thu, 2010-03-18 at 10:03 -0400, Price, Randall wrote:
 Thanks for your responses on this.

 However, I suspect that the indexes are being rebuilt over and over during 
 the mass delete operation.

 If I delete a small number of records (i.e., DELETE FROM table WHERE id 
 BETWEEN 1 AND 5) it may only take a minute or so.

 If I delete a large number of records (i.e., DELETE FROM table WHERE id 
 BETWEEN 1 AND 500) it may take upwards of an hour or more.

 So what would cause this increased slowness the more records you delete, 
 unless the indexing is happening multiple times?

 Thanks,

 -Randall Price


 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
 Meersman
 Sent: Thursday, March 18, 2010 6:48 AM
 To: Ananda Kumar
 Cc: Price, Randall; [MySQL]
 Subject: Re: Question about DELETE

 Given that OP is talking about a single delete statement, I'm gonna be very 
 surprised if he manages to squeeze an intermediate commit in there :-)

 For a single-statement delete on a single table, the indexes will be rebuilt 
 only once. I'm not entirely sure what happens to cascaded deletes, though.

 On Thu, Mar 18, 2010 at 6:05 AM, Ananda Kumar anan...@gmail.com wrote:

Re: Question about DELETE

2010-03-17 Thread Ananda Kumar
Hi,
It depends on how frequently you commit.
If you have written a stored-procedure-style loop and commit after each row is
deleted, then the indexes get updated for each row. If instead you commit at the
end of the loop, it commits only once for all the deleted rows.

regards
anandkl
On Thu, Mar 18, 2010 at 1:21 AM, Price, Randall randall.pr...@vt.edu wrote:

 Hello,

 I have a simple question about deleting records from INNODB tables.  I have
 a master table with a few child tables linked via Foreign Key constraints.
  Each table has several indexes as well.

 My question is:  if I delete many records in a single delete statement
 (i.e., DELETE FROM table WHERE id = 1 AND id = 5 ... AND ID = 100) how many
 times are the foreign keys/indexes updated?

 Once for the entire DELETE operation or one time for each record that is
 deleted?

 Thanks,

 Randall Price
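
The parent/child setup described above can be sketched with SQLite standing in for InnoDB (both support ON DELETE CASCADE, and a single DELETE statement runs as one implicit transaction); the table and column names here are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE child (
                 id INTEGER PRIMARY KEY,
                 parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE)""")
con.executemany("INSERT INTO parent VALUES (?)", [(i,) for i in range(1, 11)])
con.executemany("INSERT INTO child (parent_id) VALUES (?)",
                [(i,) for i in range(1, 11) for _ in range(3)])
con.commit()

# One statement deletes parents 1-5; their children cascade away in the
# same implicit transaction -- no per-row commit is involved.
con.execute("DELETE FROM parent WHERE id BETWEEN 1 AND 5")
con.commit()

print(con.execute("SELECT COUNT(*) FROM parent").fetchone()[0])  # 5
print(con.execute("SELECT COUNT(*) FROM child").fetchone()[0])   # 15
```

This is only a behavioural sketch; InnoDB's index-maintenance internals during the cascade are a separate question.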




Re: question regarding mysql database location

2009-11-26 Thread nitin mehta
Is mysql the owner of the directories?



- Original Message 
From: Manasi Save manasi.s...@artificialmachines.com
To: Johan De Meersman vegiv...@tuxera.be
Cc: Waynn Lue waynn...@gmail.com; mysql mysql@lists.mysql.com
Sent: Wed, November 25, 2009 8:12:25 PM
Subject: Re: question regarding mysql database location

Dear Johan,

Need your help again in understanding How mysql reads symlink.

As you said below, I have created symlinks in default mysql directory.
and try to read that symlink file as a database. But mysql is not reading
that file as Database. Is there any settings which I need to change.

Thanks in advance.

-- 
Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save 
 manasi.s...@artificialmachines.com wrote:

 Hi Johan,

 I am Sorry. If I have complicated the senerio But, this still not fix my
 purpose.

 What I want is - From your example :-

 /data/disk1/mysql/db1 (directory)
                /db2 (directory)
                /db3 (directory)
                /db4 (symlink to /data/disk2/mysql/db4)
                /db5 (symlink to /data/disk2/mysql/db5)
                /db6 (symlink to /data/disk2/mysql/db6)

 I dont want to create these directories here (/data/disk1/mysql/d4
 /d5
 /d6).


 They're not directories, they're symlinks, which are (to the OS) a kind of
 file, and thus not limited to 32000 per directory. They behave mostly
 identical to a directory, though, so MySQL will pick them up seamlessly,
 with the one hitch that you'll have to replace create database
 statements
 by mkdir and ln calls on the OS level.

 This is afaik the only way to do this on the MySQL level. It is impossible
 to specify multiple base directories.

 Another possible option, but higher in complexity and most likely less
 performant, would be to run two instances of MySQL on different ports with
 different data directories, and use MySQL Proxy to redirect incoming
 connections based on whatever criterion you could script into it - use
 database statements, for example. This is however going to come with it's
 very own set of catches and limitations.

 I'm not big on proxy, myself, so I'm afraid if the symlink option is not
 acceptable to you, I can't help you any further.




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=ntn...@yahoo.com




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: question regarding mysql database location

2009-11-26 Thread Manasi Save
Hi Johan,

It worked perfectly. Thank you so much for this explanation.

I am really grateful.

-- 
Best Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 On Wed, Nov 25, 2009 at 3:42 PM, Manasi Save 
 manasi.s...@artificialmachines.com wrote:

 Dear Johan,

 Need your help again in understanding How mysql reads symlink.

 As you said below, I have created symlinks in default mysql directory.
 and try to read that symlink file as a database. But mysql is not
 reading
 that file as Database. Is there any settings which I need to change.


 Make sure the directory the symlink points to has the same owner, group
 and
 permissions as your other databases. The permissions of the symlink itself
 are irrelevant (at least, on a Linux system. YMMV for other *nixen).

 *mytest1:~# cd /var/lib/mysql
 mytest1:/var/lib/mysql# ls -lh*
 total 117M
 -rw-r--r-- 1 root  root 0 2009-11-19 12:08 debian-5.0.flag
 -rw-rw---- 1 mysql mysql 106M 2009-11-25 11:53 ibdata1
 -rw-rw---- 1 mysql mysql 5.0M 2009-11-25 11:53 ib_logfile0
 -rw-rw---- 1 mysql mysql 5.0M 2009-11-25 11:53 ib_logfile1
 drwxr-xr-x 2 mysql root  4.0K 2009-11-24 10:27 mysql
 -rw------- 1 root  root     7 2009-11-19 12:08 mysql_upgrade_info
 drwx------ 2 mysql mysql 4.0K 2009-11-24 10:43 tmp
 *mytest1:/var/lib/mysql# mysql*
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 690
 Server version: 5.0.51a-24+lenny2 (Debian)

 Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

 *mysql show databases;*
 +--------------------+
 | Database           |
 +--------------------+
 | information_schema |
 | mysql              |
 | tmp                |
 +--------------------+
 10 rows in set (0.13 sec)

 mysql Bye
 *mytest1:/var/lib/mysql# mkdir /tmp/remotedatabase
 mytest1:/var/lib/mysql# chown mysql:mysql /tmp/remotedatabase
 mytest1:/var/lib/mysql# chmod u+rwX /tmp/remotedatabase
 mytest1:/var/lib/mysql# ln -s /tmp/remotedatabase ./
 mytest1:/var/lib/mysql# mysql*
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 691
 Server version: 5.0.51a-24+lenny2 (Debian)

 Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

 *mysql show databases;*
 +--------------------+
 | Database           |
 +--------------------+
 | information_schema |
 | mysql              |
 | remotedatabase     |
 | tmp                |
 +--------------------+
 11 rows in set (0.01 sec)

 *mysql use remotedatabase;*
 Database changed
 *mysql create table a (a int);*
 Query OK, 0 rows affected (0.04 sec)

 *mysql show tables;*
 +--------------------------+
 | Tables_in_remotedatabase |
 +--------------------------+
 | a                        |
 +--------------------------+
 1 row in set (0.01 sec)

 mysql Bye
 mytest1:/var/lib/mysql#






 Thanks in advance.

 --
 Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.

  On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save 
  manasi.s...@artificialmachines.com wrote:
 
  Hi Johan,
 
  I am Sorry. If I have complicated the senerio But, this still not fix
 my
  purpose.
 
  What I want is - From your example :-
 
  /data/disk1/mysql/db1 (directory)
  /db2 (directory)
  /db3 (directory)
  /db4 (symlink to /data/disk2/mysql/db4)
  /db5 (symlink to /data/disk2/mysql/db5)
  /db6 (symlink to /data/disk2/mysql/db6)
 
  I dont want to create these directories here (/data/disk1/mysql/d4
  /d5
  /d6).
 
 
  They're not directories, they're symlinks, which are (to the OS) a
 kind
 of
  file, and thus not limited to 32000 per directory. They behave mostly
  identical to a directory, though, so MySQL will pick them up
 seamlessly,
  with the one hitch that you'll have to replace create database
  statements
  by mkdir and ln calls on the OS level.
 
  This is afaik the only way to do this on the MySQL level. It is
 impossible
  to specify multiple base directories.
 
  Another possible option, but higher in complexity and most likely less
  performant, would be to run two instances of MySQL on different ports
 with
  different data directories, and use MySQL Proxy to redirect incoming
  connections based on whatever criterion you could script into it -
 use
  database statements, for example. This is however going to come with
 it's
  very own set of catches and limitations.
 
  I'm not big on proxy, myself, so I'm afraid if the symlink option is
 not
  acceptable to you, I can't help you any further.
 










Re: question regarding mysql database location

2009-11-26 Thread Manasi Save
Hi Nitin,

Yes, mysql is the owner of the directories actually created.

-- 
Thanks and Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 Is mysql the owner of the directories?



 - Original Message 
 From: Manasi Save manasi.s...@artificialmachines.com
 To: Johan De Meersman vegiv...@tuxera.be
 Cc: Waynn Lue waynn...@gmail.com; mysql mysql@lists.mysql.com
 Sent: Wed, November 25, 2009 8:12:25 PM
 Subject: Re: question regarding mysql database location

 Dear Johan,

 Need your help again in understanding How mysql reads symlink.

 As you said below, I have created symlinks in default mysql directory.
 and try to read that symlink file as a database. But mysql is not reading
 that file as Database. Is there any settings which I need to change.

 Thanks in advance.

 --
 Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.

 On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save 
 manasi.s...@artificialmachines.com wrote:

 Hi Johan,

 I am Sorry. If I have complicated the senerio But, this still not fix
 my
 purpose.

 What I want is - From your example :-

 /data/disk1/mysql/db1 (directory)
                /db2 (directory)
                /db3 (directory)
                /db4 (symlink to /data/disk2/mysql/db4)
                /db5 (symlink to /data/disk2/mysql/db5)
                /db6 (symlink to /data/disk2/mysql/db6)

 I dont want to create these directories here (/data/disk1/mysql/d4
 /d5
 /d6).


 They're not directories, they're symlinks, which are (to the OS) a kind
 of
 file, and thus not limited to 32000 per directory. They behave mostly
 identical to a directory, though, so MySQL will pick them up seamlessly,
 with the one hitch that you'll have to replace create database
 statements
 by mkdir and ln calls on the OS level.

 This is afaik the only way to do this on the MySQL level. It is
 impossible
 to specify multiple base directories.

 Another possible option, but higher in complexity and most likely less
 performant, would be to run two instances of MySQL on different ports
 with
 different data directories, and use MySQL Proxy to redirect incoming
 connections based on whatever criterion you could script into it - use
 database statements, for example. This is however going to come with
 it's
 very own set of catches and limitations.

 I'm not big on proxy, myself, so I'm afraid if the symlink option is not
 acceptable to you, I can't help you any further.




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=ntn...@yahoo.com










Re: question regarding mysql database location

2009-11-25 Thread Waynn Lue
I fixed this by using symlinks for the directories for the underlying
databases. The limit for files is significantly higher than
directories.

Waynn

On 11/24/09, Manasi Save manasi.s...@artificialmachines.com wrote:
 Hi All,

 I have asked this question before, but I think I was not able to describe
 it well.

 Sorry for asking it again.
 I have multiple databases but there is a limit on the folders getting
 created in one folder.

 I have mysql default directory set as /var/lib/mysql/data.
 Now, after 32000 folders have been created, I am not able to create any more.
 Well, it's not like I want to create 32000 databases in it (which I
 wanted to earlier :-P).

 for example - I want to create 10 databases but 5 in
 /var/lib/mysql/data/d1 to d5
 and the other 5 in /var/lib/mysql/data/d6 to d10.

 but I want to access all the databases that is d1-d10.

 I can change the database location after 5 databases, but then I am not able
 to access the old five which I created in the old location.


 Please let me know if anymore information is needed on this. I am really
 looking for the solution. Please Help me.
 --
 Thanks and Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=waynn...@gmail.com






Re: question regarding mysql database location

2009-11-25 Thread Manasi Save
Thanks Waynn,

I could not get your point about using symlinks, because as far as I know a
symlink refers to the same data that is in the original directory.
And what do you mean by "The limit for files is significantly higher than
for directories"?

Can you elaborate on it more?

Thanks in advance.

Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 I fixed this by using symlinks for the directories for the underlying
 databases. The limit for files is significantly higher than
 directories.

 Waynn

 On 11/24/09, Manasi Save manasi.s...@artificialmachines.com wrote:
 Hi All,

 I have asked this question before But, I think I am not able to describe
 it better.

 Sorry for asking it again.
 I have multiple databases but there is a limit on the folders getting
 created in one folder.

 I have mysql default directory set as /var/lib/mysql/data.
 Now, After 32000 folder creation I am not able to create more folders
 than
 that. Well Its not like I want to create 32000 database's in it (Which I
 wanted to earlier :-P).

 for example - I want to create 10 databases but 5 in
 /var/lib/mysql/data/d1 to d5
 and othe 5 in /var/lib/mysql/data/d6 to d10.

 but I want to access all the databases that is d1-d10.

 as I ca change the database location after 5 databases but not able to
 access old five which I have created in old location.


 Please let me know if anymore information is needed on this. I am really
 looking for the solution. Please Help me.
 --
 Thanks and Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=waynn...@gmail.com









Re: question regarding mysql database location

2009-11-25 Thread Waynn Lue
On Wed, Nov 25, 2009 at 12:53 AM, Manasi Save 
manasi.s...@artificialmachines.com wrote:

 Thanks Waynn,

 I could not get your point of using symlinks. Because as per my knowledge
 symlink will store same data which is there in original directory.
 and What do you mean by The limit for files is significantly higher than
  directories.

 Can you elaborate it more.

 Thanks in advance.


So assuming /var/lib/mysql/data/ is your mysql data directory, you could
create a new directory called /var/lib/mysql/data/data1, then move all the
directories from /var/lib/mysql/data/* into data1.  Then you could create a
symlink in /var/lib/mysql/data/ pointing to /var/lib/mysql/data/data1/dir
name.  When mysql tries to load the data directory, it follows the symlink
to the underlying directory (in /var/lib/mysql/data/data1).


Re: question regarding mysql database location

2009-11-25 Thread Manasi Save
Well Waynn,

In this case I would need to move all the existing databases to the new
location, right? That is something I don't want to do. Is it possible to
create a symlink between the two locations and use both?
-- 
Thanks and Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 On Wed, Nov 25, 2009 at 12:53 AM, Manasi Save 
 manasi.s...@artificialmachines.com wrote:

 Thanks Waynn,

 I could not get your point of using symlinks. Because as per my
 knowledge
 symlink will store same data which is there in original directory.
 and What do you mean by The limit for files is significantly higher
 than
  directories.

 Can you elaborate it more.

 Thanks in advance.


 So assuming /var/lib/mysql/data/ is your mysql data directory, you could
 create a new directory called /var/lib/mysql/data/data1, then move all the
 directories from /var/lib/mysql/data/* into data1.  Then you could create
 a
 symlink in /var/lib/mysql/data/ pointing to /var/lib/mysql/data/data1/dir
 name.  When mysql tries to load the data directory, it follows the
 symlink
 to the underlying directory (in /var/lib/mysql/data/data1).







Re: question regarding mysql database location

2009-11-25 Thread Johan De Meersman
You don't need to move any databases. Look at this structure:

/data/disk1/mysql/db1 (directory)
 /db2 (directory)
 /db3 (directory)
 /db4 (symlink to /data/disk2/mysql/db4)
 /db5 (symlink to /data/disk2/mysql/db5)
 /db6 (symlink to /data/disk2/mysql/db6)
 /disk2/mysql/db4 (directory)
 /db5 (directory)
 /db6 (directory)


If your mysql data directory is set to /data/disk1/mysql, the server will
pick up the symlinks there and use them as if they were just ordinary
directories.


On Wed, Nov 25, 2009 at 10:48 AM, Manasi Save 
manasi.s...@artificialmachines.com wrote:

 Well Waynn,

 In this case I need to move all the existing databases to new location
 right. Which I don't want to do. Is it possible that I create sym link
 between two and use both.
 --
 Thanks and Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.

  On Wed, Nov 25, 2009 at 12:53 AM, Manasi Save 
  manasi.s...@artificialmachines.com wrote:
 
  Thanks Waynn,
 
  I could not get your point of using symlinks. Because as per my
  knowledge
  symlink will store same data which is there in original directory.
  and What do you mean by The limit for files is significantly higher
  than
   directories.
 
  Can you elaborate it more.
 
  Thanks in advance.
 
 
  So assuming /var/lib/mysql/data/ is your mysql data directory, you could
  create a new directory called /var/lib/mysql/data/data1, then move all
 the
  directories from /var/lib/mysql/data/* into data1.  Then you could create
  a
  symlink in /var/lib/mysql/data/ pointing to
 /var/lib/mysql/data/data1/dir
  name.  When mysql tries to load the data directory, it follows the
  symlink
  to the underlying directory (in /var/lib/mysql/data/data1).
 



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be




Re: question regarding mysql database location

2009-11-25 Thread Manasi Save
Hi Johan,

I am sorry if I have complicated the scenario, but this still does not serve my
purpose.

What I want is - From your example :-

/data/disk1/mysql/db1 (directory)
 /db2 (directory)
 /db3 (directory)
 /db4 (symlink to /data/disk2/mysql/db4)
 /db5 (symlink to /data/disk2/mysql/db5)
 /db6 (symlink to /data/disk2/mysql/db6)

I don't want to create these directories here (/data/disk1/mysql/d4,
/d5,
/d6). Also, is it possible that no physical folder for them gets created in
disk1/mysql?

 /disk2/mysql/db4 (directory)
 /db5 (directory)
 /db6 (directory)
-- 
Thanks and Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 You don't need to move any databases. Look at this structure:

 /data/disk1/mysql/db1 (directory)
  /db2 (directory)
  /db3 (directory)
  /db4 (symlink to /data/disk2/mysql/db4)
  /db5 (symlink to /data/disk2/mysql/db5)
  /db6 (symlink to /data/disk2/mysql/db6)
  /disk2/mysql/db4 (directory)
  /db5 (directory)
  /db6 (directory)


 If your mysql data directory is set to /data/disk1/mysql, the server will
 pick up the symlinks there and use them as if they were just ordinary
 directories.


 On Wed, Nov 25, 2009 at 10:48 AM, Manasi Save 
 manasi.s...@artificialmachines.com wrote:

 Well Waynn,

 In this case I need to move all the existing databases to new location
 right. Which I don't want to do. Is it possible that I create sym link
 between two and use both.
 --
 Thanks and Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.

  On Wed, Nov 25, 2009 at 12:53 AM, Manasi Save 
  manasi.s...@artificialmachines.com wrote:
 
  Thanks Waynn,
 
  I could not get your point of using symlinks. Because as per my
  knowledge
  symlink will store same data which is there in original directory.
  and What do you mean by The limit for files is significantly higher
  than
   directories.
 
  Can you elaborate it more.
 
  Thanks in advance.
 
 
  So assuming /var/lib/mysql/data/ is your mysql data directory, you
 could
  create a new directory called /var/lib/mysql/data/data1, then move all
 the
  directories from /var/lib/mysql/data/* into data1.  Then you could
 create
  a
  symlink in /var/lib/mysql/data/ pointing to
 /var/lib/mysql/data/data1/dir
  name.  When mysql tries to load the data directory, it follows the
  symlink
  to the underlying directory (in /var/lib/mysql/data/data1).
 



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be









Re: question regarding mysql database location

2009-11-25 Thread Krishna Chandra Prajapati
Hi Manasi,

MySQL can point to only one data directory at a time. For your task you can run
any number of MySQL installations, each with a different data directory. After
that you can use the FEDERATED storage engine to tie them together.

Thanks,
Krishna Ch. Prajapati

On Wed, Nov 25, 2009 at 12:19 PM, Manasi Save 
manasi.s...@artificialmachines.com wrote:

 Hi All,

 I have asked this question before But, I think I am not able to describe
 it better.

 Sorry for asking it again.
 I have multiple databases but there is a limit on the folders getting
 created in one folder.

 I have mysql default directory set as /var/lib/mysql/data.
 Now, After 32000 folder creation I am not able to create more folders than
 that. Well Its not like I want to create 32000 database's in it (Which I
 wanted to earlier :-P).

 for example - I want to create 10 databases but 5 in
 /var/lib/mysql/data/d1 to d5
 and othe 5 in /var/lib/mysql/data/d6 to d10.

 but I want to access all the databases that is d1-d10.

 as I ca change the database location after 5 databases but not able to
 access old five which I have created in old location.


 Please let me know if anymore information is needed on this. I am really
 looking for the solution. Please Help me.
 --
 Thanks and Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=prajapat...@gmail.com




Re: question regarding mysql database location

2009-11-25 Thread Johan De Meersman
On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save 
manasi.s...@artificialmachines.com wrote:

 Hi Johan,

 I am Sorry. If I have complicated the senerio But, this still not fix my
 purpose.

 What I want is - From your example :-

 /data/disk1/mysql/db1 (directory)
 /db2 (directory)
 /db3 (directory)
 /db4 (symlink to /data/disk2/mysql/db4)
 /db5 (symlink to /data/disk2/mysql/db5)
 /db6 (symlink to /data/disk2/mysql/db6)

 I dont want to create these directories here (/data/disk1/mysql/d4
 /d5
 /d6).


They're not directories, they're symlinks, which are (to the OS) a kind of
file, and thus not limited to 32000 per directory. They behave mostly
identical to a directory, though, so MySQL will pick them up seamlessly,
with the one hitch that you'll have to replace create database statements
by mkdir and ln calls on the OS level.

This is afaik the only way to do this on the MySQL level. It is impossible
to specify multiple base directories.

Another possible option, but higher in complexity and most likely less
performant, would be to run two instances of MySQL on different ports with
different data directories, and use MySQL Proxy to redirect incoming
connections based on whatever criterion you could script into it - use
database statements, for example. This is, however, going to come with its
very own set of catches and limitations.

I'm not big on proxy, myself, so I'm afraid if the symlink option is not
acceptable to you, I can't help you any further.


Re: question regarding mysql database location

2009-11-25 Thread Johan De Meersman
On Wed, Nov 25, 2009 at 12:05 PM, Krishna Chandra Prajapati 
prajapat...@gmail.com wrote:

 At a time mysql can point to one data directory. For your task you can have
 n number of mysql installation with different data directory. After that
 you
 can use federated storage engine to perform your task.


The federated engine ignores indexes on the remote database, though. Read up
on the documentation before jumping into this :-)


Re: question regarding mysql database location

2009-11-25 Thread Manasi Save
Thanks Johan,

It was really a great help. I'll try to implement it. I don't want to opt
for the multiple-MySQL-instances option, as that is not feasible for me.

I'll get back to you all if it works fine.

Thanks again.

-- 
Best Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save 
 manasi.s...@artificialmachines.com wrote:

 Hi Johan,

 I am Sorry. If I have complicated the senerio But, this still not fix my
 purpose.

 What I want is - From your example :-

 /data/disk1/mysql/db1 (directory)
 /db2 (directory)
 /db3 (directory)
 /db4 (symlink to /data/disk2/mysql/db4)
 /db5 (symlink to /data/disk2/mysql/db5)
 /db6 (symlink to /data/disk2/mysql/db6)

 I dont want to create these directories here (/data/disk1/mysql/d4
 /d5
 /d6).


 They're not directories, they're symlinks, which are (to the OS) a kind of
 file, and thus not limited to 32000 per directory. They behave mostly
 identical to a directory, though, so MySQL will pick them up seamlessly,
 with the one hitch that you'll have to replace create database
 statements
 by mkdir and ln calls on the OS level.

 This is afaik the only way to do this on the MySQL level. It is impossible
 to specify multiple base directories.

 Another possible option, but higher in complexity and most likely less
 performant, would be to run two instances of MySQL on different ports with
 different data directories, and use MySQL Proxy to redirect incoming
 connections based on whatever criterion you could script into it - use
 database statements, for example. This is however going to come with it's
 very own set of catches and limitations.

 I'm not big on proxy, myself, so I'm afraid if the symlink option is not
 acceptable to you, I can't help you any further.







Re: question regarding mysql database location

2009-11-25 Thread Manasi Save
Dear Johan,

I need your help again in understanding how MySQL reads symlinks.

As you said below, I have created symlinks in the default MySQL directory
and tried to read a symlinked directory as a database, but MySQL is not
recognizing it as a database. Is there any setting I need to change?

Thanks in advance.

-- 
Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save 
 manasi.s...@artificialmachines.com wrote:

 Hi Johan,

 I am Sorry. If I have complicated the senerio But, this still not fix my
 purpose.

 What I want is - From your example :-

 /data/disk1/mysql/db1 (directory)
 /db2 (directory)
 /db3 (directory)
 /db4 (symlink to /data/disk2/mysql/db4)
 /db5 (symlink to /data/disk2/mysql/db5)
 /db6 (symlink to /data/disk2/mysql/db6)

 I dont want to create these directories here (/data/disk1/mysql/d4
 /d5
 /d6).


 They're not directories, they're symlinks, which are (to the OS) a kind of
 file, and thus not limited to 32000 per directory. They behave mostly
 identical to a directory, though, so MySQL will pick them up seamlessly,
 with the one hitch that you'll have to replace create database
 statements
 by mkdir and ln calls on the OS level.

 This is afaik the only way to do this on the MySQL level. It is impossible
 to specify multiple base directories.

 Another possible option, but higher in complexity and most likely less
 performant, would be to run two instances of MySQL on different ports with
 different data directories, and use MySQL Proxy to redirect incoming
 connections based on whatever criterion you could script into it - USE
 database statements, for example. This is, however, going to come with its
 very own set of catches and limitations.

 I'm not big on proxy, myself, so I'm afraid if the symlink option is not
 acceptable to you, I can't help you any further.







Re: question regarding mysql database location

2009-11-25 Thread Johan De Meersman
On Wed, Nov 25, 2009 at 3:42 PM, Manasi Save 
manasi.s...@artificialmachines.com wrote:

 Dear Johan,

  Need your help again in understanding how MySQL reads symlinks.

  As you suggested below, I have created symlinks in the default MySQL
  directory and tried to read a symlinked directory as a database. But MySQL
  is not recognizing it as a database. Are there any settings I need to
  change?


Make sure the directory the symlink points to has the same owner, group and
permissions as your other databases. The permissions of the symlink itself
are irrelevant (at least, on a Linux system. YMMV for other *nixen).

*mytest1:~# cd /var/lib/mysql
mytest1:/var/lib/mysql# ls -lh*
total 117M
-rw-r--r-- 1 root  root 0 2009-11-19 12:08 debian-5.0.flag
-rw-rw 1 mysql mysql 106M 2009-11-25 11:53 ibdata1
-rw-rw 1 mysql mysql 5.0M 2009-11-25 11:53 ib_logfile0
-rw-rw 1 mysql mysql 5.0M 2009-11-25 11:53 ib_logfile1
drwxr-xr-x 2 mysql root  4.0K 2009-11-24 10:27 mysql
-rw--- 1 root  root 7 2009-11-19 12:08 mysql_upgrade_info
drwx-- 2 mysql mysql 4.0K 2009-11-24 10:43 tmp
*mytest1:/var/lib/mysql# mysql*
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 690
Server version: 5.0.51a-24+lenny2 (Debian)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

*mysql show databases;*
++
| Database   |
++
| information_schema |
| mysql  |
| tmp|
++
10 rows in set (0.13 sec)

mysql Bye
*mytest1:/var/lib/mysql# mkdir /tmp/remotedatabase
mytest1:/var/lib/mysql# chown mysql:mysql /tmp/remotedatabase
mytest1:/var/lib/mysql# chmod u+rwX /tmp/remotedatabase
mytest1:/var/lib/mysql# ln -s /tmp/remotedatabase ./
mytest1:/var/lib/mysql# mysql*
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 691
Server version: 5.0.51a-24+lenny2 (Debian)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

*mysql show databases;*
++
| Database   |
++
| information_schema |
| mysql  |
| remotedatabase |
| tmp|
++
11 rows in set (0.01 sec)

*mysql use remotedatabase;*
Database changed
*mysql create table a (a int);*
Query OK, 0 rows affected (0.04 sec)

*mysql show tables;*
+--+
| Tables_in_remotedatabase |
+--+
| a|
+--+
1 row in set (0.01 sec)

mysql Bye
mytest1:/var/lib/mysql#
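The session above boils down to the fact that a symlink is transparent to ordinary file operations. A small self-contained illustration of just that property (no MySQL involved; the paths are throwaway temp directories):

```shell
#!/bin/sh
# Show that writing through a symlinked directory lands in the real
# directory - the property MySQL relies on when a database directory
# in the datadir is actually a symlink to another disk.
set -e
BASE=$(mktemp -d)
mkdir -p "$BASE/disk2/db4"                   # real directory on the "other disk"
ln -s "$BASE/disk2/db4" "$BASE/datadir-db4"  # symlink in the "datadir"

echo "hello" > "$BASE/datadir-db4/table.frm" # write through the symlink
cat "$BASE/disk2/db4/table.frm"              # → hello (visible at the real path)
```

The one thing the symlink does not give you, as noted earlier in the thread, is matching ownership and permissions: those live on the target directory and must be set there.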






 Thanks in advance.

 --
 Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.

  On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save 
  manasi.s...@artificialmachines.com wrote:
 
  Hi Johan,
 
  I am sorry if I have complicated the scenario, but this still does not
  solve my problem.
 
  What I want is - From your example :-
 
  /data/disk1/mysql/db1 (directory)
  /db2 (directory)
  /db3 (directory)
  /db4 (symlink to /data/disk2/mysql/db4)
  /db5 (symlink to /data/disk2/mysql/db5)
  /db6 (symlink to /data/disk2/mysql/db6)
 
  I don't want to create these directories here (/data/disk1/mysql/d4
  /d5
  /d6).
 
 
  They're not directories, they're symlinks, which are (to the OS) a kind of
  file, and thus not limited to 32000 per directory. They behave mostly
  identically to directories, though, so MySQL will pick them up seamlessly,
  with the one hitch that you'll have to replace CREATE DATABASE statements
  with mkdir and ln calls at the OS level.
 
  This is afaik the only way to do this on the MySQL level. It is
 impossible
  to specify multiple base directories.
 
  Another possible option, but higher in complexity and most likely less
  performant, would be to run two instances of MySQL on different ports with
  different data directories, and use MySQL Proxy to redirect incoming
  connections based on whatever criterion you could script into it - USE
  database statements, for example. This is, however, going to come with its
  very own set of catches and limitations.
 
  I'm not big on proxy, myself, so I'm afraid if the symlink option is not
  acceptable to you, I can't help you any further.
 





Re: question regarding mysql database location

2009-11-24 Thread Manasi Save
Also, I forgot to mention that I have gone through the InnoDB option
innodb_data_file_path, but I can only specify it as:

innodb_data_file_path=ibdata1:2048M:autoextend:max:1024M;ibdata1:2048M:autoextend:max:1024M;

But not as:

innodb_data_file_path=/var/lib/mysql/data/ibdata1:2048M:autoextend:max:1024M;/var/lib/mysql/data1/ibdata1:2048M:autoextend:max:1024M;

Is there any way out of this?
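For what it's worth, the InnoDB manual does describe a way to spread the system tablespace across directories: if innodb_data_home_dir is set to an empty value, the entries in innodb_data_file_path may be absolute paths (each data file then needs a distinct name, e.g. ibdata1 and ibdata2). A hedged sketch, not tested against this particular setup:

```ini
[mysqld]
# An empty innodb_data_home_dir lets innodb_data_file_path use absolute paths
innodb_data_home_dir =
innodb_data_file_path = /var/lib/mysql/data/ibdata1:2048M;/var/lib/mysql/data1/ibdata2:2048M:autoextend
```

Note this only spreads the shared InnoDB tablespace; it does not change where the per-database directories live.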

Thanks and Regards,
Manasi Save
Artificial Machines Pvt Ltd.

 Hi All,

 I have asked this question before, but I think I was not able to describe
 it well.

 Sorry for asking it again.
 I have multiple databases, but there is a limit on the number of
 subdirectories that can be created inside one directory.

 I have the MySQL default directory set to /var/lib/mysql/data.
 After 32000 directories have been created there, I am not able to create
 any more. Well, it's not like I want to create 32000 databases in it
 (which I did want earlier :-P).

 For example, I want to create 10 databases: 5 in
 /var/lib/mysql/data/d1 to d5,
 and the other 5 in /var/lib/mysql/data/d6 to d10,

 but I want to be able to access all of the databases, d1-d10.

 I can change the database location after the first 5 databases, but then I
 am not able to access the old five, which I created in the old location.


 Please let me know if any more information is needed on this. I am really
 looking for a solution. Please help me.
 --
 Thanks and Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=manasi.s...@artificialmachines.com








RE: Question about MySQL

2009-08-11 Thread Gavin Towey
Hi Banyan,

I'm really just talking about basic optimization techniques:

1. Install lots of RAM
2. Convert all tables to InnoDB
3. Allocate about 80% of memory to innodb_buffer_pool_size

If you haven't seen this script yet, I suggest you start here:
https://launchpad.net/mysql-tuning-primer
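For what they're worth, the three steps above might look roughly like this in my.cnf on a dedicated 16 GB server; the numbers are illustrative assumptions, not recommendations for any particular workload:

```ini
[mysqld]
# 2. Make InnoDB the engine for new tables
default-storage-engine = InnoDB

# 3. Give InnoDB roughly 80% of RAM on a dedicated box
innodb_buffer_pool_size = 12G

# Often tuned alongside the buffer pool (illustrative value)
innodb_log_file_size = 256M
```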

Regards,
Gavin Towey

-Original Message-
From: Banyan He [mailto:ban...@rootong.com]
Sent: Friday, August 07, 2009 11:12 AM
To: Gavin Towey; joerg.bru...@sun.com; Peter Chacko
Cc: mysql
Subject: Re: Question about MySQL

Hi Gavin,

I am interested in the optimizations you made. Can you share them with
us?

Thanks a lot,

--
Banyan He
Network & System Security Infrastructure
Mail: ban...@rootong.com
Blog: http://www.rootong.com/blog
LinkedIn: http://www.linkedin.com/in/banyanhe
Website: http://www.rootong.com


 From: Gavin Towey gto...@ffn.com
 Date: Fri, 7 Aug 2009 11:07:19 -0700
 To: joerg.bru...@sun.com joerg.bru...@sun.com, Peter Chacko
 peterchack...@gmail.com
 Cc: mysql mysql@lists.mysql.com
 Subject: RE: Question about MySQL

 I always accepted that NFS was unacceptably slow for database access, until I
 actually tested it.  Turns out that with lots of RAM and properly tuned
 caches, you're optimizing for minimal IO anyway. A good file server will have
 massive amounts of IO OPS.  On top of that, if you're using GigE over a few
 hops, then it's really not slower than local disks.

 Remember: benchmark and test your assumptions!

 Regards,
 Gavin Towey

 -Original Message-
 From: joerg.bru...@sun.com [mailto:joerg.bru...@sun.com]
 Sent: Friday, August 07, 2009 1:19 AM
 To: Peter Chacko
 Cc: mysql
 Subject: Re: Question about MySQL

 Hi Peter, all,


 let me just concentrate on the NFS aspect:


 Peter Chacko wrote:
 [[...]]

 Another question is , whats the general experience of  running MySQL
 servers on NFS shares ?

 I would *never* use NFS storage for any DBMS (except for some testing):
 NFS access is slower than local disk access, and it adds more components
 to the critical path. So your operations get slower, reliability
 decreases, and (in case of any trouble) analysis becomes more difficult.

 I cannot imagine any setup where you have a machine strong enough to run
 your DBMS on it, but not capable of driving sufficient local disks.

 The typical argument for having centralized disks serving many machines
 is based on economies of scale (huge disks), flexibility (partitioning),
 and centralized management (RAID replacement, backup).
 There may be some merit to this in a specialized setup (NAS systems -
 I'm not convinced of them, but don't claim expert knowledge about them),
 but IMO not using general-purpose machines and NFS.

 Whatever the economical advantages during normal operation may be, you
 should not forget the huge costs you would incur if any in-between
 component breaks and your database stops operating.
 This may be tolerable for some applications, depending on the required
 availability, but simply intolerable for others.


 We are working on developing a custom
 protocol for MySQL clustering that takes care of all file management
 as part of the database clustering protocol, rather than a storage
 engine feature.

 Sorry, I don't get the meaning of this in sufficient detail to comment.


 What are the likely setup of a database storage ? Is
 it on RAW partition or on a File system ?

 That may depend on the storage engine used.
 MyISAM uses a file-per-table approach, so must be run on a file system;
 InnoDB may be able to use a raw partition (I'm not sure there);
 for others, I can't tell.


   Will ocfs2  be better used
 for mySQL as well, in a clustered environment ?

 I strongly doubt it.

 AIUI, the big advantage of cluster file systems is that they allow all
 machines to access and modify shared data.
 With a DBMS, you don't want to share data across machines, you want the
 database to be accessed only by the one machine (possibly multi-CPU, but
 shared memory) running the database processes, because on that machine
 you have the various database caches.
 Then, that machine makes the data available to all clients, so you get a
 logical sharing on a higher protocol level (SQL).

 To have multiple machines accessing the same database storage, you would
 first need some protocol to ensure cache coherency, and that is not
 contained in MySQL (in the general server).
 To use MySQL on multiple machines for the same data, you set up replication.

 The alternative approach would be to use MySQL Cluster, which is
 designed to hold the data in main memory (for extremely low latency) and
 to use the disk only for backup purposes.



 I would appreciate if any one share with me their thoughts on this.

 My comments above are based on my experience during DBMS development
 (including distributed DBMS), but not on any financial calculations or
 DBA work. Weigh them with other answers

Re: Question about MySQL

2009-08-10 Thread Joerg Bruehe
Hi all!


First of all, please excuse the typo I made in my posting.
I had written
 There may be some merit to this in a specialized setup (NAS systems -
 I'm not convinced of them, but don't claim expert knowledge about them),
and of course meant SAN, not NAS systems.


As regards NFS:

Peter Chacko wrote:
 And NFS is getting better and better with the adoption of 10GbE and
 NFS-over-RDMA; I am sure that at that point nobody will complain about NFS
 performance for databases. For parallel database access, pNFS is also
 shaping up well. And now that the creators of NFS are owned by Oracle, who
 have themselves developed technologies like Direct NFS, NFS and database
 storage will be great buddies in the future.
 
 thanks
 
 On Fri, Aug 7, 2009 at 11:37 PM, Gavin Toweygto...@ffn.com wrote:
 I always accepted that NFS was unacceptably slow for database access,
 until I actually tested it.  Turns out that with lots of RAM and properly
 tuned caches, you're optimizing for minimal IO anyway. A good file server
 will have massive amounts of IO OPS.  On top of that if you're using GigE
 over few hops, then it's really not slower than local disks.


I immediately agree that NFS becomes faster due to technical progress,
like any other part of computing.

But however clever you optimize your NFS setup (including the network,
the servers, ...), you always have the additional latency of the network
and the NFS server (compared to the local disk).
Remember: Database performance is not only about throughput, it is also
about latency.
So NFS can only be slower than a local disk, never the same turnaround
time, let alone faster (assuming equal disks and machines, of course).

Whether that is *too* slow is another question - depending on your
software, your workload, and your hardware it may be fast enough.


However, my main objection against using NFS for database storage is not
performance, it is complexity:
If your database server does not use local disks but NFS, then the
network between the database server and the NFS server as well as that
server suddenly become essential components for your database setup.
As any component may fail, you increase the risk to your DB.

You may reduce the individual risk by selecting better hardware, dual
controllers, dual cabling, mirrored machines, ... as much as you like,
the result will still be higher complexity and higher risks than if you
had applied similar enhancements to your database server and its local
disks.


 Remember: benchmark and test your assumptions!

Agreed.


Regards,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
   (+49 30) 417 01 487
Sun Microsystems GmbH,   Komturstraße 18a,   D-12099 Berlin
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028





Re: Question about MySQL

2009-08-07 Thread Joerg Bruehe
Hi Peter, all,


let me just concentrate on the NFS aspect:


Peter Chacko wrote:
 [[...]]
 
 Another question is , whats the general experience of  running MySQL
 servers on NFS shares ?

I would *never* use NFS storage for any DBMS (except for some testing):
NFS access is slower than local disk access, and it adds more components
to the critical path. So your operations get slower, reliability
decreases, and (in case of any trouble) analysis becomes more difficult.

I cannot imagine any setup where you have a machine strong enough to run
your DBMS on it, but not capable of driving sufficient local disks.

The typical argument for having centralized disks serving many machines
is based on economies of scale (huge disks), flexibility (partitioning),
and centralized management (RAID replacement, backup).
There may be some merit to this in a specialized setup (NAS systems -
I'm not convinced of them, but don't claim expert knowledge about them),
but IMO not using general-purpose machines and NFS.

Whatever the economical advantages during normal operation may be, you
should not forget the huge costs you would incur if any in-between
component breaks and your database stops operating.
This may be tolerable for some applications, depending on the required
availability, but simply intolerable for others.


 We are working on developing a custom
 protocol for MySQL clustering that takes care of all file management
 as part of the database clustering protocol, rather than a storage
 engine feature.

Sorry, I don't get the meaning of this in sufficient detail to comment.


 What are the likely setup of a database storage ? Is
 it on RAW partition or on a File system ?

That may depend on the storage engine used.
MyISAM uses a file-per-table approach, so must be run on a file system;
InnoDB may be able to use a raw partition (I'm not sure there);
for others, I can't tell.


   Will ocfs2  be better used
 for mySQL as well, in a clustered environment ?

I strongly doubt it.

AIUI, the big advantage of cluster file systems is that they allow all
machines to access and modify shared data.
With a DBMS, you don't want to share data across machines, you want the
database to be accessed only by the one machine (possibly multi-CPU, but
shared memory) running the database processes, because on that machine
you have the various database caches.
Then, that machine makes the data available to all clients, so you get a
logical sharing on a higher protocol level (SQL).

To have multiple machines accessing the same database storage, you would
first need some protocol to ensure cache coherency, and that is not
contained in MySQL (in the general server).
To use MySQL on multiple machines for the same data, you set up replication.
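A minimal sketch of that replication setup, with placeholder names and MySQL 5.x-era syntax (two separate my.cnf files, one per machine):

```ini
# --- my.cnf on the master ---
[mysqld]
server-id = 1
log-bin   = mysql-bin

# --- my.cnf on the slave ---
[mysqld]
server-id = 2
# then, once, on the slave:
#   CHANGE MASTER TO MASTER_HOST='master.example.com',
#       MASTER_USER='repl', MASTER_PASSWORD='secret',
#       MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=0;
#   START SLAVE;
```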

The alternative approach would be to use MySQL Cluster, which is
designed to hold the data in main memory (for extremely low latency) and
to use the disk only for backup purposes.


 
 I would appreciate if any one share with me their thoughts on this.

My comments above are based on my experience during DBMS development
(including distributed DBMS), but not on any financial calculations or
DBA work. Weigh them with other answers.


Regards,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
Sun Microsystems GmbH,   Komturstraße 18a,   D-12099 Berlin
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028





Re: Question about MySQL

2009-08-07 Thread Peter Chacko
Hi Jorg,

I really appreciate you sharing your experience and thoughts.

Yes, I fully concur with you: NFS was not designed for databases. But, as
you know, there are distributed SAN file systems (which use direct I/O to
the SAN) serving databases like DB2 in many installations that need shared
storage. NFS cannot match their performance, due to its added latency.
However, Oracle has added Direct NFS support to its storage stack, which
bypasses file system caching and is far better than the native NFS client
driver.

My goal here is to make NFS work much better, as a thin layer optimized
for database I/O, by developing a new NFS stack in a clustered environment,
as part of the clustering intelligence itself.

If anyone is aware of such efforts, please share them with me. (That way we
can avoid duplicating effort and can complement what they are not doing.)

Thanks

On Fri, Aug 7, 2009 at 1:49 PM, Joerg Bruehejoerg.bru...@sun.com wrote:
 Hi Peter, all,


 let me just concentrate on the NFS aspect:


 Peter Chacko wrote:
 [[...]]

 Another question is , whats the general experience of  running MySQL
 servers on NFS shares ?

 I would *never* use NFS storage for any DBMS (except for some testing):
 NFS access is slower than local disk access, and it adds more components
 to the critical path. So your operations get slower, reliability
 decreases, and (in case of any trouble) analysis becomes more difficult.

 I cannot imagine any setup where you have a machine strong enough to run
 your DBMS on it, but not capable of driving sufficient local disks.

 The typical argument for having centralized disks serving many machines
 is based on economies of scale (huge disks), flexibility (partitioning),
 and centralized management (RAID replacement, backup).
 There may be some merit to this in a specialized setup (NAS systems -
 I'm not convinced of them, but don't claim expert knowledge about them),
 but IMO not using general-purpose machines and NFS.

 Whatever the economical advantages during normal operation may be, you
 should not forget the huge costs you would incur if any in-between
 component breaks and your database stops operating.
 This may be tolerable for some applications, depending on the required
 availability, but simply intolerable for others.


                         We are working on developing a custom
 protocol for MySQL clustering that takes care of all file management
 as part of the database clustering protocol, rather than a storage
 engine feature.

 Sorry, I don't get the meaning of this in sufficient detail to comment.


                 What are the likely setup of a database storage ? Is
 it on RAW partition or on a File system ?

 That may depend on the storage engine used.
 MyISAM uses a file-per-table approach, so must be run on a file system;
 InnoDB may be able to use a raw partition (I'm not sure there);
 for others, I can't tell.


                                           Will ocfs2  be better used
 for mySQL as well, in a clustered environment ?

 I strongly doubt it.

 AIUI, the big advantage of cluster file systems is that they allow all
 machines to access and modify shared data.
 With a DBMS, you don't want to share data across machines, you want the
 database to be accessed only by the one machine (possibly multi-CPU, but
 shared memory) running the database processes, because on that machine
 you have the various database caches.
 Then, that machine makes the data available to all clients, so you get a
 logical sharing on a higher protocol level (SQL).

 To have multiple machines accessing the same database storage, you would
 first need some protocol to ensure cache coherency, and that is not
 contained in MySQL (in the general server).
 To use MySQL on multiple machines for the same data, you set up replication.

 The alternative approach would be to use MySQL Cluster, which is
 designed to hold the data in main memory (for extremely low latency) and
 to use the disk only for backup purposes.



 I would appreciate if any one share with me their thoughts on this.

 My comments above are based on my experience during DBMS development
 (including distributed DBMS), but not on any financial calculations or
 DBA work. Weigh them with other answers.


 Regards,
 Jörg

 --
 Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
 Sun Microsystems GmbH,   Komturstraße 18a,   D-12099 Berlin
 Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
 Vorsitzender des Aufsichtsrates: Martin Haering     Muenchen: HRB161028





-- 
Best regards,
Peter Chacko

NetDiox computing systems,
Network storage & OS training and research.
Bangalore, India.
www.netdiox.com
080 2664 0708




RE: Question about MySQL

2009-08-07 Thread Gavin Towey
I always accepted that NFS was unacceptably slow for database access, until I 
actually tested it.  Turns out that with lots of RAM and properly tuned caches, 
you're optimizing for minimal IO anyway. A good file server will have massive 
amounts of IO OPS.  On top of that, if you're using GigE over a few hops,
then it's really not slower than local disks.

Remember: benchmark and test your assumptions!

Regards,
Gavin Towey

-Original Message-
From: joerg.bru...@sun.com [mailto:joerg.bru...@sun.com]
Sent: Friday, August 07, 2009 1:19 AM
To: Peter Chacko
Cc: mysql
Subject: Re: Question about MySQL

Hi Peter, all,


let me just concentrate on the NFS aspect:


Peter Chacko wrote:
 [[...]]

 Another question is , whats the general experience of  running MySQL
 servers on NFS shares ?

I would *never* use NFS storage for any DBMS (except for some testing):
NFS access is slower than local disk access, and it adds more components
to the critical path. So your operations get slower, reliability
decreases, and (in case of any trouble) analysis becomes more difficult.

I cannot imagine any setup where you have a machine strong enough to run
your DBMS on it, but not capable of driving sufficient local disks.

The typical argument for having centralized disks serving many machines
is based on economies of scale (huge disks), flexibility (partitioning),
and centralized management (RAID replacement, backup).
There may be some merit to this in a specialized setup (NAS systems -
I'm not convinced of them, but don't claim expert knowledge about them),
but IMO not using general-purpose machines and NFS.

Whatever the economical advantages during normal operation may be, you
should not forget the huge costs you would incur if any in-between
component breaks and your database stops operating.
This may be tolerable for some applications, depending on the required
availability, but simply intolerable for others.


 We are working on developing a custom
 protocol for MySQL clustering that takes care of all file management
 as part of the database clustering protocol, rather than a storage
 engine feature.

Sorry, I don't get the meaning of this in sufficient detail to comment.


 What are the likely setup of a database storage ? Is
 it on RAW partition or on a File system ?

That may depend on the storage engine used.
MyISAM uses a file-per-table approach, so must be run on a file system;
InnoDB may be able to use a raw partition (I'm not sure there);
for others, I can't tell.


   Will ocfs2  be better used
 for mySQL as well, in a clustered environment ?

I strongly doubt it.

AIUI, the big advantage of cluster file systems is that they allow all
machines to access and modify shared data.
With a DBMS, you don't want to share data across machines, you want the
database to be accessed only by the one machine (possibly multi-CPU, but
shared memory) running the database processes, because on that machine
you have the various database caches.
Then, that machine makes the data available to all clients, so you get a
logical sharing on a higher protocol level (SQL).

To have multiple machines accessing the same database storage, you would
first need some protocol to ensure cache coherency, and that is not
contained in MySQL (in the general server).
To use MySQL on multiple machines for the same data, you set up replication.

The alternative approach would be to use MySQL Cluster, which is
designed to hold the data in main memory (for extremely low latency) and
to use the disk only for backup purposes.



 I would appreciate if any one share with me their thoughts on this.

My comments above are based on my experience during DBMS development
(including distributed DBMS), but not on any financial calculations or
DBA work. Weigh them with other answers.


Regards,
Jörg

--
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
Sun Microsystems GmbH,   Komturstraße 18a,   D-12099 Berlin
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com


The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.




Re: Question about MySQL

2009-08-07 Thread Banyan He
Hi Gavin,

I am interested in the optimizations you made. Can you share them with
us?

Thanks a lot,

-- 
Banyan He
Network & System Security Infrastructure
Mail: ban...@rootong.com
Blog: http://www.rootong.com/blog
LinkedIn: http://www.linkedin.com/in/banyanhe
Website: http://www.rootong.com


 From: Gavin Towey gto...@ffn.com
 Date: Fri, 7 Aug 2009 11:07:19 -0700
 To: joerg.bru...@sun.com joerg.bru...@sun.com, Peter Chacko
 peterchack...@gmail.com
 Cc: mysql mysql@lists.mysql.com
 Subject: RE: Question about MySQL
 
 I always accepted that NFS was unacceptably slow for database access, until I
 actually tested it.  Turns out that with lots of RAM and properly tuned
 caches, you're optimizing for minimal IO anyway. A good file server will have
 massive amounts of IO OPS.  On top of that if you're using GigE over few hops,
 then it's really not slower than local disks.
 
 Remember: benchmark and test your assumptions!
 
 Regards,
 Gavin Towey
 
 -Original Message-
 From: joerg.bru...@sun.com [mailto:joerg.bru...@sun.com]
 Sent: Friday, August 07, 2009 1:19 AM
 To: Peter Chacko
 Cc: mysql
 Subject: Re: Question about MySQL
 
 Hi Peter, all,
 
 
 let me just concentrate on the NFS aspect:
 
 
 Peter Chacko wrote:
 [[...]]
 
 Another question is , whats the general experience of  running MySQL
 servers on NFS shares ?
 
 I would *never* use NFS storage for any DBMS (except for some testing):
 NFS access is slower than local disk access, and it adds more components
 to the critical path. So your operations get slower, reliability
 decreases, and (in case of any trouble) analysis becomes more difficult.
 
 I cannot imagine any setup where you have a machine strong enough to run
 your DBMS on it, but not capable of driving sufficient local disks.
 
 The typical argument for having centralized disks serving many machines
 is based on economies of scale (huge disks), flexibility (partitioning),
 and centralized management (RAID replacement, backup).
 There may be some merit to this in a specialized setup (NAS systems -
 I'm not convinced of them, but don't claim expert knowledge about them),
 but IMO not using general-purpose machines and NFS.
 
 Whatever the economical advantages during normal operation may be, you
 should not forget the huge costs you would incur if any in-between
 component breaks and your database stops operating.
 This may be tolerable for some applications, depending on the required
 availability, but simply intolerable for others.
 
 
 We are working on developing a custom
 protocol for MySQL clustering that takes care of all file management
 as part of the database clustering protocol, rather than a storage
 engine feature.
 
 Sorry, I don't get the meaning of this in sufficient detail to comment.
 
 
 What are the likely setup of a database storage ? Is
 it on RAW partition or on a File system ?
 
 That may depend on the storage engine used.
 MyISAM uses a file-per-table approach, so must be run on a file system;
 InnoDB may be able to use a raw partition (I'm not sure there);
 for others, I can't tell.
 
 
   Will ocfs2  be better used
 for mySQL as well, in a clustered environment ?
 
 I strongly doubt it.
 
 AIUI, the big advantage of cluster file systems is that they allow all
 machines to access and modify shared data.
 With a DBMS, you don't want to share data across machines, you want the
 database to be accessed only by the one machine (possibly multi-CPU, but
 shared memory) running the database processes, because on that machine
 you have the various database caches.
 Then, that machine makes the data available to all clients, so you get a
 logical sharing on a higher protocol level (SQL).
 
 To have multiple machines accessing the same database storage, you would
 first need some protocol to ensure cache coherency, and that is not
 contained in MySQL (in the general server).
 To use MySQL on multiple machines for the same data, you set up replication.
 
 The alternative approach would be to use MySQL Cluster, which is
 designed to hold the data in main memory (for extremely low latency) and
 to use the disk only for backup purposes.
 
 
 
 I would appreciate if any one share with me their thoughts on this.
 
 My comments above are based on my experience during DBMS development
 (including distributed DBMS), but not on any financial calculations or
 DBA work. Weigh them with other answers.
 
 
 Regards,
 Jörg
 
 --
 Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
 Sun Microsystems GmbH,   Komturstraße 18a,   D-12099 Berlin
 Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
 Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com
 
 
 The information contained in this transmission

Re: Question about MySQL

2009-08-07 Thread Peter Chacko
And NFS is becoming better and better with the adoption of 10GbE and
NFS over RDMA; I am sure that at that point nobody will complain about
NFS performance for databases. For parallel database access, pNFS is
also shaping up well. As the creators of NFS are now owned by Oracle,
who have themselves developed technology like Direct NFS, NFS and
database storage will be great buddies in the future.

thanks

 On Fri, Aug 7, 2009 at 11:37 PM, Gavin Towey gto...@ffn.com wrote:
 I always accepted that NFS was unacceptably slow for database access, until I 
 actually tested it.  Turns out that with lots of RAM and properly tuned 
 caches, you're optimizing for minimal IO anyway. A good file server will have 
 massive amounts of IO OPS.  On top of that if you're using GigE over few 
 hops, then it's really not slower than local disks.

 Remember: benchmark and test your assumptions!

 Regards,
 Gavin Towey

 -Original Message-
 From: joerg.bru...@sun.com [mailto:joerg.bru...@sun.com]
 Sent: Friday, August 07, 2009 1:19 AM
 To: Peter Chacko
 Cc: mysql
 Subject: Re: Question about MySQL

 Hi Peter, all,


 let me just concentrate on the NFS aspect:


 Peter Chacko wrote:
 [[...]]

 Another question is , whats the general experience of  running MySQL
 servers on NFS shares ?

 I would *never* use NFS storage for any DBMS (except for some testing):
 NFS access is slower than local disk access, and it adds more components
 to the critical path. So your operations get slower, reliability
 decreases, and (in case of any trouble) analysis becomes more difficult.

 I cannot imagine any setup where you have a machine strong enough to run
 your DBMS on it, but not capable of driving sufficient local disks.

 The typical argument for having centralized disks serving many machines
 is based on economies of scale (huge disks), flexibility (partitioning),
 and centralized management (RAID replacement, backup).
 There may be some merit to this in a specialized setup (NAS systems -
 I'm not convinced of them, but don't claim expert knowledge about them),
 but IMO not using general-purpose machines and NFS.

 Whatever the economical advantages during normal operation may be, you
 should not forget the huge costs you would incur if any in-between
 component breaks and your database stops operating.
 This may be tolerable for some applications, depending on the required
 availability, but simply intolerable for others.


                         We are working on developing a custom
 protocol for MySQL clustering that takes care of all file management
 as part of the database clustering protocol, rather than a storage
 engine feature.

 Sorry, I don't get the meaning of this in sufficient detail to comment.


                 What are the likely setup of a database storage ? Is
 it on RAW partition or on a File system ?

 That may depend on the storage engine used.
 MyISAM uses a file-per-table approach, so must be run on a file system;
 InnoDB may be able to use a raw partition (I'm not sure there);
 for others, I can't tell.


                                           Will ocfs2  be better used
 for mySQL as well, in a clustered environment ?

 I strongly doubt it.

 AIUI, the big advantage of cluster file systems is that they allow all
 machines to access and modify shared data.
 With a DBMS, you don't want to share data across machines, you want the
 database to be accessed only by the one machine (possibly multi-CPU, but
 shared memory) running the database processes, because on that machine
 you have the various database caches.
 Then, that machine makes the data available to all clients, so you get a
 logical sharing on a higher protocol level (SQL).

 To have multiple machines accessing the same database storage, you would
 first need some protocol to ensure cache coherency, and that is not
 contained in MySQL (in the general server).
 To use MySQL on multiple machines for the same data, you set up replication.

 The alternative approach would be to use MySQL Cluster, which is
 designed to hold the data in main memory (for extremely low latency) and
 to use the disk only for backup purposes.



 I would appreciate it if anyone would share their thoughts on this with me.

 My comments above are based on my experience during DBMS development
 (including distributed DBMS), but not on any financial calculations or
 DBA work. Weigh them with other answers.


 Regards,
 Jörg

 --
 Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
 Sun Microsystems GmbH,   Komturstraße 18a,   D-12099 Berlin
 Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
 Vorsitzender des Aufsichtsrates: Martin Haering     Muenchen: HRB161028


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=gto...@ffn.com


 The information contained in this transmission may contain privileged and 
 confidential information. It is intended only for the use of the person(s) 
 named

Re: Question about query - can this be done?

2009-06-02 Thread Brent Baisley
On Tue, Jun 2, 2009 at 11:52 AM, Ray r...@stilltech.net wrote:
 Hello,

 I've tried the manual and google, but I am not even sure what to call what I
 want to do.

 simplified data example:
 I have a table of start and end times for an event, and an id for that event
 in a table. each event may occur multiple times, but never more than 5 times
 and rarely more than 3.
 I want a query that will provide one record per event with all times included.
 feel free to answer RTFM or STFW as long as you provide the manual section or
 key words.  ;)
 Thanks,
 Ray


 chart form follows:

 id | event_id | start | end
 ---
 1  |    4         | t1    | t2
 2  |    4         | t3    | t4
 3  |    4         | t5    | t6
 4  |    5         | t1    | t2
 5  |    5         | t3    | t4

 becomes

 id | event_id | start | end | start | end | start | end
 ---
 ?  |     4        | t1    | t2     | t3    | t4    |  t5    | t6
 ?  |     5        | t1    | t2     | t3    | t4


I think what you are looking for is GROUP_CONCAT. You can just GROUP
BY event id, and then process the resulting delimited string on the
front end.
SELECT event_id, GROUP_CONCAT(start) start_dates, GROUP_CONCAT(end) end_dates
FROM events GROUP BY event_id

Or even combine start and end dates into a single string and group them.
SELECT event_id, GROUP_CONCAT( CONCAT(start, '-', end) ) start_end
FROM events GROUP BY event_id

But, if you really want to get it in the column format you indicate,
you can make a much more complicated query. Use SUBSTRING_INDEX to
split out the parts of the group you need.

SELECT event_id,
SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 1 ) start1,
SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 1 ) end1,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 2 ), ',',
-1 ) start2,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 2 ), ',', -1 ) end2,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 3 ), ',',
-1 ) start3,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 3 ), ',', -1 ) end3,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 4 ), ',',
-1 ) start4,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 4 ), ',', -1 ) end4,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 5 ), ',',
-1 ) start5,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 5 ), ',', -1 ) end5
FROM events GROUP BY event_id;

I think that will give the format you specified, but I am not
recommending you do it this way.
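As a runnable illustration of the GROUP_CONCAT approach above, here is a sketch using Python's built-in SQLite, whose group_concat behaves like MySQL's GROUP_CONCAT for this purpose (table, column names, and data are taken from Ray's chart):

```python
import sqlite3

# Build Ray's example table in an in-memory SQLite database.
# "end" must be quoted because it is a reserved word in SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (id INTEGER, event_id INTEGER, start TEXT, "end" TEXT);
INSERT INTO events VALUES
  (1, 4, 't1', 't2'), (2, 4, 't3', 't4'), (3, 4, 't5', 't6'),
  (4, 5, 't1', 't2'), (5, 5, 't3', 't4');
""")

# One row per event, with all start-end pairs in a single delimited
# string, to be split apart on the front end (e.g. in PHP).
rows = conn.execute("""
    SELECT event_id, group_concat(start || '-' || "end") AS start_end
    FROM events
    GROUP BY event_id
""").fetchall()

for event_id, start_end in rows:
    print(event_id, sorted(start_end.split(",")))
```

MySQL's SUBSTRING_INDEX, used in the wide-column variant above, has no SQLite equivalent, so this sketch stops at the delimited-string form.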

Hope that helps.

Brent Baisley

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question about query - can this be done?

2009-06-02 Thread Peter Brawley

Ray,

I want a query that will provide one record per event with all times included. 
feel free to answer RTFM or STFW as long as you provide the manual section or 
key words.   ;) 


Can be done with a pivot table. Examples under Pivot tables at 
http://www.artfulsoftware.com/infotree/queries.php. If you get stuck, 
pipe up.


PB

-

Ray wrote:
Hello, 

I've tried the manual and google, but I am not even sure what to call what I 
want to do.


simplified data example:
I have a table of start and end times for an event, and an id for that event 
in a table. each event may occur multiple times, but never more than 5 times 
and rarely more than 3. 
I want a query that will provide one record per event with all times included. 
feel free to answer RTFM or STFW as long as you provide the manual section or 
key words.  ;)

Thanks,
Ray


chart form follows:

id | event_id | start | end
---
 1  |    4         | t1    | t2
 2  |    4         | t3    | t4
 3  |    4         | t5    | t6
 4  |    5         | t1    | t2
 5  |    5         | t3    | t4

becomes 


id | event_id | start | end | start | end | start | end
---
 ?  |     4        | t1    | t2     | t3    | t4    |  t5    | t6
 ?  |     5        | t1    | t2     | t3    | t4

  




No virus found in this incoming message.
Checked by AVG - www.avg.com 
Version: 8.5.339 / Virus Database: 270.12.50/2150 - Release Date: 06/02/09 06:47:00


  


Re: Question about query - can this be done?

2009-06-02 Thread Ray
On June 2, 2009 10:44:48 am Peter Brawley wrote:
 Ray,

 I want a query that will provide one record per event with all times
  included. feel free to answer RTFM or STFW as long as you provide the
  manual section or key words.   ;)

 Can be done with a pivot table. Examples under Pivot tables at
 http://www.artfulsoftware.com/infotree/queries.php. If you get stuck,
 pipe up.

 PB


Thanks Peter and Brent.
GROUP_CONCAT does exactly what I want. 
Brent, you're right, I don't really want to break up the times into separate 
fields that badly; the results are going into PHP, so I can parse the combined 
fields there without much difficulty.

The next problem is how do I use the results in a join. My first thought (that 
doesn't work) was:

SELECT event_id, GROUP_CONCAT( CONCAT(start, '-', end) ) start_end FROM events 
GROUP BY event_id JOIN event_details WHERE 
events.event_id=event_details.event_id

I have tried brackets, and a few other things, but I haven't got it yet.
Thanks, 
Ray 


 -

 Ray wrote:
  Hello,
 
  I've tried the manual and google, but I am not even sure what to call
  what I want to do.
 
  simplified data example:
  I have a table of start and end times for an event, and an id for that
  event in a table. each event may occur multiple times, but never more
  than 5 times and rarely more than 3.
  I want a query that will provide one record per event with all times
  included. feel free to answer RTFM or STFW as long as you provide the
  manual section or key words.  ;)
  Thanks,
  Ray
 
 
  chart form follows:
 
  id | event_id | start | end
  ---
  1  |    4     | t1    | t2
  2  |    4     | t3    | t4
  3  |    4     | t5    | t6
  4  |    5     | t1    | t2
  5  |    5     | t3    | t4
 
  becomes
 
  id | event_id | start | end | start | end | start | end
  -------------------------------------------------------
  ?  |     4    | t1    | t2  | t3    | t4  | t5    | t6
  ?  |     5    | t1    | t2  | t3    | t4
 
 
  
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question about query - can this be done?

2009-06-02 Thread Ray
On June 2, 2009 03:14:36 pm Ray wrote:
 On June 2, 2009 10:44:48 am Peter Brawley wrote:
  Ray,
 
  I want a query that will provide one record per event with all times
   included. feel free to answer RTFM or STFW as long as you provide the
   manual section or key words.   ;)
 
  Can be done with a pivot table. Examples under Pivot tables at
  http://www.artfulsoftware.com/infotree/queries.php. If you get stuck,
  pipe up.
 
  PB

 Thanks Peter and Brent.
 GROUP_CONCAT does exactly what I want.
 Brent, you're right, I don't really want to break up the times into
 separate fields that bad, the results are going into PHP so I can parse the
 combined fields there without much difficulty.

 The next problem is how do I use the results in a join. My first thought
 (that doesn't work) was:

 SELECT event_id, GROUP_CONCAT( CONCAT(start, '-', end) ) start_end FROM
 events GROUP BY event_id JOIN event_details WHERE
not sure where this typo came from
I meant ON 

 events.event_id=event_details.event_id

 I have tried brackets, and a few other things, but I haven't got it yet.
 Thanks,
 Ray


I found a solution, but not sure if it's a good idea.

CREATE OR REPLACE VIEW v AS SELECT event_id, GROUP_CONCAT( CONCAT(start, '-', 
end) ) start_end FROM events GROUP BY event_id; 
SELECT * FROM event_details JOIN v ON v.event_id=event_details.event_id 

Thanks, 
Ray

  -
 
  Ray wrote:
   Hello,
  
   I've tried the manual and google, but I am not even sure what to call
   what I want to do.
  
   simplified data example:
   I have a table of start and end times for an event, and an id for that
   event in a table. each event may occur multiple times, but never more
   than 5 times and rarely more than 3.
   I want a query that will provide one record per event with all times
   included. feel free to answer RTFM or STFW as long as you provide the
   manual section or key words.  ;)
   Thanks,
   Ray
  
  
   chart form follows:
  
   id | event_id | start | end
   ---
   1  |    4     | t1    | t2
   2  |    4     | t3    | t4
   3  |    4     | t5    | t6
   4  |    5     | t1    | t2
   5  |    5     | t3    | t4
  
   becomes
  
   id | event_id | start | end | start | end | start | end
   -------------------------------------------------------
   ?  |     4    | t1    | t2  | t3    | t4  | t5    | t6
   ?  |     5    | t1    | t2  | t3    | t4
  
  
  
  


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Question about query - can this be done?

2009-06-02 Thread Nathan Sullivan
Ray,

You can use the results of a query in a join with something like:

select tmp.id, t1.id
from (some_query_selecting_id) as tmp
join t1 on t1.id=tmp.id


Hope that helps.


Regards,
Nathan Sullivan
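Nathan's derived-table pattern, applied to Ray's GROUP_CONCAT query, might look like the following sketch in Python with SQLite (the event_details schema and its contents are invented here for illustration):

```python
import sqlite3

# Ray's events table plus an assumed event_details table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (id INTEGER, event_id INTEGER, start TEXT, "end" TEXT);
CREATE TABLE event_details (event_id INTEGER, name TEXT);
INSERT INTO events VALUES (1, 4, 't1', 't2'), (2, 4, 't3', 't4'),
                          (3, 5, 't1', 't2');
INSERT INTO event_details VALUES (4, 'Fair'), (5, 'Concert');
""")

# The grouped query becomes a derived table (tmp) that the outer
# query joins against, exactly as in the pattern above.
rows = conn.execute("""
    SELECT d.name, tmp.start_end
    FROM (SELECT event_id,
                 group_concat(start || '-' || "end") AS start_end
          FROM events
          GROUP BY event_id) AS tmp
    JOIN event_details AS d ON d.event_id = tmp.event_id
""").fetchall()

print(dict(rows))
```

Unlike the CREATE VIEW workaround, the derived table needs no persistent object in the schema.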

-Original Message-
From: Ray [mailto:r...@stilltech.net] 
Sent: Tuesday, June 02, 2009 4:58 PM
To: mysql@lists.mysql.com
Subject: Re: Question about query - can this be done?

On June 2, 2009 03:14:36 pm Ray wrote:
 On June 2, 2009 10:44:48 am Peter Brawley wrote:
  Ray,
 
  I want a query that will provide one record per event with all times
   included. feel free to answer RTFM or STFW as long as you provide the
   manual section or key words.   ;)
 
  Can be done with a pivot table. Examples under Pivot tables at
  http://www.artfulsoftware.com/infotree/queries.php. If you get stuck,
  pipe up.
 
  PB

 Thanks Peter and Brent.
 GROUP_CONCAT does exactly what I want.
 Brent, you're right, I don't really want to break up the times into
 separate fields that badly; the results are going into PHP, so I can parse the
 combined fields there without much difficulty.

 The next problem is how do I use the results in a join. My first thought
 (that doesn't work) was:

 SELECT event_id, GROUP_CONCAT( CONCAT(start, '-', end) ) start_end FROM
 events GROUP BY event_id JOIN event_details WHERE
not sure where this typo came from
I meant ON 

 events.event_id=event_details.event_id

 I have tried brackets, and a few other things, but I haven't got it yet.
 Thanks,
 Ray


I found a solution, but not sure if it's a good idea.

CREATE OR REPLACE VIEW v AS SELECT event_id, GROUP_CONCAT( CONCAT(start, '-', 
end) ) start_end FROM events GROUP BY event_id; 
SELECT * FROM event_details JOIN v ON v.event_id=event_details.event_id 

Thanks, 
Ray

  -
 
  Ray wrote:
   Hello,
  
   I've tried the manual and google, but I am not even sure what to call
   what I want to do.
  
   simplified data example:
   I have a table of start and end times for an event, and an id for that
   event in a table. each event may occur multiple times, but never more
   than 5 times and rarely more than 3.
   I want a query that will provide one record per event with all times
   included. feel free to answer RTFM or STFW as long as you provide the
   manual section or key words.  ;)
   Thanks,
   Ray
  
  
   chart form follows:
  
   id | event_id | start | end
   ---
    1  |    4     | t1    | t2
    2  |    4     | t3    | t4
    3  |    4     | t5    | t6
    4  |    5     | t1    | t2
    5  |    5     | t3    | t4
  
   becomes
  
    id | event_id | start | end | start | end | start | end
    -------------------------------------------------------
    ?  |     4    | t1    | t2  | t3    | t4  | t5    | t6
    ?  |     5    | t1    | t2  | t3    | t4
  
  
  
  


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=nsulli...@cappex.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



[solved]Re: Question about query - can this be done?

2009-06-02 Thread Ray
On June 2, 2009 04:13:31 pm Nathan Sullivan wrote:
 Ray,

 You can use the results of a query in a join with something like:

 select tmp.id, t1.id
 from (some_query_selecting_id) as tmp
 join t1 on t1.id=tmp.id


 Hope that helps.


 Regards,
 Nathan Sullivan

Thanks Nathan, 
I think that completes the picture.
Just what I was looking for.
Ray


 -Original Message-
 From: Ray [mailto:r...@stilltech.net]
 Sent: Tuesday, June 02, 2009 4:58 PM
 To: mysql@lists.mysql.com
 Subject: Re: Question about query - can this be done?

 On June 2, 2009 03:14:36 pm Ray wrote:
  On June 2, 2009 10:44:48 am Peter Brawley wrote:
   Ray,
  
   I want a query that will provide one record per event with all times
included. feel free to answer RTFM or STFW as long as you provide the
manual section or key words.   ;)
  
   Can be done with a pivot table. Examples under Pivot tables at
   http://www.artfulsoftware.com/infotree/queries.php. If you get stuck,
   pipe up.
  
   PB
 
  Thanks Peter and Brent.
  GROUP_CONCAT does exactly what I want.
  Brent, you're right, I don't really want to break up the times into
  separate fields that badly; the results are going into PHP, so I can parse
  the combined fields there without much difficulty.
 
  The next problem is how do I use the results in a join. My first thought
  (that doesn't work) was:
 
  SELECT event_id, GROUP_CONCAT( CONCAT(start, '-', end) ) start_end FROM
  events GROUP BY event_id JOIN event_details WHERE

 not sure where this typo came from
 I meant ON

  events.event_id=event_details.event_id
 
  I have tried brackets, and a few other things, but I haven't got it yet.
  Thanks,
  Ray

 I found a solution, but not sure if it's a good idea.

 CREATE OR REPLACE VIEW v AS SELECT event_id, GROUP_CONCAT( CONCAT(start,
 '-', end) ) start_end FROM events GROUP BY event_id;
 SELECT * FROM event_details JOIN v ON
 v.event_id=event_details.event_id

 Thanks,
 Ray

   -
  
   Ray wrote:
Hello,
   
I've tried the manual and google, but I am not even sure what to call
what I want to do.
   
simplified data example:
I have a table of start and end times for an event, and an id for
that event in a table. each event may occur multiple times, but never
more than 5 times and rarely more than 3.
I want a query that will provide one record per event with all times
included. feel free to answer RTFM or STFW as long as you provide the
manual section or key words.  ;)
Thanks,
Ray
   
   
chart form follows:
   
id | event_id | start | end
---
 1  |    4     | t1    | t2
 2  |    4     | t3    | t4
 3  |    4     | t5    | t6
 4  |    5     | t1    | t2
 5  |    5     | t3    | t4
   
becomes
   
 id | event_id | start | end | start | end | start | end
 -------------------------------------------------------
 ?  |     4    | t1    | t2  | t3    | t4  | t5    | t6
 ?  |     5    | t1    | t2  | t3    | t4
   
   

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=nsulli...@cappex.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Question about Maria readers/writers

2009-05-22 Thread Martin Gainty

Only if you are using the InnoDB transactional storage engine.

MySQL uses table-level locking for MyISAM,
MEMORY and MERGE tables,
page-level locking for BDB tables, and
row-level locking for InnoDB tables.
  
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html

Martin 
__ 
Disclaimer and confidentiality note (translated from German/French):

This message is confidential. If you are not the intended recipient, we
kindly ask you to notify the sender. Any unauthorized forwarding or
copying is prohibited. This message serves only to exchange information
and has no legally binding effect. Because e-mails can easily be
manipulated, we accept no liability for their content.




 Date: Fri, 22 May 2009 09:32:02 -0500
 To: mysql@lists.mysql.com
 From: mo...@fastmail.fm
 Subject: Question about Maria readers/writers
 
  From the MySQL v6 manual:
 http://dev.mysql.com/doc/refman/6.0/en/se-maria-concurrency.html
 
 +
 When using transactional tables, Maria supports a single writer and 
 multiple readers. The single writer supports both INSERT 
 (http://dev.mysql.com/doc/refman/6.0/en/insert.html) and UPDATE 
 (http://dev.mysql.com/doc/refman/6.0/en/update.html) operations.
 
 SELECT
 
 All issued SELECT's are running concurrently. While a SELECT is running, 
 all writers (INSERT, DELETE, UPDATE) are blocked from using any of the used 
 tables (ie, they wait for the table to be free before continuing).
 +
 
 If it does support a single writer and multiple readers, why are the 
 writers blocked when a Select is running? Shouldn't a single writer be able 
 to run with concurrent Selects?
 
 Mike 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mgai...@hotmail.com
 


Re: Question regards mysqldump and replication

2009-05-03 Thread Shawn Green

Dominik Klein wrote:

Hi.

I have a question regarding mysql replication and mysqldump.

I have a master (A). All my clients insert/update/delete only to this
master. Then I have a Slave (B). This slave only replicates the master.
There are no other processes changing/inserting data into the Slave. The
slave also logs binlog so I could replicate from that server as well.

Now I want a chained slave (like A - B - C, C being the chained slave).

So my idea is: stop replication on B so no changes during dump, dump its
master status, mysqldump all databases. Then load the dump on C and
configure slave on C according to the master status from B.

I did that and end up in hundreds of duplicate key errors. How can that
be? What should I do now? Do I need to wait for some settling after I
have stop slave on B and before starting the actual mysqldump?

Mysql Version is 5.0.51b on A and B, 5.0.77 on C, operating system is linux.



If you did this:

(on B)
STOP SLAVE;
SHOW MASTER STATUS;
dump all data

(on C)
restore all data
CHANGE MASTER TO ... binary log coordinates from B
START SLAVE;

Then this should have worked.

If C started replicating from the first binary log of B, then this would 
very easily explain the mass of duplicate key problems.


Double check your replication coordinates.
--
Shawn Green, MySQL Senior Support Engineer
Sun Microsystems, Inc.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question on replication terminology

2009-04-29 Thread Eric Bergen
Dual master replication can be either dual master dual write or dual
master single writer. The latter is preferred. In this configuration
replication is connected in both directions but clients only ever
connect to one master at a time. It's just as safe as master - slave
replication if you handle the failover correctly.

-Eric

On Tue, Apr 28, 2009 at 3:43 PM, Claudio Nanni claudio.na...@gmail.com wrote:
 Hi there,
 I would only like to stress that the only supported (and recommended)
 replication solution in MySQL is
 Master---Slave  replication.
 In this scenario you can have ONLY one master and (virtually) any number of
 slaves.
 There is NO other safe replication solution.
 The terms you mention seem to refer to the same solution, where you have
 two servers each acting as a master:
 this is a non-standard, dangerous scenario in MySQL and requires application
 logic awareness.

 Hope to have brought a little light in your mind

 Cheers
 Claudio



 Vikram Vaswani wrote:

 Hi

 I'm new to replication and looking through some docs on how to use it.
 Could
 someone please tell me if the following terms mean the same thing or, if
 not, what is the difference:

 master-master replication
 dual-master replication
 bidirectional replication

 TIA
 -BT




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question on replication terminology

2009-04-29 Thread Curtis Maurand


I think what's really being sought here is clustering.

--C

Eric Bergen wrote:

Dual master replication can be either dual master dual write or dual
master single writer. The latter is preferred. In this configuration
replication is connected in both directions but clients only ever
connect to one master at a time. It's just as safe as master - slave
replication if you handle the failover correctly.

-Eric

On Tue, Apr 28, 2009 at 3:43 PM, Claudio Nanni claudio.na...@gmail.com wrote:
  

Hi there,
I would only like to stress that the only supported (and recommended)
replication solution in MySQL is
Master---Slave  replication.
In this scenario you can have ONLY one master and (virtually) any number of
slaves.
There is NO other safe replication solution.
The terms you mention seem to refer to the same solution, where you have
two servers each acting as a master:
this is a non-standard, dangerous scenario in MySQL and requires application
logic awareness.

Hope to have brought a little light in your mind

Cheers
Claudio



Vikram Vaswani wrote:


Hi

I'm new to replication and looking through some docs on how to use it.
Could
someone please tell me if the following terms mean the same thing or, if
not, what is the difference:

master-master replication
dual-master replication
bidirectional replication

TIA
-BT


  

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com







  




Re: Question on replication terminology

2009-04-28 Thread Claudio Nanni

Hi there,
I would only like to stress that the only supported (and recommended) 
replication solution in MySQL is

Master---Slave  replication.
In this scenario you can have ONLY one master and (virtually) any number 
of slaves.

There is NO other safe replication solution.
The terms you mention seem to refer to the same solution, where you 
have two servers each acting as a master:
this is a non-standard, dangerous scenario in MySQL and requires 
application-logic awareness.


Hope to have brought a little light in your mind

Cheers
Claudio



Vikram Vaswani wrote:

Hi

I'm new to replication and looking through some docs on how to use it. Could
someone please tell me if the following terms mean the same thing or, if
not, what is the difference:

master-master replication
dual-master replication
bidirectional replication

TIA
-BT

  



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



re: Question!

2009-04-02 Thread Michael Widenius

Hi!

 Jarikre == Jarikre Efemena jefem...@yahoo.com writes:

Jarikre Dear sir,
Jarikre  
Jarikre I am young web developer using PHP Script in designing interactive 
website. I desire to include Mysql database on my websites. 
Jarikre  
Jarikre Please, how do I import, upload/export Mysql database to a website 
server after creating a Mysql user account and a particular database on my 
local machine?
Jarikre  
Jarikre I will be very grateful if comprehensive response is granted to my 
question.

Just copy the files in your data directory or use 'mysqldump'.

Regards,
Monty


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question!

2009-04-01 Thread Jujitsu Lizard
On Tue, Mar 31, 2009 at 1:30 AM, Jarikre Efemena jefem...@yahoo.com wrote:

 Dear sir,

  I am a young web developer using PHP scripts to design interactive websites.
  I would like to include a MySQL database on my websites.

  Please, how do I import, upload/export a MySQL database to a website server
  after creating a MySQL user account and a particular database on my local
  machine?

 I will be very grateful if comprehensive response is granted to my
 question.

There are a number of books that discuss using PHP and MySQL together.  Most
have both product names in the title.  Just about any one of them will do as
a starting point.

There are also a number of books that discuss LAMP (Linux + Apache + MySQL +
PHP).  Those would generally also do as a starting point.

Best regards, Dave A.


RE: Question!

2009-03-31 Thread Joshua Gordon
Read the online Manual.

-Original Message-
From: Jarikre Efemena [mailto:jefem...@yahoo.com] 
Sent: Monday, March 30, 2009 11:30 PM
To: mysql@lists.mysql.com
Subject: Question!

Dear sir,
 
I am a young web developer using PHP scripts to design interactive websites. I 
would like to include a MySQL database on my websites. 
 
Please, how do I import, upload/export a MySQL database to a website server after 
creating a MySQL user account and a particular database on my local machine?
 
I will be very grateful if comprehensive response is granted to my question.
 
Thank You.
 
Efe Jarikre
Nigeria 


  




Re: Question about triggers

2009-02-02 Thread Baron Schwartz
Send the value of @@server_id in the message, and make sure each
server has a unique value for that.  Compare the value in the received
message to the value in the server and see whether you should stop the
loop.
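
A minimal sketch of this loop-prevention idea (plain Python; the message shape and names are illustrative, not part of the Spread Toolkit or the Message API): each server tags outgoing messages with its own unique server id, and a receiver applies only messages that originated elsewhere.

```python
MY_SERVER_ID = 1  # each server configures a unique @@server_id

def should_apply(message):
    # Apply only rows that originated on some other server; this breaks the loop.
    return message["origin_server_id"] != MY_SERVER_ID

incoming = [
    {"origin_server_id": 2, "row": ("alice", 42)},  # from server B: apply
    {"origin_server_id": 1, "row": ("bob", 7)},     # our own message echoed back: skip
]
applied = [m["row"] for m in incoming if should_apply(m)]
print(applied)  # [('alice', 42)]
```

The trigger itself always fires; the filtering happens on the receiving side, which is why no "don't fire the trigger for this statement" switch is needed.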

On Mon, Feb 2, 2009 at 4:38 AM, Tobias Stocker
tobias.stoc...@ch.netstream.com wrote:
 Hy there,

 I'm planning to build a small partial replication on MySQL 5.0 using the 
 Spread Toolkit and the Message API for MySQL. Therefore I'll create a trigger 
 (on insert) which fetches the row and sends it to the message group XY.

 I was wondering if there is a way to tell MySQL in an SQL statement to not 
 fire the trigger, for example if a message from Server B arrives on A - and 
 contrariwise - I don't want the trigger to be launched to avoid loops and 
 such.

 I actually couldn't find anything about this topic. So, my question: is it 
 possible at all?

 Regards and thanks in advance,
 Toby






-- 
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html




Re: question about natural join

2009-01-21 Thread ceo

The natural join will JOIN on *all* the fields whose names match, not just the 
ones you want it to.



In particular, the JOIN is matching up the two tables' .expires columns with =



You then use WHERE to get only the ones with <



This is a contradiction: there are NO records that are both = and < on the field
.expires.



You don't really want a natural JOIN here.



And, honestly, natural JOIN generally ends up being problematic sooner or 
later.  You end up adding some kind of field to both tables that should not be 
included (last_update, e.g.) and suddenly your query isn't right any more.



Stick with the explicit WHERE clauses that make it crystal clear what your 
query does.



Just as SELECT * is bad
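
To see the pitfall concretely, here is a small runnable illustration (using Python's sqlite3, which also supports NATURAL JOIN; the table and column names are made up for the demo): because NATURAL JOIN matches on every shared column name, a row vanishes as soon as the two `expires` values differ, while an explicit ON clause joins only on what you name.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sessions (user_id INT, expires INT);
CREATE TABLE accounts (user_id INT, expires INT);
INSERT INTO sessions VALUES (1, 100), (2, 200);
INSERT INTO accounts VALUES (1, 150), (2, 200);
""")

# NATURAL JOIN silently matches on BOTH shared columns (user_id AND expires),
# so user 1 disappears because the two expires values differ.
natural = con.execute(
    "SELECT user_id FROM sessions NATURAL JOIN accounts ORDER BY user_id"
).fetchall()

# An explicit ON clause joins only on the column you actually intend.
explicit = con.execute(
    "SELECT s.user_id FROM sessions s JOIN accounts a"
    " ON s.user_id = a.user_id ORDER BY s.user_id"
).fetchall()

print(natural)   # [(2,)]
print(explicit)  # [(1,), (2,)]
```

Adding any new same-named column to both tables (last_update, say) would change the NATURAL JOIN result again without any change to the query text.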






Re: question about natural join

2009-01-21 Thread doug

Thank you.

On Wed, 21 Jan 2009, c...@l-i-e.com wrote:



The natural join will JOIN on *all* the fields whose names match, not just the 
ones you want it to.

In particular, the JOIN is matching up the two tables' .expires columns with =

You then use WHERE to get only the ones with <

This is a contradiction: there are NO records that are both = and < on the field
.expires.

You don't really want a natural JOIN here.

And, honestly, natural JOIN generally ends up being problematic sooner or 
later.  You end up adding some kind of field to both tables that should not be 
included (last_update, e.g.) and suddenly your query isn't right any more.

Stick with the explicit WHERE clauses that make it crystal clear what your 
query does.

Just as SELECT * is bad





_
Douglas Denault
http://www.safeport.com
d...@safeport.com
Voice: 301-217-9220
  Fax: 301-217-9277




Re: Question about Master-Master replication: Is this possible?

2009-01-13 Thread Baron Schwartz
On Tue, Jan 13, 2009 at 12:32 PM, Frank Becker
computersac...@beckerwelt.de wrote:
 Hello together,

 I have successfully set up a master-master-replication between two
 servers. My question is: Is it possible to set up such a replication
 between three (or more) servers? Like this


 Master3 --- Master1 --- Master2
   |
Master4


These types of questions can always be answered by asking: does my
proposed setup require any server to have more than one master?  If
so, it's currently not possible.

You didn't draw arrows between the servers, so I can't really answer
you.  I can say that this is possible:

 Master3 <--- Master1 <--- Master2
                 |
                 v
              Master4


but this is not:


 Master3 ---> Master1 <--- Master2
                 |
                 v
              Master4


Why not?  Simply because in this diagram, Master1 is the slave of both
Master2 and Master3, which is impossible.

--
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html




RE: Question about Master-Master replication: Is this possible?

2009-01-13 Thread Rolando Edwards
In the topology you just illustrated, you need to be specific about your scheme 
using arrows. Here are some examples:

==

Example 1: This is MultiMaster Replication among 4 servers

Master1 ---> Master2
   ^            |
   |            v
Master4 <--- Master3

==

Example 2: This is Tree Replication among 4 servers

         Master1
        /   |   \
       v    v    v
  Slave2  Slave3  Slave4

==

Example 3: This is Chained Replication among 4 servers

Master1 ---> Slave2 ---> Slave3 ---> Slave4

==

Example 4: This is MultiMaster Replication among 4 servers
Using two masters and two slaves

Master1 <---> Master2
   |             |
   v             v
Slave3        Slave4

==

As long as you obey the rule: A SLAVE CANNOT HAVE TWO MASTERS,
there are many replication topologies that are possible
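
That rule can be checked mechanically. A small sketch (plain Python; the function name is illustrative) that validates a proposed topology, where each edge is a (master, slave) pair:

```python
def valid_topology(edges):
    # Collect the set of masters feeding each server.
    masters_of = {}
    for master, slave in edges:
        masters_of.setdefault(slave, set()).add(master)
    # Classic (single-source) MySQL replication: at most one master per server.
    return all(len(masters) <= 1 for masters in masters_of.values())

ring = [(1, 2), (2, 3), (3, 4), (4, 1)]   # Example 1: circular, allowed
fan_in = [(2, 1), (3, 1), (1, 4)]         # server 1 would have two masters
print(valid_topology(ring))    # True
print(valid_topology(fan_in))  # False
```

A ring passes because every server has exactly one master; any fan-in of two masters onto one slave fails the check.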

-Original Message-
From: Frank Becker [mailto:computersac...@beckerwelt.de] 
Sent: Tuesday, January 13, 2009 12:33 PM
To: mysql@lists.mysql.com
Subject: Question about Master-Master replication: Is this possible?

Hello together,

I have successfully set up a master-master-replication between two
servers. My question is: Is it possible to set up such a replication
between three (or more) servers? Like this


Master3 --- Master1 --- Master2
   |
Master4

Thanks for your feedback
Best regards

Frank






Re: Question about Master-Master replication: Is this possible?

2009-01-13 Thread Frank Becker
Hello Baron, thanks for your response.

 These types of questions can always be answered by asking: does my
 proposed setup require any server to have more than one master?  If
 so, it's currently not possible.

What I want to do is the following:
eGroupware is an enterprise groupware solution. I started with eGroupware
on a single server. If I or my wife is out of the office (e.g. on a train) we
have no internet access with our notebooks. It's too expensive. Ok.
I then set up a virtual server with VMware and set up a
master-master replication between the server (master1) and the virtual server
(master2). This works fine. The notebook has its own server, and if the
notebook is in the home network it replicates the changes.

Now I want to connect another notebook in the same way. If I do it again
and again, the result would be a star topology of masters.

That is why I ask. I don't want a master of disaster.

You asked for arrows. Here are they:

Master3 (virtual) <--> Master1 <--> Master2 (virtual)
                          ^
                          |
                          v
                   Master4 (virtual)
 
Thank you for your help.

Frank Becker






RE: Question on default database for stored functions

2008-12-26 Thread Jerry Schwartz


-Original Message-
From: blue.trapez...@gmail.com [mailto:blue.trapez...@gmail.com] On
Behalf Of Vikram Vaswani
Sent: Thursday, December 25, 2008 5:47 AM
To: mysql@lists.mysql.com
Subject: Question on default database for stored functions

Hi

According to the MySQL manual, "By default, a routine is associated with
the default database. When the routine is invoked, an implicit USE
db_name is performed (and undone when the routine terminates)"

However, when I tried accessing a stored routine from another database,
I received an error. Output below:

mysql> USE test
[JS] Stupid question, but do you have a semicolon at the end of the USE
statement?

mysql> DELIMITER //
mysql> CREATE FUNCTION get_area(radius INT)
    ->   RETURNS FLOAT
    ->   BEGIN
    ->     RETURN PI() * radius * radius;
    ->   END
    -> //
Query OK, 0 rows affected (0.13 sec)
mysql> DELIMITER ;
mysql> USE test2
Database changed
mysql> select get_area(11);
ERROR 1305 (42000): FUNCTION test2.get_area does not exist

Can someone tell me what I'm doing wrong? Thanks.

Vikram









Re: Question on default database for stored functions

2008-12-26 Thread Peter Brawley

select get_area(11);
ERROR 1305 (42000): FUNCTION test2.get_area does not exist
Can someone tell me what I'm doing wrong? Thanks.


SELECT dbWhereFunctionWasCreated.get_area(11);

PB

-

Jerry Schwartz wrote:
  

-Original Message-
From: blue.trapez...@gmail.com [mailto:blue.trapez...@gmail.com] On
Behalf Of Vikram Vaswani
Sent: Thursday, December 25, 2008 5:47 AM
To: mysql@lists.mysql.com
Subject: Question on default database for stored functions

Hi

According to the MySQL manual, "By default, a routine is associated with
the default database. When the routine is invoked, an implicit USE
db_name is performed (and undone when the routine terminates)"

However, when I tried accessing a stored routine from another database,
I received an error. Output below:

mysql> USE test

[JS] Stupid question, but do you have a semicolon at the end of the USE
statement?

mysql> DELIMITER //
mysql> CREATE FUNCTION get_area(radius INT)
    ->   RETURNS FLOAT
    ->   BEGIN
    ->     RETURN PI() * radius * radius;
    ->   END
    -> //
Query OK, 0 rows affected (0.13 sec)
mysql> DELIMITER ;
mysql> USE test2
Database changed
mysql> select get_area(11);
ERROR 1305 (42000): FUNCTION test2.get_area does not exist

Can someone tell me what I'm doing wrong? Thanks.

Vikram








  






  


Re: Question about Averaging IF() function results

2008-11-04 Thread Peter Brawley

Eric,

I'd replace 

 (avg(IF(avgTest.Q1<>7,avgTest.Q1,Null))
 +avg(IF(avgTest.Q2<>7,avgTest.Q2,Null))
 +avg(IF(avgTest.Q3<>7,avgTest.Q3,Null))
 +avg(IF(avgTest.Q4<>7,avgTest.Q4,Null))
 +avg(IF(avgTest.Q5<>7,avgTest.Q5,Null)))/5 as overallAvg from avgTest
group by course;


with ...

(SUM(IF(avgTest.Q1<>7,avgTest.Q1,0)) + SUM(IF(avgTest.Q2<>7,avgTest.Q2,0)) +
SUM(IF(avgTest.Q3<>7,avgTest.Q3,0)) +
SUM(IF(avgTest.Q4<>7,avgTest.Q4,0)) + SUM(IF(avgTest.Q5<>7,avgTest.Q5,0))) /
GREATEST(1, SUM(IF(avgTest.Q1<>7,1,0)) + SUM(IF(avgTest.Q2<>7,1,0)) +
SUM(IF(avgTest.Q3<>7,1,0)) + SUM(IF(avgTest.Q4<>7,1,0)) + SUM(IF(avgTest.Q5<>7,1,0)))


PB
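
The same divide-by-the-number-of-answered-questions idea can be tried out directly. A runnable sketch (Python's sqlite3 standing in for MySQL, so CASE WHEN replaces IF() and sqlite's variadic max() replaces GREATEST(); the table and data come from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE avgTest (Course TEXT, Q1 INT, Q2 INT, Q3 INT, Q4 INT, Q5 INT);
INSERT INTO avgTest VALUES ('HUM300', 6, 6, 7, 6, 6),
                           ('HUM301', 6, 6, 6, 6, 6),
                           ('HUM301', 7, 7, 7, 7, 7);
""")

# Per-column averages skip NA (stored as 7) via NULL; the overall average
# divides the sum of non-NA answers by the count of non-NA answers, not by 5.
sql = """
SELECT Course,
       AVG(CASE WHEN Q1 <> 7 THEN Q1 END)            AS AvgOfQ1,
       (SUM(CASE WHEN Q1 <> 7 THEN Q1 ELSE 0 END) +
        SUM(CASE WHEN Q2 <> 7 THEN Q2 ELSE 0 END) +
        SUM(CASE WHEN Q3 <> 7 THEN Q3 ELSE 0 END) +
        SUM(CASE WHEN Q4 <> 7 THEN Q4 ELSE 0 END) +
        SUM(CASE WHEN Q5 <> 7 THEN Q5 ELSE 0 END)) * 1.0 /
       MAX(1, SUM(Q1 <> 7) + SUM(Q2 <> 7) + SUM(Q3 <> 7) +
              SUM(Q4 <> 7) + SUM(Q5 <> 7))           AS overallAvg
FROM avgTest
GROUP BY Course
ORDER BY Course;
"""
rows = con.execute(sql).fetchall()
print(rows)  # [('HUM300', 6.0, 6.0), ('HUM301', 6.0, 6.0)]
```

HUM300 now yields 6.0 overall (24 points over 4 answered questions) instead of the 4.8 that a fixed /5 produces, matching the desired results in the thread.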



Eric Lommatsch wrote:

Hello List,
 
I have a question about trying to calculate an average across columns. 
I am trying to calculate the results of surveys where in the data I 
have individuals that have marked questions on the survey as N/A. in 
my survey I am using 1-6 as the evaluated answers and if the person 
marked NA the stored value is 7.
 
Here is a table with some sample data of what I am using to test the 
calculation I am working on:  ( actually this is simplified from the 
actual data but the results I get are still the same)
 
CREATE TABLE `avgTest` (

  `Course` varchar(8) default NULL,
  `Q1` int(11) default NULL,
  `Q2` int(11) default NULL,
  `Q3` int(11) default NULL,
  `Q4` int(11) default NULL,
  `Q5` int(11) default NULL
)
 
Course|Q1|Q2|Q3|Q4|Q5
-

HUM300  |6  | 6  | 7 |  6 |6
HUM301  |6  | 6  | 6 |  6 |6
HUM301  |7  | 7  | 7 |  7 |7
 
Here is the query that I am using to perform the calculations
 
select course,

  avg(IF(avgTest.Q1<>7,avgTest.Q1,Null)) as AvgOfQ1,
  avg(IF(avgTest.Q2<>7,avgTest.Q2,Null)) as AvgOfQ2,
  avg(IF(avgTest.Q3<>7,avgTest.Q3,Null)) as AvgOfQ3,
  avg(IF(avgTest.Q4<>7,avgTest.Q4,Null)) as AvgOfQ4,
  avg(IF(avgTest.Q5<>7,avgTest.Q5,Null)) as AvgOfQ5,
  (avg(IF(avgTest.Q1<>7,avgTest.Q1,Null))
  +avg(IF(avgTest.Q2<>7,avgTest.Q2,Null))
  +avg(IF(avgTest.Q3<>7,avgTest.Q3,Null))
  +avg(IF(avgTest.Q4<>7,avgTest.Q4,Null))
  +avg(IF(avgTest.Q5<>7,avgTest.Q5,Null)))/5 as overallAvg from avgTest
group by course;
 
Here are the results that I get that are incorrect.
 
Course |AvgOfQ1|AvgOfQ2|AvgOfQ3|AvgOfQ4|AvgOfQ5|overallAvg
----------------------------------------------------------
HUM300 | 6.000 | 6.000 |  Null | 6.000 | 6.000 |   Null
HUM301 | 6.000 | 6.000 | 6.000 | 6.000 | 6.000 |  6.000
 
Here are the results that I get that when I change using null in the 
query to a 0.
 
Course |AvgOfQ1|AvgOfQ2|AvgOfQ3|AvgOfQ4|AvgOfQ5|overallAvg
----------------------------------------------------------
HUM300 | 6.000 | 6.000 | 0.000 | 6.000 | 6.000 |  4.800
HUM301 | 6.000 | 6.000 | 6.000 | 6.000 | 6.000 |  6.000
 
Here are the results that I want to be getting from the query that I 
am working with.
 
Course |AvgOfQ1|AvgOfQ2|AvgOfQ3|AvgOfQ4|AvgOfQ5|overallAvg
----------------------------------------------------------
HUM300 | 6.000 | 6.000 |  Null | 6.000 | 6.000 |  6.000
HUM301 | 6.000 | 6.000 | 6.000 | 6.000 | 6.000 |  6.000
 
I tried using the if function without a false answer and I am getting 
a syntax error when I do this.
 
If it is possible for me to get this correct result in MySQL, can 
someone provide me with the correct query syntax to get these results?
 
 
Thank you
 
Eric H. Lommatsch

Programmer
360 Business
2087 South Grant Street
Denver, CO 80210
Tel 303-777-8939
Fax 303-778-0378
 
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]


 







  


RE: Question about Averaging IF() function results

2008-11-04 Thread Eric Lommatsch
Hello Peter,
 
Thanks for your suggestion, I think I have found another way to get the
average that I need. 
 
If the formula I have come up with does not work I will try your formula.
 
Thank you
 
Eric H. Lommatsch
Programmer
360 Business 
2087 South Grant Street
Denver, CO 80210
Tel 303-777-8939
Fax 303-778-0378
 
[EMAIL PROTECTED]
 



From: Peter Brawley [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 04, 2008 1:14 PM
To: Eric Lommatsch
Cc: mysql@lists.mysql.com
Subject: Re: Question about Averaging IF() function results


Eric,

I'd replace 

  (avg(IF(avgTest.Q1<>7,avgTest.Q1,Null))
  +avg(IF(avgTest.Q2<>7,avgTest.Q2,Null))
  +avg(IF(avgTest.Q3<>7,avgTest.Q3,Null))
  +avg(IF(avgTest.Q4<>7,avgTest.Q4,Null))
  +avg(IF(avgTest.Q5<>7,avgTest.Q5,Null)))/5 as overallAvg from avgTest group
by course;

with ...

(SUM(IF(avgTest.Q1<>7,avgTest.Q1,0)) + SUM(IF(avgTest.Q2<>7,avgTest.Q2,0)) +
SUM(IF(avgTest.Q3<>7,avgTest.Q3,0)) +
SUM(IF(avgTest.Q4<>7,avgTest.Q4,0)) + SUM(IF(avgTest.Q5<>7,avgTest.Q5,0))) /
GREATEST(1, SUM(IF(avgTest.Q1<>7,1,0)) + SUM(IF(avgTest.Q2<>7,1,0)) +
SUM(IF(avgTest.Q3<>7,1,0)) + SUM(IF(avgTest.Q4<>7,1,0)) + SUM(IF(avgTest.Q5<>7,1,0)))

PB



Eric Lommatsch wrote: 

Hello List,
 
I have a question about trying to calculate an average across
columns. I am trying to calculate the results of surveys where in the data I
have individuals that have marked questions on the survey as N/A. in my
survey I am using 1-6 as the evaluated answers and if the person marked NA
the stored value is 7.
 
Here is a table with some sample data of what I am using to test the
calculation I am working on:  ( actually this is simplified from the actual
data but the results I get are still the same)
 
CREATE TABLE `avgTest` (
  `Course` varchar(8) default NULL,
  `Q1` int(11) default NULL,
  `Q2` int(11) default NULL,
  `Q3` int(11) default NULL,
  `Q4` int(11) default NULL,
  `Q5` int(11) default NULL
)
 
Course|Q1|Q2|Q3|Q4|Q5

-

HUM300  |6  | 6  | 7 |  6 |6
HUM301  |6  | 6  | 6 |  6 |6

HUM301  |7  | 7  | 7 |  7 |7
 
Here is the query that I am using to perform the calculations
 
select course,
  avg(IF(avgTest.Q1<>7,avgTest.Q1,Null)) as AvgOfQ1,
  avg(IF(avgTest.Q2<>7,avgTest.Q2,Null)) as AvgOfQ2,
  avg(IF(avgTest.Q3<>7,avgTest.Q3,Null)) as AvgOfQ3,
  avg(IF(avgTest.Q4<>7,avgTest.Q4,Null)) as AvgOfQ4,
  avg(IF(avgTest.Q5<>7,avgTest.Q5,Null)) as AvgOfQ5,
  (avg(IF(avgTest.Q1<>7,avgTest.Q1,Null))
  +avg(IF(avgTest.Q2<>7,avgTest.Q2,Null))
  +avg(IF(avgTest.Q3<>7,avgTest.Q3,Null))
  +avg(IF(avgTest.Q4<>7,avgTest.Q4,Null))
  +avg(IF(avgTest.Q5<>7,avgTest.Q5,Null)))/5 as overallAvg from
avgTest group by course;
 
Here are the results that I get that are incorrect.
 

Course |AvgOfQ1|AvgOfQ2|AvgOfQ3|AvgOfQ4|AvgOfQ5|overallAvg
----------------------------------------------------------
HUM300 | 6.000 | 6.000 |  Null | 6.000 | 6.000 |   Null
HUM301 | 6.000 | 6.000 | 6.000 | 6.000 | 6.000 |  6.000
 

Here are the results that I get that when I change using null in the
query to a 0.
 

Course |AvgOfQ1|AvgOfQ2|AvgOfQ3|AvgOfQ4|AvgOfQ5|overallAvg
----------------------------------------------------------
HUM300 | 6.000 | 6.000 | 0.000 | 6.000 | 6.000 |  4.800
HUM301 | 6.000 | 6.000 | 6.000 | 6.000 | 6.000 |  6.000
 

Here are the results that I want to be getting from the query that I
am working with.
 

Course |AvgOfQ1|AvgOfQ2|AvgOfQ3|AvgOfQ4|AvgOfQ5|overallAvg
----------------------------------------------------------
HUM300 | 6.000 | 6.000 |  Null | 6.000 | 6.000 |  6.000
HUM301 | 6.000

Re: Question of Relationship between tables

2008-10-08 Thread Jim Lyons
Usually, you'd have 3 tables: USER, FRIEND, and a third table named
something like USER_FRIEND.  They'd be set up like:

USER:
   emailID (PK)
   userName
   Password
   Address
   Etc

FRIEND:
   emailID (PK)

USER_FRIEND
   user_emailID (PK)
   friend_emailID (PK)

with user_emailID a foreign key pointing to USER, friend_emailid a foreign
key pointing to FRIEND.  This is the standard way of doing a many-many
relationship.
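
Sketched in runnable form (Python's sqlite3 standing in for MySQL; the sample emails are made up), the junction table lets two users share one friend without duplicating either side, and the composite primary key stores each pair only once:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforce the REFERENCES clauses
con.executescript("""
CREATE TABLE user (emailID TEXT PRIMARY KEY, userName TEXT);
CREATE TABLE friend (emailID TEXT PRIMARY KEY);
CREATE TABLE user_friend (
    user_emailID   TEXT REFERENCES user(emailID),
    friend_emailID TEXT REFERENCES friend(emailID),
    PRIMARY KEY (user_emailID, friend_emailID)
);
INSERT INTO user VALUES ('a@x.com', 'Ann'), ('b@x.com', 'Bob');
INSERT INTO friend VALUES ('c@x.com');
-- both users can list the same friend; each pair is stored exactly once
INSERT INTO user_friend VALUES ('a@x.com', 'c@x.com'), ('b@x.com', 'c@x.com');
""")

rows = con.execute(
    "SELECT u.userName FROM user u"
    " JOIN user_friend uf ON u.emailID = uf.user_emailID"
    " WHERE uf.friend_emailID = 'c@x.com' ORDER BY u.userName"
).fetchall()
print(rows)  # [('Ann',), ('Bob',)]
```

Querying in the other direction (all friends of one user) is the symmetric join through `friend_emailID`.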



On Wed, Oct 8, 2008 at 10:41 AM, Ben A.H. [EMAIL PROTECTED] wrote:

 Hello,

 I'm having trouble conceptualizing the correct relationship for what seems a very
 simple scenario:

 Scenario:
 I have a standard USERS table...
 USERS have a list of FRIENDS, these can be other members or also non
 members... Similar to facebook...

 My main issue is conceptualizing the relationship for member to member
 contacts.

 TABLES:
USER:
emailID (PK)
userName
Password
Address
Etc

FRIEND:
emailID (PK)
friendEmailID (PK)

 RELATIONSHIPS:

 USER.emailID (1) --- FRIEND.emailID (many)
 USER.emailID (many) --- FRIEND.friendEmailID (1)

 Does this work or is this a cyclical many-to-many relationship? (1 User can
 have many friends, 1 friend can belong to many users)... If so, what's the
 correct (normalized) way of representing this?







-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com

