Re: Question about contributing a patch

2017-10-09 Thread shawn l.green



On 10/9/2017 3:27 AM, Xiaoyu Wang wrote:

Hello, I reported a bug at https://bugs.mysql.com/bug.php?id=87637, along with 
a patch. Bogdan, the bug hunter, told me this patch would show up on the 
dev contribution report. So could anyone please tell me how to contact the dev 
team, or how I can follow the progress of integrating the patch? By the way, I 
signed the Oracle Contributor Agreement.
Any reply would be a great help.
Thanks, sincerely
Xiaoyu



Hello Xiaoyu,

Your interaction with the developers will happen through your bug report, 
just as it did with our bug report handling team. If they need any 
details, or if they need to engage with you again, that is where they will 
contact you.


As to the integration of your fix into our code... that gets more 
complicated. There may be edge cases or use cases that require us to modify 
your code. Sometimes these are found as the developer applies 
your patch to our code, sometimes during post-build unit testing, and 
sometimes only after full integration testing.


And when that work may start depends on when a developer is scheduled to 
work on the specific bug you designed the patch for. So it could be a 
while.


Thank you very much for helping MySQL to become a better product!

Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Question about contributing a patch

2017-10-09 Thread Xiaoyu Wang
Hello, I reported a bug at https://bugs.mysql.com/bug.php?id=87637, along with 
a patch. Bogdan, the bug hunter, told me this patch would show up on the 
dev contribution report. So could anyone please tell me how to contact the dev 
team, or how I can follow the progress of integrating the patch? By the way, I 
signed the Oracle Contributor Agreement.
Any reply would be a great help.
Thanks, sincerely
Xiaoyu

A question about Oracle Contributor Agreement

2017-09-22 Thread Xiaoyu Wang
Hi, I signed the Oracle Contributor Agreement about a month ago, but have not 
received a response. I reported a bug, but I cannot contribute my patch. So, could 
anyone please tell me how long it will take before I am informed? Thanks, sincerely

Re: Relational query question

2015-10-01 Thread Divesh Kamra
It is better to use a LEFT JOIN rather than NOT IN.




On Wed, Sep 30, 2015 at 6:00 PM, Mogens Melander 
wrote:

> Maybe not the most optimal, but (probably) the most simple:
>
> SELECT * FROM fruit
> where id not in (select fruit from purchase
> where customer=1);
>
> 1, 'Apples'
> 3, 'Oranges'
>
>
> On 2015-09-30 00:01, Richard Reina wrote:
>
>> If I have three simple tables:
>>
>> mysql> select * from customer;
>> +----+--------+
>> | ID | NAME   |
>> +----+--------+
>> |  1 | Joey   |
>> |  2 | Mike   |
>> |  3 | Kellie |
>> +----+--------+
>> 3 rows in set (0.00 sec)
>>
>> mysql> select * from fruit;
>> +----+---------+
>> | ID | NAME    |
>> +----+---------+
>> |  1 | Apples  |
>> |  2 | Grapes  |
>> |  3 | Oranges |
>> |  4 | Kiwis   |
>> +----+---------+
>> 4 rows in set (0.00 sec)
>>
>> mysql> select * from purchases;
>> +----+---------+----------+
>> | ID | CUST_ID | FRUIT_ID |
>> +----+---------+----------+
>> |  2 |       3 |        2 |
>> |  3 |       1 |        4 |
>> |  4 |       1 |        2 |
>> |  5 |       2 |        1 |
>> +----+---------+----------+
>>
>> I am having trouble understanding a relational query. How can I select
>> those fruits that Joey has not purchased?
>>
>
> --
> Mogens
> +66 8701 33224
>
>
> --
> This message has been scanned for viruses and
> dangerous content by MailScanner, and is
> believed to be clean.
>
>
>
>
>


Re: Relational query question

2015-09-30 Thread Mogens Melander

Maybe not the most optimal, but (probably) the most simple:

SELECT * FROM fruit
where id not in (select fruit from purchase
where customer=1);

1, 'Apples'
3, 'Oranges'

On 2015-09-30 00:01, Richard Reina wrote:

If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?


--
Mogens
+66 8701 33224







Relational query question

2015-09-29 Thread Richard Reina
If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?


Re: Relational query question

2015-09-29 Thread shawn l.green



On 9/29/2015 1:27 PM, Ron Piggott wrote:



On 29/09/15 13:01, Richard Reina wrote:

If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?



I think you are going to want to use a "LEFT JOIN", with "purchases" as 
the common table to join, plus a WHERE purchases.FRUIT_ID IS NULL condition.





SELECT f.name
FROM fruit f
LEFT JOIN (purchases p
           INNER JOIN customer c
             ON p.cust_id = c.id
            AND c.name = 'Joey')
  ON f.id = p.fruit_id
WHERE c.id IS NULL;

You have to make that "and...Joey" condition part of the LEFT JOIN to be 
selective for just "what Joey bought". It is the WHERE c.id IS NULL part 
that filters out and returns only the stuff that Joey did not buy.


If you put the c.name='Joey' term in the WHERE clause, then you force a 
value to exist at that point of the query, turning your LEFT JOIN into an 
INNER JOIN (which would only show you what Joey did buy).


If you put WHERE c.name !='Joey' into the WHERE clause, then you would 
get the list of fruits that anyone else but Joey had purchased.


To see how this works and to understand the process a little better, 
expose all 3 layers of the problem as a big matrix (you'll get all 48 
row combinations).


SELECT f.id as f_id, f.name, p.id as p_id, p.cust_id, p.fruit_id, c.id 
as c_id,  c.name

FROM fruit f
LEFT JOIN purchases p
  on f.id = p.fruit_id
LEFT JOIN customer c
  on p.cust_id = c.id


From here, look at when the columns are NULL and when they aren't. Then 
experiment with different conditions. You are almost there. This should 
push you right to the top of the learning curve.
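Shawn's anti-join can be checked end to end with a tiny runnable sketch. The snippet below uses Python's sqlite3 module in place of a MySQL connection purely for convenience (the SQL itself is portable); it builds the thread's three tables, keeps the 'Joey' condition inside the join, and also runs the NOT IN form posted earlier for comparison:

```python
import sqlite3

# Build the thread's three tables in an in-memory database
# (SQLite stands in for MySQL here; the SQL itself is portable).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fruit     (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE purchases (id INTEGER PRIMARY KEY, cust_id INTEGER, fruit_id INTEGER);
INSERT INTO customer  VALUES (1, 'Joey'), (2, 'Mike'), (3, 'Kellie');
INSERT INTO fruit     VALUES (1, 'Apples'), (2, 'Grapes'), (3, 'Oranges'), (4, 'Kiwis');
INSERT INTO purchases VALUES (2, 3, 2), (3, 1, 4), (4, 1, 2), (5, 2, 1);
""")

# Anti-join: the 'Joey' condition stays inside the join, and the WHERE
# keeps only fruit rows that found no matching purchase by Joey.
anti_join = conn.execute("""
SELECT f.name
FROM fruit f
LEFT JOIN (purchases p JOIN customer c
             ON p.cust_id = c.id AND c.name = 'Joey')
       ON f.id = p.fruit_id
WHERE c.id IS NULL
ORDER BY f.id
""").fetchall()

# The NOT IN subquery form from earlier in the thread, for comparison.
not_in = conn.execute("""
SELECT name FROM fruit
WHERE id NOT IN (SELECT fruit_id FROM purchases
                 WHERE cust_id = (SELECT id FROM customer WHERE name = 'Joey'))
ORDER BY id
""").fetchall()

print(anti_join)  # [('Apples',), ('Oranges',)]
print(not_in)     # [('Apples',), ('Oranges',)]
```

Both forms return Apples and Oranges, the fruits Joey never purchased; the anti-join version avoids the subquery and lets the optimizer use indexes on the join columns.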


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Relational query question

2015-09-29 Thread Ron Piggott



On 29/09/15 13:01, Richard Reina wrote:

If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?



I think you are going to want to use a "LEFT JOIN", with "purchases" as 
the common table to join, plus a WHERE purchases.FRUIT_ID IS NULL condition.







Re: table design question

2015-08-12 Thread Johan De Meersman
- Original Message -
 From: Richard Reina gatorre...@gmail.com
 Subject: table design question
 
 Would this be the best way to design the schema and would it be best to
 make the client ID and technician ID the same as the user ID as they relate
 to the same person?


Close enough, but I think it would be preferable to use a unique 
(auto-increment) PK for all three tables, and use a referential key in client 
and technician to point at user.

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




table design question

2015-08-12 Thread Richard Reina
I am creating a database application with two different types of users,
clients and technicians.  Both types of users have to create an account,
through which they become users. From there they can become clients or
technicians, or perhaps even both.  Since each type describes different
attributes -- user describes the user's basic information and login credentials,
client describes client information such as billing info and payment method,
and technician describes technician information such as areas of expertise,
experience and qualifications -- would it be best to create three tables
and use the user ID as the primary key for all?

For example:

TABLE: user
| ID | first_name | last_name | email | phone | password |

TABLE: client
| ID | billing_add | b_city | b_st | b_zip | pmnt_mthd | cc_no |

TABLE: technician
| ID | type | years_of_exp | current | zone |

Would this be the best way to design the schema and would it be best to
make the client ID and technician ID the same as the user ID as they relate
to the same person?


Re: table design question

2015-08-12 Thread hsv

On 2015/08/12 09:42, Johan De Meersman wrote:

- Original Message -

From: Richard Reina gatorre...@gmail.com
Subject: table design question

Would this be the best way to design the schema and would it be best to
make the client ID and technician ID the same as the user ID as they relate
to the same person?


Close enough; but I think it would be preferrable to use a unique 
(autoincrement) PK for all three tables; and use a referential key in client 
and technician to point at user.
If there is never more than one client or technician record per user, the 
auto-incrementing PK in the user table is enough, and the referential 
key is a sufficient PK for each of the other tables, too.
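The shared-key layout the thread converges on can be sketched in a few lines. This uses Python's sqlite3 as a stand-in for MySQL, and the column names are a simplified subset of Richard's example (the sample person and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
# user holds one row per person; client and technician reuse the user id
# as both their primary key and a foreign key back to user.
conn.executescript("""
CREATE TABLE user (
  id         INTEGER PRIMARY KEY,
  first_name TEXT, last_name TEXT, email TEXT
);
CREATE TABLE client (
  user_id     INTEGER PRIMARY KEY REFERENCES user(id),
  billing_add TEXT, pmnt_mthd TEXT
);
CREATE TABLE technician (
  user_id      INTEGER PRIMARY KEY REFERENCES user(id),
  type TEXT, years_of_exp INTEGER
);
""")
conn.execute("INSERT INTO user VALUES (1, 'Alice', 'Smith', 'alice@example.com')")
conn.execute("INSERT INTO client VALUES (1, '123 Main St', 'credit_card')")
conn.execute("INSERT INTO technician VALUES (1, 'HVAC', 5)")

# The same person can hold both roles; joining on the shared id ties
# each role row back to the single user row.
row = conn.execute("""
SELECT u.first_name, c.pmnt_mthd, t.type
FROM user u
JOIN client c     ON c.user_id = u.id
JOIN technician t ON t.user_id = u.id
""").fetchone()
print(row)  # ('Alice', 'credit_card', 'HVAC')
```

Because the role tables' primary key is also the foreign key, a user can have at most one client row and one technician row, which is exactly the one-row-per-role constraint discussed above.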





Re: table design question

2015-07-29 Thread Richard Reina
Hi Peter,

Thanks for the reply. So are you saying something like this?

Repair
RID  Date        Tech_ID  R_TYPE_ID
1    2015-07-28  3243     3
2    2015-06-15  1253     1


Repair_details
ID  APL_TYPE      REPAIR_CODE  DESC
1   Refrigerator  C            Compressor
2   Wash Mach     MC           Motor Coupler
3   Dish Washer   SA           Spray Arm
4   Refrigerator  DP           Drain Pan

Not sure what you mean by repeating details. What would the lookup table
look like?

Thanks

2015-07-29 9:38 GMT-05:00 peter.braw...@earthlink.net:

  one table with a long ENUM column that contains repairs that
  could be attributed to any appliance or different repair tables
  for each appliance.

 The first would stick you with extending the enum column forever,  the
 second would stick you with unmanageable table glut, so you need the
 traditional relational solution---a parent repairs table to track common
 repair attributes, a child table to track the details, and lookup tables to
 track repeating details.

 PB

  Original Message 
 From: Richard Reina gatorre...@gmail.com
 Reply-To: Richard Reina gatorre...@gmail.com
 Date: 07/29/15 10:19 AM
 To: mysql@lists.mysql.com mysql@lists.mysql.com
 Cc:
 Sub: table design question
 If I were to create database table(s) to track the most common repairs to
 different appliances, I can't decide whether it would be better to create one
 table with a long ENUM column that contains repairs that could be
 attributed to any appliance, or a different repair table for each appliance.
 All the tables would describe the same thing -- a repair -- however the
 things being repaired are different in nature, which means a great many
 types of repairs that do not relate. Here is an example.

 repair_wash_mach
 ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'leak', 'motor_coupler',
 'pump', 'controls', 'agitator')

 repair_dish_washer
 ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'drain_arm', 'drive_belt',
 'door_latch', 'spray_arm', 'drain_valve')

 repair_refridgerator
 ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'circ_fan', 'compressor',
 'disps_line', 'drain_pan', 'feeler_arm')

 Or, since they are all repairs, should they be in one table with a REALLY
 long ENUM column -- one that will need to be altered as the number of
 appliances increases?

 ID INT, APPLIANCE VARCHAR(35), Date DATE, Tech_ID INT,  Type ENUM( 'leak',
 'motor_coupler', 'pump', 'controls', 'agitator', 'drain_arm', 'drive_belt',
 'door_latch', 'spray_arm', 'drain_valve', 'circ_fan', 'compressor',
 'disps_line', 'drain_pan', 'feeler_arm')

  End Original Message 



Re: table design question

2015-07-29 Thread shawn l.green

Hi Richard,

On 7/29/2015 10:19 AM, Richard Reina wrote:

If I were to create a database table(s) to tract most common repairs to
different appliances I can't decide if it would be better to create one
table with a long ENUM column that contains repairs that could be
attributed to any appliance or different repair tables for each appliance.
All the tables would describe the same thing -- a repair -- however the the
things being repaired are different in nature which means a great deal of
types of repairs that do not relate. Here is an example.

repair_wash_mach
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'leak', 'motor_coupler',
'pump', 'controls', 'agitator')

repair_dish_washer
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'drain_arm', 'drive_belt',
'door_latch', 'spray_arm', 'drain_valve')

repair_refridgerator
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'circ_fan', 'compressor',
'disps_line', 'drain_pan', 'feeler_arm')

Or, since they are all repairs, should they be in one table with a REALLY
long ENUM column -- one that will need to be altered as the number of
appliances increases?

ID INT, APPLIANCE VARCHAR(35), Date DATE, Tech_ID INT,  Type ENUM( 'leak',
'motor_coupler', 'pump', 'controls', 'agitator', 'drain_arm', 'drive_belt',
'door_latch', 'spray_arm', 'drain_valve', 'circ_fan', 'compressor',
'disps_line', 'drain_pan', 'feeler_arm')



I would suggest a table of appliances, a table of components, and a 
table of repairs something like this...


repair_tasks(
  task_id int auto_increment
, task_description varchar(25)
, appliance_id int not null
, component_id int not null
)

That way you can have two tasks for the same device. For example,

A task of "attach door seal" would associate the fields (refrigerator, 
door seal). So would "replace door seal", and so would "order door seal 
from warehouse".


I would not use ENUMs; you would run out of options too quickly. My 
examples are extremely simplified, but hopefully you can see the storage 
pattern I am suggesting.
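A minimal runnable sketch of that storage pattern, with sqlite3 standing in for MySQL. The table and column names follow Shawn's repair_tasks example; the appliance, component, and task rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE appliances (
  appliance_id INTEGER PRIMARY KEY,
  name         TEXT
);
CREATE TABLE components (
  component_id INTEGER PRIMARY KEY,
  name         TEXT
);
CREATE TABLE repair_tasks (
  task_id          INTEGER PRIMARY KEY,
  task_description TEXT,
  appliance_id     INTEGER NOT NULL REFERENCES appliances(appliance_id),
  component_id     INTEGER NOT NULL REFERENCES components(component_id)
);
INSERT INTO appliances VALUES (1, 'refrigerator'), (2, 'washing machine');
INSERT INTO components VALUES (1, 'door seal'), (2, 'motor coupler');
-- two distinct tasks can point at the same (appliance, component) pair
INSERT INTO repair_tasks VALUES
  (1, 'attach door seal',  1, 1),
  (2, 'replace door seal', 1, 1);
""")

tasks = conn.execute("""
SELECT t.task_description, a.name, c.name
FROM repair_tasks t
JOIN appliances a ON a.appliance_id = t.appliance_id
JOIN components c ON c.component_id = t.component_id
ORDER BY t.task_id
""").fetchall()
print(tasks[0])  # ('attach door seal', 'refrigerator', 'door seal')
```

Adding a new appliance or repair type is now an INSERT into a lookup table rather than an ALTER TABLE on an ever-growing ENUM.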


Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





table design question

2015-07-29 Thread Richard Reina
If I were to create database table(s) to track the most common repairs to
different appliances, I can't decide whether it would be better to create one
table with a long ENUM column that contains repairs that could be
attributed to any appliance, or a different repair table for each appliance.
All the tables would describe the same thing -- a repair -- however the
things being repaired are different in nature, which means a great many
types of repairs that do not relate. Here is an example.

repair_wash_mach
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'leak', 'motor_coupler',
'pump', 'controls', 'agitator')

repair_dish_washer
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'drain_arm', 'drive_belt',
'door_latch', 'spray_arm', 'drain_valve')

repair_refridgerator
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'circ_fan', 'compressor',
'disps_line', 'drain_pan', 'feeler_arm')

Or, since they are all repairs, should they be in one table with a REALLY
long ENUM column -- one that will need to be altered as the number of
appliances increases?

ID INT, APPLIANCE VARCHAR(35), Date DATE, Tech_ID INT,  Type ENUM( 'leak',
'motor_coupler', 'pump', 'controls', 'agitator', 'drain_arm', 'drive_belt',
'door_latch', 'spray_arm', 'drain_valve', 'circ_fan', 'compressor',
'disps_line', 'drain_pan', 'feeler_arm')


Re: Where to ask a question about installation and configuration

2015-06-23 Thread Claudio Nanni
Hello Steve,


To what list should I post with a post-installation config and startup
 question?


This list, the MySQL General Mailing List, is the right place if the
question is about MySQL!

Cheers
-- 
Claudio


Where to ask a question about installation and configuration

2015-06-23 Thread Steve Matzura
To what list should I post with a post-installation config and startup
question?




Re: Latency question - #of connections

2015-02-05 Thread Stewart Smith
Learner Study learner.st...@gmail.com writes:
 Setup:
 MySQL 5.6.16 + thread pool extensions running on 3.18 kernel

 Two sysbench clients running from different PCs with total of 14k
 connections, each connection doing 2 select queries per sec.

 When I specify 7k connections from each client, sysbench shows latency
 of .3 msec (for both sides)

 If I run with 13k from one client and 1k from the other, latency comes
 to .6 msec (for both sides)

 I get that we are load balancing with 7k so latency improves but any
 ideas which part of MySQL server I should profile to see which area is
 messing it up with 13k+1k connection split- could it be poll() call?
 Or some other kernel area?

does perf show any interesting difference between the two setups?





Re: Latency question - #of connections

2015-02-05 Thread Learner Study
I'll be doing it and will provide an update.

BTW, since the query/sec rate is so low, I doubt it will show
anything, but I will try and report back.

Both clients aggregate at a switch and come into my server as a single
10G link, so it shouldn't matter from a poll() system call perspective. I'm
also thinking of using ftrace to debug it... any other ideas?

Thanks a lot!

On Thu, Feb 5, 2015 at 2:22 PM, Stewart Smith
stew...@linux.vnet.ibm.com wrote:
 Learner Study learner.st...@gmail.com writes:
 Setup:
 MySQL 5.6.16 + thread pool extensions running on 3.18 kernel

 Two sysbench clients running from different PCs with total of 14k
 connections, each connection doing 2 select queries per sec.

 When I specify 7k connections from each client, sysbench shows latency
 of .3 msec (for both sides)

 If I run with 13k from one client and 1k from the other, latency comes
 to .6 msec (for both sides)

 I get that we are load balancing with 7k so latency improves but any
 ideas which part of MySQL server I should profile to see which area is
 messing it up with 13k+1k connection split- could it be poll() call?
 Or some other kernel area?

 does perf show any interesting difference between the two setups?





Latency question - #of connections

2015-02-05 Thread Learner Study
Hello MySQL experts,

I am new to MySQL and am seeing the following behavior:

Setup:
MySQL 5.6.16 + thread pool extensions running on 3.18 kernel

Two sysbench clients running from different PCs with total of 14k
connections, each connection doing 2 select queries per sec.

When I specify 7k connections from each client, sysbench shows latency
of .3 msec (for both sides)

If I run with 13k from one client and 1k from the other, latency comes
to .6 msec (for both sides)

I get that we are load balancing with 7k so latency improves but any
ideas which part of MySQL server I should profile to see which area is
messing it up with 13k+1k connection split- could it be poll() call?
Or some other kernel area?

Thanks in advance for any pointers...
Martin

PS: I hear that the same workload run on kernel 2.6.32 doesn't show
this bad latency with the 13k+1k split... yet to be confirmed though




question?

2015-01-08 Thread bruce
hey.

within php (or any other language)

is there a way to create the MySQL SQL and execute it, where the
process waits until the network connection to the MySQL server is
actually valid?

IE (phpesque)
$pdo = new PDO()
$sql = "select * from foo where a = :a"
$s = $pdo->prepare($sql)
$s->bindValue(':a', $one)
$s->execute()

The issue we're seeing is that the network (it's all wifi) is really
bad, and the app machine might not have a connection to the db box.

We're wondering if there's a way to simply have mysql/php detect when
a valid connection is there, and then proceed.

We've thought about the try/catch process, any others?

Thanks




Re: question?

2015-01-08 Thread Reindl Harald



Am 08.01.2015 um 16:01 schrieb bruce:

hey.

within php (or any other language)

is there a way to create the mysql sql, and execute the sql, where the
process can wait until the network connection for the mysql
command/process is actually valid?

IE (phpesque)
$pdo = new PDO()
$sql = "select * from foo where a = :a"
$s = $pdo->prepare($sql)
$s->bindValue(':a', $one)
$s->execute()

The issue we're seeing, is that the network (it's all wifi) is really
bad.. and the app machine, might not have a connection to the db box.

We're wondering if there's a way to simply have mysql/php detect when
a valid connection is there, and then proceed.

We've thought about the try/catch process, any others?


snippets from our db-layer to give an idea:

* every time an error happens, the reconnect method is triggered
* reconnect tries 300 times, with a small sleep, to connect again
* after it returns, the last query is fired again

It has worked that way for around 8 years now, and we can restart mysqld at 
any random moment without losing a single web request.




if(!in_array(@mysqli_errno($this->conn), array_keys($this->ignored_errors)))
{
 $this->reconnect();
 $fehler = 0;
 switch($unbuffered)
 {
  case true: $result = @mysqli_query($this->conn, $sql,
MYSQLI_USE_RESULT) or $fehler = 1; break;
  default:   $result = @mysqli_query($this->conn, $sql,
MYSQLI_STORE_RESULT) or $fehler = 1; break;
 }
}


public function reconnect()
{
 $this->disconnect();
 $this->connect($this->persistent);
}


if(!$rw)
{
 for($retry = 1; $retry <= 300; $retry++)
 {
  /** Initialize */
  $this->conn = @mysqli_init();
  /** Use SSL encryption when enabled and connecting over TCP */
  if($this->ssl && $this->host != 'localhost')
  {
   /** If no certificate is configured, set a dummy entry for the
       standard handshake; otherwise no encryption is established */
   if($this->ssl_crt === '')
   {
    $this->ssl_crt = 'dummy.crt';
   }
   /** Enable SSL */
   $this->conn->ssl_set($this->ssl_key, $this->ssl_crt,
$this->ssl_ca, NULL,
'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:RSA-AES256-SHA');
  }
  /** Establish the connection */
  switch($persistent)
  {
   case 1:  $rw = @mysqli_real_connect($this->conn, 'p:' .
$this->host, $this->user, $this->pwd, $this->db, $this->port, '',
$flags); break;
   default: $rw = @mysqli_real_connect($this->conn, $this->host,
$this->user, $this->pwd, $this->db, $this->port, '', $flags); break;
  }
  /** When the server is reachable again, disconnect and connect once
      more to sidestep errors that occur while the service is still
      initializing */
  if($rw)
  {
   /** Initialize */
   $this->conn = @mysqli_init();
   /** Use SSL encryption when enabled and connecting over TCP */
   if($this->ssl && $this->host != 'localhost')
   {
    /** If no certificate is configured, set a dummy entry for the
        standard handshake; otherwise no encryption is established */
    if($this->ssl_crt === '')
    {
     $this->ssl_crt = 'dummy.crt';
    }
    /** Enable SSL */
    $this->conn->ssl_set($this->ssl_key, $this->ssl_crt,
$this->ssl_ca, NULL,
'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:RSA-AES256-SHA');
   }
   /** Establish the connection */
   switch($persistent)
   {
    case 1:  $rw = @mysqli_real_connect($this->conn, 'p:' .
$this->host, $this->user, $this->pwd, $this->db, $this->port, '',
$flags); break;
    default: $rw = @mysqli_real_connect($this->conn, $this->host,
$this->user, $this->pwd, $this->db, $this->port, '', $flags); break;
   }
   break;
  }
  usleep(5);
 }
 /** No connection could be established despite repeated attempts */
 if(!$rw)
 {
  $host_resolved = @gethostbyaddr($this->host);
  if(!$host_resolved)
  {
   $host_resolved = $this->host;
  }
  $this->conn = 0;
  $this->error('Connection to database server <span
style="white-space:nowrap;">&quot;' . trim($host_resolved . ':' .
$this->port . ' ' . $this->db) . '&quot;</span> could not be
established.<br /><br />' . mysqli_connect_error());
 }
}
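The reconnect loop above boils down to: try to connect, sleep briefly on failure, retry up to a limit, give up with the last error. Stripped of the mysqli specifics, the same pattern can be sketched in Python (the connect callback and ConnectionError are placeholders for whatever exception type a real driver raises):

```python
import time

def connect_with_retry(connect, attempts=300, delay=0.05):
    """Call connect() until it succeeds, sleeping briefly between
    failures; re-raise the last error if every attempt fails."""
    last_err = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError as err:
            last_err = err
            time.sleep(delay)
    raise last_err

# Simulated driver: the server is unreachable for the first two calls.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("MySQL server has gone away")
    return "connection"

conn = connect_with_retry(flaky_connect, delay=0)
print(conn, calls["n"])  # connection 3
```

In a real application the query that triggered the error is re-issued after a successful reconnect, exactly as the PHP layer above does.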





Re: question?

2015-01-08 Thread Ron Piggott
The only way I could see this working would be to write the forms to a
temporary text file. Then use a cron job to update the database.

On Thu, January 8, 2015 10:01 am, bruce wrote:
 hey.

 within php (or any other language)

 is there a way to create the mysql sql, and execute the sql, where the
 process can wait until the network connection for the mysql
 command/process is actually valid?

 IE (phpesque)
 $pdo = new PDO()
 $sql = "select * from foo where a = :a"
 $s = $pdo->prepare($sql)
 $s->bindValue(':a', $one)
 $s->execute()


 The issue we're seeing, is that the network (it's all wifi) is really
 bad.. and the app machine, might not have a connection to the db box.

 We're wondering if there's a way to simply have mysql/php detect when
 a valid connection is there, and then proceed.

 We've thought about the try/catch process, any others?


 Thanks











Question about my build of MySQL 5.6.10

2014-07-10 Thread Damien Kick
I build from source but find that libmysqlclient_r is merely a symlink to 
libmysqlclient.  Is that expected?

$ uname -a
Darwin Damien-Kicks-MacBook-Pro.local 12.5.0 Darwin Kernel Version 12.5.0: Sun 
Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64 x86_64
$ echo $CXX
/Users/dkick/bin/g++
$ $CXX --version
g++ (GCC) 4.8.3
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

$ echo $CXXFLAGS
--std=c++0x
$ ls -l libmysqlclient*
-rwxr-xr-x  1 dkick  staff   4188700 Apr  7 11:42 libmysqlclient.18.dylib
-rw-r--r--  1 dkick  staff  12209048 Apr  7 11:42 libmysqlclient.a
lrwxr-xr-x  1 dkick  staff        23 Apr  7 11:42 libmysqlclient.dylib -> libmysqlclient.18.dylib
lrwxr-xr-x  1 dkick  staff        20 Apr  7 11:42 libmysqlclient_r.18.dylib -> libmysqlclient.dylib
lrwxr-xr-x  1 dkick  staff        16 Apr  7 11:42 libmysqlclient_r.a -> libmysqlclient.a
lrwxr-xr-x  1 dkick  staff        20 Apr  7 11:42 libmysqlclient_r.dylib -> libmysqlclient.dylib
$

I can't seem to find any CMake variables to force a build of a 
threaded/reentrant version, but I'm probably missing something?





Proxy / connected failover question

2014-07-09 Thread Johan De Meersman
Hullo peoples, 

I'm not usually an advocate of MySQL Proxy and the like, but I'm stuck with one 
shitty application that utterly breaks whenever the database goes away 
unexpectedly. I can't change the application itself, so I find myself looking 
for options that allow the heathen contraption to not notice its connection 
has switched. 

I am aware that connection state etc. is likely to be lost anyway; I'll have to 
see whether or not that's going to be an issue during testing. 

I have two main questions: 
* am I remembering right that MySQL Proxy provides transparent failover ? 
* Are there other contenders in the same field, or alternate solutions ? 

Ideally I'm looking for a hyper-stable tool that can run on its own VM, so the 
application doesn't notice when I switch backends. All the other applications 
play nice, in that they simply reconnect and go on with business, so it doesn't 
even *have* to take improbable loads. 


Thank you for any and all suggestions and information, 
Johan 

-- 
What's tiny and yellow and very, very dangerous? 
A canary with the root password. 


Re: Proxy / connected failover question

2014-07-09 Thread Claudio Nanni
Hi Johan,

I wanted to love mysql-proxy for so many years, so I understand you :)


 I have two main questions:
 * am I remembering right that MySQL Proxy provides transparent failover ?


You need to use/create a Lua failover script; I've never seen or tried one.
What kept me from investing too much time and effort in it is that
mysql-proxy has been alpha for ages, including never having been truly
multithreaded.


 * Are there other contenders in the same field, or alternate solutions ?


MaxScale looks really promising: https://github.com/skysql/MaxScale

I am testing it; it is probably not yet production ready, but it's
developing very fast, and I like that you can telnet to a debug console to
monitor/manage it.
Plus you can write your own plugins. I'd definitely have a look at it.
1.0 beta was recently released:
http://markriddoch.blogspot.it/2014/07/maxscale-10-beta-highlights.html



 Ideally I'm looking for a hyper-stable tool that can run on its own VM,
 so the application doesn't notice when I switch backends. All the other
 applications play nice, in that they simply reconnect and go on with
 business, so it doesn't even *have* to take improbable loads.


I think it is a dream shared by all mysqlians :)

Cheers
-- 
Claudio


Re: Proxy / connected failover question

2014-07-09 Thread Heck, Walter
Johan,

I don't think there's any need for the heaviness (and ugliness ;) ) of
MySQL Proxy. We're using HAProxy for a similar setup (just with Galera
behind it, but that shouldn't really matter). Have a look at this blog post
that explains most of it:
http://www.olindata.com/blog/2014/04/managing-percona-xtradb-cluster-puppet#haproxy

cheers,


On Wed, Jul 9, 2014 at 12:11 PM, Johan De Meersman vegiv...@tuxera.be
wrote:

 Hullo peoples,

 I'm not usually an advocate of MySQL Proxy and the like, but I'm stuck
 with one shitty application that utterly breaks whenever the database goes
 away unexpectedly. I can't change the application itself, so I find myself
 looking for options that allow the heathen contraption to not notice its
 connection has switched.

 I am aware that connection state etc. is likely to be lost anyway; I'll
 have to see whether or not that's going to be an issue during testing.

 I have two main questions:
 * am I remembering right that MySQL Proxy provides transparent failover ?
 * Are there other contenders in the same field, or alternate solutions ?

 Ideally I'm looking for a hyper-stable tool that can run on its own VM,
 so the application doesn't notice when I switch backends. All the other
 applications play nice, in that they simply reconnect and go on with
 business, so it doesn't even *have* to take improbable loads.


 Thank you for any and all suggestions and information,
 Johan

 --
 What's tiny and yellow and very, very dangerous?
 A canary with the root password.




-- 
Best regards,

Walter Heck
CEO / Founder OlinData http://olindata.com/?src=wh_gapp - Open Source
Training  Consulting

Check out our upcoming trainings http://olindata.com/training/upcoming


Re: Proxy / connected failover question

2014-07-09 Thread Wagner Bianchi
I like HAProxy as well, as it simplifies many of the things you seem to be 
looking for.

--
Wagner Bianchi
Mobile: +55.31.8654.9510

 On 09/07/2014, at 07:48, Heck, Walter walterh...@olindata.com wrote:
 
 /snip


RE: Proxy / connected failover question

2014-07-09 Thread Martin Gainty


 CC: vegiv...@tuxera.be; mysql@lists.mysql.com
 From: wagnerbianch...@gmail.com
 Subject: Re: Proxy / connected failover question
 Date: Wed, 9 Jul 2014 08:31:05 -0300
 To: walterh...@olindata.com
 
 /snip
  On Wed, Jul 9, 2014 at 12:11 PM, Johan De Meersman vegiv...@tuxera.be
  wrote:
  
  Hullo peoples,
  
  I'm not usually an advocate of MySQL Proxy and the like, but I'm stuck
  with one shitty application that utterly breaks whenever the database goes
  away unexpectedly. I can't change the application itself, so I find myself
  looking for options that allow the heathen contraption to not notice it's
  connection has switched.
MG: how could a proxy server re-route a downed DB connection to another server?
MG: does Proxy have a heartbeat on 3306 and reroute somewhere else when the
3306 connection goes silent?
MG: exactly how does either MySQL Proxy or HAProxy handle these detection and
reroute scenarios?
  
/snip
  
MG: BR,
MG: Martin
  

Re: Proxy / connected failover question

2014-07-09 Thread Johan De Meersman
- Original Message -

 From: Martin Gainty mgai...@hotmail.com
 Subject: Proxy / connected failover question

   I'm not usually an advocate of MySQL Proxy and the like, but I'm stuck
   with one shitty application that utterly breaks whenever the database
   goes
   away unexpectedly. I can't change the application itself, so I find
   myself
   looking for options that allow the heathen contraption to not notice
    its
   connection has switched.
 MGhow could a proxy server re-route a downed DB connection to another
 server?
 MGdoes Proxy have a heartbeat on 3306 and reroute somewhere else when 3306
 connection goes silent?
 MGexactly how does either MySQLProxy or HAProxy handle these detection and
 reroute scenarios?

Unlike a router, a proxy does not route packets; it acts as a terminator for the 
client connection, interprets the question, and then asks that same question on 
a separate connection, which it initiated itself, to the actual server. 

Thus, if the proxy notices that the backend is gone, it can simply establish a 
new connection to the same or a different host without having to break the 
connection with the client. 

It's a simple idea that works extremely well with stateless protocols like 
HTTP, but for a stateful protocol like MySQL it's rather more complex :-) 

Thank you all for the input. I'll have a look at your suggestions and report 
back with what I come up with :-) 

/johan 

-- 
Unhappiness is discouraged and will be corrected with kitten pictures. 


Re: replication question replacing the master

2014-01-18 Thread Manuel Arostegui
2014/1/17 Richard Reina gatorre...@gmail.com

 I have 3 relay MySQL database servers on my small office LAN backing up a
 master and 3 more machines backing up each relay (1 each). They are all
 replicating all databases and all tables. The master although running fine
 is almost eight years old. I'm thinking it's probably time to make one of
 the relays the master, but I've never done this before.

 I want the new master to have the IP address of the old master 192.168.0.1
 . To make the change I was going to take the master off the LAN and
 shutdown mysql on all the machines, change the IP address on the chosen
 relay to that of the master 192.168.0.1, then restart mysql on all the
 machines. I always refer to the machines by their IP addresses and never by
 their hostnames. Once I successfully make the change I was planning on
 making the old master a relay since it is still working fine.

 Will this plan work ok? Is there a better or easier way?



If the three machines are in sync and have consistent data, I don't see the
need to stop MySQL:

- Stop whatever writes to your current master.
- Once you are completely sure there are no writes reaching your current
master, set it to read_only = ON.
- On the slave which will become the master, get the binary log file and
current position with: SHOW MASTER STATUS;
- Set the new IP on the new master.


Using the file and position taken on the new master, go to the slave machines
and run:

STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='IP',
  MASTER_LOG_FILE='whatever_file_name_you_got',
  MASTER_LOG_POS=whatever_number_you_got,
  MASTER_USER='replication_or_whatever_you_have',
  MASTER_PASSWORD='replication_or_whatever_you_have';
START SLAVE;

- Set read_only = OFF on your new master.
- Start your application so writes can resume.

As soon as writes arrive, running SHOW MASTER STATUS; on the new master should
show the position moving forward.

I find that faster than any other approach.
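Once the slaves are repointed, it is worth confirming that replication is
actually flowing from the new master. A quick hedged check (the field names
are from the SHOW SLAVE STATUS output):

```sql
-- On each slave, after START SLAVE:
SHOW SLAVE STATUS\G
-- Expect Slave_IO_Running: Yes and Slave_SQL_Running: Yes;
-- Seconds_Behind_Master should drop toward 0 as the slave catches up,
-- and Master_Host should show the new master's IP.
```

If either thread shows No, the Last_IO_Error / Last_SQL_Error fields in the
same output usually say why.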

Hope this helps
Manuel.


Re: replication question replacing the master

2014-01-18 Thread Richard Reina
Manuel,

Thank you very much for this information. This sounds like a very good 
strategy. I think I will try switching some slaves from one relay to another to 
familiarize myself and get practice, and then do it to deploy a new master.

Again, thank you very much.

Richard 



 On Jan 18, 2014, at 2:00 AM, Manuel Arostegui man...@tuenti.com wrote:
 
 
 
 
 2014/1/17 Richard Reina gatorre...@gmail.com
 I have 3 relay MySQL database servers on my small office LAN backing up a
 master and 3 more machines backing up each relay (1 each). They are all
 replicating all databases and all tables. The master although running fine
 is almost eight years old. I'm thinking it's probably time to make one of
 the relays the master, but I've never done this before.
 
 I want the new master to have the IP address of the old master 192.168.0.1
 . To make the change I was going to take the master off the LAN and
 shutdown mysql on all the machines, change the IP address on the chosen
 relay to that of the master 192.168.0.1, then restart mysql on all the
 machines. I always refer to the machines by their IP addresses and never by
 their hostnames. Once I successfully make the change I was planning on
 making the old master a relay since it is still working fine.
 
 Will this plan work ok? Is there a better or easier way?
 
 If the three machines are sync'ed and have consistent data I don't see the 
 need of stopping MySQL:
 
 - Stop whatever writes to your current master
 - Once you are completely sure there are no writes in your current master, 
 set it to read_only = ON
 - In the slave which will become the master, get the logfile and current 
 position with: show master status;
  - Set the new IP in the new master
 
 
 Using the position taken in the new master go to the slaves machines and:
 
 stop slave; change master to master_host='IP', 
 master_log_file='whatever_file_name_you_got', 
 master_log_pos=whatever_number_you_got, 
 master_user='replication_or_whatever_you_have', 
 master_password='replication_or_whatever_you_have'; start slave;
 
 - Set read_only = OFF in your new master
 - Start your application so you can start getting writes again.
 
 As soon as you get writes if you do a show master status; in the new master 
 you should see the position going forward.
 
 I see that faster than any other thing.
  
 Hope this helps
 Manuel.
 


replication question replacing the master

2014-01-17 Thread Richard Reina
I have 3 relay MySQL database servers on my small office LAN backing up a
master and 3 more machines backing up each relay (1 each). They are all
replicating all databases and all tables. The master, although running fine,
is almost eight years old. I'm thinking it's probably time to make one of
the relays the master, but I've never done this before.

I want the new master to have the IP address of the old master, 192.168.0.1.
To make the change I was going to take the master off the LAN and
shutdown mysql on all the machines, change the IP address on the chosen
relay to that of the master 192.168.0.1, then restart mysql on all the
machines. I always refer to the machines by their IP addresses and never by
their hostnames. Once I successfully make the change I was planning on
making the old master a relay since it is still working fine.

Will this plan work ok? Is there a better or easier way?

Thanks for your attention.

Richard


Re: replication question replacing the master

2014-01-17 Thread Reindl Harald


Am 17.01.2014 22:42, schrieb Richard Reina:
 I have 3 relay MySQL database servers on my small office LAN backing up a
 master and 3 more machines backing up each relay (1 each). They are all
 replicating all databases and all tables. The master although running fine
 is almost eight years old. I'm thinking it's probably time to make one of
 the relays the master, but I've never done this before.
 
 I want the new master to have the IP address of the old master 192.168.0.1
 . To make the change I was going to take the master off the LAN and
 shutdown mysql on all the machines, change the IP address on the chosen
 relay to that of the master 192.168.0.1, then restart mysql on all the
 machines. I always refer to the machines by their IP addresses and never by
 their hostnames. Once I successfully make the change I was planning on
 making the old master a relay since it is still working fine.
 
 Will this plan work ok? Is there a better or easier way?

* stop the master
* sync the complete datadir to the new machine
* give the new machine the same ip
* start mysqld

how should the slave notice that anything has changed?



signature.asc
Description: OpenPGP digital signature


fulltext question

2013-11-26 Thread Jim Sheffer
Hello all-

I have a question on searching via fulltext.

I have the following SQL statement:

var('SQLResultsID') = 'SELECT *, MATCH 
(product_id, product_name, product_desc) AGAINST(' + $sqlKeywordSearch + ') AS 
score FROM products WHERE MATCH (product_id, product_name, product_desc) 
AGAINST(' + $sqlKeywordSearchB + ' IN BOOLEAN MODE) AND active NOT LIKE 
"%no%" ORDER BY score DESC'

First off, the variable $sqlKeywordSearch contains the search keywords 
separated by spaces. The second variable, $sqlKeywordSearchB, also has keywords 
separated by spaces, but adds a "+" sign to the beginning of each keyword 
to do the Boolean search and match all the keywords in the search.

Question #1: Is this the best way to do a Boolean search that also returns a 
usable "score"? The search is doing what I expect it to do, so no problem there.

Here's my main question: I want to be able to boost the ranking (score) of 
the results based on the field matched. Anything that matches on the product_id 
field should get a higher ranking, then the product_name field next, then 
product_desc last (obviously, if something matches in the product_id field 
it is of greater importance than if it matches in the product_desc 
field). I know I can boost, or adjust, the score of the keywords if I want, 
but is there a way to add to the search score rankings based on the field 
searched?
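(One commonly suggested workaround is to run a separate MATCH per column and
sum them with weights. A hedged sketch only — it assumes a separate FULLTEXT
index exists on each of the three columns, and the weights 3/2/1 and the
keyword '+widget' are made up for illustration:)

```sql
-- Hypothetical per-column weighting; each MATCH (col) needs its own
-- FULLTEXT index on exactly that column.
SELECT p.*,
       (3 * MATCH (product_id)   AGAINST ('+widget' IN BOOLEAN MODE)
      + 2 * MATCH (product_name) AGAINST ('+widget' IN BOOLEAN MODE)
      + 1 * MATCH (product_desc) AGAINST ('+widget' IN BOOLEAN MODE)) AS score
FROM products p
WHERE MATCH (product_id, product_name, product_desc)
      AGAINST ('+widget' IN BOOLEAN MODE)
  AND active NOT LIKE '%no%'
ORDER BY score DESC;
```

(The combined-column MATCH in the WHERE clause keeps the index-based filtering;
the weighted sum only reorders the rows that already matched.)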

As always, thanks everyone for any help!


James

James Sheffer  j...@higherpowered.com
Lasso Developerhttp://www.higherpowered.com
phone:  469-256-0268




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Another query question...

2013-11-12 Thread hsv
 2013/11/08 17:35 -0800, Jan Steinman 
Okay, I think I found it:
http://bugs.mysql.com/bug.php?id=47713

I added a comment with a link to a page I set up to show the behaviour on my 
system.
http://www.ecoreality.org/wiki/WITH_ROLLUP_problem

It was submitted in 2009, severity Critical, triaged Serious, and still not 
fixed! 

Yea, and the bug to which I referred is also evident in the same report, where 
SumQuantity is 78, and not all the rest is NULL. 'Twouldn't surprise me if the 
bugs are akin.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Another query question...

2013-11-08 Thread Jan Steinman
 From: h...@tbbs.net
 
 2013/11/04 09:32 -0800, Jan Steinman
 I noticed that I have similar queries that work as expected. The difference 
 appears to be that every query that is broken uses  WITH ROLLUP, and 
 removing this makes them behave as expected.
 
 Is this a known bug? Should I submit it as such?
 
 There is a bug that I about a half year ago reported, 
 http://bugs.mysql.com/bug.php?id=68564, that the NULL one expects with WITH 
 ROLLUP is not always NULL, but is instead the foregoing string in the same 
 field.

Okay, I think I found it:
http://bugs.mysql.com/bug.php?id=47713

I added a comment with a link to a page I set up to show the behaviour on my 
system.
http://www.ecoreality.org/wiki/WITH_ROLLUP_problem

It was submitted in 2009, severity Critical, triaged Serious, and still not 
fixed!

 The raw milk movement provides a real solution to the problem of 
food-borne illness -- because raw milk consumers make sure their milk comes 
from small, pasture-based farms and healthy animals unlikely to harbor 
pathogens and unlikely to contribute to water pollution, and because raw milk 
builds immunity to disease-causing organisms that are simply a natural part of 
the world in which we live. -- Ron Schmid
 Jan Steinman, EcoReality Co-op 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Another query question...

2013-11-04 Thread Jan Steinman
The plot thickens...

I noticed that I have similar queries that work as expected. The difference 
appears to be that every query that is broken uses  WITH ROLLUP, and removing 
this makes them behave as expected.

Is this a known bug? Should I submit it as such?

If someone would be so kind as to point me to the bug system, I'll crawl around 
in there to see if it's a known problem.

Jan

Begin forwarded message:

 From: Jan Steinman j...@bytesmiths.com
 Date: 3 November 2013 18:35:47 PST
 
 MySQL 5.0.92-log
 
 I'm trying to form a clickable link using CONCAT, but the link as displayed 
 points to the NEXT row's URL, not the one from the same row as the other data 
 displayed!
 
 Is there something I don't understand about this?
 
 Below is the query. {{{1}}} is replaced by a year, like 2013.
 
 The second column is the problem one. When the table is displayed, the link 
 in the `Product` field points to the NEXT SEQUENTIAL product row! In other 
 words, if you click on the link for garlic, you'll get the page for 
 gherkins.
 
 Live example is at: http://www.EcoReality.org/wiki/2013_harvest
 
 If you hover over the link in the `Product` column, you can clearly see that 
 the page at the link is not the same as that in the `ID` column, but is in 
 fact the same `ID` as the next sequential row. I am so confused.
 
 SELECT
   harvest.product AS ID,
   CONCAT('<a href="http://www.EcoReality.org/wiki/Product/', s_product.ID, 
 '">', COALESCE(s_product.name, 'TOTAL:'), '</a>') AS `Product`,
   FORMAT(sum(harvest.quantity), 3) AS `<a 
 href="http://www.EcoReality.org/wiki/Harvest">Harvest</a>`,
   harvest.units AS Units,
  CONCAT('$', FORMAT((SUM(harvest.quantity) * prices.price), 2)) AS Value,
   prices.market_type AS `R-W`,
   COUNT(*) AS Harvests,
   DATE(MIN(harvest.date)) AS Begin,
   DATE(MAX(harvest.date)) AS End
 FROM
   s_product_harvest harvest
 INNER JOIN
   s_product on s_product.ID = harvest.product AND
   s_product.units = harvest.units
 LEFT OUTER JOIN
   s_product_market_prices prices ON prices.product_ID = harvest.product AND
   prices.units = harvest.units AND
   year(prices.price_date) = year(harvest.date)
 WHERE
   year(harvest.date) = {{{1}}}
 GROUP BY
   s_product.name WITH ROLLUP
 
  Some days I wonder if it might not be better to culturally engineer 
 humans to enjoy small scale garden farming than to genetically engineer weeds 
 to save large scale agribusiness. -- Gene Logsdon
  Jan Steinman, EcoReality Co-op 
 

 The competition for grain between the wealthy car drivers of the world and 
the poorest people who are trying to survive is a moral issue that we should 
not ignore. The continued increase in biofuels production will result in a 
continued decrease in food availability, which we could someday consider to be 
a crime against humanity. -- Pat Murphy
 Jan Steinman, EcoReality Co-op 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Another query question...

2013-11-04 Thread Jan Steinman
MySQL 5.0.92-log

I'm trying to form a clickable link using CONCAT, but the link as displayed 
points to the NEXT row's URL, not the one from the same row as the other data 
displayed!

Is there something I don't understand about this?

Below is the query. {{{1}}} is replaced by a year, like 2013.

The second column is the problem one. When the table is displayed, the link in 
the `Product` field points to the NEXT SEQUENTIAL product row! In other words, 
if you click on the link for garlic, you'll get the page for gherkins.

Live example is at: http://www.EcoReality.org/wiki/2013_harvest

If you hover over the link in the `Product` column, you can clearly see that 
the page at the link is not the same as that in the `ID` column, but is in fact 
the same `ID` as the next sequential row. I am so confused.

SELECT
   harvest.product AS ID,
   CONCAT('<a href="http://www.EcoReality.org/wiki/Product/', s_product.ID, 
'">', COALESCE(s_product.name, 'TOTAL:'), '</a>') AS `Product`,
   FORMAT(sum(harvest.quantity), 3) AS `<a 
href="http://www.EcoReality.org/wiki/Harvest">Harvest</a>`,
   harvest.units AS Units,
  CONCAT('$', FORMAT((SUM(harvest.quantity) * prices.price), 2)) AS Value,
   prices.market_type AS `R-W`,
   COUNT(*) AS Harvests,
   DATE(MIN(harvest.date)) AS Begin,
   DATE(MAX(harvest.date)) AS End
FROM
   s_product_harvest harvest
 INNER JOIN
   s_product on s_product.ID = harvest.product AND
   s_product.units = harvest.units
 LEFT OUTER JOIN
   s_product_market_prices prices ON prices.product_ID = harvest.product AND
   prices.units = harvest.units AND
   year(prices.price_date) = year(harvest.date)
WHERE
   year(harvest.date) = {{{1}}}
GROUP BY
   s_product.name WITH ROLLUP
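(For reference, WITH ROLLUP appends summary rows whose group-key column is
NULL, which is what the COALESCE(s_product.name, 'TOTAL:') above relies on.
A minimal hedged illustration with a made-up table; the output shown is what
a server unaffected by the rollup NULL bugs should produce:)

```sql
-- Hypothetical table: per-product harvest quantities.
CREATE TABLE h (name VARCHAR(20), qty INT);
INSERT INTO h VALUES ('garlic', 2), ('garlic', 3), ('gherkin', 5);

SELECT COALESCE(name, 'TOTAL:') AS product, SUM(qty) AS total
FROM h
GROUP BY name WITH ROLLUP;
-- garlic    5
-- gherkin   5
-- TOTAL:   10
```

(If the rollup row's name column comes back as the previous group's string
instead of NULL, the COALESCE label never appears — the symptom discussed in
this thread.)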

 Some days I wonder if it might not be better to culturally engineer humans 
to enjoy small scale garden farming than to genetically engineer weeds to save 
large scale agribusiness. -- Gene Logsdon
 Jan Steinman, EcoReality Co-op 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Another query question...

2013-11-04 Thread hsv
 2013/11/04 09:32 -0800, Jan Steinman 
I noticed that I have similar queries that work as expected. The difference 
appears to be that every query that is broken uses  WITH ROLLUP, and removing 
this makes them behave as expected.

Is this a known bug? Should I submit it as such?

If someone would be so kind as to point me to the bug system, I'll crawl around 
in there to see if it's a known problem. 

There is a bug that I reported about half a year ago, 
http://bugs.mysql.com/bug.php?id=68564: the NULL one expects with WITH 
ROLLUP is not always NULL, but is instead the foregoing string in the same 
field. I suspect that other bugs with missing NULLs found by searching for 
ROLLUP are the same problem. (Note the link titled Affects Me!)

Another bug of mine, reported one year ago, was handled in 5.7.2, but this one 
was not.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Another query question...

2013-11-04 Thread Reindl Harald


Am 04.11.2013 22:55, schrieb h...@tbbs.net:
 2013/11/04 09:32 -0800, Jan Steinman 
 I noticed that I have similar queries that work as expected. The difference 
 appears to be that every query that is broken uses  WITH ROLLUP, and 
 removing this makes them behave as expected.
 
 Is this a known bug? Should I submit it as such?
 
 If someone would be so kind as to point me to the bug system, I'll crawl 
 around in there to see if it's a known problem. 
 
 There is a bug that I about a half year ago reported, 
 http://bugs.mysql.com/bug.php?id=68564, that the NULL one expects with WITH 
 ROLLUP is not always NULL, but is instead the foregoing string in the same 
 field. I suspect that other bugs with missing NULL found by searching for 
 ROLLUP are the same problem. (note link titled Affects Me!)
 
 Another, one year ago reported, bug of mine was handled in 5.7.2, but this 
 one not

would you please quote in a readable way instead  



signature.asc
Description: OpenPGP digital signature


Re: Archive Engine Question

2013-09-19 Thread Tim Callaghan
If you are looking for great compression another option is TokuDB.  It
supports quicklz, zlib, and lzma compression.


On Wed, Sep 18, 2013 at 2:30 AM, Manuel Arostegui man...@tuenti.com wrote:

 2013/9/17 Wayne Leutwyler wleut...@columbus.rr.com

  Hello List,
 
  I have a customer who is wanting to use the Archive Engine. I have no
  experience with this engine, other than what I am been reading. Why
 would I
  want to use Archive over InnoDB. They are only going to be placing audit
  information in the table.


 Hello,

 We use Archive for archive clusters (obviously) and it is a good option to
 save lot of disk space if you assume the performance can be slightly worse.
 As Keith has pointed out, make sure you know which statements you use
 because ARCHIVE doesn't support all the MySQL ones.

 If Archive isn't an option but you still want to save some disk, you can
 use InnoDB Compression:

 http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-background.html

 http://dev.mysql.com/doc/refman/5.5/en/innodb-compression-internals.html#innodb-compression-internals-storage-btree

 In all the tests we did we saw some performance degradation but nothing too
 serious and nothing we couldn't afford, but if you decide to try this, make
 sure you do a PoC so you know how it could impact in your scenario.

 Hope this helps
 Manuel.



Re: Archive Engine Question

2013-09-18 Thread Manuel Arostegui
2013/9/17 Wayne Leutwyler wleut...@columbus.rr.com

 Hello List,

 I have a customer who is wanting to use the Archive Engine. I have no
 experience with this engine, other than what I am been reading. Why would I
 want to use Archive over InnoDB. They are only going to be placing audit
 information in the table.


Hello,

We use Archive for archive clusters (obviously) and it is a good option to
save a lot of disk space, provided you can accept slightly worse performance.
As Keith has pointed out, make sure you know which statements you use,
because ARCHIVE doesn't support all the MySQL ones.

If Archive isn't an option but you still want to save some disk, you can
use InnoDB Compression:
http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-background.html
http://dev.mysql.com/doc/refman/5.5/en/innodb-compression-internals.html#innodb-compression-internals-storage-btree

In all the tests we did we saw some performance degradation, but nothing too
serious and nothing we couldn't afford. If you decide to try this, make
sure you do a PoC so you know how it could impact your scenario.
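The compression mentioned above is declared per table. A hedged sketch only —
the table and columns are made up, KEY_BLOCK_SIZE=8 is an arbitrary choice,
and older servers additionally need innodb_file_per_table=ON and
innodb_file_format=Barracuda:

```sql
-- Hypothetical audit table stored with InnoDB compressed pages.
CREATE TABLE audit_log (
  id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  logged_at DATETIME NOT NULL,
  details   TEXT
) ENGINE=InnoDB
  ROW_FORMAT=COMPRESSED
  KEY_BLOCK_SIZE=8;
```

Unlike ARCHIVE, such a table still supports UPDATE, DELETE, and secondary
indexes, which is why it is the fallback when ARCHIVE's restrictions bite.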

Hope this helps
Manuel.


Archive Engine Question

2013-09-17 Thread Wayne Leutwyler
Hello List,

I have a customer who is wanting to use the Archive Engine. I have no 
experience with this engine, other than what I have been reading. Why would I 
want to use Archive over InnoDB? They are only going to be placing audit 
information in the table.

Walter Wayne Leutwyler, RHCT
Sr. MySQL Database Administrator
Mobile: 614 519 5672
Office: 614 889 4956
E-mail: wayne.leutwy...@gmail.com
E-mail: wleut...@columbus.rr.com
Website: http://penguin-workshop.dyndns.org

Courage is being scared to death, but saddling up anyway. --John Wayne



Re: Archive Engine Question

2013-09-17 Thread Keith Murphy
From here:

http://dev.mysql.com/doc/refman/5.0/en/archive-storage-engine.html

The ARCHIVE engine supports INSERT and SELECT, but not DELETE, REPLACE, or 
UPDATE. It does support ORDER BY operations, BLOB columns, and basically all 
but spatial data types (see Section 12.16.4.1, “MySQL Spatial Data Types”). 
The ARCHIVE engine uses row-level locking.

*Storage:* Rows are compressed as they are inserted. The ARCHIVE engine uses 
zlib lossless data compression (see http://www.zlib.net/). You can use 
OPTIMIZE TABLE to analyze the table and pack it into a smaller format (for a 
reason to use OPTIMIZE TABLE, see later in this section). Beginning with 
MySQL 5.0.15, the engine also supports CHECK TABLE. There are several types 
of insertions that are used:


You tell me why a customer who wants to place AUDIT information in a table
might want that?
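The append-only behaviour described above is easy to demonstrate. A hedged
sketch — the table and column names are made up:

```sql
-- Hypothetical audit table on the ARCHIVE engine (no indexes: the
-- 5.0-era engine quoted above does not support them).
CREATE TABLE audit_trail (
  logged_at DATETIME NOT NULL,
  details   TEXT
) ENGINE=ARCHIVE;

INSERT INTO audit_trail (logged_at, details)
VALUES (NOW(), 'user login');           -- allowed, stored compressed

UPDATE audit_trail SET details = 'x';   -- fails: ARCHIVE does not support UPDATE
DELETE FROM audit_trail;                -- fails: nor DELETE
```

For an audit log, that refusal to modify or delete rows is a feature, not a
limitation.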

Keith Murphy
http://www.paragon-cs.com

-- 

(c) 850-637-3877

On Tue, Sep 17, 2013 at 10:19 AM, Wayne Leutwyler
wleut...@columbus.rr.comwrote:

 Hello List,

 I have a customer who is wanting to use the Archive Engine. I have no
 experience with this engine, other than what I am been reading. Why would I
 want to use Archive over InnoDB. They are only going to be placing audit
 information in the table.

 Walter Wayne Leutwyler, RHCT
 Sr. MySQL Database Administrator
 Mobile: 614 519 5672
 Office: 614 889 4956
 E-mail: wayne.leutwy...@gmail.com
 E-mail: wleut...@columbus.rr.com
 Website: http://penguin-workshop.dyndns.org

 Courage is being scared to death, but saddling up anyway. --John Wayne




Re: Archive Engine Question

2013-09-17 Thread Wayne Leutwyler
Yeah, the more I think about it, the more it makes good sense. Are there any 
special my.cnf settings we should be looking at? We currently have our systems 
tuned for InnoDB as our primary engine.

Thanks for the feedback Keith.

On Sep 17, 2013, at 11:34 AM, Keith Murphy bmur...@paragon-cs.com wrote:

 From here:
 
 http://dev.mysql.com/doc/refman/5.0/en/archive-storage-engine.html
 
 The ARCHIVE engine supports INSERT and SELECT, but not DELETE, REPLACE, or 
 UPDATE. It does support ORDER BY operations, BLOB columns, and basically all 
 but spatial data types (see Section 12.16.4.1, “MySQL Spatial Data Types”). 
 The ARCHIVE engine uses row-level locking.
 
 Storage: Rows are compressed as they are inserted. The ARCHIVE engine uses 
 zlib lossless data compression (see http://www.zlib.net/). You can use 
 OPTIMIZE TABLE to analyze the table and pack it into a smaller format (for a 
 reason to use OPTIMIZE TABLE, see later in this section). Beginning with 
 MySQL 5.0.15, the engine also supports CHECK TABLE. There are several types 
 of insertions that are used:
 
 
 
 You tell me why a customer who wants to place AUDIT information in a table 
 might want that?
 
 Keith Murphy
 http://www.paragon-cs.com
 
 -- 
 
 (c) 850-637-3877
 
 On Tue, Sep 17, 2013 at 10:19 AM, Wayne Leutwyler wleut...@columbus.rr.com 
 wrote:
 Hello List,
 
 I have a customer who is wanting to use the Archive Engine. I have no 
 experience with this engine, other than what I have been reading. Why would I 
 want to use Archive over InnoDB? They are only going to be placing audit 
 information in the table.
 
 Walter Wayne Leutwyler, RHCT
 Sr. MySQL Database Administrator
 Mobile: 614 519 5672
 Office: 614 889 4956
 E-mail: wayne.leutwy...@gmail.com
 E-mail: wleut...@columbus.rr.com
 Website: http://penguin-workshop.dyndns.org
 
 Courage is being scared to death, but saddling up anyway. --John Wayne
 
 
 
 


Walter Wayne Leutwyler, RHCT
Sr. MySQL Database Administrator
Mobile: 614 519 5672
Office: 614 889 4956
E-mail: wayne.leutwy...@gmail.com
E-mail: wleut...@columbus.rr.com
Website: http://penguin-workshop.dyndns.org

Courage is being scared to death, but saddling up anyway. --John Wayne



Re: hypothetical question about data storage

2013-07-30 Thread Carsten Pedersen

On 30-07-2013 01:16, Rick James wrote:

Elevator...  If the RAID _controller_ does the Elevator stuff, any OS
optimizations are wasted. And there have been benchmarks backing that
up.  (Sorry, don't have any links handy.)

RAID 5/10 ...  The testing I have done shows very little difference.


...right up to the day one of the disks fails, and you thought you could 
just plug in a new spindle and let the system take care of the rest...


http://www.miracleas.com/BAARF/
http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

/ Carsten

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: hypothetical question about data storage

2013-07-30 Thread Manuel Arostegui
2013/7/30 Rick James rja...@yahoo-inc.com

 Elevator...  If the RAID _controller_ does the Elevator stuff, any OS
 optimizations are wasted.
 And there have been benchmarks backing that up.  (Sorry, don't have any
 links handy.)

 RAID 5/10 ...  The testing I have done shows very little difference.
  However, you can slant the conclusion by picking one versus the other of:
 For a given amount of disk space... RAID-X is better than Y.
 For a given number of drives... RAID-Y is better than X.


In the tests I have done with RAID5 vs RAID10, the difference is huge, at least
in our clusters with heavy writes.
We usually build RAIDs over 4 or 8 SAS disks (15k rpm).
The performance of each type of RAID needs to be tested for your concrete
scenario. You can find lots of benchmarks out there, but you need to test your
own workload to be sure what works better for you. As Rick said, with BBUs,
disk schedulers, write-back/write-through configuration, etc., things can
change.

Our latest tests with SSDs show no difference, so for the new servers
with SSDs we're going with RAID5, as you get more disk space :-)

Just my 2 cents!
Manuel.


-- 
Manuel Aróstegui
Systems Team
tuenti.com


Question about Server Debug Info

2013-07-30 Thread Wayne Leutwyler
Hello List,

On two different occasions and on two different servers I have found server 
debug information written out to the mysqld.log file.

Now I know that issuing kill -1 <mysql-pid> will write that information to
the mysqld.log file, and that mysqladmin debug will also write debug
information to the mysqld.log file.

I am wondering what other methods would write the debug info to the mysqld.log
file. On the two servers in question, neither my fellow DBAs nor I did anything
that would have written debug info.

Are there any internal settings or processes that would write debug information
to the mysqld.log file? Could a client tool do something like this? We have
reviewed the app privileges, and they have no ability to write debug info.

Thank you,  

Walter Wayne Leutwyler, RHCT
Sr. MySQL Database Administrator
Mobile: 614 519 5672
Office: 614 889 4956
E-mail: wayne.leutwy...@gmail.com
E-mail: wleut...@columbus.rr.com
Website: http://penguin-workshop.dyndns.org

Courage is being scared to death, but saddling up anyway. --John Wayne



Question regarding creating a query

2013-07-30 Thread Sukhjinder K. Narula
Hello,

I have a question regarding creating a query as follows:

I have several databases (all with the same structure) which I want to query.
For instance:

db1, db2, db3 - all have table tb1 with field a, b and table tb2 with
fields flag1, flag2

So I want to query and get field a from tb1 for all the db's. One way to do
this is a union, i.e.

SELECT a FROM db1.tb1 WHERE (SELECT flag1 FROM db1.tb2) = 'y'
UNION
SELECT a FROM db2.tb1 WHERE (SELECT flag1 FROM db2.tb2) = 'y'
UNION
SELECT a FROM db3.tb1 WHERE (SELECT flag1 FROM db3.tb2) = 'y'

But the problem here is that if I add more db's, I have to update the query
every time.

In addition to above, I also have a database e.g. common, which has a table
called dbnames with field name, that keeps the name of all the databases I
have (db1, db2, db3).

So, what I would like to do is query the common db to get the names of the
db's and then run the select query on each db.

So here is the pseudocode of what I want to do:


for each (SELECT name AS DbName FROM common.dbnames)

(SELECT a FROM DbName.tb1 WHERE (SELECT flag1 FROM DbName.tb2) = 'y')  AS
CONCAT(DbName, '-', a)


Could you please advise if this is possible and, if yes, how it can be
achieved.

Many Thanks,

SK


RE: hypothetical question about data storage

2013-07-30 Thread Johan De Meersman
Rick James rja...@yahoo-inc.com wrote:

When writing a random block, RAID-5 does not need to touch all the
drives, only the data drive and the one with parity.  Suitable XORs will
update the parity correctly.  So, a write hits 2 drives, whether you have
RAID-5 or -10.

Only if the other blocks happen to be in the cache, otherwise a read is 
required. We performed the tests when we last added storage to our media farm.

On the other hand I'm not too familiar with the actual algorithms used, so our 
findings might be vendor-dependent.




-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.




RE: Question regarding creating a query

2013-07-30 Thread Rick James
 I have to update the query every time.
Therein lies the difficulty with the schema design.

You could write a stored procedure to locate all the tables (use 
information_schema.TABLES, etc) and build the UNION, and finally execute it.  
The SP would have something very remotely like the foreach you suggested.
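A sketch of such a procedure, driven by the common.dbnames table from the original post rather than information_schema (the procedure, cursor, and variable names are made up):

```sql
DELIMITER //
CREATE PROCEDURE query_all_dbs()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE db VARCHAR(64);
  DECLARE cur CURSOR FOR SELECT name FROM common.dbnames;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  SET @sql = '';
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO db;
    IF done THEN LEAVE read_loop; END IF;
    IF @sql <> '' THEN SET @sql = CONCAT(@sql, ' UNION '); END IF;
    -- One SELECT per database, tagged with the database name:
    SET @sql = CONCAT(@sql,
      'SELECT CONCAT(''', db, ''', ''-'', a) AS a FROM ', db,
      '.tb1 WHERE (SELECT flag1 FROM ', db, '.tb2) = ''y''');
  END LOOP;
  CLOSE cur;

  PREPARE stmt FROM @sql;   -- build once, execute the whole UNION
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;
```

Adding a database then only requires inserting a row into common.dbnames.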

 -Original Message-
 From: Sukhjinder K. Narula [mailto:narula...@gmail.com]
 Sent: Tuesday, July 30, 2013 11:13 AM
 To: mysql@lists.mysql.com
 Subject: Question regarding creating a query
 
 Hello,
 
 I have a question regarding creating a query as follows:
 
 I have several databases (all with the same structure) which I want to query.
 For instance:
 
 db1, db2, db3 - all have table tb1 with field a, b and table tb2 with
 fields flag1, flag2
 
 So I want to query and get field a from tb1 for all the db's. One way to do
 this is a union, i.e.
 
 SELECT a FROM db1.tb1 WHERE (SELECT flag1 FROM db1.tb2) = 'y'
 UNION
 SELECT a FROM db2.tb1 WHERE (SELECT flag1 FROM db2.tb2) = 'y'
 UNION
 SELECT a FROM db3.tb1 WHERE (SELECT flag1 FROM db3.tb2) = 'y'
 
 But the problem here is that if I add more db's, I have to update the
 query every time.
 
 In addition to above, I also have a database e.g. common, which has a
 table called dbnames with field name, that keeps the name of all the
 databases I have (db1, db2, db3).
 
 So, what I would like to do is query the common db to get the names of the
 db's and then run the select query on each db.
 
 So here is the pseudocode of what I want to do:
 
 
 for each (SELECT name AS DbName FROM common.dbnames)
 
 (SELECT a FROM DbName.tb1 WHERE (SELECT flag1 FROM DbName.tb2) = 'y')  AS
 CONCAT(DbName, '-', a)
 
 
 Could you please advise if this is possible and, if yes, how it can be
 achieved.
 
 Many Thanks,
 
 SK




Re: Question regarding creating a query

2013-07-30 Thread hsv
 2013/07/30 14:12 -0400, Sukhjinder K. Narula 
I have several databases (all with the same structure) which I want to query.
For instance:

db1, db2, db3 - all have table tb1 with field a, b and table tb2 with
fields flag1, flag2

So I want to query and get field a from tb1 for all the db's. One way to do
this is a union, i.e.

SELECT a FROM db1.tb1 WHERE (SELECT flag1 FROM db1.tb2) = 'y'
UNION
SELECT a FROM db2.tb1 WHERE (SELECT flag1 FROM db2.tb2) = 'y'
UNION
SELECT a FROM db3.tb1 WHERE (SELECT flag1 FROM db3.tb2) = 'y'

But the problem here is that if I add more db's, I have to update the query
every time.

In addition to above, I also have a database e.g. common, which has a table
called dbnames with field name, that keeps the name of all the databases I
have (db1, db2, db3).

So, what I would like to do is query the common db to get the names of the
db's and then run the select query on each db.

So here is the pseudocode of what I want to do:


for each (SELECT name AS DbName FROM common.dbnames)

(SELECT a FROM DbName.tb1 WHERE (SELECT flag1 FROM DbName.tb2) = 'y')  AS
CONCAT(DbName, '-', a)

Well, you could build up the united query in a string and pass it to PREPARE

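For instance, a sketch of that string-building approach, assuming the common.dbnames table from the question (note that group_concat_max_len may need raising if there are many databases):

```sql
-- Build the whole UNION as a string, then PREPARE/EXECUTE it.
SELECT GROUP_CONCAT(
         CONCAT('SELECT a FROM ', name, '.tb1 WHERE (SELECT flag1 FROM ',
                name, '.tb2) = ''y''')
         SEPARATOR ' UNION ')
  INTO @sql
  FROM common.dbnames;

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```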




RE: hypothetical question about data storage

2013-07-29 Thread Rick James
Most RAID controllers will happily do Elevator stuff like you mentioned.
So will Linux.

For MySQL + RAID, a Linux elevator strategy of 'deadline' or 'noop' is optimal. 
 (The default, 'cfq', is not as good.)

A RAID controller with multiple drives striped (and optionally parity-checked) 
(RAID-5, -10) and with a BBU (Battery Backed Write Cache) is excellent for I/O.

I don't know about chronologically later.  InnoDB does the right thing, as 
long as the OS does not cheat on fsync, etc.

 1/10/10A/10aa342
Only 16 subdirectories per directory?  I would expect 256 to be more efficient 
overall.  This is because of fewer levels.  Scanning 256 is probably less 
costly than doing an extra level.  (Yeah, again, I can't _prove_ it in _your_ 
environment.)

4K tables on a single machine -- that is beginning to get into 'big' in 
reference to ulimit, table_open_cache, etc.  That is, if you went much past 
that, you would be getting into new areas of inefficiency.

I do not like splitting a database table into multiple tables, except by 
PARTITIONing.  PARTITIONing would also provide an 'instantaneous' way of purging 
old data.  (DROP PARTITION + REORGANIZE PARTITION)

Almost always (again, no proof for your case), a single table is more efficient 
than many tables.  This applies to PARTITIONing, too, but there can be 
other gains by using PARTITIONing.

InnoDB has a 64TB limit per PARTITION.
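A sketch of that purge pattern, with hypothetical table and partition names:

```sql
-- Hypothetical log table partitioned by month for cheap purging.
CREATE TABLE log_2013 (
  id      INT  NOT NULL,
  created DATE NOT NULL
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created)) (
  PARTITION p201306 VALUES LESS THAN (TO_DAYS('2013-07-01')),
  PARTITION p201307 VALUES LESS THAN (TO_DAYS('2013-08-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- Purge June 'instantaneously' -- drops that partition's data wholesale:
ALTER TABLE log_2013 DROP PARTITION p201306;

-- Split pmax to open a slot for August:
ALTER TABLE log_2013 REORGANIZE PARTITION pmax INTO (
  PARTITION p201308 VALUES LESS THAN (TO_DAYS('2013-09-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```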

 -Original Message-
 From: william drescher [mailto:will...@techservsys.com]
 Sent: Saturday, July 27, 2013 4:32 AM
 To: mysql@lists.mysql.com
 Subject: Re: hypothetical question about data storage
 
 On 7/26/2013 6:58 PM, Chris Knipe wrote:
  The issue that we have identified is caused by seek time - hundreds of
  clients simultaneously searching for a single file.  The only real way
  to explain this is to run 100 concurrent instances of bonnie++ doing
  random read/writes... Your disk utilization and disk latency
  essentially goes through the roof resulting in IO wait and insanely
  high load averages (we've seen it spike to over 150 on a 8-core Xeon -
  at which time the application (at a 40 load average already) stops
  processing requests to prevent the server crashing).
 
 back in the day (many years ago) when I worked for IBM we had disk
 controllers that would queue and sort pending reads so that the heads
 would seek from low tracks across the disk to high tracks and then back to
 low. This resulted in very low seek _averages_.
 The controller was smart enough to make sure that if a write occurred,
 chronologically later reads got the right data, even if it had not been
 physically written to disk yet.
 
 Is there such a controller available now?
 
 bill
 
 



RE: hypothetical question about data storage

2013-07-29 Thread Johan De Meersman
Rick James rja...@yahoo-inc.com wrote:

For MySQL + RAID, a Linux elevator strategy of 'deadline' or 'noop' is
optimal.  (The default, 'cfq', is not as good.)

I should look into those again at some point. Do you have a brief word as to 
why they're better?


A RAID controller with multiple drives striped (and optionally
parity-checked) (RAID-5, -10) and with a BBU (Battery Backed Write
Cache) is excellent for I/O.

Very true. 10 is traditionally considered better - it's certainly faster - but 
5 is of course cheaper :-) 

I'd like to add that 4+1 is the optimal configuration for RAID5, as that makes 
for a stripe of 2 KB, assuming 512 B sectors of course. You then pick an fs that 
supports blocks of that size, which means that no write will ever need to 
perform a read first to calculate the checksum.




-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.




RE: hypothetical question about data storage

2013-07-29 Thread Rick James
Elevator...  If the RAID _controller_ does the Elevator stuff, any OS 
optimizations are wasted.
And there have been benchmarks backing that up.  (Sorry, don't have any links 
handy.)

RAID 5/10 ...  The testing I have done shows very little difference.  However, 
you can slant the conclusion by picking one versus the other of:
For a given amount of disk space... RAID-X is better than Y.
For a given number of drives... RAID-Y is better than X.

When writing a random block, RAID-5 does not need to touch all the drives, only 
the data drive and the one with parity.  Suitable XORs will update the parity 
correctly.  So, a write hits 2 drives, whether you have RAID-5 or -10.
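The XOR bookkeeping can be illustrated with MySQL's bitwise ^ operator on toy single-byte 'blocks' (the values here are made up):

```sql
-- Toy RAID-5 parity update using XOR (^).
-- Stripe of three data "blocks" (single bytes here) plus parity:
SET @d1 = 0xA5, @d2 = 0x3C, @d3 = 0x0F;
SET @parity = @d1 ^ @d2 ^ @d3;          -- parity written at stripe creation

-- Rewrite block d2 without touching d1 or d3:
SET @new_d2 = 0xFF;
SET @parity = @parity ^ @d2 ^ @new_d2;  -- only old d2 and old parity are read
SET @d2 = @new_d2;

-- The updated parity still reconstructs a lost block:
SELECT @parity ^ @d1 ^ @d3 = @d2 AS parity_ok;  -- returns 1
```

Only the changed data block and the parity block are read and rewritten, which is why the write hits exactly 2 drives.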

Some people make the chunk size 64KB (etc); not 512B.  With the Controller 
involved, there is not necessarily any benefit for large vs small chunk size.  
Writes are delayed until it is optimal.  This leads to large streaming 
writes to each drive, regardless of chunk size (when writing a large stream).

A heavily used InnoDB system will be writing random 16KB blocks.

(I have no insight into RAID-6.)

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Monday, July 29, 2013 3:38 PM
 To: Rick James; will...@techservsys.com; mysql@lists.mysql.com
 Subject: RE: hypothetical question about data storage
 
 Rick James rja...@yahoo-inc.com wrote:
 
 For MySQL + RAID, a Linux elevator strategy of 'deadline' or 'noop' is
 optimal.  (The default, 'cfq', is not as good.)
 
 I should look into those again at some point. Do you have a brief word as
 to why they're better?
 
 
 A RAID controller with multiple drives striped (and optionally
 parity-checked) (RAID-5, -10) and with a BBU (Battery Backed Write
 Cache) is excellent for I/O.
 
 Very true. 10 is traditionally considered better - it's certainly faster -
 but 5 is of course cheaper :-)
 
 I'd like to add that 4+1 is the optimal configuration for RAID5, as that
 makes for a stripe of 2 KB, assuming 512 B sectors of course. You then pick
 an fs that supports blocks of that size, which means that no write will
 ever need to perform a read first to calculate the checksum.
 
 
 
 
 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Re: hypothetical question about data storage

2013-07-27 Thread william drescher

On 7/26/2013 6:58 PM, Chris Knipe wrote:

The issue that we have identified is caused by seek time - hundreds of
clients simultaneously searching for a single file.  The only real way
to explain this is to run 100 concurrent instances of bonnie++ doing
random read/writes... Your disk utilization and disk latency
essentially goes through the roof resulting in IO wait and insanely
high load averages (we've seen it spike to over 150 on a 8-core Xeon -
at which time the application (at a 40 load average already) stops
processing requests to prevent the server crashing).


back in the day (many years ago) when I worked for IBM we had 
disk controllers that would queue and sort pending reads so that 
the heads would seek from low tracks across the disk to high 
tracks and then back to low. This resulted in very low seek 
_averages_.
The controller was smart enough to make sure that if a write 
occurred, chronologically later reads got the right data, even if 
it had not been physically written to disk yet.


Is there such a controller available now?

bill





Re: hypothetical question about data storage

2013-07-26 Thread Johan De Meersman
Hey Chris,

I'm afraid that this is not what databases are for, and the first thing you'll 
likely run into is amount of concurrent connections.

This is typically something you should really tackle from a systems 
perspective. Seek times are dramatically improved on SSD or similar storage - 
think FusionIO cards, but there's also a couple of vendors (Violin comes to 
mind) who provide full-blown SSD SANs.

If you prefer staying with spinning disks, you could still improve the seeks by 
focusing on the inner cylinders and potentially by using variable sector 
formatting. Again, there's SANs that do this for you.

Another minor trick is to turn off access timestamp updates when you mount the 
filesystem (noatime).

Also benchmark different filesystems, there's major differences between them. 
I've heard XFS being recommended, but I've never needed to benchmark for seek 
times myself. We're using IBM's commercial GPFS here, which is good with 
enormous amounts of huge files (media farm here), not sure how it'd fare with 
smaller files.

Hope that helps,
Johan

- Original Message -
 From: Chris Knipe sav...@savage.za.org
 To: mysql@lists.mysql.com
 Sent: Thursday, 25 July, 2013 11:53:53 PM
 Subject: hypothetical question about data storage
 
 Hi all,
 
 We run a VERY IO-intensive file application service.  Currently, our
 problem is that our disk spindles are being completely killed due to
 insufficient SEEK time on the hard drives (NOT physical read/write
 speeds).
 
 We have a directory structure where the files are stored based on the
 MD5 checksum of the file name, i.e.
 /0/00/000/44533779fce5cf3497f87de1d060
 The majority of these files are between 256K and 800K, with the ODD
 exception (say less than 15%) being more than 1M but no more than 5M in
 size.  The content of the files is pure text (MIME encoded).
 
 We believe that storing these files in an InnoDB table may actually
 give us better performance:
 - There is one large file being read/written, instead of BILLIONS of
 small files
 - We can split the structure so that each directory (4096 in total)
 sits in its own database
 - We can move the databases as load increases, which means that we can
 potentially run 2 physical database servers, each with 2048 databases
 - It's easy to move / migrate the data due to mysql and replication -
 the same can be said for redundancy of the data
 
 We are more than likely looking at BLOB columns of course, and we need
 to read/write from the DB in excess of 100mbit/s
 
 Would the experts consider something like this as being feasible?  Is
 it worth it going down this avenue, or are we just going to run into
 different problems?  If we are facing different problems, what can we
 possibly expect to go wrong here?
 
 Many thanks, and I look forward to any input.
 

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: hypothetical question about data storage

2013-07-26 Thread Chris Knipe
Hi All,

Thanks for the responses, and I do concur.  I was taking a stab in the
dark, so to speak.

We are working with our hosting providers currently and will be
introducing a multitude of small iSCSI SANs to split the storage
structure over a multitude of disks...   This is something that needs
to be addressed from a systems perspective rather than an
architectural one.

SSD (or Fusion and the like) are unfortunately still way too expensive
for the capacity that we require (good couple of TBs) - so mechanical
disks it would need to be.  However, with the use of SANs as we hope,
we should be able to go up from 4 to over 64 spindles whilst still
being able to share the storage and have redundancy.

Many thanks for the input and feedback...

--
C


On Fri, Jul 26, 2013 at 9:23 AM, Johan De Meersman vegiv...@tuxera.be wrote:
 Hey Chris,

 I'm afraid that this is not what databases are for, and the first thing 
 you'll likely run into is amount of concurrent connections.

 This is typically something you should really tackle from a systems 
 perspective. Seek times are dramatically improved on SSD or similar storage - 
 think FusionIO cards, but there's also a couple of vendors (Violin comes to 
 mind) who provide full-blown SSD SANs.

 If you prefer staying with spinning disks, you could still improve the seeks 
 by focusing on the inner cylinders and potentially by using variable sector 
 formatting. Again, there's SANs that do this for you.

 Another minor trick is to turn off access timestamp updates when you mount 
 the filesystem (noatime).

 Also benchmark different filesystems, there's major differences between them. 
 I've heard XFS being recommended, but I've never needed to benchmark for seek 
 times myself. We're using IBM's commercial GPFS here, which is good with 
 enormous amounts of huge files (media farm here), not sure how it'd fare with 
 smaller files.

 Hope that helps,
 Johan

 - Original Message -
 From: Chris Knipe sav...@savage.za.org
 To: mysql@lists.mysql.com
 Sent: Thursday, 25 July, 2013 11:53:53 PM
 Subject: hypothetical question about data storage

 Hi all,

  We run a VERY IO-intensive file application service.  Currently, our
  problem is that our disk spindles are being completely killed due to
  insufficient SEEK time on the hard drives (NOT physical read/write
  speeds).
 
  We have a directory structure where the files are stored based on the
  MD5 checksum of the file name, i.e.
  /0/00/000/44533779fce5cf3497f87de1d060
  The majority of these files are between 256K and 800K, with the ODD
  exception (say less than 15%) being more than 1M but no more than 5M in
  size.  The content of the files is pure text (MIME encoded).
 
  We believe that storing these files in an InnoDB table may actually
  give us better performance:
  - There is one large file being read/written, instead of BILLIONS of
  small files
  - We can split the structure so that each directory (4096 in total)
  sits in its own database
  - We can move the databases as load increases, which means that we can
  potentially run 2 physical database servers, each with 2048 databases
  - It's easy to move / migrate the data due to mysql and replication -
  the same can be said for redundancy of the data
 
  We are more than likely looking at BLOB columns of course, and we need
  to read/write from the DB in excess of 100mbit/s
 
  Would the experts consider something like this as being feasible?  Is
  it worth it going down this avenue, or are we just going to run into
  different problems?  If we are facing different problems, what can we
  possibly expect to go wrong here?
 
  Many thanks, and I look forward to any input.


 --
 Unhappiness is discouraged and will be corrected with kitten pictures.



-- 

Regards,
Chris Knipe




RE: hypothetical question about data storage

2013-07-26 Thread Rick James
Count the disk hits

If you have a filesystem directory, consider that it is designed to handle 
small numbers of files per directory.  Consider that there is a limited cache 
for directories, etc.  Plus there is the inode (vnode, whatever) storage for 
each file.  I don't know the details (and it varies wildly with filesystem 
(ext, xfs, zfs, etc)).

Looking at InnoDB...

Let's say you have a billion rows in a single table, and you need to fetch one 
row by the PRIMARY KEY, and it is an MD5 (SHA-1, UUID, etc.).  Such a key is 
_very_ random.

A billion rows would need about 5 levels of BTree.  The top levels would 
quickly all be cached.  (100K blocks * 16KB = 1.6GB.)  If the leaf nodes add up 
to 200GB, that is probably bigger than your innodb_buffer_pool_size.  In that 
case, a _random_ fetch is likely to be a cache miss.

A cache miss is about 100ms on normal rotating-media; perhaps 10ms on SSDs.  
This limits your reads to 10 (or 100) per second.

If you have big BLOBs in the table, then it gets messier.  InnoDB does not put 
more than 8K of a row in the actual 16KB block.  The rest is stored in another 
block(s).  So, it is likely to take an extra disk hit (200ms/20ms).

If your data size is 100 times as big as your buffer pool, then it becomes 
likely that the next level of the BTree won't be fully cacheable.  Now 
300ms/30ms.

I think it is likely that the small number of disk hits for InnoDB is better 
than the many disk hits for traversing a directory tree (with large 
directories) in the filesystem.  I vote for InnoDB over the directory tree.

Yes, you will have seeks.

No, adding more RAM won't help much.  Here's an argument:
Suppose your data is 20 times as big as the buffer pool and you are doing 
random fetches (MD5, etc).  Then 1/20 of fetches are cached; 95% cache miss.  
Estimated time: 0.95 * 100ms = 95ms.
Now you double your RAM.  1/10 cached - 90% cache miss - 90ms average - Not 
much improvement over 95.

 -Original Message-
 From: ckn...@savage.za.org [mailto:ckn...@savage.za.org] On Behalf Of
 Chris Knipe
 Sent: Friday, July 26, 2013 12:30 AM
 To: Johan De Meersman
 Cc: mysql
 Subject: Re: hypothetical question about data storage
 
 Hi All,
 
 Thanks for the responses, and I do concur.  I was taking a stab in the
 dark so to speak.
 
 We are working with our hosting providers currently and will be
 introducing a multitude of small iSCSI SANs to split the storage
 structure over a multitude of disks...   This is something that needs
 to be addressed from a systems perspective rather than an architectural
 one.
 
 SSD (or Fusion and the like) are unfortunately still way too expensive for
 the capacity that we require (good couple of TBs) - so mechanical disks it
 would need to be.  However, with the use of SANs as we hope, we should be
 able to go up from 4 to over 64 spindles whilst still being able to share
 the storage and have redundancy.
 
 Many thanks for the inputs and feedbacks...
 
 --
 C
 
 
 On Fri, Jul 26, 2013 at 9:23 AM, Johan De Meersman vegiv...@tuxera.be
 wrote:
  Hey Chris,
 
  I'm afraid that this is not what databases are for, and the first thing
 you'll likely run into is amount of concurrent connections.
 
  This is typically something you should really tackle from a systems
 perspective. Seek times are dramatically improved on SSD or similar
 storage - think FusionIO cards, but there's also a couple of vendors
 (Violin comes to mind) who provide full-blown SSD SANs.
 
  If you prefer staying with spinning disks, you could still improve the
 seeks by focusing on the inner cylinders and potentially by using variable
 sector formatting. Again, there's SANs that do this for you.
 
  Another minor trick is to turn off access timestamp updates when you
 mount the filesystem (noatime).
 
  Also benchmark different filesystems, there's major differences between
 them. I've heard XFS being recommended, but I've never needed to benchmark
 for seek times myself. We're using IBM's commercial GPFS here, which is
 good with enormous amounts of huge files (media farm here), not sure how
 it'd fare with smaller files.
 
  Hope that helps,
  Johan
 
  - Original Message -
  From: Chris Knipe sav...@savage.za.org
  To: mysql@lists.mysql.com
  Sent: Thursday, 25 July, 2013 11:53:53 PM
  Subject: hypothetical question about data storage
 
  Hi all,
 
  We run an VERY io intensive file application service.  Currently, our
  problem is that our disk spindles are being completely killed due to
  insufficient SEEK time on the hard drives (NOT physical read/write
  speeds).
 
  We have an directory structure where the files are stored based on
  the MD5 checksum of the file name, i.e.
  /0/00/000/44533779fce5cf3497f87de1d060
  The majority of these files, are between 256K and 800K with the ODD
  exception (say less than 15%) being more than 1M but no more than 5M
  in size.  The content of the files are pure text (MIME Encoded).
 
  We believe that storing

RE: hypothetical question about data storage

2013-07-26 Thread Johan De Meersman
Your argument against FS assumes that you don't know the exact filename 
(directory traversals), but your argument for InnoDB assumes that you do (index 
lookup). Apples and oranges.

Besides, the venerable ext2 handled up to a couple of tens of thousands of 
files per directory smoothly when listing; things have only improved since 
then. "Small numbers" is a very relative concept.

Rick James rja...@yahoo-inc.com wrote:
Count the disk hits

If you have a filesystem directory, consider that it is designed to
handle small numbers of files per directory.  Consider that there is a
limited cache for directories, etc.  Plus there is the inode (vnode,
whatever) storage for each file.  I don't know the details (and it
varies wildly with filesystem (ext, xfs, zfs, etc)).

Looking at InnoDB...

Let's say you have a billion rows in a single table, and you need to
fetch one row by the PRIMARY KEY, and it is a MD5 (sha-1, UUID, etc). 
Such a key is _very_ random.

A billion rows would need about 5 levels of BTree.  The top levels
would quickly all be cached.  (100K blocks * 16KB = 1.6GB.)  If the
leaf nodes add up to 200GB, that is probably bigger than your
innodb_buffer_pool_size.  In that case, a _random_ fetch is likely to
be a cache miss.

A cache miss is about 100ms on normal rotating-media; perhaps 10ms on
SSDs.  This limits your reads to 10 (or 100) per second.

If you have big BLOBs in the table, then it gets messier.  InnoDB does
not put more than 8K of a row in the actual 16KB block.  The rest is
stored in another block(s).  So, it is likely to take an extra disk hit
(200ms/20ms).

If your data size is 100 times as big as your buffer pool, then it
becomes likely that the next level of the BTree won't be fully
cacheable.  Now 300ms/30ms.

I think it is likely that the small number of disk hits for InnoDB is
better than the many disk hits for traversing a directory tree (with
large directories) in the filesystem.  I vote for InnoDB over the
directory tree.

Yes, you will have seeks.
   
No, adding more RAM won't help much.  Here's an argument:
Suppose your data is 20 times as big as the buffer pool and you are
doing random fetches (MD5, etc).  Then 1/20 of fetches are cached; 95%
cache miss.  Estimated time: 0.95 * 100ms = 95ms.
Now you double your RAM.  1/10 cached - 90% cache miss - 90ms average
- Not much improvement over 95.

 -Original Message-
 From: ckn...@savage.za.org [mailto:ckn...@savage.za.org] On Behalf Of
 Chris Knipe
 Sent: Friday, July 26, 2013 12:30 AM
 To: Johan De Meersman
 Cc: mysql
 Subject: Re: hypothetical question about data storage
 
 Hi All,
 
 Thanks for the responses, and I do concur.  I was taking a stab in the
 dark, so to speak.
 
 We are working with our hosting providers currently and will be
 introducing a multitude of small iSCSI SANs to split the storage
 structure over a multitude of disks...  This is something that needs
 to be addressed from a systems perspective rather than an architectural
 one.
 
 SSD (or Fusion and the like) is unfortunately still way too expensive
 for the capacity that we require (a good couple of TBs) - so mechanical
 disks it would need to be.  However, with the use of SANs as we hope, we
 should be able to go up from 4 to over 64 spindles whilst still being
 able to share the storage and have redundancy.
 
 Many thanks for the inputs and feedback...
 
 --
 C
 
 
 On Fri, Jul 26, 2013 at 9:23 AM, Johan De Meersman vegiv...@tuxera.be
 wrote:
  Hey Chris,
 
  I'm afraid that this is not what databases are for, and the first thing
  you'll likely run into is the number of concurrent connections.
 
  This is typically something you should really tackle from a systems
  perspective. Seek times are dramatically improved on SSD or similar
  storage - think FusionIO cards, but there are also a couple of vendors
  (Violin comes to mind) who provide full-blown SSD SANs.
 
  If you prefer staying with spinning disks, you could still improve the
  seeks by focusing on the inner cylinders and potentially by using
  variable sector formatting. Again, there are SANs that do this for you.
 
  Another minor trick is to turn off access-timestamp updates when you
  mount the filesystem (noatime).
 
  Also benchmark different filesystems; there are major differences
  between them. I've heard XFS being recommended, but I've never needed
  to benchmark for seek times myself. We're using IBM's commercial GPFS
  here, which is good with enormous amounts of huge files (media farm
  here); not sure how it'd fare with smaller files.
 
  Hope that helps,
  Johan
 
  - Original Message -
  From: Chris Knipe sav...@savage.za.org
  To: mysql@lists.mysql.com
  Sent: Thursday, 25 July, 2013 11:53:53 PM
  Subject: hypothetical question about data storage
 
  Hi all,
 
  We run a VERY IO-intensive file application service.  Currently, our
  problem is that our disk spindles are being completely killed due to
  insufficient SEEK time on the hard drives (NOT physical read/write

Re: hypothetical question about data storage

2013-07-26 Thread Chris Knipe

Re: hypothetical question about data storage

2013-07-26 Thread hsv
 2013/07/27 00:58 +0200, Chris Knipe 
I would definitely consider the md5 checksum as a
PK (CHAR(32) due to the hex nature),

Well, not that it greatly matters, but you could convert it to BINARY(16).
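To make the saving concrete: an MD5 digest is 128 bits, so the 32-character hex string fits in 16 raw bytes. On the MySQL side you would typically store UNHEX(md5_string) in a BINARY(16) column and read it back with HEX(); a Python sketch of the same round-trip (the file name here is made up):

```python
import hashlib

name = "some-file-name"                 # illustrative input
hex_key = hashlib.md5(name.encode()).hexdigest()
bin_key = bytes.fromhex(hex_key)        # what UNHEX() would store

print(len(hex_key), len(bin_key))       # 32 16 -> half the bytes per key
assert bin_key.hex() == hex_key         # HEX() round-trips losslessly
```

Halving the key width also halves what every secondary index carries, since InnoDB secondary indexes embed the primary key.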


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql



Re: Replication question

2013-07-25 Thread rich gray


On 24/07/2013 19:52, Rick James wrote:

4) 3 tables from the slaves are to be replicated back to the master

NO.

However, consider Percona XtraDb Cluster or MariaDB+Galera.  They allow 
multiple writable masters.  But they won't let you be so selective about tables 
not being replicated.
Here are the gotchas for Galera usage:
 http://mysql.rjweb.org/doc.php/galera
If you can live with them (plus replicating everything), it may be best for you.


Ok thanks Rick for confirming my initial gut feelings about this...! 
Will have to implement a manual process to push the required data back 
to the master.





hypothetical question about data storage

2013-07-25 Thread Chris Knipe
Hi all,

We run a VERY IO-intensive file application service.  Currently, our
problem is that our disk spindles are being completely killed due to
insufficient SEEK time on the hard drives (NOT physical read/write speeds).

We have a directory structure where the files are stored based on the MD5
checksum of the file name, e.g. /0/00/000/44533779fce5cf3497f87de1d060
The majority of these files are between 256K and 800K, with the ODD
exception (say less than 15%) being more than 1M but no more than 5M in
size.  The content of the files is pure text (MIME encoded).

We believe that storing these files in an InnoDB table may actually give
us better performance:
- There is one large file being read/written, instead of BILLIONS of
small files
- We can split the structure so that each directory (4096 in total) sits
in its own database
- We can move the databases as load increases, which means that we can
potentially run 2 physical database servers, each with 2048 databases
- It's easy to move / migrate the data thanks to mysql replication - the
same can be said for redundancy of the data

We are more than likely looking at BLOB columns of course, and we need to
read/write from the DB in excess of 100 Mbit/s

Would the experts consider something like this as being feasible?  Is it
worth it to go down this avenue, or are we just going to run into different
problems?  If we are facing different problems, what can we possibly expect
to go wrong here?

Many thanks, and I look forward to any input.

--
Chris.






Re: hypothetical question about data storage

2013-07-25 Thread Vahric Muhtaryan
Hi,
Sorry, but MySQL is not the right fit for this; use Riak instead.
Riak is key/value based: all keys are kept in memory, and just
one seek is enough to fetch a value.
Consider using Riak.
VM

On 7/26/13 12:53 AM, Chris Knipe sav...@savage.za.org wrote:

Hi all,

We run an VERY io intensive file application service.  Currently, our
problem is that our disk spindles are being completely killed due to
insufficient SEEK time on the hard drives (NOT physical read/write
speeds).

We have an directory structure where the files are stored based on the MD5
checksum of the file name, i.e. /0/00/000/44533779fce5cf3497f87de1d060
The majority of these files, are between 256K and 800K with the ODD
exception (say less than 15%) being more than 1M but no more than 5M in
size.  The content of the files are pure text (MIME Encoded).

We believe that storing these files into an InnoDB table, may actually
give
us better performance:
- There is one large file that is being read/written, instead of BILLIONS
of
small files
- We can split the structure so that each directory (4096 in total) sit's
on
their own database
- We can move the databases as load increases, which means that we can
potentially run 2 physical database servers, each with 2048 databases
each)
- It's easy to move / migrate the data due to mysql and replication - same
can be said for redundancy of the data

We are more than likely looking at BLOB columns of course, and we need to
read/write from the DB in excess of 100mbit/s

Would the experts consider something like this as being feasible?  Is it
worth it to go down this avenue, or are we just going to run into
different
problems?  If we are facing different problems, what can we possibly
expect
to go wrong here?

Many thanks, and I look forward to any input.

--
Chris.






Replication question

2013-07-24 Thread rich gray
I have been asked to set up multiple database replication which I have 
done before for simple cases however there are some nuances with this 
instance that add some complexity and I'd like to hear your collective 
expertise on this proposed scenario:-


1) Single master database
2) n (probably 3 to start with) number of slave databases
3) All but 5 tables (123 tables in total) are to be replicated from the 
master to all the slaves

4) 3 tables from the slaves are to be replicated back to the master

It is mainly item 4) that concerns me - the primary IDs are almost 
certain to collide unless I seed the auto-increment IDs to partition them 
into separate ranges - or does MySQL handle this issue?
There are some foreign keys on one of the 3 slave to master tables but 
they are pointing at some extremely static tables that are very unlikely 
to change.
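For the ID-collision concern in item 4), MySQL's auto_increment_increment and auto_increment_offset server variables are the usual way to partition auto-increment ranges so that multiple writers cannot collide. A minimal my.cnf sketch (the values are illustrative, not a recommendation):

```ini
# On the master (increment = 10 leaves room for up to 10 writers)
[mysqld]
auto_increment_increment = 10
auto_increment_offset    = 1

# On a slave that also writes the 3 replicated-back tables
[mysqld]
auto_increment_increment = 10
auto_increment_offset    = 2
```

With these settings the master generates IDs 1, 11, 21, ... and the slave 2, 12, 22, ..., so the ranges interleave without overlapping.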


Is the above a feasible implementation...?

Thanks in advance for any advice/pointers!

Rich






RE: Replication question

2013-07-24 Thread Rick James
 4) 3 tables from the slaves are to be replicated back to the master
NO.

However, consider Percona XtraDb Cluster or MariaDB+Galera.  They allow 
multiple writable masters.  But they won't let you be so selective about tables 
not being replicated.
Here are the gotchas for Galera usage:
http://mysql.rjweb.org/doc.php/galera
If you can live with them (plus replicating everything), it may be best for you.

 -Original Message-
 From: rich gray [mailto:r...@richgray.com]
 Sent: Wednesday, July 24, 2013 8:21 AM
 To: mysql@lists.mysql.com
 Subject: Replication question
 
 I have been asked to set up multiple database replication which I have
 done before for simple cases however there are some nuances with this
 instance that add some complexity and I'd like to hear your collective
 expertise on this proposed scenario:-
 
 1) Single master database
 2) n (probably 3 to start with) number of slave databases
 3) All but 5 tables (123 tables in total) are to be replicated from the
 master to all the slaves
 4) 3 tables from the slaves are to be replicated back to the master
 
 It is mainly item 4) that concerns me - the primary ID's are almost
 certain to collide unless I seed the auto increment ID to partition the
 IDs into separate ranges or does MySQL handle this issue?
 There are some foreign keys on one of the 3 slave to master tables but
 they are pointing at some extremely static tables that are very unlikely
 to change.
 
 Is the above a feasible implementation...?
 
 Thanks in advance for any advice/pointers!
 
 Rich
 
 
 



restore question

2013-07-05 Thread Jim Sheffer
Hi everyone-

This is probably a no brainer (I'm new to Navicat) but I have a backup of a 
database from Navicat.

I want to be able to see if a certain field has changed since this morning in 
the backup (We are having problems with an order that somehow duplicated the 
items.  I need to see if there was only 1 of each item or two removed from 
inventory).  I don't need to do a restore into the database, just have a look 
at the backup.

Is this possible without going through the hoops of creating a copy of the 
database and restoring to the copy (I assume this is possible) - I DO NOT want 
to restore into the currently running database :-)

Any suggestions would be greatly appreciated!


James Sheffer,

The HigherPowered Team!

supp...@higherpowered.com  sa...@higherpowered.com
Web Design  Development http://www.higherpowered.com
phone:  469-256-0268   
 We help businesses succeed on the web!
 ---





Re: restore question

2013-07-05 Thread shawn green

Hello Jim,

On 7/5/2013 3:11 PM, Jim Sheffer wrote:

Hi everyone-

This is probably a no brainer (I'm new to Navicat) but I have a backup of a 
database from Navicat.

I want to be able to see if a certain field has changed since this morning in the backup (We are 
having problems with an order that somehow duplicated the items.  I need to see if 
there was only 1 of each item or two removed from inventory).  I don't need to do a 
restore into the database, just have a look at the backup.

Is this possible without going through the hoops of creating a copy of the 
database and restoring to the copy (I assume this is possible) - I DO NOT want 
to restore into the currently running database :-)

Any suggestions would b greatly appreciated!



If the Navicat backup used the same process as mysqldump, then your 
table's data is stored in a plain-text SQL script.


Step 1) find the CREATE TABLE...  command for the table you are 
interested in.


Step 2) Just after the table's definition, you should see a sequence of 
INSERT commands. Those are your rows. Use your find or grep or search 
skills to identify the primary key values for the rows you are 
interested in. Visually parse that row (it's contained in its own set of 
() parentheses) to find the 'old' values you seek.


Sorry that it's such a manual process but you didn't want to restore so 
you get to pretend to be the database server :)
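Shawn's manual search can also be scripted. A minimal sketch that pulls the row groups out of a dump's INSERT statements; the table name and the toy dump text are made up, and the regex is deliberately naive (it would break on string values containing parentheses or semicolons):

```python
import re

# Toy stand-in for the real backup file's contents.
dump = """
CREATE TABLE `orders` (`id` int, `qty` int);
INSERT INTO `orders` VALUES (1,2),(2,1);
INSERT INTO `orders` VALUES (3,5);
"""

rows = []
for stmt in re.findall(r"INSERT INTO `orders` VALUES (.*?);", dump):
    # Each (...) group is one row of the table.
    rows += re.findall(r"\(([^)]*)\)", stmt)

print(rows)  # ['1,2', '2,1', '3,5']
```

For a real dump you would read the file and filter by the primary-key value you are hunting for instead of collecting every row.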


Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: restore question

2013-07-05 Thread spameden
Hi


2013/7/5 Jim Sheffer j...@higherpowered.com

 Hi everyone-

 This is probably a no brainer (I'm new to Navicat) but I have a backup of
 a database from Navicat.

 I want to be able to see if a certain field has changed since this morning
 in the backup (We are having problems with an order that somehow
 duplicated the items.  I need to see if there was only 1 of each item or
 two removed from inventory).  I don't need to do a restore into the
 database, just have a look at the backup.


First, dump the current schema with:

 mysqldump --no-data database > database_schema.sql

Second, extract the schema from the Navicat backup (this might need
additional filtering, I'm not sure):

  grep -v 'INSERT INTO' backup.dump.sql > navicat_schema.sql

Third, compare:

  diff -u database_schema.sql navicat_schema.sql


 Is this possible without going through the hoops of creating a copy of the
 database and restoring to the copy (I assume this is possible) - I DO NOT
 want to restore into the currently running database :-)

 Any suggestions would b greatly appreciated!


 James Sheffer,

 The HigherPowered Team!

 supp...@higherpowered.com  sa...@higherpowered.com
 Web Design  Development http://www.higherpowered.com
 phone:  469-256-0268
  We help businesses succeed on the web!
  ---






Re: NET START MYSQL QUESTION?

2013-05-12 Thread shawn green

Hello Reindl,

On 5/11/2013 11:52 AM, Reindl Harald wrote:

...  virtually
nobody is using mysql on windows seriously as i have
not touched windows since 2006 at all



Your experience is not indicative of the population as a whole. Many 
important and mission-critical installations exist on Windows.


You may not, or ever, choose to use Windows as a base platform but many 
people do and do superbly with their choice of OS.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




RE: [Suspected Spam][Characteristics] Re: NET START MYSQL QUESTION?

2013-05-12 Thread Robinson, Eric
 why not answer the question another user made hours ago?
 under which account do you try to start mysqld?
 

Agreed. Chances are good that if he goes into the Windows Services control 
panel and gets the properties of the mysql service, he will find that it is 
configured to start under a Windows account other than the System account, and 
that the Windows account in question either does not have the required 
privileges or it is locked. I'd probably just change it to start under the 
System account.

--Eric


 





Re: NET START MYSQL QUESTION?

2013-05-11 Thread Reindl Harald

Am 11.05.2013 16:50, schrieb SIVASUTHAN NADARAJAH:
 I want to start mysql from the command prompt using NET START MYSQL,
 but the server does not start. It displays an error message:
 C:\Users\PC> NET START MySQL
 System error 5 has occurred.
 Access is denied.
 could you please help me, how to start the MySQL service from the command prompt?

what about looking in the mysqld log to verify which account
is used by the windows service? there is no difference between
the GUI and net start: a service either works or it doesn't

* log-files
* service details (user account, credentials..)
* permissions to logfiles / datadir

btw: do not use uppercase letters in your whole subject!






Re: NET START MYSQL QUESTION?

2013-05-11 Thread Reindl Harald
DO NOT REPLY OFF-LIST

there is no step-by-step guidance, and virtually
nobody is using mysql on windows seriously; as i have
not touched windows since 2006 at all

understand how services on your used OS are working
or hire someone who does

Am 11.05.2013 17:44, schrieb SIVASUTHAN NADARAJAH:
 Sorry I could not understand your answer. please give me the step by step 
 guidance.
 thank you so much...
 
 Date: Sat, 11 May 2013 16:58:16 +0200
 From: h.rei...@thelounge.net
 To: mysql@lists.mysql.com
 Subject: Re: NET START MYSQL QUESTION?


 Am 11.05.2013 16:50, schrieb SIVASUTHAN NADARAJAH:
  I want to start mysql from the command prompt using NET START MYSQL,
  but the server does not start. It displays an error message:
  C:\Users\PC> NET START MySQL
  System error 5 has occurred.
  Access is denied.
  could you please help me, how to start the MySQL service from the command
  prompt?

 what about look in the mysqld-log and verify which account
 is used by the windows service? there is no difference between
 the GUI and net start, a service works or not

 * log-files
 * service details (useraccount, credentials..)
 * permissions to logfiles / datadir.

 btw: do not use uppercase letters in your whole subject!





Re: NET START MYSQL QUESTION?

2013-05-11 Thread Mimiko

On 11.05.2013 17:50, SIVASUTHAN NADARAJAH wrote:

C:\Users\PC> NET START MySQL
System error 5 has occurred.
Access is denied.


Run command window from elevated permission.

--
Mimiko desu.




Re: NET START MYSQL QUESTION?

2013-05-11 Thread Reindl Harald

Am 11.05.2013 20:04, schrieb Mimiko:
 On 11.05.2013 17:50, SIVASUTHAN NADARAJAH wrote:
 C:\Users\PC> NET START MySQL
 System error 5 has occurred.
 Access is denied.
 
 Run command window from elevated permission

wow - you mean he *really* tried to start a service as a
restricted user instead of reading the basic manuals
posted to the application list?

hopefully not...






Re: NET START MYSQL QUESTION?

2013-05-11 Thread hsv
 2013/05/11 20:50 +0600, SIVASUTHAN NADARAJAH 
I want to start mysql from the command prompt using NET START MYSQL,
but the server does not start. It displays an error message:
C:\Users\PC> NET START MySQL
System error 5 has occurred.
Access is denied.
could you please help me, how to start the MySQL service from the command prompt?
SIVASUTHAN- Consultant Trainer

Well, ordinarily the MySQL service is started by Windows along with all the 
rest: After installing MySQL under Windows one runs MySQL Instance 
Configuration Wizard--the only MySQL program on my system that runs from 
Start. (If you cannot do this on the machine where mysqld.exe is to run I 
do not know what to say.)

If the service, with program mysqld.exe, quits, to start it one enters
start mysqld -b... from the command line, with the directory where my.ini 
is kept entered for  Of course, if mysqld.exe started and quit, as 
Reindl Harald suggested study the logs and find out why. It is not started 
through the client, mysql.exe.





Re: NET START MYSQL QUESTION?

2013-05-11 Thread Reindl Harald

Am 11.05.2013 21:49, schrieb h...@tbbs.net:
 2013/05/11 20:50 +0600, SIVASUTHAN NADARAJAH 
 I want to start mysql from the command prompt using NET START MYSQL,
 but the server does not start. It displays an error message:
 C:\Users\PC> NET START MySQL
 System error 5 has occurred.
 Access is denied.
 could you please help me, how to start the MySQL service from the command prompt?

why not answer the question another user made hours ago?
under which account do you try to start mysqld?

hence you need FULL ADMIN PRIVILEGES on every OS to start services

or use the system settings which at least come up with a UAC request
alternative: use a proper operating system with a package-manager

google: windows system error 5
https://www.google.com/search?q=windows+system+error+5

this *is not* a MySQL problem
please ask windows questions at windows forums

 It is not started through the client, mysql.exe

why should the client start a server?





Re: NET START MYSQL QUESTION?

2013-05-11 Thread hsv
 2013/05/11 22:58 +0200, Reindl Harald 
why not answer the question another user made hours ago?
under which account do you try to start mysqld? 

Well, I learnt something here.

When I had the problem of (under Vista) starting mysqld, from the command
prompt I always did this: start mysqld -b... (here start is like Unix's
trailing &), and never had a problem. It showed up running on the service
list. Maybe something like Unix's set-user-id is in effect in mysqld.exe.

Now for the first time I learn of the command NET, and its options. I do
not remember seeing net start MySQL in MySQL's help when I installed it,
only that which I above described.

This is not so much an OS problem as a problem at the point where the OS's
peculiarities and a big package's nature intersect.





Re: Chain Replication QUestion

2013-05-06 Thread Richard Reina
To activate log-slave-updates do I just add log-slave-updates to the
my.cnf file?



2013/4/30, Manuel Arostegui man...@tuenti.com:
 2013/4/30 Richard Reina gatorre...@gmail.com

 I have a few slaves set up on my local network that get updates from
 my main mysql database master. I was hoping to turn one into a master
 while keeping it a slave so that I can set up a chain.  Does anyone
 know where I can find a how to or other documentation for this
 specific task?


 It is quite easy:

 Enable log-slave-updates in the slave you want to be a master.
 Do a mysqldump -e --master-data=2 and put that mysqldump in the future
 slaves. Take a look at the first lines of the mysqldump where you'll find
 the position and logfile those slaves need to start the replication from.
 You can also use xtrabackup if you like.

 Manuel.





Re: Chain Replication QUestion

2013-05-06 Thread Michael Dykman
That is correct.
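As a hedged sketch, an intermediate (relay) master's my.cnf typically needs binary logging enabled as well, since log-slave-updates only controls whether events that arrived via replication are written to an already-enabled binlog (the values below are illustrative):

```ini
[mysqld]
server-id        = 2          # must be unique among all servers (illustrative)
log-bin          = mysql-bin  # binary logging must be on for a relay master
log-slave-updates             # also log events that arrived via replication
```

A restart of mysqld is required for these options to take effect.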


On Mon, May 6, 2013 at 11:06 AM, Richard Reina gatorre...@gmail.com wrote:

 To activate log-slave-updates do I just add log-slave-updates to the
 my.cnf file?



 2013/4/30, Manuel Arostegui man...@tuenti.com:
  2013/4/30 Richard Reina gatorre...@gmail.com
 
  I have a few slaves set up on my local network that get updates from
  my main mysql database master. I was hoping to turn one into a master
  while keeping it a slave so that I can set up a chain.  Does anyone
  know where I can find a how to or other documentation for this
  specific task?
 
 
  It is quite easy:
 
  Enable log-slave-updates in the slave you want to be a master.
  Do a mysqldump -e --master-data=2 and put that mysqldump in the future
  slaves. Take a look at the first lines of the mysqldump where you'll find
  the position and logfile those slaves need to start the replication from.
  You can also use xtrabackup if you like.
 
  Manuel.
 





-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Chain Replication QUestion

2013-05-01 Thread Richard Reina
Hello Manuel,

Thank you for your reply. Could I do the following?:

1) Enable log-bin on master2 (slave that will be converted to a master)
2) Enable log-slave-updates on master2
3) Execute CHANGE MASTER to on another existing slave so that it gets
it's updates from master2 instead of master1.

Thanks for the help thus far.


2013/4/30, Manuel Arostegui man...@tuenti.com:
 2013/4/30 Richard Reina gatorre...@gmail.com

 I have a few slaves set up on my local network that get updates from
 my main mysql database master. I was hoping to turn one into a master
 while keeping it a slave so that I can set up a chain.  Does anyone
 know where I can find a how to or other documentation for this
 specific task?


 It is quite easy:

 Enable log-slave-updates in the slave you want to be a master.
 Do a mysqldump -e --master-data=2 and put that mysqldump in the future
 slaves. Take a look at the first lines of the mysqldump where you'll find
 the position and logfile those slaves need to start the replication from.
 You can also use xtrabackup if you like.

 Manuel.





RE: Chain Replication Question

2013-05-01 Thread Andrew Morgan
If you're able to use MySQL 5.6 and enable GTIDs then it gets a whole lot 
simpler as you don't need to worry about finding the correct positions in the 
binary logs. Take a look at 
http://www.mysql.com/why-mysql/white-papers/mysql-replication-high-availability/
 and http://www.mysql.com/why-mysql/white-papers/mysql-replication-tutorial/

Andrew.
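For reference, with GTIDs enabled on MySQL 5.6+ (gtid_mode=ON, log-slave-updates, enforce-gtid-consistency in my.cnf), re-pointing a slave becomes roughly the following -- a sketch only; host name and replication user are placeholders:

```sql
-- On the slave: re-point at the new master; no log file/position needed.
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST = 'master2.example.com',  -- placeholder host
    MASTER_USER = 'repl',                 -- placeholder replication account
    MASTER_PASSWORD = '...',
    MASTER_AUTO_POSITION = 1;             -- let GTIDs find the right position
START SLAVE;
```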

 -Original Message-
 From: Rick James [mailto:rja...@yahoo-inc.com]
 Sent: 01 May 2013 16:29
 To: Richard Reina; Manuel Arostegui
 Cc: mysql@lists.mysql.com
 Subject: RE: Chain Replication QUestion
 
  1) Enable log-bin on master2 (slave that will be converted to a
 master)
 That does not 'convert' it -- it makes it both a Master and a Slave (a
 Relay).
 
 The CHANGE MASTER is probably correct, but it is difficult to find the
 right spot.
 A simple way is to
 1. Stop all writes everywhere.
 2. Wait for replication to catchup everywhere.
 3. FLUSH LOGS everywhere.
 4. Now CHANGE MASTER on the Slave to the start (POS=0 or 4) of the
 freshly created binlog in the machine that is the Slave's new Master.
 5. Start writes.
 
 
  -Original Message-
  From: Richard Reina [mailto:gatorre...@gmail.com]
  Sent: Wednesday, May 01, 2013 6:00 AM
  To: Manuel Arostegui
  Cc: mysql@lists.mysql.com
  Subject: Re: Chain Replication QUestion
 
  Hello Manuel,
 
  Thank you for your reply. Could I do the following?:
 
  1) Enable log-bin on master2 (slave that will be converted to a
 master)
  2) Enable log-slave-updates on master2
  3) Execute CHANGE MASTER to on another existing slave so that it gets
  it's updates from master2 instead of master1.
 
  Thanks for the help thus far.
 
 
  2013/4/30, Manuel Arostegui man...@tuenti.com:
   2013/4/30 Richard Reina gatorre...@gmail.com
  
   I have a few slaves set up on my local network that get updates
 from
   my main mysql database master. I was hoping to turn one into a
  master
   while keeping it a slave so that I can set up a chain.  Does
 anyone
   know where I can find a how to or other documentation for this
   specific task?
  
  
   It is quite easy:
  
   Enable log-slave-updates in the slave you want to be a master.
   Do a mysqldump -e --master-data=2 and put that mysqldump in the
  future
   slaves. Take a look at the first lines of the mysqldump where
 you'll
   find the position and logfile those slaves need to start the
  replication from.
   You can also use xtrabackup if you like.
  
   Manuel.
  
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Chain Replication Question

2013-05-01 Thread Rick James
 1) Enable log-bin on master2 (slave that will be converted to a master)
That does not 'convert' it -- it makes it both a Master and a Slave (a Relay).

The CHANGE MASTER is probably correct, but it is difficult to find the right 
spot.
A simple way is to 
1. Stop all writes everywhere.
2. Wait for replication to catch up everywhere.
3. FLUSH LOGS everywhere.
4. Now CHANGE MASTER on the Slave to the start (POS=0 or 4) of the freshly 
created binlog in the machine that is the Slave's new Master.
5. Start writes.
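A sketch of what step 4 might look like on the slave being re-pointed; the binlog file name here is assumed (SHOW MASTER STATUS on the relay tells you the real one):

```sql
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST = 'master2.example.com',   -- the relay, now this slave's master
    MASTER_LOG_FILE = 'mysql-bin.000042',  -- the freshly created binlog (assumed name)
    MASTER_LOG_POS = 4;                    -- start of the file
START SLAVE;
```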


 -Original Message-
 From: Richard Reina [mailto:gatorre...@gmail.com]
 Sent: Wednesday, May 01, 2013 6:00 AM
 To: Manuel Arostegui
 Cc: mysql@lists.mysql.com
 Subject: Re: Chain Replication QUestion
 
 Hello Manuel,
 
 Thank you for your reply. Could I do the following?:
 
 1) Enable log-bin on master2 (slave that will be converted to a master)
 2) Enable log-slave-updates on master2
 3) Execute CHANGE MASTER to on another existing slave so that it gets
 it's updates from master2 instead of master1.
 
 Thanks for the help thus far.
 
 
 2013/4/30, Manuel Arostegui man...@tuenti.com:
  2013/4/30 Richard Reina gatorre...@gmail.com
 
  I have a few slaves set up on my local network that get updates from
  my main mysql database master. I was hoping to turn one into a
 master
  while keeping it a slave so that I can set up a chain.  Does anyone
  know where I can find a how to or other documentation for this
  specific task?
 
 
  It is quite easy:
 
  Enable log-slave-updates in the slave you want to be a master.
  Do a mysqldump -e --master-data=2 and put that mysqldump in the
 future
  slaves. Take a look at the first lines of the mysqldump where you'll
  find the position and logfile those slaves need to start the
 replication from.
  You can also use xtrabackup if you like.
 
  Manuel.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Chain Replication Question

2013-04-30 Thread Manuel Arostegui
2013/4/30 Richard Reina gatorre...@gmail.com

 I have a few slaves set up on my local network that get updates from
 my main mysql database master. I was hoping to turn one into a master
 while keeping it a slave so that I can set up a chain.  Does anyone
 know where I can find a how to or other documentation for this
 specific task?


It is quite easy:

Enable log-slave-updates in the slave you want to be a master.
Do a mysqldump -e --master-data=2 and put that mysqldump in the future
slaves. Take a look at the first lines of the mysqldump where you'll find
the position and logfile those slaves need to start the replication from.
You can also use xtrabackup if you like.

Manuel.
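Manuel's steps might look like this on the command line -- a sketch; note that -e here is mysqldump's --extended-insert option, and the file name is just an example:

```shell
# On the future master (the relay): take a dump that records its own
# binlog coordinates as a comment near the top (--master-data=2).
mysqldump -e --master-data=2 --all-databases > relay_dump.sql

# Find the coordinates the new slaves must start replicating from:
head -n 30 relay_dump.sql | grep 'CHANGE MASTER'

# Load the dump into each future slave, then run CHANGE MASTER there
# using the log file and position found above.
mysql < relay_dump.sql
```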


RE: Rookie question

2013-04-30 Thread Rick James
OR would not show dups.

WHERE duespaid AND cat1 OR cat2
means
WHERE (duespaid AND cat1) OR cat2
That is probably not what you wanted -- add parens like
WHERE duespaid AND (cat1 OR cat2 ...)

But...

That is not a good way to build a schema.  What will happen when you add 
category9?

Plan A:  Have another table that says which categories a user has.  There would 
be 0-8 rows in this new table for each user, one row per category the user is in.
SELECT d.* FROM directory d JOIN categories c ON d.userid = c.userid
WHERE c.category IN (1,2,3,4,5,6,7,8);

Plan B:  Use a SET as a single column for all the categories.  Then
AND (categories & x'ff') != x'00'
would check that at least one bit is on in the bottom 8 bits of that SET.  
(TINYINT UNSIGNED would work identically.  Change to SMALLINT UNSIGNED for 9-16 
categories; etc.)

There is probably a Plan C.
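Plan A's extra table might be sketched like this (table and column names are invented for illustration); note that a DISTINCT, or a semi-join via EXISTS, is needed on top of the JOIN to avoid the duplicates the OP is seeing:

```sql
-- One row per (user, category) pair; a user in cat 3 and cat 5 has two rows.
CREATE TABLE categories (
    userid   INT NOT NULL,
    category TINYINT UNSIGNED NOT NULL,
    PRIMARY KEY (userid, category)
);

-- DISTINCT collapses users who match more than one category to one row.
SELECT DISTINCT d.*
FROM directory d
JOIN categories c ON d.userid = c.userid
WHERE c.category IN (1,2,3,4,5,6,7,8);
```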

 -Original Message-
 From: Gary Smith [mailto:li...@l33t-d00d.co.uk]
 Sent: Monday, April 29, 2013 10:43 AM
 To: mysql@lists.mysql.com
 Subject: Re: Rookie question
 
 On 29/04/2013 18:29, Patrice Olivier-Wilson wrote:
  Hi all:
 
  I have a membership directory where folks can belong to more than one
 category. But all folks do not qualify for a category. So I want to
 list folks who have qualified in a category but not have them repeat.
 So if member 1 is in cat 3 and cat 5, I want their name only to show up
 once. Here's what I have so far, but it shows a member listed more than
 once.
 select distinct ?
 
 Gary
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Rookie question

2013-04-29 Thread Gary Smith

On 29/04/2013 18:29, Patrice Olivier-Wilson wrote:

Hi all:

I have a membership directory where folks can belong to more than one category, 
but not all folks qualify for a category. So I want to list folks who have 
qualified in a category, but not have them repeat. So if member 1 is in cat 3 
and cat 5, I want their name to show up only once. Here's what I have so far, 
but it shows a member listed more than once.

select distinct ?

Gary

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Update and lock question.

2013-04-08 Thread Rick James
An optimization (at least in InnoDB) is to delay updating the secondary 
index(es).  If you can provide a reproducible test case, it would probably be 
worth filing a bug at bugs.mysql.com

 -Original Message-
 From: Andrés Tello [mailto:mr.crip...@gmail.com]
 Sent: Friday, April 05, 2013 2:56 PM
 To: Urvashi Pathak
 Cc: mysql
 Subject: Re: Update and lock question.
 
 Thanks Urvashi.
 
 Based on your answer, instead of the data I looked into the index, and
 it appears that it was an index issue...
 
 I think I have nailed the wait lock contdition due a updating indexes
 unnecesarely...
 
 
 On Thu, Apr 4, 2013 at 10:52 AM, Urvashi Pathak
 urvashi_pat...@symantec.com
  wrote:
 
 
 
  Hi Andrés,
 
  Select for update makes sure that no other process can change the
 data
  between you selected it for update and then actually changed it and
  commit it.
  If you do not use select for update  then it is possible that some
  other process can change the data  in the mean time between you
  selected and  actually changes it. In this case you not see the
 result
  you actually intend to have.
 
  Innodb will only lock whole table only if there is no where clause in
  the update statement, which I sure you do not have. Innodb follows
 row
  level locking.
 
 
  -Urvi
 
  -Original Message-
  From: Andrés Tello [mailto:mr.crip...@gmail.com]
  Sent: Thursday, April 04, 2013 9:08 AM
  To: mysql
  Subject: Update and lock question.
 
  I'm doing some tests, but have a questions about locking.
 
  In a innodb table, if you issue an select for update lock for a row,
  supposedly, it only locks that row, but if you don't issue a select
  for update, and trow the update... does it locks the hole table?
 
  The update goes over an indexed field, or the effect of locking the
  hole table is due I'm updating an indexed field?
 
  This is because I'm running into dead locks, but I know there is no
  select for update to the row being updated.
 
 
  Thanks.
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Update and lock question.

2013-04-06 Thread Urvashi Pathak


Hi Andrés,

SELECT ... FOR UPDATE makes sure that no other process can change the data 
between the time you select it for update and the time you actually change and 
commit it.
If you do not use SELECT ... FOR UPDATE, it is possible that some other process 
changes the data in the meantime, between your select and your update. In that 
case you may not see the result you actually intend to have.

InnoDB will lock the whole table only if there is no WHERE clause in the update 
statement, which I am sure is not your case. InnoDB follows row-level 
locking.


-Urvi
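A minimal sketch of the pattern Urvi describes (table and column names are invented):

```sql
-- Without FOR UPDATE, another session can change the row between the
-- SELECT and the UPDATE. With FOR UPDATE inside a transaction, the
-- row lock is held until COMMIT:
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;  -- locks this row only
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
COMMIT;  -- releases the row lock
```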

-Original Message-
From: Andrés Tello [mailto:mr.crip...@gmail.com] 
Sent: Thursday, April 04, 2013 9:08 AM
To: mysql
Subject: Update and lock question.

I'm doing some tests, but have a questions about locking.

In a innodb table, if you issue an select for update lock for a row, 
supposedly, it only locks that row, but if you don't issue a select for update, 
and trow the update... does it locks the hole table?

The update goes over an indexed field, or the effect of locking the hole table 
is due I'm updating an indexed field?

This is because I'm running into dead locks, but I know there is no select for 
update to the row being updated.


Thanks.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Update and lock question.

2013-04-05 Thread Andrés Tello
I'm doing some tests, but have a question about locking.

In an InnoDB table, if you issue a SELECT ... FOR UPDATE lock for a row,
supposedly it only locks that row. But if you don't issue a SELECT ... FOR
UPDATE and just throw the UPDATE... does it lock the whole table?

The UPDATE goes over an indexed field -- or is the effect of locking the whole
table due to my updating an indexed field?

This is because I'm running into deadlocks, but I know there is no SELECT ...
FOR UPDATE on the row being updated.


Thanks.


Re: Update and lock question.

2013-04-05 Thread Andrés Tello
Thanks Urvashi.

Based on your answer, instead of the data I looked into the indexes, and it
appears that it was an index issue...

I think I have nailed down the lock-wait condition, caused by updating indexes
unnecessarily...


On Thu, Apr 4, 2013 at 10:52 AM, Urvashi Pathak urvashi_pat...@symantec.com
 wrote:



 Hi Andrés,

 Select for update makes sure that no other process can change the data
 between you selected it for update and then actually changed it and commit
 it.
 If you do not use select for update  then it is possible that some other
 process can change the data  in the mean time between you selected and
  actually changes it. In this case you not see the result you actually
 intend to have.

 Innodb will only lock whole table only if there is no where clause in the
 update statement, which I sure you do not have. Innodb follows row level
 locking.


 -Urvi

 -Original Message-
 From: Andrés Tello [mailto:mr.crip...@gmail.com]
 Sent: Thursday, April 04, 2013 9:08 AM
 To: mysql
 Subject: Update and lock question.

 I'm doing some tests, but have a questions about locking.

 In a innodb table, if you issue an select for update lock for a row,
 supposedly, it only locks that row, but if you don't issue a select for
 update, and trow the update... does it locks the hole table?

 The update goes over an indexed field, or the effect of locking the hole
 table is due I'm updating an indexed field?

 This is because I'm running into dead locks, but I know there is no select
 for update to the row being updated.


 Thanks.



Re: Blob implementation question

2013-03-13 Thread Adam Ilardi
Engine:  MyISAM?  InnoDB?  other?
InnoDB

Let's see the SELECT.
select bytes from table_name where id = %d

If InnoDB, let's see the transaction, if it is part of such.
It's a single query

If InnoDB, which (COMPACT, etc) are you using.
Not sure

You are asking about a single row with the 500MB, correct?
Yes

Thanks for the additional details


On Wed, Mar 13, 2013 at 1:00 PM, Rick James rja...@yahoo-inc.com wrote:

 A lot of details are missing...
 Engine:  MyISAM?  InnoDB?  other?
 Let's see the SELECT.
 If InnoDB, let's see the transaction, if it is part of such.
 If InnoDB, which (COMPACT, etc) are you using.
 You are asking about a single row with the 500MB, correct?

 In general, each request will ask for the same row, and will be blocked at
 some level.  The data will be fetched from disk and cached (radically
 differently, depending on the Engine).  Each request will be satisfied --
 perhaps sequentially, perhaps simultaneously.

 The resultset will need to be built at some point.  This will probably
 take up 500MB+ of extra RAM.  This might lead to swapping or running out of
 RAM.

 If the SELECT needs to build a temp table, it will be MyISAM, and it will
 be on disk.  But not all SELECTs need to build a temp table.  This, for
 example, won't:
 SELECT myblob FROM mytable WHERE id=123;
 This probably will (if foo is not indexed):
 SELECT myblob FROM mytable ORDER BY foo;

  -Original Message-
  From: Adam Ilardi [mailto:mastaskill...@gmail.com]
  Sent: Wednesday, March 13, 2013 9:16 AM
  To: mysql
  Subject: Blob implementation question
 
  Hello All,
 
  I'm trying to grok the blob implementation. This scenario is contrived
  to understand blobs please don't suggest I shouldn't do this. If you
  have a theoretical machine. Ubuntu with 4 cores/4gb of ram and we'll
  say once mysql is running 500mb of free disk space.
 
  I have a 500mb blob stored in a table and 30 concurrent requests come
  in to select the blob's bytes. How does mysql handle this situation @
  an implementation level?
 
  Would mysql buffer the blob data to the disk?
  Would mysql keep 30 large in memory buffers for the data?
 
  I'd like to know when I would be @ risk of either filling up the disk
  or running out of ram in this situation. I'm also curious as to the
  code level details about how blobs are read and transmitted to a
  client.
 
 
  Thanks,
  Adam



RE: Blob implementation question

2013-03-13 Thread Rick James
A lot of details are missing...
Engine:  MyISAM?  InnoDB?  other?
Let's see the SELECT.
If InnoDB, let's see the transaction, if it is part of such.
If InnoDB, which (COMPACT, etc) are you using.
You are asking about a single row with the 500MB, correct?

In general, each request will ask for the same row, and will be blocked at some 
level.  The data will be fetched from disk and cached (radically differently, 
depending on the Engine).  Each request will be satisfied -- perhaps 
sequentially, perhaps simultaneously.

The resultset will need to be built at some point.  This will probably take 
up 500MB+ of extra RAM.  This might lead to swapping or running out of RAM.

If the SELECT needs to build a temp table, it will be MyISAM, and it will be on 
disk.  But not all SELECTs need to build a temp table.  This, for example, 
won't:
SELECT myblob FROM mytable WHERE id=123;
This probably will (if foo is not indexed):
SELECT myblob FROM mytable ORDER BY foo;

 -Original Message-
 From: Adam Ilardi [mailto:mastaskill...@gmail.com]
 Sent: Wednesday, March 13, 2013 9:16 AM
 To: mysql
 Subject: Blob implementation question
 
 Hello All,
 
 I'm trying to grok the blob implementation. This scenario is contrived
 to understand blobs please don't suggest I shouldn't do this. If you
 have a theoretical machine. Ubuntu with 4 cores/4gb of ram and we'll
 say once mysql is running 500mb of free disk space.
 
 I have a 500mb blob stored in a table and 30 concurrent requests come
 in to select the blob's bytes. How does mysql handle this situation @
 an implementation level?
 
 Would mysql buffer the blob data to the disk?
 Would mysql keep 30 large in memory buffers for the data?
 
 I'd like to know when I would be @ risk of either filling up the disk
 or running out of ram in this situation. I'm also curious as to the
 code level details about how blobs are read and transmitted to a
 client.
 
 
 Thanks,
 Adam

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql Ver 14.12 Distrib 5.0.27 user privileges question

2013-02-27 Thread Prabhat Kumar
you need CREATE Privileges.

http://dev.mysql.com/doc/refman/5.0/en/grant.html#grant-privileges

On Wed, Feb 27, 2013 at 10:42 PM, Rajeev Prasad rp.ne...@yahoo.com wrote:

 Hello,

 Currently, on this version of MySQL, a database has been built for me to
 use, and the following privileges are given. I am not able to create a table on
 my own. What privileges do I need to create and modify tables in this database?

 mysql  Ver 14.12 Distrib 5.0.27, for sun-solaris2.9 (sparc) using
 EditLine wrapper



 mysql> show grants;

 +------------------------------------------------------------------------+
 | Grants for myuserid@%                                                  |
 +------------------------------------------------------------------------+
 | GRANT USAGE ON *.* TO 'myuserid'@'%' IDENTIFIED BY PASSWORD '*4EF5..6' |
 | GRANT SELECT, INSERT, UPDATE, DELETE ON `mydb`.* TO 'myuserid'@'%'     |
 +------------------------------------------------------------------------+
 2 rows in set (0.00 sec)

 mysql

 So the @ '%' means I can do the operations from other hosts too? using ssh.

 thank you.




-- 
Best Regards,

Prabhat Kumar
MySQL DBA

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat


Re: mysql Ver 14.12 Distrib 5.0.27 user privileges question

2013-02-27 Thread Reindl Harald
oh, sorry
i was focused on the at % means I can do the operations from other hosts too? part

Am 27.02.2013 19:00, schrieb Stillman:
 OP's first question:  I am not able to create a table on my own. what 
 privileges I need to create and modify tables in this database?
 
 The answer to that question is that he/she needs CREATE to create tables and 
 ALTER to alter them.
 
 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Wednesday, February 27, 2013 12:55 PM
 To: mysql@lists.mysql.com
 Subject: Re: mysql Ver 14.12 Distrib 5.0.27 user privileges question
 
 says who?
 
 you MAY need CREATE privileges
 but it not uncommon have a defined scheme and not allow the user to create or 
 drop tables, the user below is able to do anything for a common web-app
 
 to anser the OP's question
 
 % in mysql is the same as * for the bash so yes, % means any host
 
 Am 27.02.2013 18:38, schrieb Prabhat Kumar:
 you need CREATE Privileges.
 http://dev.mysql.com/doc/refman/5.0/en/grant.html#grant-privileges

 On Wed, Feb 27, 2013 at 10:42 PM, Rajeev Prasad rp.ne...@yahoo.com wrote:
 currently on this version of MySQL a database has been built for me
 to use. and following privileges are given: I am not able to create a
 table on my own. what privileges I need to create and modify tables in this 
 database?

 mysql  Ver 14.12 Distrib 5.0.27, for sun-solaris2.9 (sparc) using
 EditLine wrapper

 mysql show grants;

 +---+
 | Grants for myuserid@%
 |

 +---+
 | GRANT USAGE ON *.* TO 'myuserid'@'%' IDENTIFIED BY PASSWORD
 '*4EF5..6' |
 | GRANT SELECT, INSERT, UPDATE, DELETE ON `mydb`.* TO 'myuserid'@'%'
 +---+
 2 rows in set (0.00 sec)

 at % means I can do the operations from other hosts too? using ssh



signature.asc
Description: OpenPGP digital signature


Re: mysql Ver 14.12 Distrib 5.0.27 user privileges question

2013-02-27 Thread Reindl Harald
says who?

you MAY need CREATE privileges,
but it is not uncommon to have a defined schema and not allow
the user to create or drop tables; the user below is
able to do anything needed for a common web-app

to answer the OP's question:

% in MySQL is the same as * for the bash,
so yes, % means any host

Am 27.02.2013 18:38, schrieb Prabhat Kumar:
 you need CREATE Privileges.
 http://dev.mysql.com/doc/refman/5.0/en/grant.html#grant-privileges
 
 On Wed, Feb 27, 2013 at 10:42 PM, Rajeev Prasad rp.ne...@yahoo.com wrote:
 currently on this version of MySQL a database has been built for me to
 use. and following privileges are given: I am not able to create a table on
 my own. what privileges I need to create and modify tables in this database?

 mysql  Ver 14.12 Distrib 5.0.27, for sun-solaris2.9 (sparc) using
 EditLine wrapper

 mysql show grants;

 +---+
 | Grants for myuserid@%
 |

 +---+
 | GRANT USAGE ON *.* TO 'myuserid'@'%' IDENTIFIED BY PASSWORD
 '*4EF5..6' |
 | GRANT SELECT, INSERT, UPDATE, DELETE ON `mydb`.* TO 'myuserid'@'%'
 +---+
 2 rows in set (0.00 sec)

 at % means I can do the operations from other hosts too? using ssh



signature.asc
Description: OpenPGP digital signature


RE: mysql Ver 14.12 Distrib 5.0.27 user privileges question

2013-02-27 Thread Stillman, Benjamin
OP's first question:  I am not able to create a table on my own. what 
privileges I need to create and modify tables in this database?

The answer to that question is that he/she needs CREATE to create tables and 
ALTER to alter them.
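Assuming the same account as in the OP's SHOW GRANTS output, the statement might look like this (run as a user with the GRANT OPTION):

```sql
-- Adds DDL rights on the mydb schema only, on top of the existing
-- SELECT, INSERT, UPDATE, DELETE grants.
GRANT CREATE, ALTER ON `mydb`.* TO 'myuserid'@'%';
```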


-Original Message-
From: Reindl Harald [mailto:h.rei...@thelounge.net]
Sent: Wednesday, February 27, 2013 12:55 PM
To: mysql@lists.mysql.com
Subject: Re: mysql Ver 14.12 Distrib 5.0.27 user privileges question

says who?

you MAY need CREATE privileges
but it not uncommon have a defined scheme and not allow the user to create or 
drop tables, the user below is able to do anything for a common web-app

to anser the OP's question

% in mysql is the same as * for the bash so yes, % means any host

Am 27.02.2013 18:38, schrieb Prabhat Kumar:
 you need CREATE Privileges.
 http://dev.mysql.com/doc/refman/5.0/en/grant.html#grant-privileges

 On Wed, Feb 27, 2013 at 10:42 PM, Rajeev Prasad rp.ne...@yahoo.com wrote:
 currently on this version of MySQL a database has been built for me
 to use. and following privileges are given: I am not able to create a
 table on my own. what privileges I need to create and modify tables in this 
 database?

 mysql  Ver 14.12 Distrib 5.0.27, for sun-solaris2.9 (sparc) using
 EditLine wrapper

 mysql show grants;

 +---+
 | Grants for myuserid@%
 |

 +---+
 | GRANT USAGE ON *.* TO 'myuserid'@'%' IDENTIFIED BY PASSWORD
 '*4EF5..6' |
 | GRANT SELECT, INSERT, UPDATE, DELETE ON `mydb`.* TO 'myuserid'@'%'
 +---+
 2 rows in set (0.00 sec)

 at % means I can do the operations from other hosts too? using ssh




Notice: This communication may contain privileged and/or confidential 
information. If you are not the intended recipient, please notify the sender by 
email, and immediately delete the message and any attachments without copying 
or disclosing them. LBI may, for any reason, intercept, access, use, and 
disclose any information that is communicated by or through, or which is stored 
on, its networks, applications, services, and devices.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Question about Innodb

2013-02-04 Thread Rick James
Meta info about the tables is stored in ibdata1.  Hence, it is not possible to 
copy just the .ibd file to another database or machine.  5.6.x will remedy this 
with some export/import commands that do not involve reading/writing the rows 
individually.  (Ditto for moving partitions.)

(Sorry, I don't know the exact fields in ibdata1.) 
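The 5.6 export/import commands Rick alludes to are transportable tablespaces -- roughly the following sketch, assuming the table is named mytable and the same table definition already exists on the destination:

```sql
-- On the source server:
FLUSH TABLES mytable FOR EXPORT;   -- quiesces the table, writes a .cfg file
-- copy mytable.ibd and mytable.cfg out of the database directory, then:
UNLOCK TABLES;

-- On the destination server:
ALTER TABLE mytable DISCARD TABLESPACE;
-- place the copied .ibd (and .cfg) into the database directory, then:
ALTER TABLE mytable IMPORT TABLESPACE;
```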

 -Original Message-
 From: Wayne Leutwyler [mailto:wleut...@columbus.rr.com]
 Sent: Monday, February 04, 2013 11:47 AM
 To: mysql@lists.mysql.com
 Subject: Question about Innodb
 
 Question about InnoDB tables and tablespaces.
 
 I have one file per table turned on. Its my understanding that even
 with one file per table turned on, that data is updated in the default
 system ibdata files. What type of data is stored in the ibdata files?
 
 Thanks,
 
 Walter Wayne Leutwyler, RHCT
 Sr. MySQL Database Administrator
 Mobile: 614 519 5672
 Office: 614 889 4956
 E-mail: wayne.leutwy...@gmail.com
 E-mail: wleut...@columbus.rr.com
 Website: http://penguin-workshop.dyndns.org
 
 Courage is being scared to death, but saddling up anyway. --John
 Wayne


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Relay log Question

2013-01-17 Thread Nitin Mehta
Hi,

There was a sort-of bug which was fixed in MySQL 5.5 by the replication 
heartbeat. Before the replication heartbeat existed, a new relay log file would 
be created after every slave_net_timeout. It doesn't have any negative impact, 
though.

Hope that helps.



 From: Akshay Suryavanshi akshay.suryavansh...@gmail.com
To: Wayne Leutwyler wleut...@columbus.rr.com 
Cc: mysql@lists.mysql.com 
Sent: Wednesday, January 9, 2013 1:42 AM
Subject: Re: Relay log Question
 
Also, you may want to check whether a new file is really created every hour
exactly -- for example, if any cron'd script runs which executes FLUSH LOGS on
the slave server. That will also rotate the relay log.

Cheers

On Wed, Jan 9, 2013 at 1:35 AM, Akshay Suryavanshi 
akshay.suryavansh...@gmail.com wrote:

 Hi,

 Please re-phrase your question. The relay logs are created as and when
 required by the Slave_SQL thread. Once all the events in the relay logs are
 executed the relay log would be purged by the Slave_SQL thread.

 By setting relay_log_purge=0 you are disabling this automatic purge
 option. So the new relay log files will be created however the older ones
 would not be deleted.

 Creation or rotation of relay or binary logs is not time-based, for some
 situations it might create in one hour, however for many others it might
 create after 1 day, week, or even months.

 Usually the relay logs in your case should be of definite size, check out
 this setting max_relay_log_size and max_binlog_size, the latter would
 come into picture if the prior one is disabled.

 Thanks,
 Akshay S


 On Wed, Jan 9, 2013 at 1:21 AM, Wayne Leutwyler 
 wleut...@columbus.rr.comwrote:

 Hello List,

 Quick question. I am running MySQL 5.1.66, and I noticed that every hour,
 my slave is creating a new relay log. I added this:

 relay_log_purge = 0

 to the my.cnf and it did not solve the issue.

 What am I missing? Never seen this issue before.

 Thanks,

 Walter

 Courage is being scared to death, but saddling up anyway. --John Wayne




Re: Relay log Question

2013-01-08 Thread Akshay Suryavanshi
Hi,

Please re-phrase your question. The relay logs are created as and when
required by the Slave_SQL thread. Once all the events in the relay logs are
executed the relay log would be purged by the Slave_SQL thread.

By setting relay_log_purge=0 you are disabling this automatic purge option.
So the new relay log files will be created however the older ones would not
be deleted.

Creation or rotation of relay or binary logs is not time-based; in some
situations a new one might be created within an hour, while in others it might
be created after a day, a week, or even months.

Usually the relay logs in your case should be of a definite size; check out
the settings max_relay_log_size and max_binlog_size -- the latter comes
into the picture if the former is disabled.

Thanks,
Akshay S
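The settings mentioned above would live in my.cnf on the slave -- a sketch, with example values only:

```ini
[mysqld]
relay_log_purge     = 0      # keep relay logs after their events are executed
max_relay_log_size  = 0      # 0 = fall back to max_binlog_size for rotation
max_binlog_size     = 1G     # rotation size used when max_relay_log_size is 0
```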

On Wed, Jan 9, 2013 at 1:21 AM, Wayne Leutwyler wleut...@columbus.rr.comwrote:

 Hello List,

 Quick question. I am running MySQL 5.1.66, and I noticed that every hour,
 my slave is creating a new relay log. I added this:

 relay_log_purge = 0

 to the my.cnf and it did not solve the issue.

 What am I missing? Never seen this issue before.

 Thanks,

 Walter

 Courage is being scared to death, but saddling up anyway. --John Wayne



