Re: MySQL server has gone away

2017-04-03 Thread Johnny Withers
I'd suspect the underlying query is poorly designed for the amount of data
you have stored. If you have access to the mysql server you could connect
to it using any mysql client and run 'show full processlist' to see the
query while that page is trying to load. The query is probably in the 'Sending
data' state and gets killed when it hits the limit of one of the timeout
variables, which causes the application to throw the error you are seeing.

Copy that query out, prefix it with EXPLAIN, and see whether any tables in
the query could benefit from an added index.
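To make the diagnosis concrete, here is a self-contained sketch of the
before/after effect of adding an index, using SQLite's EXPLAIN QUERY PLAN as a
stand-in for MySQL's EXPLAIN (the table and data are invented for
illustration):

```python
import sqlite3

# Invented stand-in for the slow query's table; SQLite's EXPLAIN QUERY PLAN
# plays the role of MySQL's EXPLAIN here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mdl_sessions (id INTEGER PRIMARY KEY, sid TEXT)")
conn.executemany("INSERT INTO mdl_sessions (sid) VALUES (?)",
                 [("sid%d" % i,) for i in range(1000)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes an access step.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM mdl_sessions WHERE sid = 'sid42'"
before = plan(query)   # full table scan: no index on sid yet
conn.execute("CREATE INDEX idx_sid ON mdl_sessions (sid)")
after = plan(query)    # the new index is used instead
print(before)
print(after)
```

The same workflow applies to the real MySQL server: run EXPLAIN, look for
full scans, add the index, and run EXPLAIN again.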



On Mon, Apr 3, 2017 at 6:09 AM, Ken D'Ambrosio <k...@jots.org> wrote:

> Basically, it says that MySQL is not responding to queries. So it likely
> has died, or perhaps is misconfigured.
>
> On April 3, 2017 7:07:25 AM EDT, Mahmood N <nt_mahm...@yahoo.com> wrote:
> >Hi,I am using Moodle which itself uses SQL for the database. Problem is
> >that, when I run the email plugin and execute the command, the refresh
> >time of the page becomes high (in the order of 3-5 minutes) and at the
> >end, I see this message
> >Debug info: MySQL server has gone away
> >SELECT id, sid, state, userid, lastip, timecreated, timemodified FROM
> >mdl_sessions WHERE sid = ?
> >[array (
> > 0 => 'jqfbgd5b0q6e2l81bb5gb87mn3',
> >)]
> >Error code: dmlreadexception
> >Stack trace:
> >  - line 479 of /lib/dml/moodle_database.php: dml_read_exception thrown
> >- line 1175 of /lib/dml/mysqli_native_moodle_database.php: call to
> >moodle_database->query_end()
> >- line 1551 of /lib/dml/moodle_database.php: call to
> >mysqli_native_moodle_database->get_records_sql()
> >- line 1523 of /lib/dml/moodle_database.php: call to
> >moodle_database->get_record_sql()
> >- line 1502 of /lib/dml/moodle_database.php: call to
> >moodle_database->get_record_select()
> >- line 286 of /lib/classes/session/manager.php: call to
> >moodle_database->get_record()
> >- line 82 of /lib/classes/session/manager.php: call to
> >core\session\manager::initialise_user_session()
> >   - line 785 of /lib/setup.php: call to core\session\manager::start()
> >   - line 27 of /config.php: call to require_once()
> >   - line 30 of /index.php: call to require_once()
> >
> >
> >Although it looks like a bug in Moodle, the Moodle developers said it is a
> >MySQL issue. I am confused about that. If you have any idea, please let me
> >know. What exactly does this error mean?
> >
> > Regards,
> >Mahmood
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: using alias in where clause

2016-01-25 Thread Johnny Withers
On Mon, Jan 25, 2016 at 9:32 PM, Larry Martell <larry.mart...@gmail.com>
wrote:

> On Mon, Jan 25, 2016 at 8:26 PM, Johnny Withers <joh...@pixelated.net>
> wrote:
> > You should probably turn this into a UNION and put an index on each
> column:
> >
> > SELECT f_tag_ch_y_bottom AS ftag FROM data_cst WHERE f_tag_ch_y_bottom =
> > 'E-CD7'
> > UNION ALL
> > SELECT f_tag_ch_x_bottom AS ftag FROM data_cst WHERE f_tag_ch_x_bottom =
> > 'E-CD7'
> > UNION ALL
> > SELECT f_tag_bottom_minor_axis AS ftag FROM data_cst WHERE
> > f_tag_bottom_minor_axis = 'E-CD7'
> > UNION ALL
> > SELECT f_tag_bottom_major_axis AS ftag FROM data_cst WHERE
> > f_tag_bottom_major_axis = 'E-CD7'
> > UNION ALL
> > SELECT f_tag_bottom AS ftag FROM data_cst WHERE f_tag_bottom = 'E-CD7'
> > ;
>
> This may work for me, but I need to explore this more tomorrow. I need
> the select to control the rows included in the aggregation. For
> example, the rows where the f_tag_* col that is used does not =
> 'E-CD7' should not be included in the aggregation. Also, I grossly
> simplified the query for this post. In reality I have 15 items in the
> where clause and a having as well.
>

I see, I missed the STDDEV() function you had. Perhaps you could add that
column to each SELECT in the union, then wrap the entire union inside
another select:

SELECT ftag, STDDEV(ch_x_top) FROM (
..union stuff here...
) dt  -- MySQL requires an alias on the derived table


>
> > Doing this any other way will prevent index usage and a full table scan
> will
> > be required.
>
> Yes, I will be adding indices - right now I am just worried about
> getting the query to work. But this is important as it's a big table.
> So I do appreciate you mentioning it.
>

In reality, you could repeat the IFNULL(...) in the where clause the same
way you have it in the column list. Not the optimal solution but it'd work
for a proof of concept.


>
> > Is there a possibility of more than one column matching? How would you
> > handle that?
>
> I was told only 1 of the 5 will be populated and the other 4 will be
> null. But still, I said I have to code for the case where that is not
> true. So then I was told to use the first one I find that is not null,
> looking in the order I had in my original post.
>

You could also wrap another select around the union to handle more than one
of the columns having a value. You could use the outer select to pick the
one you wanted, something similar to:

SELECT
IFNULL(f_tag_ch_y_bottom, IFNULL(f_tag_ch_x_bottom, IFNULL(..., ...))) AS ftag,
STDDEV(ch_x_top)
FROM (
SELECT f_tag_ch_y_bottom, NULL AS f_tag_ch_x_bottom, NULL AS
f_tag_bottom_minor, ..., ch_x_top FROM data_cst WHERE f_tag_ch_y_bottom =
'E-CD7'
UNION ALL
SELECT NULL AS f_tag_ch_y_bottom, f_tag_ch_x_bottom, NULL AS
f_tag_bottom_minor, ..., ch_x_top FROM data_cst WHERE f_tag_ch_x_bottom
= 'E-CD7'
UNION ALL
...
) dt

And so on for each column/query.
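As a runnable sanity check of the idea, here is the same pattern in miniature,
sketched with SQLite and only two of the five f_tag_* columns; COALESCE stands
in for the nested IFNULLs, and the table and data are invented:

```python
import sqlite3

# Each UNION ALL branch matches on one column and pads the others with NULL;
# the outer select collapses them back into a single ftag column.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE data_cst (
    f_tag_ch_y_bottom TEXT, f_tag_ch_x_bottom TEXT, ch_x_top REAL)""")
conn.executemany(
    "INSERT INTO data_cst VALUES (?, ?, ?)",
    [("E-CD7", None, 1.5),   # matches on the first column
     (None, "E-CD7", 2.5),   # matches on the second column
     (None, "OTHER", 9.9)])  # matches neither, so it is excluded

rows = conn.execute("""
    SELECT COALESCE(f_tag_ch_y_bottom, f_tag_ch_x_bottom) AS ftag, ch_x_top
    FROM (
        SELECT f_tag_ch_y_bottom, NULL AS f_tag_ch_x_bottom, ch_x_top
        FROM data_cst WHERE f_tag_ch_y_bottom = 'E-CD7'
        UNION ALL
        SELECT NULL, f_tag_ch_x_bottom, ch_x_top
        FROM data_cst WHERE f_tag_ch_x_bottom = 'E-CD7'
    ) dt
""").fetchall()
print(sorted(rows))  # [('E-CD7', 1.5), ('E-CD7', 2.5)]
```

Because each branch filters on a single column, each one can use its own
index, which is the point of the rewrite.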


>
>
> > On Mon, Jan 25, 2016 at 6:16 PM, Larry Martell <larry.mart...@gmail.com>
> > wrote:
> >>
> >> I know I cannot use an alias in a where clause, but I am trying to
> >> figure out how to achieve what I need.
> >>
> >> If I could have an alias in a where clause my sql would look like this:
> >>
> >> SELECT IFNULL(f_tag_bottom,
> >> IFNULL(f_tag_bottom_major_axis,
> >>  IFNULL(f_tag_bottom_minor_axis,
> >>   IFNULL(f_tag_ch_x_bottom,
> >>IFNULL(f_tag_ch_y_bottom, NULL))))) AS ftag,
> >>STDDEV(ch_x_top)
> >> FROM data_cst
> >> WHERE ftag = 'E-CD7'
> >> GROUP BY wafer_id, lot_id
> >>
> >> But I can't use ftag in the where clause. I can't put it in a having
> >> clause, as that would exclude the already aggregated rows and I want
> >> to filter them before the aggregation. Anyone have any idea how I can
> >> do this?
> >>
> >> --
> >> MySQL General Mailing List
> >> For list archives: http://lists.mysql.com/mysql
> >> To unsubscribe:http://lists.mysql.com/mysql
> >>
> >
> >
> >
> > --
> > -
> > Johnny Withers
> > 601.209.4985
> > joh...@pixelated.net
>



-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: using alias in where clause

2016-01-25 Thread Johnny Withers
You should probably turn this into a UNION and put an index on each column:

SELECT f_tag_ch_y_bottom AS ftag FROM data_cst WHERE f_tag_ch_y_bottom =
'E-CD7'
UNION ALL
SELECT f_tag_ch_x_bottom AS ftag FROM data_cst WHERE f_tag_ch_x_bottom =
'E-CD7'
UNION ALL
SELECT f_tag_bottom_minor_axis AS ftag FROM data_cst WHERE
f_tag_bottom_minor_axis = 'E-CD7'
UNION ALL
SELECT f_tag_bottom_major_axis AS ftag FROM data_cst WHERE
f_tag_bottom_major_axis = 'E-CD7'
UNION ALL
SELECT f_tag_bottom AS ftag FROM data_cst WHERE f_tag_bottom = 'E-CD7'
;

Doing this any other way will prevent index usage and a full table scan
will be required.

Is there a possibility of more than one column matching? How would you
handle that?


On Mon, Jan 25, 2016 at 6:16 PM, Larry Martell <larry.mart...@gmail.com>
wrote:

> I know I cannot use an alias in a where clause, but I am trying to
> figure out how to achieve what I need.
>
> If I could have an alias in a where clause my sql would look like this:
>
> SELECT IFNULL(f_tag_bottom,
> IFNULL(f_tag_bottom_major_axis,
>  IFNULL(f_tag_bottom_minor_axis,
>   IFNULL(f_tag_ch_x_bottom,
>IFNULL(f_tag_ch_y_bottom, NULL))))) AS ftag,
>STDDEV(ch_x_top)
> FROM data_cst
> WHERE ftag = 'E-CD7'
> GROUP BY wafer_id, lot_id
>
> But I can't use ftag in the where clause. I can't put it in a having
> clause, as that would exclude the already aggregated rows and I want
> to filter them before the aggregation. Anyone have any idea how I can
> do this?
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql
>
>


-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Sequence Numbering

2015-06-30 Thread Johnny Withers
Found the problem here: someone had created ON INSERT and ON UPDATE
triggers on this table that were interfering with the process.

This logic works wonderfully when you need sequence numbers that are also
transaction safe. The key is that you need a unique index on all the columns
involved.

I think MyISAM tables support a multi-column auto-increment primary key;
this approach gives you the same behavior in InnoDB.

 On Jun 30, 2015 8:37 AM, Bob Eby eby...@gmail.com wrote:

  If the return value of the UPDATE stmt is zero, the following stmt is
 run:

 By return value I assume you mean rows affected ?

  This is working great, and has been for many years; however,
  today I noticed it was not working on a particular MySQL server.

 By not working what exactly is the failure mode?

 Not getting 0 rows affected?  0 rows affected not triggering
 subsequent INSERT? Something else?

 Note: It would make me uneasy to use LAST_INSERT_ID()
 where there is no AUTO_INCREMENT column myself...

 Good luck,
 Robert



Re: Sequence Numbering

2015-06-29 Thread Johnny Withers
I should add:

By working I mean:

The first time the UPDATE stmt runs, it returns a zero value for the
seq_id.
The INSERT stmt then initializes it to 1000.
The second time the UPDATE stmt runs, it returns 1001.
The third time, 1002, and so on.

When it doesn't work, the second time the code is run you get a
duplicate-key error for an attempt to insert 1000 again.
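For reference, the update-then-seed pattern being described can be sketched
like this. SQLite is used so the example is self-contained; MySQL's
LAST_INSERT_ID(expr) trick is replaced by re-reading the row, and the schema
is simplified (seq_id is dropped from the unique key):

```python
import sqlite3

# UPDATE the counter row first; if nothing matched, seed the sequence.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE store_seq (
    loan_model_id INTEGER, seq_type_id INTEGER, store_id INTEGER,
    seq_id INTEGER, UNIQUE (loan_model_id, seq_type_id, store_id))""")

def next_seq(model, typ, store):
    cur = conn.execute(
        """UPDATE store_seq SET seq_id = seq_id + 1
           WHERE loan_model_id=? AND seq_type_id=? AND store_id=?""",
        (model, typ, store))
    if cur.rowcount == 0:  # no counter row yet: seed the sequence at 1000
        conn.execute("INSERT INTO store_seq VALUES (?, ?, ?, 1000)",
                     (model, typ, store))
    return conn.execute(
        """SELECT seq_id FROM store_seq
           WHERE loan_model_id=? AND seq_type_id=? AND store_id=?""",
        (model, typ, store)).fetchone()[0]

seqs = [next_seq(224, 2, 179) for _ in range(3)]
print(seqs)  # [1000, 1001, 1002]
```

On MySQL, the UPDATE-plus-LAST_INSERT_ID() form avoids the re-read and stays
safe under concurrent transactions because the UPDATE locks the counter row.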



On Mon, Jun 29, 2015 at 7:03 PM, Johnny Withers joh...@pixelated.net
wrote:

 Hello all,

 I have a table defined:

 CREATE TABLE `store_seq` (
   `seq_type_id` smallint(3) unsigned NOT NULL DEFAULT '0',
   `store_id` int(10) unsigned NOT NULL DEFAULT '0',
   `loan_model_id` int(10) unsigned NOT NULL DEFAULT '0',
   `store_bank_id` int(10) unsigned NOT NULL DEFAULT '0',
   `seq_id` int(10) unsigned NOT NULL DEFAULT '0',
   `check_format` varchar(50) DEFAULT NULL,
   UNIQUE KEY `idx_contract`
 (`loan_model_id`,`seq_type_id`,`store_id`,`seq_id`,`store_bank_id`),
   KEY `idx_chk`
 (`store_bank_id`,`seq_type_id`,`store_id`,`seq_id`,`loan_model_id`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1

 The following SQL is used to generate and initialize the sequence number
 for the idx_contract key:

 UPDATE store_seq SET seq_id=LAST_INSERT_ID(seq_id+1) WHERE
 loan_model_id=224 AND seq_type_id=2 AND store_id=179;

 If the return value of the UPDATE stmt is zero, the following stmt is run:

 INSERT INTO store_seq(seq_type_id,store_id,loan_model_id,seq_id)
 VALUES(2,179,224,1000)

 This is working great, and has been for many years; however, today I
 noticed it was not working on a particular MySQL server.

 The server where I have verified it as working is:
 5.5.42-log

 The server where it is not working is:
 5.5.32-enterprise-commercial-advanced

 This same code is used on about 10 other mysql servers where it is working
 fine. I have verified the table schema is the same on both mysql servers.
 Anyone have any insight as to why it may not be working on that one server?

 Thanks,


 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Sequence Numbering

2015-06-29 Thread Johnny Withers
Hello all,

I have a table defined:

CREATE TABLE `store_seq` (
  `seq_type_id` smallint(3) unsigned NOT NULL DEFAULT '0',
  `store_id` int(10) unsigned NOT NULL DEFAULT '0',
  `loan_model_id` int(10) unsigned NOT NULL DEFAULT '0',
  `store_bank_id` int(10) unsigned NOT NULL DEFAULT '0',
  `seq_id` int(10) unsigned NOT NULL DEFAULT '0',
  `check_format` varchar(50) DEFAULT NULL,
  UNIQUE KEY `idx_contract`
(`loan_model_id`,`seq_type_id`,`store_id`,`seq_id`,`store_bank_id`),
  KEY `idx_chk`
(`store_bank_id`,`seq_type_id`,`store_id`,`seq_id`,`loan_model_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

The following SQL is used to generate and initialize the sequence number
for the idx_contract key:

UPDATE store_seq SET seq_id=LAST_INSERT_ID(seq_id+1) WHERE
loan_model_id=224 AND seq_type_id=2 AND store_id=179;

If the return value of the UPDATE stmt is zero, the following stmt is run:

INSERT INTO store_seq(seq_type_id,store_id,loan_model_id,seq_id)
VALUES(2,179,224,1000)

This is working great, and has been for many years; however, today I
noticed it was not working on a particular MySQL server.

The server where I have verified it as working is:
5.5.42-log

The server where it is not working is:
5.5.32-enterprise-commercial-advanced

This same code is used on about 10 other mysql servers where it is working
fine. I have verified the table schema is the same on both mysql servers.
Anyone have any insight as to why it may not be working on that one server?

Thanks,


-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: SUPER privilege required for simple update

2015-05-09 Thread Johnny Withers
Is there an UPDATE trigger defined on the table? It could be doing
something that requires the SUPER privilege.
On May 9, 2015 3:12 AM, Peter Abplanalp pabplan...@accucode.com wrote:

 hello,

 I have a replication client where replication stopped because mysql said
 that the SUPER privilege was required for an update statement.  I tried
 running the same update under a normal user and it also failed saying the
 SUPER privilege was required.  I thought the SUPER privilege was only
 necessary for admin tasks.  Why is it being required for a simple table
 update statement?

 Interestingly, I am able to insert a new row into the table; however, I get
 the same error when I then try to update the new row.  I have restarted
 mysql hoping that might fix the issue, no luck.  I've also checked the file
 permissions on table files on the file system and they are all owned by the
 same user which is running mysql.  I've tried to create another test table
 and was successful and could also insert and update that table with no
 issues.  I am also able to insert and update other tables but I have not
 tried them all.

 I am at a loss.  Any and all help appreciated.

 thanks,

 -peter



Re: Replication problem

2014-08-30 Thread Johnny Withers
There's a duplicate key in the audit table: 18699. Delete that row on the
slave and restart replication (START SLAVE). Then check the slave status
again; there may be more duplicated rows in there.

You might want to compare the row to master to ensure it's a duplicate
before deleting from slave.
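Concretely, the recovery steps above amount to something like the following
on the slave; the audit table's primary-key column name is an assumption
here, so check SHOW CREATE TABLE first:

```sql
-- On the slave, after confirming the row is a true duplicate of the master's:
DELETE FROM Information_server.audit WHERE id = 18699;  -- PK column assumed
START SLAVE;
SHOW SLAVE STATUS\G  -- Last_SQL_Errno should be 0, or reveal the next duplicate
```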
 On Aug 30, 2014 7:52 AM, william drescher will...@techservsys.com
wrote:

 On 8/29/2014 7:40 PM, Suresh Kuna wrote:

 You can paste the show slave status\G here for us to review  and on
 Master,
 show global variables like 'log-bin%'; show master status ( 3 to 4 times
 continuously )

 after a more complex transaction:
 +------------------+----------+--------------+------------------+
 | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
 +------------------+----------+--------------+------------------+
 | mysql-bin.001225 |     5952 |              |                  |
 +------------------+----------+--------------+------------------+

 *** 1. row ***
Slave_IO_State: Waiting for master to send event
   Master_Host: 192.168.10.101
   Master_User: repl
   Master_Port: 3306
 Connect_Retry: 60
   Master_Log_File: mysql-bin.001225
   Read_Master_Log_Pos: 5952
Relay_Log_File: ubuntu2-relay-bin.02
 Relay_Log_Pos: 16426
 Relay_Master_Log_File: mysql-bin.001139
  Slave_IO_Running: Yes
 Slave_SQL_Running: No
   Replicate_Do_DB:
   Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
   Replicate_Wild_Do_Table:
   Replicate_Wild_Ignore_Table:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '18699' for key
 'PRIMARY'' on query. Default database: 'Information_server'. Query: 'insert
 into audit  (audit_type, subtype, user_Id, ptNum, data) values ('login',
 'login', 'WPD', 0, 'Login: WPD from location: Suite B')'
  Skip_Counter: 0
   Exec_Master_Log_Pos: 90012430
   Relay_Log_Space: 29733455
   Until_Condition: None
Until_Log_File:
 Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
   Master_SSL_Cert:
 Master_SSL_Cipher:
Master_SSL_Key:
 Seconds_Behind_Master: NULL
 Master_SSL_Verify_Server_Cert: No
 Last_IO_Errno: 0
 Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Error 'Duplicate entry '18699' for key
 'PRIMARY'' on query. Default database: 'Information_server'. Query: 'insert
 into audit  (audit_type, subtype, user_Id, ptNum, data) values ('login',
 'login', 'WPD', 0, 'Login: WPD from location: Suite B')'
   Replicate_Ignore_Server_Ids:
  Master_Server_Id: 3



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Concurrent read performance problems

2013-08-11 Thread Johnny Withers
Just because it says filesort doesn't mean it'll create a file on disk.
The table schema and full query would be helpful here, too:

http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
On Aug 11, 2013 1:28 PM, Brad Heller b...@cloudability.com wrote:

 Yes sorry, here's the explain. It was taken from MariaDB 5.5.32. Looks like
 there is a lot of filesort goin' on here. Also note that I'm only using the
 first two fields of the covering index (intentionally).


 id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra
 ---+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+-----------------------------------------------------------
  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort
  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort
  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where


 *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 | Skype:
 brad.heller | @bradhe http://www.twitter.com/bradhe |
 @cloudability http://www.twitter.com/cloudability


 On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman vegiv...@tuxera.be
 wrote:

  On my phone now, but it smells of file-based sorting, making disk access
  the bottleneck. Can you provide the explain?
 
 
  Brad Heller b...@cloudability.com wrote:
 
  Hey list, first time posting here, so apologies if this is the wrong forum
  for this, but I'm really out of options on how to solve this problem!
 
  *Short version:*
 
 
  1. High concurrent reads, performing the same well-indexed query type to
  the same two tables.
  2. No additional traffic at all--just reads from these two tables. No
  writes anywhere.
  3. Very fast (sub-second) when server is only servicing one request at a
  time.
  4. Very slow (30s+) when server is servicing 2-3 requests at a time.
 
  Questions:
 
  1. What resource is under contention for servicing read-only queries if
 you
  have a large buffer pool and a plenty-big thread cache?
  2. What parameters can I tune to increase concurrent reads to these two
  tables?
 
  *Long version:*
 
 
  I've got a MySQL server that has only about 50 connections open to it at
  any given time. It basically only has one OLAP query type being run
  against it, which amounts to something like this:
 
  SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
  FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;
 
  These queries are well indexed and run very well
 individually--sub-second,
  usually even faster. When I run a few of these queries simultaneously
 (2-3
  on my laptop, 6-7 on our production boxes) performance grinds to a halt:
  Consistently about 30 seconds to service a query.
 
  Ideally, I'd like to be able to run 50+ of these queries concurrently.
 
  I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results on
 both
  machines. I've tried tweaking the following my.cnf parameters to be
 higher:
 
  thread_concurrency = 20
  thread_cache_size = 340
  innodb_buffer_pool_size=2G
 
  A few other my.cnf parameters that I have set:
 
  innodb_file_format=Barracuda
  innodb_file_format_max=Barracuda
  innodb_file_per_table=1
  skip-external-locking
  innodb_log_files_in_group=2
  innodb_log_file_size=2000M
  max_allowed_packet=64M
 
  Thanks in advance,
 
  Brad Heller
 
 
  --
  Sent from Kaiten Mail. Please excuse my brevity.
 



Re: Slow Response -- What Does This Sound Like to You?

2013-05-09 Thread Johnny Withers
I dealt with a similar situation where users complained the system would
freeze up for 30-60 seconds at random intervals. After days of looking at
queries, logs, error logs, etc., we were no closer to finding a solution.
We have a service that runs every 15 minutes to cache some data in our
system; in one app it creates some temporary tables. In this app the tables
were not being created as MEMORY tables. Since we also use connection
pooling, the temporary tables created every 15 minutes were not dropped when
the task completed. When the connection was finally closed there were a lot
of temporary tables to drop, and the MySQL server would hang while this
happened. Changing to MEMORY tables solved the problem.
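For reference, the fix amounted to declaring the scratch tables as in-memory
temporaries; a minimal sketch, with the table name and columns invented for
illustration:

```sql
-- Create the per-connection scratch table as a MEMORY temporary table,
-- so there is little to tear down when a pooled connection finally closes.
CREATE TEMPORARY TABLE cache_scratch (
    id INT UNSIGNED NOT NULL,
    payload VARCHAR(255)
) ENGINE=MEMORY;
```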

Might not be your issue but it reminded me of this.

On Thursday, May 9, 2013, Robinson, Eric wrote:

 We have a situation where users complain that the system periodically
 freezes for 30-90 seconds. We check the slow query logs and find that one
 user issued a complex query that did indeed take 30-90 seconds to complete.
 However, NO slow queries are recorded for the other 50 users, before,
 during, or after the freeze. Note that the complex query in question always
 shows: Lock_time: 0.

 Q: What conditions could cause single query to lock up a database for a
 while for all users (even though it shows lock time: 0)  but no other
 slow queries would show in the logs for any other users who are hitting the
 database at the same time?

 OS: RHEL3 x64
 CPU: 8 x 2.9GHz Xeon
 RAM: 32GB
 Disk: RAID 5 (6 x 512GB SSD)
 MySQL: 5.0.95 x64
 Engine: MyISAM

 --
 Eric Robinson










-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6

2013-02-15 Thread Johnny Withers
;
 
  DELETE t1 AS a2 FROM t1 AS a1 INNER JOIN t2 AS a2;
 
  Correct:
 
 
 
  DELETE FROM t1 USING t1 AS a1 INNER JOIN t2 AS a2;
 
  DELETE t1 FROM t1 AS a1 INNER JOIN t2 AS a2;
 
  §  Previously, for alias references in the list of tables from which
  to
  delete rows in a multiple-table delete, the default database is used
  unless
  one is specified explicitly. For example, if the default database is
  db1,
  the following statement does not work because the unqualified alias
  reference a2 is interpreted as having a database of db1:
 
  §
 
  §  DELETE a1, a2 FROM db1.t1 AS a1 INNER JOIN db2.t2 AS a2
 
  WHERE a1.id=a2.id;
 
  To correctly match an alias that refers to a table outside the
  default
  database, you must explicitly qualify the reference with the name of
  the
  proper database:
 
 
 
  DELETE a1, db2.a2 FROM db1.t1 AS a1 INNER JOIN db2.t2 AS a2
 
  WHERE a1.id=a2.id;
 
  Now alias resolution does not require qualification and alias
  references
  should not be qualified with the database name. Qualified names are
  interpreted as referring to tables, not aliases.
 
  Statements containing alias constructs that are no longer permitted
  must
  be
  rewritten. (Bug #27525)
 
 
 
 
 
  On Thu, Feb 14, 2013 at 6:11 PM, Rick James rja...@yahoo-inc.com
  wrote:
 
  Singer, do you have some examples?
 
 
  -Original Message-
  From: Singer Wang [mailto:w...@singerwang.com]
  Sent: Thursday, February 14, 2013 2:59 PM
  To: Mihail Manolov
  Cc: Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
   Subject: Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
 
 
  There are queries that works with 5.1/5.0 that do not work with
 5.5,
  I
  would test extensively..
 
  S
 
 
  On Thu, Feb 14, 2013 at 5:22 PM, Mihail Manolov 
  mihail.mano...@liquidation.com wrote:
 
  You could jump from 5.0 directly to 5.5 and skip 5.1. I have
  without
  any issues. There are some configuration file change, which you
  may
  want to consider checking. I definitely recommend upgrading your
  development servers for an extensive testing. Some queries _may_
  run
  slower or not work at all and you may have to rearrange how you
  join
  tables in your queries.
 
  The upgrade from 5.5 to 5.6 should me smoother, though.
 
 
  On Feb 14, 2013, at 4:28 PM, Mike Franon wrote:
 
  Great thanks for the info, I guess the best way to do this is
  take
  a
  spare server, set it up with our standard setup, and then start
  the
  upgrade as you said 5.0 - 5.1 - 5.5, test and then upgrade to
  5.6
  and test.




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Database lock on mysql import - Information

2012-09-10 Thread Johnny Withers
When you say locked, do queries on the other databases fail with an
error? If so, what's the error? Is it all queries, or just inserts?

Also, how are you doing your export and import?

Sent from my iPad

On Sep 10, 2012, at 2:38 AM, Roland RoLaNd r_o_l_a_...@hotmail.com wrote:


 Dear all,

 I realize this is a very newbie question so bear with me please.

 I know that when you import/export a DB, its tables are locked to ensure
 consistency and no data corruption.
 But why would other DBs on the same server get locked if I'm
 importing/exporting one DB? In other words:

 Exporting/importing DB X
 will lock every other DB on the server until it's done.

 And is there another way to move a DB from X to N on the same server using
 rsync/cp or something of that sort (without using mysqldump, which would
 lock DBs)?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: update doesn't

2012-08-19 Thread Johnny Withers
The client indicates warnings after the update. Issue SHOW WARNINGS
immediately after the UPDATE.
On Aug 19, 2012 11:19 AM, william drescher will...@techservsys.com
wrote:

 On 8/17/2012 12:13 PM, Rik Wasmus wrote:

 I get 1 row affected, but the status does not change when I look
 at the row.

 If I set it to 'X' it does change.

 To make it even more wacky, if I (using phpMyAdmin) change it to
 'H' it will change and the row is shown change, but when I go to
 examine the row (using the pencil icon=Edit) it changes back to 'W'.

 Either there is something really strange or my mysql is possessed.

 I am using Server version: 5.1.63-0ubuntu0.10.04.

 Anyone have any thoughts about this or suggestions on how to
 debug it?


  1) One thing that _could_ do this is a trigger. Does SHOW TRIGGERS; show
  any that could be doing this?

  2) However, in 99.999% of cases, it is just a logic error in the
  application (be it your application or phpMyAdmin), not anything in MySQL.
  Can you connect with the command-line client, run the UPDATE statement,
  and then check what the SELECT shows? If it shows a correct result, the
  problem ain't in MySQL itself.

  mysql select status from tasks;
 ++
 | status |
 ++
 | W  |
 ++
 1 row in set (0.00 sec)

 mysql update tasks set status= 'H';
 Query OK, 1 row affected (0.00 sec)
  Rows matched: 1  Changed: 1  Warnings: 0

 mysql select status from tasks;
 ++
 | status |
 ++
 | W  |
 ++
 1 row in set (0.00 sec)

 whoops

 bill





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Looking for consultant

2012-07-18 Thread Johnny Withers
Would you consider a service like www.xeround.com?

Sent from my iPad

On Jul 17, 2012, at 7:23 PM, Carl Kabbe c...@etrak-plus.com wrote:

 On Monday, I asked if there were consultants out there who could help set up 
 an NDB high availability system.  As I compared our needs to NDB, it became 
 obvious that NDB was not the answer and more obvious that simply adding high 
 availability processes to our existing Innodb system was.

 So, I am back asking if there are consultants lurking on this list that could 
 help with this project.

 Thanks,

 Carl
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql





Re: Need help for performance tuning with Mysql

2012-05-22 Thread Johnny Withers
I don't see any attachments.

First, I would upgrade to 5.5 as 5.0 is very old. The upgrade process
is painless.

Second, make sure your InnoDB buffer pool is allocated as much RAM as
possible. I'd even go as far as adding another 8GB of RAM to the
server. The buffer pool setting is going to give you the best
performance increase.

Also, what kind of hard disks do you have the data files on? Raid? No raid?
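A minimal my.cnf sketch of the buffer-pool advice; the 5G figure assumes a
dedicated 8GB box and is illustrative, not a value from this thread:

```ini
[mysqld]
# Give InnoDB most of the RAM on a dedicated box, leaving headroom
# for the OS, per-connection buffers, and other caches.
innodb_buffer_pool_size = 5G
```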

Sent from my iPad

On May 22, 2012, at 9:08 PM, Yu Watanabe yu.watan...@jp.fujitsu.com wrote:

 Hello all.

 I would like to ask for advice with performance tuning with MySQL.

 Following are some data for my server.

 CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
 Memory : 8GB
 OS : RHEL 4.4 x86_64
 MySQL  : MySQL 5.0.50sp1-enterprise

 Attached file
 # my.cnf.txt  : my.cnf information
 # mysqlext_20120522131034.log : variable and status information from 
 mysqladmin

 I have 2 databases working under high load.

 I wanted to speed up my select and update queries not by
 optimizing the queries themselves but by tuning my.cnf.

 I have referred to following site,
 http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html

 and read Hiperformance Mysql vol.2 ,
 and increased the following values,

 table_cache
 thread_cache_size
 tmp_table_size
 max_heap_table_size

 but made not much difference.

 According to the ps and sar results:

 *1 PS result
 Date   Time  CPU%  RSS VSZ
 2012/5/22  21:00:39  109   294752  540028

 *2 SAR
 Average CPU user 25%
sys  5%
io   3%

 I assume that MySQL can do more work, but currently it does not.

 I am considering offloading one high-load database to a
 separate process and making MySQL work as multiple processes.

 It would be a great help if people in this forum could give
 us advice on the tuning.

 Best Regards,
 Yu Watanabe


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: InnoDB and Memory Allocation

2012-05-04 Thread Johnny Withers
Wow, I hope this hasn't been hanging out in the mysql list server for a
year.

I completely forgot about this problem. Thanks for the info though. I've
been running this server with overcommit_memory=0 and a 42GB buffer pool
for a while now.

Thanks again!


On Fri, May 4, 2012 at 9:38 AM, Charles Cazabon 
charlesc-mysql@pyropus.ca wrote:

 Johnny Withers wrote:

  I hope someone can help me out here. I'm having trouble with some new
  servers and memory allocation.
 
  Some basic specs on the servers:
  32GB total mem
  2GB swap
  64-bit RHEL
  64-bit mysqld
  overcommit_memory=2
 
  mysql fails to start with 14GB innodb_buffer_pool_size
  mysql will start with 12GB buffer pool setting
  When overcommit_memory is set to 0, mysql starts with 26GB buffer pool.

 This is due to a system-wide limitation on the amount of memory that can be
 allocated by processes that the Linux kernel imposes when overcommit is
 disabled.  The limit is (amount of swap) + (percentage of physical memory),
 and the default percentage is 50.

 So the limit in your case is 2G + 16G == 18G for all processes together;
 14G for innodb_buffer_pool_size is too much.

 You can increase the amount of swap, or increase the percentage (via
 /proc/sys/vm/overcommit_ratio, which isn't as well-known, though it is in the
 kernel documentation).
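
 To illustrate the arithmetic (a sketch using the figures from this
 thread - 2GB swap, 32GB RAM, the default overcommit_ratio of 50; on a
 live box you would read CommitLimit straight from /proc/meminfo):

```shell
# Hypothetical values matching this thread, not read from the system.
swap_gb=2
ram_gb=32
ratio=50   # default /proc/sys/vm/overcommit_ratio

limit_gb=$(( swap_gb + ram_gb * ratio / 100 ))
echo "approximate CommitLimit: ${limit_gb}G"   # prints 18G
```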

 Hope that helps.

 Charles

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: FYI: how to prevent mysql from oom-killer

2012-04-13 Thread Johnny Withers
I had this same problem with the OOM for a while. Very frustrating to have
to reboot a server to bring it back to life.

I found out the OOM only ran when the swap file was about 99% full. The
servers I had this problem on had 16GB and 24GB of ram, but only 2GB of
swap. I increased the swap on the 24GB servers to 48GB and on the 16GB
servers to 32GB. The swap never fills past 60% now, I haven't had
any OOM problems since, and the systems run great.

I've also set vm.swappiness=0 in /etc/sysctl.conf

-JW

On Fri, Apr 13, 2012 at 7:17 AM, Reindl Harald h.rei...@thelounge.netwrote:

 the following may be useful for most server systems

 the OOM killer acts when some process claims more and more
 memory; the kernel then kills seemingly unimportant tasks
 that are using huge amounts of memory

 in the case of a running mysqld, the classification "unimportant"
 is nearly always wrong and can cause huge damage and work.
 In other words: you really never want a database server killed
 at random instead of, say, dbmail-imapd, which can be restarted via
 systemd without pain and may be the root cause of the OOM
 _

 with one single command you can protect processes from getting killed.
 I started to run this every 15 minutes to make sure it is also
 active after restarts

 I am considering including this in mysqld.service as
 ExecStartPost=-/usr/local/bin/mysql-no-oom.sh in our
 internal mysqld packages, and including the script as well
 _

 [root@mail:~]$ cat /etc/crontab | grep oom
 0,15,30,45 * * * * root bash /usr/local/bin/mysql-no-oom.sh

 [root@mail:~]$ cat /usr/local/bin/mysql-no-oom.sh
 #!/bin/bash
 pgrep -f /usr/libexec/mysqld | while read PID; do echo -1000 > /proc/$PID/oom_score_adj; done




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Remote mysql too slow

2012-04-09 Thread Johnny Withers
You may want to ensure the NIC is connected to the network at the
right speed and duplex. It seems that every new server I get now has
to have the speed and duplex explicitly set instead of auto-negotiated.
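
For example, a sketch with ethtool (the interface name eth0 is an
assumption; exact options vary by driver):

```shell
# Show the current link speed and duplex for eth0
ethtool eth0

# Force gigabit full duplex instead of relying on auto-negotiation
ethtool -s eth0 speed 1000 duplex full autoneg off
```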

Sent from my iPad

On Apr 9, 2012, at 4:00 AM, J. Bakshi joydeep.bak...@infoservices.in wrote:


 Hello,

 I have been provided a muscular Linux server to use as a MySQL server
 in our organization. The server is located just beside the web server,
 within the same network. This dedicated server has 8GB RAM, an i5 processor,
 and runs MySQL as a service. No Apache, PHP ... nothing. All resources are
 dedicated to MySQL only.

 Mysql version - mysql  Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (x86_64)

 The BIG hitch is: when we connect to this box, the web sites become too slow.
 I have added the following to my.cnf under the [mysqld] section

  skip_external_locking
  skip_name_resolve
  skip_host_cache


 The SQL connection becomes a little faster, but it is still considerably
 slow; especially with such a muscular dedicated Linux box just for MySQL.
 Is there anything else I can add/configure to make the network latency
 small, or any such mechanism to make the queries fast?


 I ran mysqltuner directly on the remote MySQL server; here is the
 result

 ```
  General Statistics --
 [--] Skipped version check for MySQLTuner script
 [OK] Currently running supported MySQL version 5.1.49-3-log
 [OK] Operating on 64-bit architecture

  Storage Engine Statistics ---
 [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
 [--] Data in MyISAM tables: 1G (Tables: 4777)
 [--] Data in InnoDB tables: 3G (Tables: 5543)
 [--] Data in MEMORY tables: 0B (Tables: 136)
 [!!] Total fragmented tables: 5562

  Performance Metrics -
 [--] Up for: 3d 23h 55m 27s (1M q [4.523 qps], 81K conn, TX: 23B, RX: 469M)
 [--] Reads / Writes: 74% / 26%
 [--] Total buffers: 2.2G global + 20.4M per thread (150 max threads)
 [OK] Maximum possible memory usage: 5.2G (66% of installed RAM)
 [OK] Slow queries: 2% (39K/1M)
 [OK] Highest usage of available connections: 18% (28/150)
 [OK] Key buffer size / total MyISAM indexes: 2.0G/268.5M
 [!!] Key buffer hit rate: 78.2% (5M cached / 1M reads)
 [OK] Query cache efficiency: 42.0% (327K cached / 778K selects)
 [!!] Query cache prunes per day: 1993
 [OK] Sorts requiring temporary tables: 0% (48 temp sorts / 9K sorts)
 [!!] Temporary tables created on disk: 39% (91K on disk / 230K total)
 [OK] Thread cache hit rate: 99% (28 created / 81K connections)
 [!!] Table cache hit rate: 6% (16K open / 248K opened)
 [OK] Open file limit used: 36% (11K/32K)
 [OK] Table locks acquired immediately: 99% (695K immediate / 695K locks)
 [!!] InnoDB data size / buffer pool: 3.6G/8.0M

  Recommendations -
 General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Increase table_cache gradually to avoid file descriptor limits
 Variables to adjust:
    query_cache_size (> 128M)
    tmp_table_size (> 100M)
    max_heap_table_size (> 100M)
    table_cache (> 16000)
    innodb_buffer_pool_size (>= 3G)

 

 Please note, MySQL optimization as well as repair is done every day
 by a cron job at night. I have also tried these suggestions multiple
 times before, but every time it gives some new suggestion. I have 8GB
 of physical RAM in this server and here are some statistics

 `
 # free -m
               total       used       free     shared    buffers     cached
  Mem:          7986       7913         73          0        224       6133
  -/+ buffers/cache:       1554       6431
  Swap:         3813          0       3813
 `

 And here is the vmstat result with 10 count and 5 sec delay

 `

 # vmstat 5 10
 procs ---memory-- ---swap-- -io -system-- cpu
 r  b   swpd   free   buff  cache   si   sobibo   in   cs us sy id wa
 0  0  0  56328 230440 629967600257466  0  0 98  2
 0  0  0  55700 230440 629974400 022  226  272  0  0 99  1
 0  0  0  55964 230440 629985600 0   314  348  388  0  0 94  5
 0  0  0  55452 230440 629995600 061  304  364  0  0 97  2
 0  1  0  55592 230440 630042400 0   271  199  257  0  0 96  4
 0  0  0  54584 230440 630090800 0   338  342  428  0  0 92  8
 0  0  0  54800 230440 630107200 077  119  133  0  0 98  2
 0  0  0  53964 230440 630153200

Re: MySQL on 64 bit Windows 7?

2012-03-01 Thread Johnny Withers
I would imagine the installer is 32-bit only just so they don't have to
release two versions of it.

I'm sure it'll allow you to download the 64-bit version of the server
though.

JW

On Thu, Mar 1, 2012 at 3:10 PM, Dotan Cohen dotanco...@gmail.com wrote:

 Although 64 bit Windows 7 is listed as supported [1], I do not see
 such a binary listed on the download page [2]. Should one use the 32
 bit installer on 64 bit Windows? Is the installer page sniffing my UA
 (Firefox on Debian) and trying to guess as to the correct binary for
 me? Am I looking in the wrong place?

 Thanks.


 [1] http://www.mysql.com/support/supportedplatforms/database.html
 [2] http://dev.mysql.com/downloads/installer/


 --
 Dotan Cohen

 http://gibberish.co.il
 http://what-is-what.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Getting data from 2 tables if records have same date!

2012-02-29 Thread Johnny Withers
Sounds like you need to LEFT JOIN:

SELECT *
FROM CUSTOMERS
LEFT JOIN CALLS ON CUSTOMERS.PHONE = CALLS.PHONE AND CALLS.DATE = '02/28/12'
WHERE CUSTOMERS.DATE = '02/28/12'

But that would only get customers created on 2/28 AND having a call on 2/28
OR no call at all on 2/28.

This would give you customers created on 2/28 with no calls AND customers
created on 2/28 with a call on 2/28:

SELECT *
FROM CUSTOMERS
LEFT JOIN CALLS ON CUSTOMERS.PHONE = CALLS.PHONE
WHERE CUSTOMERS.DATE = '02/28/12' AND (CALLS.PHONE IS NULL OR CALLS.DATE =
'02/28/12')
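
The same result can also be sketched as a UNION, which some find easier to
read (table and column names as used in this thread; note it returns only
CUSTOMERS columns, and UNION deduplicates customers with several calls that
day):

```sql
-- Customers created on 2/28 with no calls at all
SELECT CUSTOMERS.*
FROM CUSTOMERS
LEFT JOIN CALLS ON CUSTOMERS.PHONE = CALLS.PHONE
WHERE CUSTOMERS.DATE = '02/28/12' AND CALLS.PHONE IS NULL

UNION

-- Customers created on 2/28 with at least one call on 2/28
SELECT CUSTOMERS.*
FROM CUSTOMERS
JOIN CALLS ON CUSTOMERS.PHONE = CALLS.PHONE
WHERE CUSTOMERS.DATE = '02/28/12' AND CALLS.DATE = '02/28/12';
```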

JW

On Wed, Feb 29, 2012 at 1:02 PM, LUCi5R luc...@luci5r.com wrote:

 Guys,

 I've been working with MySQL for a while (been on and off this list over the
 last 10 years or so); I'm definitely not a n00b and have worked with SQL
 extensively. Used JOIN and all quite a bit ... but I haven't done
 subqueries, unions or nested joins.

 I'm completely stumped on this problem; and Google hasn't been helpful at
 all. I'll try to be as descriptive as possible.

 I have 2 tables ... CUSTOMERS and CALLS.
 Think of the Customers table as your directory. It has the customer's contact
 information and some other information. In total about 20 fields in there.
 The Calls table has only about 7 fields. Each time a customer calls in,
 the conversation details get recorded in this Calls table.

 The PHONE field is the key field that joins the CUSTOMERS and CALLS tables.
 That is the only identifying key that gets written to the Calls record when
 that customer calls.

 One thing to note -- It is possible for a customer to exist in the
 CUSTOMERS table, but not exist in the CALLS table; however, it is not
 possible for a PHONE # to be in the CALLS table but not in CUSTOMERS table.
 Essentially, a customer's record has to be created first in the CUSTOMERS
 table before a call can be recorded from him in the CALLS table.

 Also, CALLS table can have multiple entries with same PHONE # (Customer
 called many times - maybe even same day), but CUSTOMERS will only have a
 single entry for a PHONE #.

 Here comes my problem ...

 I have a PHONE SEARCH box with the ability to define a date range; for
 simplicity sake - we'll use just One Date instead of DATE ... BETWEEN for
 now.

 When someone searches for a PHONE number, I want to show ALL the CUSTOMERS
 that:
 a. Were CREATED on that day (Date defined in Search Criteria)
 b. Had CALLED in that day (Date defined in Search Criteria)

 The DATA that I need to pull up and show is in the CUSTOMERS table; not
 the CALLS table -- so the DATA I need needs to come out of the CUSTOMERS
 table matching on phone from both tables for the given DATE.

 In other words - any CUSTOMER that has a PHONE NUMBER which appears in
 BOTH the CUSTOMERS and CALLS tables with the DATE defined should pull up.

 For the life of me - I can't get this to work!!
 Let's take the date 02/28/12 for example sake.

 My biggest issue is ... using JOIN, I can pull up ...
 a. ALL the phone/customers that appeared in the CALLS table with date
 02/28/12
 b. ALL the phone/customers that appeared in CALLS and CUSTOMERS with date
 02/28/12

 BUT -- If there's a customer with date 02/28/12 who DOES NOT appear in
 CALLS table at all - does NOT show up!! And that is because I'm using
 CUSTOMERS.PHONE=CALLS.PHONE in the JOIN ON clause. So it obviously won't
 pick up a record where the phone didn't exist in both tables.

 My initial query was:

 SELECT * FROM CUSTOMERS JOIN CALLS ON CUSTOMERS.PHONE = CALLS.PHONE
 WHERE CUSTOMERS.DATE = '02/28/12' AND CALLS.DATE = '02/28/12'

 I've tried 100's of combinations of this query; many different OR, AND,
 GROUP BY combinations --- but I can't pull up ALL records from CUSTOMERS
 with DATE 02/28/12 and ALL records from CALLS with DATE 02/28/12 in a
 single query.

 I've hit a wall here.

 Any ideas/suggestions/advice?

 THANKS

 ~~
 LUCi5R
 e:  luc...@luci5r.com
 w:  http://www.luci5r.com



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Removing Data Duplicacy

2012-02-22 Thread Johnny Withers
You can also handle this with transactions:

CREATE TABLE `seq` (
  `seq_num` int(10) unsigned NOT NULL DEFAULT '1000'
) ENGINE=InnoDB DEFAULT CHARSET=latin1


#Initialize sequence numbers
INSERT INTO seq(seq_num) VALUES(1000);

#Get next sequence number
START TRANSACTION;

UPDATE seq SET seq_num=LAST_INSERT_ID(seq_num+1);

#Do other inserts into other tables with this sequence number

COMMIT;

#ROLLBACK if something fails

Other inserts will be blocked until this process is either COMMIT'd or
ROLLBACK'd, preventing you from using the same sequence # again. As soon as
the COMMIT or ROLLBACK occurs, the other transactions will continue.
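
To actually read back the number the UPDATE reserved, a follow-up SELECT
works, because LAST_INSERT_ID() is connection-local (the orders table in the
comment below is hypothetical):

```sql
START TRANSACTION;

UPDATE seq SET seq_num = LAST_INSERT_ID(seq_num + 1);

-- Connection-local: other sessions cannot see or reuse this value
SELECT LAST_INSERT_ID();

-- e.g. INSERT INTO orders (order_num, ...) VALUES (LAST_INSERT_ID(), ...);

COMMIT;
```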

-JW

On Wed, Feb 22, 2012 at 7:57 AM, Arthur Fuller fuller.art...@gmail.comwrote:

 I agree with the testicular remedy, but in the case of the iron codpiece, I
 can think of another approach which may work for you. It still uses Select,
 but reads a one-row table, so it shouldn't hurt performance much. The table
 serves no other purpose than storing the next available PK; call the table
 NextPK, say. The algorithm might go like this:

 1. Lock the table NextPK.
 2. Select its value.
 3. Update the column with current value + 1.
 4. Unlock the table.
 5. Do your inserts.

 The lock will be very brief, perhaps brief enough to satisfy your
 requirement.
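
 A minimal sketch of those five steps in SQL (table and column names are
 assumptions):

```sql
LOCK TABLES NextPK WRITE;                  -- 1. lock the table

SELECT next_id FROM NextPK;                -- 2. read the next available PK

UPDATE NextPK SET next_id = next_id + 1;   -- 3. bump it by one

UNLOCK TABLES;                             -- 4. release the lock

-- 5. do your inserts using the value read in step 2
```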

 --
 Arthur
 Cell: 647.710.1314

 Only two businesses refer to their clientele as users: drug dealing and
 software development.

 -- Arthur Fuller




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Removing Data Duplicacy

2012-02-22 Thread Johnny Withers
I'm not sure; it seems to me the proper way to do it would be to insert into
table1, get the insert ID, then insert into table2 using that ID - this is
pretty standard stuff.

Not sure why, in this case, he cannot do that.

-JW


On Wed, Feb 22, 2012 at 8:54 AM, Rhino rhi...@sympatico.ca wrote:

 I missed the first message in this thread, but is there some compelling
 reason why you simply don't use a unique index on the primary key of the
 table to prevent duplicates in the first place?

 --
 Rhino


 On 2012-02-22 09:40, Johnny Withers wrote:

 You can also handle this with transactions:

 CREATE TABLE `seq` (
   `seq_num` int(10) unsigned NOT NULL DEFAULT '1000'
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1


 #Initialize sequence numbers
 INSERT INTO seq(seq_num) VALUES(1000);

 #Get next sequence number
 START TRANSACTION;

 UPDATE seq SET seq_num=LAST_INSERT_ID(seq_num+1);

 #Do other inserts into other tables with this sequence number

 COMMIT;

 #ROLLBACK if something fails

 Other inserts will be blocked until this process is either COMMIT'd or
 ROLLBACK'd, preventing you from using the same sequence # again. As soon
 as
 the COMMIT or ROLLBACK occurs, the other transactions will continue.

 -JW

 On Wed, Feb 22, 2012 at 7:57 AM, Arthur Fuller fuller.art...@gmail.com
 wrote:

  I agree with the testicular remedy, but in the case of the iron
 codpiece, I
 can think of another approach which may work for you. It still uses
 Select,
 but reads a one-row table, so it shouldn't hurt performance much. The
 table
 serves no other purpose than storing the next available PK; call the
 table
 NextPK, say. The algorithm might go like this:

 1. Lock the table NextPK.
 2. Select its value.
 3. Update the column with current value + 1.
 4. Unlock the table.
 5. Do your inserts.

 The lock will be very brief, perhaps brief enough to satisfy your
 requirement.

 --
 Arthur
 Cell: 647.710.1314

 Only two businesses refer to their clientele as users: drug dealing and
 software development.

 -- Arthur Fuller






-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: conditional updating

2012-02-09 Thread Johnny Withers
Update table set mydate=now() where mydate='0000-00-00'; should do it.

Sent from my iPad

On Feb 9, 2012, at 7:15 AM, william drescher will...@techservsys.com wrote:

 I want to update a date field in a record. If the date in the field is 
 0000-00-00 I want to change it to the current date.  I would appreciate 
 suggestions or links on how to do this.

 Yup, tried reading the manual, but need a bit of help.
 I will be updating another field at the same time.

 bill


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: conditional updating

2012-02-09 Thread Johnny Withers
So, add your other criteria to the where clause; you didn't say
there were other conditions in your first email.
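
For example (table, column, and key names are hypothetical):

```sql
UPDATE mytable
SET mydate = NOW(),
    other_field = 'new value'
WHERE id = 42
  AND mydate = '0000-00-00';
```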

Sent from my iPad

On Feb 9, 2012, at 7:56 AM, william drescher will...@techservsys.com wrote:

 On 2/9/2012 8:22 AM, Johnny Withers wrote:
 Update table set mydate=now() where mydate='0000-00-00'; should do it.
 can't do that because the record is selected by other criteria.
 Thanks

 bill



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: conditional updating

2012-02-09 Thread Johnny Withers
It implied to me there were two fields to update based on the date
being a given value. Read it how you like.

Sent from my iPad

On Feb 9, 2012, at 9:34 AM, Michael Dykman mdyk...@gmail.com wrote:

 He did mention that there was another field he was updating, which
 implies that the state of the date field was not the only condition.

 - michael

 On Thu, Feb 9, 2012 at 9:22 AM, Johnny Withers joh...@pixelated.net wrote:
 So, add your other criteria to the where clause, you failed to say
 there were other conditions in your first email.

 Sent from my iPad

 On Feb 9, 2012, at 7:56 AM, william drescher will...@techservsys.com wrote:

 On 2/9/2012 8:22 AM, Johnny Withers wrote:
 Update table set mydate=now() where mydate='0000-00-00'; should do it.
 can't do that because the record is selected by other criteria.
 Thanks

 bill



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: hide server-version at connect?

2012-01-10 Thread Johnny Withers
Security through obscurity?

If none of your MySQL (or Samba) servers are open to untrusted
networks, why do you need to hide this information?

Sent from my iPad

On Jan 10, 2012, at 6:35 AM, Reindl Harald h.rei...@thelounge.net wrote:



 Am 10.01.2012 13:29, schrieb Johan De Meersman:
 - Original Message -
 From: Reindl Harald h.rei...@thelounge.net

 sure? what's the binary between version and mysql_native_password?

 No idea, I never bothered to look at the raw protocol :-)

 me too :-)

 mysql 5.5 would be enough I guess, but how to create a patch for
 rpmbuild which does not have to be permanently reworked for every mysql update

 I wouldn't think that bit of the code changes a lot between versions

 i will look

 really. Still, do you really think exposing that is such a risk?
 Do you have servers exposed to the net?

 currently I am scanning all servers from an OpenVAS appliance and
 my primary target is to remove all information disclosures

 samba is next on the list, blowing out even its build number *grr*

 all or nothing :-)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Johnny Withers
I remain convinced that users simply need to learn patience, though.

HAHAHAHAHAHAHAHAHAHAHAHAHAHAHA!!!

Good one!


Sent from my iPad

On Oct 20, 2011, at 8:44 AM, Johan De Meersman vegiv...@tuxera.be wrote:

 I remain convinced that users simply need to learn patience, though.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysql server does not recognize user password

2011-10-18 Thread Johnny Withers
Why does mysql say using password: no? Seems to me the password is not
being sent.

On Oct 18, 2011 8:37 PM, Tim Johnson t...@akwebsoft.com wrote:

* Claudio Nanni claudio.na...@gmail.com [111018 17:02]:

 FLUSH PRIVILEGES is not needed when you use GRANT/REVOKE/CREATE USER
 etc,

 Usually this ...

linus:~ tim$ sudo mysql
Password:
.
mysql> SELECT USER(),CURRENT_USER();
+----------------+----------------+
| USER()         | CURRENT_USER() |
+----------------+----------------+
| root@localhost | root@localhost |
+----------------+----------------+
1 row in set (0.00 sec)
...hmm... on my linux box, where all works, I see
 'tim@localhost'

 try this:

 mysql -utim -psecret -h127.0.0.1 -P3306
linus:~ tim$ mysql -utim -p** -h127.0.0.1 -P3306

ERROR 1045 (28000): Access denied for user 'tim'@'localhost' (using
password: NO)
Thanks.

-- 
Tim
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com

-- 
MySQL ...
To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net


Re: Fwd: Large insert query gives MySQL server gone away

2011-10-10 Thread Johnny Withers
Max packet size?
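
If that's the cause, raising max_allowed_packet on the server (and, for
large result sets, the client) usually fixes it - a sketch:

```sql
-- Check the current limit
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it at runtime; applies to new connections
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;
```

To persist it across restarts, set max_allowed_packet = 64M under [mysqld]
in my.cnf.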

On Oct 10, 2011 6:12 PM, Neil Tompkins neil.tompk...@googlemail.com
wrote:


 As per the subject, we have a large insert query that gives the error
MySQL server has gone away when we try to execute it.  Any ideas why ?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net


Re: 4 minute slow on select count(*) from table - myisam type

2011-10-06 Thread Johnny Withers
I think in order to solve your problem you will need to post the queries
running against this table along with the explain output of each problem query.
Optimizing server settings is a good start; however, individual query
performance sounds like your problem now.

Sent from my iPad

On Oct 6, 2011, at 6:47 AM, Joey L mjh2...@gmail.com wrote:

 Just as an fyi - I have other databases and their corresponding apache
 websites on the same server - performing okay.
 It seems that apache/mysql server is just having a hard time dealing
 with the access to those pages that deal with the 9gig table on that
 particular site.  -- Most of the access is done by webcrawlers to the
 site - so there is a lot of activity occurring on the 9gig tables.
 
 thanks
 mjh
 
 On Thu, Oct 6, 2011 at 6:13 AM, Joey L mjh2...@gmail.com wrote:
 guys - i am having such a hard time with this..it is killing me!!!
 Sorry - had to vent.
 my machine is running an tyan S2912G2NR  -- with 2 opterons and 12gig
 of memory. I have 2 software raided drives 1gig each.
 I run a couple of databases --- my largest table is about 9gig in
 size. --it is being accessed a lot.
 My my.cnf is as follows:
 
 #
 # The MySQL database server configuration file.
 #
 # You can copy this to one of:
 # - /etc/mysql/my.cnf to set global options,
 # - ~/.my.cnf to set user-specific options.
 #
 # One can use all long options that the program supports.
 # Run program with --help to get a list of available options and with
 # --print-defaults to see which it would actually understand and use.
 #
 # For explanations see
 # http://dev.mysql.com/doc/mysql/en/server-system-variables.html
 
 # This will be passed to all mysql clients
 # It has been reported that passwords should be enclosed with ticks/quotes
 # escpecially if they contain # chars...
 # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
 [client]
 port= 3306
 socket  = /var/run/mysqld/mysqld.sock
 
 # Here is entries for some specific programs
 # The following values assume you have at least 32M ram
 
 # This was formally known as [safe_mysqld]. Both versions are currently 
 parsed.
 [mysqld_safe]
 socket  = /var/run/mysqld/mysqld.sock
 nice= 0
 
 [mysqld]
 #
 # * Basic Settings
 #
 user= mysql
 pid-file= /var/run/mysqld/mysqld.pid
 socket  = /var/run/mysqld/mysqld.sock
 port= 3306
 basedir = /usr
 datadir = /var/lib/mysql
 tmpdir  = /tmp
 language= /usr/share/mysql/english
 skip-external-locking
 #
 # Instead of skip-networking the default is now to listen only on
 # localhost which is more compatible and is not less secure.
 bind-address= 127.0.0.1
 #
 # * Fine Tuning
 #
 key_buffer  = 2G
 key_buffer_size = 2G
 max_allowed_packet  = 16M
 thread_stack= 192K
 thread_cache_size   = 8
 join_buffer_size= 128
 # This replaces the startup script and checks MyISAM tables if needed
 # the first time they are touched
 myisam-recover = BACKUP
 max_connections= 100
 table_cache= 1024
 max_heap_table_size = 32M
 tmp_table_size  = 32M
 thread_concurrency = 10
 #
 # * Query Cache Configuration
 #
 query_cache_limit   = 2M
 query_cache_size= 16M
 #
 # * Logging and Replication
 #
 # Both location gets rotated by the cronjob.
 # Be aware that this log type is a performance killer.
 # As of 5.1 you can enable the log at runtime!
 general_log_file= /var/log/mysql/mysql.log
 general_log = 2
 #
 # Error logging goes to syslog due to 
 /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
 #
 # Here you can see queries with especially long duration
 #log_slow_queries   = /var/log/mysql/mysql-slow.log
 #long_query_time = 2
 #log-queries-not-using-indexes
 #
 # The following can be used as easy to replay backup logs or for replication.
 # note: if you are setting up a replication slave, see README.Debian about
 #   other settings you may need to change.
 #server-id  = 1
 #log_bin= /var/log/mysql/mysql-bin.log
 expire_logs_days= 10
 max_binlog_size = 100M
 #binlog_do_db   = include_database_name
 #binlog_ignore_db   = include_database_name
 #
 # * InnoDB
 #
 # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
 # Read the manual for more InnoDB related options. There are many!
 #
 # * Security Features
 #
 # Read the manual, too, if you want chroot!
 # chroot = /var/lib/mysql/
 #
 # For generating SSL certificates I recommend the OpenSSL GUI tinyca.
 #
 # ssl-ca=/etc/mysql/cacert.pem
 # ssl-cert=/etc/mysql/server-cert.pem
 # ssl-key=/etc/mysql/server-key.pem
 
 
 
 [mysqldump]
 quick
 quote-names
 max_allowed_packet  = 64M
 
 [mysql]
 #no-auto-rehash # faster start of mysql but no tab completition
 
 [isamchk]
 key_buffer  = 16M
 
 #
 # * 

Re: 4 minute slow on select count(*) from table - myisam type

2011-10-06 Thread Johnny Withers
Doing the same query on a table that fits into memory is a completely different
thing from doing the query on a table where half the needed data resides on
disk. Maybe your queries are not using an index? On a table with a few
hundred thousand records this is probably a non-issue for the server; when the
table has millions, well, then it could be a problem. Maybe the indexes for
this table don't fit into memory? Who knows? That is why you need to look at
the problem queries.

I can tweak server settings all day long, but if my query is doing a join on a 
table with 40 million records with no index, it'll never work. 

Sent from my iPad

On Oct 6, 2011, at 8:40 AM, Joey L mjh2...@gmail.com wrote:

 thanks for the response - but I do not believe the queries are the issue
 because - like I said - I have other websites doing the same exact
 queries as I am doing on the site with the 9gig table.
 -- my issue is optimizing mysql to handle lots of queries on a 9gig
 db. --- i think that is the focus.
 All other websites (10 websites) are being handled fine in terms of
 performance - with the same queries -- just that their table is about 100meg.
 I have run optimize on it and recover and prune, etc. ... no luck.
 thanks
 mjh
 
 On Thu, Oct 6, 2011 at 9:15 AM, Johnny Withers joh...@pixelated.net wrote:
 I think in order to solve your problem you will need to post the queries 
 running against this table along with the explain output of each problem 
 query. Optimizing server settings is a good start, however, individual query 
 performance sounds like your problem now.
 
 Sent from my iPad
 
 On Oct 6, 2011, at 6:47 AM, Joey L mjh2...@gmail.com wrote:
 
 Just as an fyi - I have other databases and their corresponding apache
 websites on the same server - performing okay.
 It seems that apache/mysql server is just having a hard time dealing
 with the access to those pages that deal with the 9gig table on that
 particular site.  -- Most of the access is done by webcrawlers to the
 site - so there is a lot of activity occurring on the 9gig tables.
 
 thanks
 mjh
 
 On Thu, Oct 6, 2011 at 6:13 AM, Joey L mjh2...@gmail.com wrote:
 guys - i am having such a hard time with this..it is killing me!!!
 Sorry - had to vent.
 my machine is running an tyan S2912G2NR  -- with 2 opterons and 12gig
 of memory. I have 2 software raided drives 1gig each.
 I run a couple of databases --- my largest table is about 9gig in
 size. --it is being accessed a lot.
 My my.cnf is as follows:
 
 #
 # The MySQL database server configuration file.
 #
 # You can copy this to one of:
 # - /etc/mysql/my.cnf to set global options,
 # - ~/.my.cnf to set user-specific options.
 #
 # One can use all long options that the program supports.
 # Run program with --help to get a list of available options and with
 # --print-defaults to see which it would actually understand and use.
 #
 # For explanations see
 # http://dev.mysql.com/doc/mysql/en/server-system-variables.html
 
 # This will be passed to all mysql clients
 # It has been reported that passwords should be enclosed with ticks/quotes
 # escpecially if they contain # chars...
 # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
 [client]
 port= 3306
 socket  = /var/run/mysqld/mysqld.sock
 
 # Here is entries for some specific programs
 # The following values assume you have at least 32M ram
 
 # This was formally known as [safe_mysqld]. Both versions are currently 
 parsed.
 [mysqld_safe]
 socket  = /var/run/mysqld/mysqld.sock
 nice= 0
 
 [mysqld]
 #
 # * Basic Settings
 #
 user= mysql
 pid-file= /var/run/mysqld/mysqld.pid
 socket  = /var/run/mysqld/mysqld.sock
 port= 3306
 basedir = /usr
 datadir = /var/lib/mysql
 tmpdir  = /tmp
 language= /usr/share/mysql/english
 skip-external-locking
 #
 # Instead of skip-networking the default is now to listen only on
 # localhost which is more compatible and is not less secure.
 bind-address= 127.0.0.1
 #
 # * Fine Tuning
 #
 key_buffer  = 2G
 key_buffer_size = 2G
 max_allowed_packet  = 16M
 thread_stack= 192K
 thread_cache_size   = 8
 join_buffer_size= 128
 # This replaces the startup script and checks MyISAM tables if needed
 # the first time they are touched
 myisam-recover = BACKUP
 max_connections= 100
 table_cache= 1024
 max_heap_table_size = 32M
 tmp_table_size  = 32M
 thread_concurrency = 10
 #
 # * Query Cache Configuration
 #
 query_cache_limit   = 2M
 query_cache_size= 16M
 #
 # * Logging and Replication
 #
 # Both location gets rotated by the cronjob.
 # Be aware that this log type is a performance killer.
 # As of 5.1 you can enable the log at runtime!
 general_log_file= /var/log/mysql/mysql.log
 general_log = 2
 #
 # Error logging goes to syslog due to 
 /etc/mysql

Re: 4 minute slow on select count(*) from table - myisam type

2011-10-06 Thread Johnny Withers
I've sent this email a few times now; the mysql list kept rejecting it due to
size. Sorry for any duplicates.


I think you need to examine this query in particular:

| 2567 | p_092211 | localhost | p_092211 | Query   |   11 | Sending
data | select oldurl, newurl, id, dateadd from  w6h8a_sh404sef_urls
where newurl   AND soundex(oldurl)  |
| 2568 | p_092211 | localhost | p_092211 | Query   |   69 | Sending
data | select oldurl, newurl, id, dateadd from  w6h8a_sh404sef_urls
where newurl   AND soundex(oldurl)
| 2582 | p_092211 | localhost | p_092211 | Query   |   69 | Locked
 | update `w6h8a_sh404sef_urls` set cpt=(cpt+1) where `oldurl` =
'Camargo-Illinois-Holiday_Light_Tour-H |

One of those has been running for over a minute. Doing a show full
processlist will give the entire query; you could then paste it into your
SQL editor prefixed with explain and see what the heck is taking so long.
Pretty sure it has to do with using a function on oldurl in the where
clause -- you can't use an index when you do this; therefore, the entire table
has to be scanned. Also, since this table doesn't fit into memory, it's disk
bound. If you have the ability to modify the table structure and the
software, a column could be added to the table that is the result of
SOUNDEX(oldurl) and then an index added to that column. The where clause
could then use soundex_column=whatever instead and utilize the index.
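The precomputed-column fix can be sketched using SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN, with lower() standing in for SOUNDEX(); the table, column, and index names below are illustrative, not from the thread:

```python
import sqlite3

# Sketch: a function applied to a column in the WHERE clause defeats the
# index, while a precomputed (and indexed) copy of the function's result
# does not. lower() stands in for SOUNDEX() here.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE urls (id INTEGER PRIMARY KEY, oldurl TEXT, oldurl_lc TEXT);
    CREATE INDEX idx_oldurl ON urls(oldurl);
    CREATE INDEX idx_oldurl_lc ON urls(oldurl_lc);
    INSERT INTO urls (oldurl, oldurl_lc)
        VALUES ('Camargo-IL', lower('Camargo-IL'));
""")

# Function on the column: the planner must scan the whole table.
plan_scan = db.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM urls WHERE lower(oldurl) = 'camargo-il'"
).fetchall()

# Precomputed column: the planner can search the index instead.
plan_index = db.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM urls WHERE oldurl_lc = 'camargo-il'"
).fetchall()

print(plan_scan[0][3])   # a full-table SCAN
print(plan_index[0][3])  # a SEARCH ... USING INDEX idx_oldurl_lc
```

In MySQL the same approach applies: maintain a SOUNDEX(oldurl) column (by trigger or in the application), index it, and compare against the precomputed value.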

I haven't used MyISAM in a long time, so I'm not sure about this, but... is
the INSERT locked due to the SELECT queries that have been running for so
long? And are the rest of the selects (with 8s running time) locked by the
INSERT?

-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Slow query - please help

2011-10-05 Thread Johnny Withers
Try adding an index on cities.name, it may prevent the file sort. What was the 
original query time and what is it now?

Sent from my iPad
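The suggestion can be sketched with SQLite's EXPLAIN QUERY PLAN standing in for MySQL's "Using filesort" reporting; table and index names are illustrative:

```python
import sqlite3

# Sketch: without an index on the ORDER BY column, SQLite sorts with a temp
# B-tree (MySQL's analogue is "Using filesort"); with the index, the sort
# step disappears from the plan.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cities (id INTEGER PRIMARY KEY, name TEXT)")

plan_before = db.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM cities ORDER BY name").fetchall()

db.execute("CREATE INDEX idx_cities_name ON cities(name)")

plan_after = db.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM cities ORDER BY name").fetchall()

print([r[3] for r in plan_before])  # includes USE TEMP B-TREE FOR ORDER BY
print([r[3] for r in plan_after])   # index scan, no separate sort step
```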

On Oct 5, 2011, at 4:01 AM, Tompkins Neil neil.tompk...@googlemail.com wrote:

 Following my mail below, if anyone can help optimise the query further that
 would be a great help.
 
 -- Forwarded message --
 From: Tompkins Neil neil.tompk...@googlemail.com
 Date: Wed, Oct 5, 2011 at 9:48 AM
 Subject: Re: Slow query - please help
 To: Johnny Withers joh...@pixelated.net
 Cc: mysql@lists.mysql.com mysql@lists.mysql.com
 
 
 I just revised my query and now get the following output:
 
 '1', 'PRIMARY', 'cities', 'ref', 'IDX_country_code,IDX_language_code',
 'IDX_country_code', '7', 'const', '2088', '100.00', 'Using index condition;
 Using where; Using filesort'
 '4', 'DEPENDENT SUBQUERY', 'districts', 'ref',
 'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
 '121', '100.00', 'Using index condition; Using where'
 '3', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
 'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
 '9982', '100.00', 'Using index condition; Using where'
 '2', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
 'UNQ_folder_url,IDX_country_code,IDX_enabled,IDX_language_code',
 'IDX_enabled', '1', 'const', '267', '100.00', 'Using index condition; Using
 where'
 
 After doing this the query speed is acceptable.
 
 Thanks
 Neil
 
 On Wed, Oct 5, 2011 at 3:12 AM, Johnny Withers joh...@pixelated.net wrote:
 
 Can you post the explain extended output of your query?
 
 Sent from my iPad
 
 On Oct 4, 2011, at 2:45 PM, Neil Tompkins neil.tompk...@googlemail.com
 wrote:
 
 Can anyone help me ?
 
 
 Begin forwarded message:
 
 From: Tompkins Neil neil.tompk...@googlemail.com
 Date: 30 September 2011 20:23:47 GMT+01:00
 To: mark carson mcar...@pixie.co.za
 Cc: [MySQL] mysql@lists.mysql.com
 Subject: Re: Slow query - please help
 
 
 I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions
 are below, let me know if you need any more information.
 
 CREATE TABLE `districts` (
 `district_id` int(11) NOT NULL,
 `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
 `city_id` int(11) DEFAULT NULL,
 `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
 `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
 `latitude` double DEFAULT NULL,
 `longitude` double DEFAULT NULL,
 `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
 `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
 `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
 `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
 `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
 `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
 `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`district_id`,`language_code`),
 UNIQUE KEY `UNQ_folder_url` (`folder_url`),
 KEY `IDX_country_code` (`country_code`),
 KEY `IDX_enabled` (`enabled`),
 KEY `IDX_folder_url` (`folder_url`),
 KEY `IDX_language_code` (`language_code`),
 KEY `IDX_latitude` (`latitude`),
 KEY `IDX_longitude` (`longitude`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
 
 CREATE TABLE `cities` (
 `city_id` int(11) NOT NULL,
 `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
 `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
 `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
 `nr_hotels` int(11) DEFAULT NULL,
 `latitude` double DEFAULT NULL,
 `longitude` double DEFAULT NULL,
 `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
 `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
 `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
 `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
 `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
 `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
 `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`city_id`,`language_code`),
 UNIQUE KEY `UNQ_folder_url` (`folder_url`),
 KEY `IDX_country_code` (`country_code`),
 KEY `IDX_enabled` (`enabled`),
 KEY `IDX_folder_url` (`folder_url`),
 KEY `IDX_language_code` (`language_code`),
 KEY `IDX_latitude` (`latitude`),
 KEY `IDX_longitude` (`longitude`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
 
 
 CREATE TABLE `hotels` (
 `hotel_id` int(11) NOT NULL,
 `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
 `address` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
 `city` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
 `city_id` int(11) DEFAULT NULL,
 `class_is_estimated` tinyint(4) DEFAULT NULL,
 `class` tinyint(4) DEFAULT NULL,
 `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
 `currencycode` char(3) COLLATE utf8_unicode_ci DEFAULT NULL,
 `district` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL

Re: Slow query - please help

2011-10-04 Thread Johnny Withers
Can you post the explain extended output of your query?

Sent from my iPad

On Oct 4, 2011, at 2:45 PM, Neil Tompkins neil.tompk...@googlemail.com wrote:

 Can anyone help me ?
 
 
 Begin forwarded message:
 
 From: Tompkins Neil neil.tompk...@googlemail.com
 Date: 30 September 2011 20:23:47 GMT+01:00
 To: mark carson mcar...@pixie.co.za
 Cc: [MySQL] mysql@lists.mysql.com
 Subject: Re: Slow query - please help
 
 
 I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions are 
 below, let me know if you need any more information.
 
 CREATE TABLE `districts` (
  `district_id` int(11) NOT NULL,
  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
  `city_id` int(11) DEFAULT NULL,
  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `latitude` double DEFAULT NULL,
  `longitude` double DEFAULT NULL,
  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`district_id`,`language_code`),
  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
  KEY `IDX_country_code` (`country_code`),
  KEY `IDX_enabled` (`enabled`),
  KEY `IDX_folder_url` (`folder_url`),
  KEY `IDX_language_code` (`language_code`),
  KEY `IDX_latitude` (`latitude`),
  KEY `IDX_longitude` (`longitude`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
 
 CREATE TABLE `cities` (
  `city_id` int(11) NOT NULL,
  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `nr_hotels` int(11) DEFAULT NULL,
  `latitude` double DEFAULT NULL,
  `longitude` double DEFAULT NULL,
  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`city_id`,`language_code`),
  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
  KEY `IDX_country_code` (`country_code`),
  KEY `IDX_enabled` (`enabled`),
  KEY `IDX_folder_url` (`folder_url`),
  KEY `IDX_language_code` (`language_code`),
  KEY `IDX_latitude` (`latitude`),
  KEY `IDX_longitude` (`longitude`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
 
 
 CREATE TABLE `hotels` (
  `hotel_id` int(11) NOT NULL,
  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
  `address` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `city` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `city_id` int(11) DEFAULT NULL,
  `class_is_estimated` tinyint(4) DEFAULT NULL,
  `class` tinyint(4) DEFAULT NULL,
  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
  `currencycode` char(3) COLLATE utf8_unicode_ci DEFAULT NULL,
  `district` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
  `hoteltype_id` int(11) DEFAULT NULL,
  `is_closed` tinyint(4) DEFAULT NULL,
  `latitude` double DEFAULT NULL,
  `longitude` double DEFAULT NULL,
  `maxrate` double DEFAULT NULL,
  `minrate` double DEFAULT NULL,
  `original_name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `nr_rooms` int(11) DEFAULT NULL,
  `preferred` int(11) DEFAULT NULL,
  `ranking` int(11) DEFAULT NULL,
  `review_nr` int(11) DEFAULT NULL,
  `review_score` double DEFAULT NULL,
  `zip` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
  `checkin_from` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `checkin_to` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `checkout_from` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `checkout_to` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `folder_url` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `short_profile` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`hotel_id`,`language_code`),
  UNIQUE KEY `UNQ_folder_url` (`folder_url`),

Re: SLOW performance over network

2011-09-29 Thread Johnny Withers
Check the auto-negotiation setting on your NIC. Run ifconfig and see if there
are a lot of errors.

On Sep 29, 2011 10:13 AM, Jim Moseby jmos...@elasticfabrics.com wrote:

Yeah:

# host 72.30.2.43 /* yahoo.com */
43.2.30.72.in-addr.arpa domain name pointer ir1.fp.vip.sk1.yahoo.com.
# host 10.1.20.97 /* my windows box */
97.20.1.10.in-addr.arpa has no PTR record



 Todd Lyons tly...@ivenue.com 9/29/2011 10:26 AM 

On Thu, Sep 29, 2011 at 7:12 AM, Jim Moseby jmos...@elasticfabrics.com
wrote:
 I still use the ol...
To unsubscribe:
http://lists.mysql.com/mysql?unsub=jmos...@elasticfabrics.com




CONFIDENTIALITY NOTICE: This message is directed to and is for the use of
the above-noted addres...


Re: table design question

2011-09-19 Thread Johnny Withers
I would design three tables:

Table1 (states):
ID, name, abbreviation

Table2 (state_item):
ID, state_id (from states), item_id (from item_type), item_value (varchar)

Table3 (item_type):
ID, item_name

Into the item_type table you can insert:

Nick Name
Motto
Name origin
Facts
SomeOtherDataPoint
SomeOtherDataPoint2

etc

Now, you can have as many nick names per state as needed; some states may
have 1, some 50, etc. Same for every other data point you want to keep track
of for each state as well.
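A minimal sketch of the three-table layout (shown on SQLite; exact column types are assumptions):

```python
import sqlite3

# Sketch of the states / item_type / state_item design described above.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE states (id INTEGER PRIMARY KEY, name TEXT, abbreviation TEXT);
    CREATE TABLE item_type (id INTEGER PRIMARY KEY, item_name TEXT);
    CREATE TABLE state_item (
        id INTEGER PRIMARY KEY,
        state_id INTEGER REFERENCES states(id),
        item_id INTEGER REFERENCES item_type(id),
        item_value TEXT
    );
    INSERT INTO states VALUES (1, 'Illinois', 'IL');
    INSERT INTO item_type VALUES (1, 'Nick Name');
    -- as many nicknames per state as needed:
    INSERT INTO state_item (state_id, item_id, item_value) VALUES
        (1, 1, 'Prairie State'),
        (1, 1, 'Land of Lincoln');
""")

nicknames = db.execute("""
    SELECT si.item_value
    FROM state_item si
    JOIN item_type t ON t.id = si.item_id
    WHERE si.state_id = 1 AND t.item_name = 'Nick Name'
    ORDER BY si.id
""").fetchall()
print(nicknames)  # [('Prairie State',), ('Land of Lincoln',)]
```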



On Mon, Sep 19, 2011 at 8:55 AM, Richard Reina gatorre...@gmail.com wrote:

 I want to create a US geography database. So far I have categories such as
 state nick names (some states have more than one), state mottos (text 25 to
 150 characters), state name origins (100-300 characters), state trivial
 facts, entry into union.  My question is; would it be better to keep at
 least some of this information in separate tables like:

 state_basic
 ID | name | Incorporation | Entry in Union| Name_origin | Motto

 state_nicknames
 ID | name | nick_name|

 state_trivia
 ID | name | fact

 or would it be better for queries to try to put all this information in one
 table?

 Thanks,

 Richard




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Concerned : Developer getting There is no 'root'@'%' registered error message

2011-08-17 Thread Johnny Withers
Change the definer to one of your registered root accounts. Root@127 or
root@localhost.

On Aug 17, 2011 8:39 AM, Brent Clark brentgclarkl...@gmail.com wrote:

Hiya

I'll be up front by saying my knowledge of stored procedures is limited.

A Developer gave me a procedure to load.

It starts off with:

CREATE DEFINER=`root`@`%` PROCEDURE 


But now, the developer informs me that he gets the following message: There
is no 'root'@'%' registered

Googling reveals the following link:
http://forums.mysql.com/read.php?10,237843,238950#msg-238950

And that I can run:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'mysql' WITH GRANT
OPTION;

I'm hesitant to do it.

My MySQL root access details are:

mysql> select user, host from user WHERE user = 'root';
+--+-+
| user | host|
+--+-+
| root | 127.0.0.1   |
| root | localhost   |
+--+-+

If someone can share their opinion, thoughts or share the same concerns it
would be appreciated.

Kind Regards
Brent Clark


Re: Concerned : Developer getting There is no 'root'@'%' registered error message

2011-08-17 Thread Johnny Withers
It would allow anyone from anywhere to access your server as root with full
access to all databases using the password 'mysql'. Not very secure. I don't
recommend having a root@% defined.

On Aug 17, 2011 8:50 AM, Brent Clark brentgclarkl...@gmail.com wrote:

Hiya

Thank you so much for replying. I really appreciate it.

I know the answer (well, I think I do :) ), but I'm still going to ask: what
is the risk if I do the GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED
BY 'mysql' WITH GRANT OPTION;
to satisfy the developer?

Thank you again.
Brent



On 17/08/2011 15:42, Johnny Withers wrote:

 
 
  Change the definer to one of your registered root accounts. Root@127 or
 root@localhost.
 

  On Aug 17, 2011 8:39 AM, Brent Clark brentgclarkl...@gmail.com wrote:



Re: shall i jump from 5.1 to 5.5

2011-08-16 Thread Johnny Withers
This is a poor benchmark because the query never changes. Query cache takes 
over after first request. 

Sent from my iPad

On Aug 16, 2011, at 4:28 PM, Alvin Ramos alvin.ra...@reachsmart.com wrote:

 I know my previous email was vague; it was sent via smartphone.  I've got a 
 simple PHP page pulling information from one of our larger database tables:
 
 
 
 PHP Code:
 
 
 
 PHP Code:
 
 <html>
 <head>
 <basefont face="Arial">
 </head>
 
 <body>
 
 <?php
 
 // set server access variables
 $host = "127.0.0.1";
 $user = "web";
 $pass = "password";
 $db   = "md";
 
 // open connection to database
 $connect = mysql_connect($host, $user, $pass) or die("Unable to connect!");
 
 // select database to use
 mysql_select_db($db) or die("Unable to select database!");
 
 // create SQL query string
 $query = "SELECT * FROM members LIMIT 1000";
 
 // execute query and obtain result set
 $result = mysql_query($query) or die("Error in query: $query. " . mysql_error());
 
 // are there any rows in the result?
 if (mysql_num_rows($result) > 0)
 {
     // yes: iterate through the result set,
     // formatting query results as a table
     echo "<table cellpadding=10 border=1>";
     while ($row = mysql_fetch_assoc($result))
     {
         echo "<tr>";
         echo "<td>" . $row['member_id'] . "</td>";
         echo "<td>" . $row['fname'] . "</td>";
         echo "</tr>";
     }
     echo "</table>";
 }
 else
 {
     // no rows: print status message
     echo "NO rows found!";
 }
 
 // close connection
 mysql_close($connect);
 
 ?>
 
 </body>
 </html>
 
 
 
 I then ran Apache Bench (ab) with 5 concurrent connections, 10,000 times.
 I changed the $host to the IP of the 5.5 server, then the 5.1 server, and
 here is one of my many results:
 
 
 
 5.1 results:
 
 
 
 Server Software:Apache/2.2.3
 
 Server Hostname:aramos.dev
 
 Server Port:80
 
 
 
 Document Path:  /mysqlfetch51.php
 
 Document Length:35808 bytes
 
 
 
 Concurrency Level:  5
 
 Time taken for tests:   3263.909079 seconds
 
 Complete requests:  1
 
 Failed requests:0
 
 Write errors:   0
 
 Total transferred:  35964 bytes
 
 HTML transferred:   35808 bytes
 
 Requests per second:3.06 [#/sec] (mean)
 
 Time per request:   1631.955 [ms] (mean)
 
 Time per request:   326.391 [ms] (mean, across all concurrent requests)
 
 Transfer rate:  107.60 [Kbytes/sec] received
 
 
 
 Connection Times (ms)
 
  min  mean[+/-sd] median   max
 
 Connect:          0    1   2.3      1     155
 
 Processing:   593 1629 699.7   1524   13580
 
 Waiting:  574 1611 699.7   1506   13562
 
 Total:595 1630 699.7   1526   13580
 
 
 
 Percentage of the requests served within a certain time (ms)
 
  50%   1526
 
  66%   1725
 
  75%   1856
 
  80%   1944
 
  90%   2215
 
  95%   2559
 
  98%   4339
 
  99%   4741
 
 100%  13580 (longest request)
 
 
 
 5.5 results:
 
 
 
 Server Software:Apache/2.2.3
 
 Server Hostname:aramos.dev
 
 Server Port:80
 
 
 
 Document Path:  /mysqlfetch.php
 
 Document Length:35808 bytes
 
 
 
 Concurrency Level:  5
 
 Time taken for tests:   3400.300474 seconds
 
 Complete requests:  1
 
 Failed requests:0
 
 Write errors:   0
 
 Total transferred:  35964 bytes
 
 HTML transferred:   35808 bytes
 
 Requests per second:2.94 [#/sec] (mean)
 
 Time per request:   1700.150 [ms] (mean)
 
 Time per request:   340.030 [ms] (mean, across all concurrent requests)
 
 Transfer rate:  103.29 [Kbytes/sec] received
 
 
 
 Connection Times (ms)
 
  min  mean[+/-sd] median   max
 
 Connect:          0    1   2.7      1     168
 
 Processing:   595 1697 724.8   1598   14505
 
 Waiting:  577 1679 724.8   1580   14486
 
 Total:596 1698 724.8   1600   14506
 
 
 
 Percentage of the requests served within a certain time (ms)
 
  50%   1600
 
  66%   1799
 
  75%   1939
 
  80%   2028
 
  90%   2314
 
  95%   2640
 
  98%   4387
 
  99%   4805
 
 100%  14506 (longest request)
 
 
 
 
 
 I've run tests even against our web sites and it's slower than the 5.1 server. 
  Any suggestions, anything I should change on the 5.5 server?  The hardware 
 and OS is identical from the 5.1 server.  Thanks!
 
 
 
 Alvin Ramos
 
 
 
 From: w...@pythian.com [mailto:w...@pythian.com] On Behalf Of Singer X.J. Wang
 Sent: Tuesday, August 16, 2011 4:08 PM
 To: Alvin Ramos
 Cc: Prabhat Kumar; Reindl Harald; mysql@lists.mysql.com
 Subject: Re: shall i jump from 5.1 to 5.5
 
 
 
 Are you doing concurrent workloads?
 
 
 
 On Tue, Aug 16, 2011 at 16:04, Alvin Ramos alvin.ra...@reachsmart.com wrote:
 
 I've been running some bench marking between 5.1 and 5.5 myself and haven't 
 notice any huge performance improvements on 5.5. Even though white papers 
 

Re: different size under the two OS

2011-08-11 Thread Johnny Withers
du reports how much space the file takes on the disk. This # depends on the
block size of each file system.
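This is easy to see from a script: st_size is the logical byte count (what md5sum hashes), while st_blocks reports 512-byte units of allocated space (what du shows), and allocation varies by filesystem. A quick sketch:

```python
import os
import tempfile

# Sketch: logical size vs. allocated size of the same file.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * 1000)
    os.close(fd)
    st = os.stat(path)
    print("logical bytes  :", st.st_size)          # always 1000
    print("allocated bytes:", st.st_blocks * 512)  # filesystem-dependent
finally:
    os.unlink(path)
```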

On Aug 11, 2011 9:13 PM, Feng He short...@gmail.com wrote:

Hello DBAs,

Though this is not exactly a mysql problem, but I think this list may
be helpful for my question.

I have dumped a mysql data file, and scp it to another host.
The current host is ubuntu-8.04, the remote host is ubuntu-9.10.
As you can see below:

The current host:

$ md5sum fcm.0812.sql.gz
ea08ec505c1b1724213538fed7483975  fcm.0812.sql.gz
$ lsb_release -r
Release:8.04
$ du -k fcm.0812.sql.gz
418080  fcm.0812.sql.gz

The remote host:

$ md5sum fcm.0812.sql.gz
ea08ec505c1b1724213538fed7483975  fcm.0812.sql.gz
$ lsb_release -r
Release:9.10
$ du -k fcm.0812.sql.gz
417672  fcm.0812.sql.gz


Though the files in two hosts have the same md5sum, but why they have
different size with 'du -k' showed?

Thanks.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net


Re: Too many aliases

2011-08-06 Thread Johnny Withers
It's simple... TTWWADI (that's the way we've always done it) is the only reason, I assume. 

Sent from my iPad

On Aug 5, 2011, at 2:39 PM, (Halász Sándor) h...@tbbs.net wrote:

 2011/08/04 10:21 -0500, Johnny Withers 
 http://en.wikipedia.org/wiki/Hungarian_notation
 
 
 On Thu, Aug 4, 2011 at 9:41 AM, Mike Diehl mdi...@diehlnet.com wrote
 Well, while we're on the subject of SQL style, can anyone tell me why I'm
 always seeing people prefixing the name of a table with something like
 tbl?
 
 
 Yeah, but why perpetuate such a thing in a language that has types (at least 
 'table' is distinct)? BCPL had no types at all.
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Too many aliases

2011-08-04 Thread Johnny Withers
http://en.wikipedia.org/wiki/Hungarian_notation


On Thu, Aug 4, 2011 at 9:41 AM, Mike Diehl mdi...@diehlnet.com wrote:

 Well, while we're on the subject of SQL style, can anyone tell me why I'm
 always seeing people prefixing the name of a table with something like
 tbl?

 For example:

 create table tblCUSTOMERS ( ... );

 Seems to me that you should probably know that CUSTOMERS is a table, or is
 it
 just me?

 Looking forward to your input.

 Mike.

 On Thursday 04 August 2011 6:43:55 am David Lerer wrote:
  I agree. I use the same column name in all tables where it has the same
  function - but I consistently add a suffix or prefix. And yes, it is the
  old fashion way David.
 
  -Original Message-
  From: h...@tbbs.net [mailto:h...@tbbs.net]
  Sent: Thursday, August 04, 2011 8:26 AM
  To: r...@grib.nl
  Cc: mysql@lists.mysql.com
  Subject: Re: Too many aliases
 
   2011/08/03 12:46 +0200, Rik Wasmus 
 
  But the
  main thing is it helps to distinguish tables  in joins having the same
  table
  more than once (and of course results from subqueries etc.):
 
  SELECT first.*
  FROM tablename first
  LEFT JOIN   tablename second
 ONfirst.some_id = second.some_id
 AND first.id != second.id
  WHERE second.id IS NULL
  
  Well, yes, here it is needful. But it seems to me from most of the
  examples that people here post, that they have the idea that it is the
  style always to use one-letter aliases, whether it is helpful or not.
 
  Now I do not do this, but I often see examples where a field for one
  purpose has in one table one name, and in another table another,
  slightly different, name, and then, too, I see alias used, although, in
  this case, no table name at all is needed. (I like to use the same field
  name in all tables where it has the same function.)

 --

 Take care and have fun,
 Mike Diehl.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Query bug

2011-07-24 Thread Johnny Withers
What's your vb code for outputting the results look like?

On Jul 24, 2011 8:22 AM, Velen Vydelingum ve...@biz-mu.com wrote:

Hi,

I have the following query which is fine when I run it from the mysql shell
screen:

select supplier_code,0,0,0,0,0-sum(amountpaid),0 from custpayments where
 paymentdate='2010-12-02' and grnno not in (Select sale_id from
saletrans_cons where paymode='Credit') group by supplier_code

but when I use VB to send this same query the results exclude the last
record.

Can anyone tell me why I'm having this issue with VB?

Thanks.


Velen


Re: stored procedure insert statement

2011-07-09 Thread Johnny Withers
It seems to me that your insert statement is trying to insert duplicate rows
into the storage table. This is why insert ignore and replace work.
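The effect is easy to reproduce; here is a minimal sketch using SQLite, whose INSERT OR IGNORE matches MySQL's INSERT IGNORE for this purpose (the schema is illustrative):

```python
import sqlite3

# Sketch: with a unique key in place, a plain INSERT of a duplicate row
# raises an error, while the IGNORE form silently skips it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE storage (k INTEGER PRIMARY KEY, v TEXT)")
db.execute("INSERT INTO storage VALUES (1, 'first')")

caught = None
try:
    db.execute("INSERT INTO storage VALUES (1, 'dup')")  # duplicate key
except sqlite3.IntegrityError as e:
    caught = e

db.execute("INSERT OR IGNORE INTO storage VALUES (1, 'dup')")     # skipped
db.execute("INSERT OR IGNORE INTO storage VALUES (2, 'second')")  # inserted

rows = db.execute("SELECT * FROM storage ORDER BY k").fetchall()
print(caught)  # UNIQUE constraint failed: storage.k
print(rows)    # [(1, 'first'), (2, 'second')]
```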

On Jul 9, 2011 3:49 AM, Igor Shevtsov nixofort...@googlemail.com wrote:

Hi all,
I can't explain the strange behaviour of the INSERT statement in the stored
procedure.
The idea is to generate a list based on the output of 3 INNER JOIN of
regularly updated tables.
Something like :

INSERT INTO storage
(column list)
SELECT
column list
FROM t1 JOIN t2
ON t1.x=t2.y
JOIN t3
ON t2.z=t3.w
WHERE CONDITIONS;

The procedure runs daily from cron and inserts the correct number of
output rows.

But after it runs and populates the storage table, I added new entries and
expect to find them in the storage table. Even though they were picked
up by the SELECT statement, they haven't been INSERTed into the storage table.
If I DELETE or TRUNCATE from the storage table and run the procedure, all
newly added entries and existing entries are there, but if I add new rows
and run the procedure again it doesn't update the table.
All tables have a unique identifier, so duplicate errors are impossible.
I use INNODB engine for all tables.
I understand that a stored procedure is a precompiled thing and I believe
it could be something to do with cache but I couldn't find proper
explanation or similar case online.
I found that when I use INSERT IGNORE INTO or REPLACE INTO, I receive
the proper result, with the new entries added to the storage table.
Any ideas guys?
Have a nice weekend ALL.
Cheers,
Igor

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net


Re: MySQL and set complements

2011-07-08 Thread Johnny Withers
Leonardo,

I think a new compound key on email_id and activity in the activities table
may help.

I'm not sure if this will help or not; it's hard to test without having a
large data set to test against.
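The idea can be sketched with SQLite's EXPLAIN QUERY PLAN standing in for MySQL's EXPLAIN (names follow the thread's schema loosely):

```python
import sqlite3

# Sketch: a compound index on (email_id, activity) lets the planner satisfy
# both equality predicates with a single index search.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE activities (user_id INTEGER, activity TEXT, email_id INTEGER);
    CREATE INDEX email_activity_idx ON activities(email_id, activity);
    INSERT INTO activities VALUES (1, 'email_sent', 100);
""")

plan = db.execute("""
    EXPLAIN QUERY PLAN
    SELECT user_id FROM activities
    WHERE email_id = 100 AND activity = 'email_sent'
""").fetchall()
print(plan[0][3])  # SEARCH ... USING INDEX email_activity_idx
```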


On Thu, Jul 7, 2011 at 9:07 PM, Leonardo Borges leonardoborges...@gmail.com
 wrote:

 Sure can:

 show create table activities;

 CREATE TABLE `activities` (
   `id` int(11) NOT NULL AUTO_INCREMENT,
   `user_id` int(11) DEFAULT NULL,
   `email` varchar(100) DEFAULT NULL,
   `country_iso` varchar(2) DEFAULT NULL,
   `tags` varchar(255) DEFAULT NULL,
   `postcode` int(11) DEFAULT NULL,
   `activity` varchar(100) DEFAULT NULL,
   `page_id` int(11) DEFAULT NULL,
   `donation_frequency` varchar(100) DEFAULT NULL,
   `email_id` int(11) DEFAULT NULL,
   `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE
 CURRENT_TIMESTAMP,
   PRIMARY KEY (`id`),
   KEY `user_idx` (`user_id`),
   KEY `email_idx` (`email_id`),
   KEY `activity_idx` (`activity`)
 ) ENGINE=MyISAM AUTO_INCREMENT=11331976 DEFAULT CHARSET=latin1


 And the explain:


 ++-+---+---+-+--+-+++-+
 | id | select_type | table | type  | possible_keys   | key
  | key_len | ref| rows   | Extra   |

 ++-+---+---+-+--+-+++-+
 |  1 | SIMPLE  | u | index | NULL|
 id_idx   | 5   | NULL   | 972064 | Using index |
 |  1 | SIMPLE  | a | ref   | user_idx,email_idx,activity_idx |
 user_idx | 5   | getup.u.id | 20 | Using where |

 ++-+---+---+-+--+-+++-+


 Cheers,
 Leonardo Borges
 www.leonardoborges.com


 On Fri, Jul 8, 2011 at 11:58 AM, Johnny Withers joh...@pixelated.netwrote:

 Can you post show create table for activity and explain output of the
 problem query?

 On Jul 7, 2011 8:51 PM, Leonardo Borges leonardoborges...@gmail.com
 wrote:

 Hello everyone,

 I have an increasingly popular web application running on top of mysql and
 due to its popularity, I'm running into performance issues. After
 carefully
 examining database indexes and tuning queries I was able to pin down the
 slowest part of the system.

 The app's got a user segmentation tool that allows you to filter users
 based
 on a range of criteria from which the slowest is: Select all users that
 did
 not receive the email of id 100

 To answer this question we turn to the activities table, which is
 basically
 a denormalized log of actions taken by the user in this format:
 user_id  | activity| email_id | ...
 10 | email_sent   | 100| ...
 10 | subscribed   | NULL | ...
 10 | email_open  | 100| ...


 Given this table and the question above, the usual way of finding out all
 users who did not receive this email is through the use of a left outer
 join, such as:

 select u.id
 from users u
 left outer join activities a
 on u.id = a.user_id
 and a.activity = 'email_sent'
 and a.email_id = 100
 where a.user_id is null

 That's all fine for medium-ish tables. However, our current activities table
 has over 13 million rows, slowing the hell out of this left outer join and
 taking about 52 seconds on my machine.

 What this query is trying to do is to get the relative complement of set
 A(users) to B(activities). As far as I know mysql doesn't support set
 subtraction, thus the reason for these queries being slow.

 Based on that I've setup a test database on Postgresql, which supports
 this
 very set operation and rewrote the query to look like this:

 select u.id
 from users u
 except
 select a.user_id
 from activities a
 where a.activity = 'email_sent'
 and a.email_id = 100;

 The fact that postgresql knows how to subtract sets brought this query
 down
 to only 4 seconds.

 My question then is: since this is a somewhat common query in our system,
 are there any workarounds I could use in mysql to improve things?

 I did find one myself; it's a bit convoluted and might not perform well
 under load, but the following SQL script gives me similar performance in
 mysql:

 create temporary table email_sent_100
 select a.user_id
 from user_activity_events a
 where a.activity = 'email_sent'


 and a.email_id = 100;

 create index user_id_idx on email_sent_100(user_id);   // this could
 potentially bring the runtime down in the case of a large temp table.

 select count(u.id)
 from users u
 left outer join email_sent_100 s
   on u.id = s.user_id
   where s.user_id is null;

 A lot more lines and a lot more complex, but does the job in this example.

 I'd appreciate your thoughts.

 Cheers,
 Leonardo Borges
 www.leonardoborges.com
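For reference, the three formulations discussed in this thread (LEFT JOIN ... IS NULL, EXCEPT, and NOT EXISTS, which MySQL does support) return the same rows; a toy sketch on SQLite:

```python
import sqlite3

# Minimal anti-join comparison; schema is a toy stand-in for the
# users/activities tables from the thread.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE activities (user_id INTEGER, activity TEXT, email_id INTEGER);
    INSERT INTO users (id) VALUES (1), (2), (3);
    INSERT INTO activities VALUES (1, 'email_sent', 100), (3, 'email_sent', 100);
""")

left_join = db.execute("""
    SELECT u.id FROM users u
    LEFT JOIN activities a
      ON u.id = a.user_id AND a.activity = 'email_sent' AND a.email_id = 100
    WHERE a.user_id IS NULL
""").fetchall()

except_q = db.execute("""
    SELECT id FROM users
    EXCEPT
    SELECT user_id FROM activities
    WHERE activity = 'email_sent' AND email_id = 100
""").fetchall()

# NOT EXISTS is often the most index-friendly spelling on MySQL.
not_exists = db.execute("""
    SELECT u.id FROM users u
    WHERE NOT EXISTS (
        SELECT 1 FROM activities a
        WHERE a.user_id = u.id AND a.activity = 'email_sent' AND a.email_id = 100
    )
""").fetchall()

print(left_join, except_q, not_exists)  # all three yield user 2
```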





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: MySQL and set complements

2011-07-08 Thread Johnny Withers
What did the explain output look like after the new index?


On Fri, Jul 8, 2011 at 8:53 AM, Leonardo Borges leonardoborges...@gmail.com
 wrote:

 Hi Johnny,

 I just gave that a try but it didn't help as I suspected.

 I still believe the problem is in mysql not being able to handle set
 subtractions. Therefore, it has to work harder, returning the rows that
 represent a non-match with NULL values in place so they can then be
 filtered by the WHERE clause.


 This type of query seems to be a corner case in mysql that one should be
 aware of when working with large datasets.

 Cheers,
 Leonardo Borges
 www.leonardoborges.com


 On Fri, Jul 8, 2011 at 11:18 PM, Johnny Withers joh...@pixelated.netwrote:

 Leonardo,

 I think a new compound key on email_id and activity in the activities
 table may help.

 I'm not sure if this will help or not; it's hard to test without having a
 large data set to test against.


 On Thu, Jul 7, 2011 at 9:07 PM, Leonardo Borges 
 leonardoborges...@gmail.com wrote:

 Sure can:

 show create table activities;

 CREATE TABLE `activities` (
   `id` int(11) NOT NULL AUTO_INCREMENT,
   `user_id` int(11) DEFAULT NULL,
   `email` varchar(100) DEFAULT NULL,
   `country_iso` varchar(2) DEFAULT NULL,
   `tags` varchar(255) DEFAULT NULL,
   `postcode` int(11) DEFAULT NULL,
   `activity` varchar(100) DEFAULT NULL,
   `page_id` int(11) DEFAULT NULL,
   `donation_frequency` varchar(100) DEFAULT NULL,
   `email_id` int(11) DEFAULT NULL,
   `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE
 CURRENT_TIMESTAMP,
   PRIMARY KEY (`id`),
   KEY `user_idx` (`user_id`),
   KEY `email_idx` (`email_id`),
   KEY `activity_idx` (`activity`)
 ) ENGINE=MyISAM AUTO_INCREMENT=11331976 DEFAULT CHARSET=latin1


 And the explain:


 +----+-------------+-------+-------+---------------------------------+----------+---------+------------+--------+-------------+
 | id | select_type | table | type  | possible_keys                   | key      | key_len | ref        | rows   | Extra       |
 +----+-------------+-------+-------+---------------------------------+----------+---------+------------+--------+-------------+
 |  1 | SIMPLE      | u     | index | NULL                            | id_idx   | 5       | NULL       | 972064 | Using index |
 |  1 | SIMPLE      | a     | ref   | user_idx,email_idx,activity_idx | user_idx | 5       | getup.u.id |     20 | Using where |
 +----+-------------+-------+-------+---------------------------------+----------+---------+------------+--------+-------------+


 Cheers,
 Leonardo Borges
 www.leonardoborges.com


 On Fri, Jul 8, 2011 at 11:58 AM, Johnny Withers joh...@pixelated.netwrote:

 Can you post show create table for activity and explain output of the
 problem query?

 On Jul 7, 2011 8:51 PM, Leonardo Borges leonardoborges...@gmail.com
 wrote:

 Hello everyone,

 I have an increasingly popular web application running on top of mysql
 and
 due to its popularity, I'm running into performance issues. After
 carefully
 examining database indexes and tuning queries I was able to pin down the
 slowest part of the system.

 The app's got a user segmentation tool that allows you to filter users
 based
 on a range of criteria from which the slowest is: Select all users that
 did
 not receive the email of id 100

 To answer this question we turn to the activities table, which is
 basically
 a denormalized log of actions taken by the user in this format:
 user_id | activity   | email_id | ...
 10      | email_sent | 100      | ...
 10      | subscribed | NULL     | ...
 10      | email_open | 100      | ...


 Given this table and the question above, the usual way of finding out
 all
 users who did not receive this email is through the use of a left outer
 join, such as:

 select u.id
 from users u
 left outer join activities a
 on u.id = a.user_id
 and a.activity = 'email_sent'
 and a.email_id = 100
 where a.user_id is null

 That's all fine for medium-ish tables. However, our current activities
 table has over 13 million rows, slowing the hell out of this left outer
 join, which takes about 52 seconds on my machine.

 What this query is trying to do is get the relative complement of set
 A (users) to B (activities). As far as I know mysql doesn't support set
 subtraction, hence these queries being slow.

 Based on that I've set up a test database on PostgreSQL, which supports
 this very set operation, and rewrote the query to look like this:

 select u.id
 from users u
 except
 select a.user_id
 from activities a
 where a.activity = 'email_sent'
 and a.email_id = 100;

 The fact that postgresql knows how to subtract sets brought this query
 down
 to only 4 seconds.

 My question then is: since this is a somewhat common query in our
 system,
 are there any workarounds I could use in mysql to improve things?

 I did find one myself; it's a bit convoluted and might not perform well
 under load, but the following sql

Re: MySQL and set complements

2011-07-07 Thread Johnny Withers
Can you post show create table for activity and explain output of the
problem query?

On Jul 7, 2011 8:51 PM, Leonardo Borges leonardoborges...@gmail.com
wrote:

Hello everyone,

I have an increasingly popular web application running on top of mysql and
due to its popularity, I'm running into performance issues. After carefully
examining database indexes and tuning queries I was able to pin down the
slowest part of the system.

The app's got a user segmentation tool that allows you to filter users based
on a range of criteria from which the slowest is: Select all users that did
not receive the email of id 100

To answer this question we turn to the activities table, which is basically
a denormalized log of actions taken by the user in this format:
user_id | activity   | email_id | ...
10      | email_sent | 100      | ...
10      | subscribed | NULL     | ...
10      | email_open | 100      | ...


Given this table and the question above, the usual way of finding out all
users who did not receive this email is through the use of a left outer
join, such as:

select u.id
from users u
left outer join activities a
on u.id = a.user_id
and a.activity = 'email_sent'
and a.email_id = 100
where a.user_id is null

That's all fine for medium-ish tables. However, our current activities table
has over 13 million rows, slowing the hell out of this left outer join,
which takes about 52 seconds on my machine.

What this query is trying to do is get the relative complement of set
A (users) to B (activities). As far as I know mysql doesn't support set
subtraction, hence these queries being slow.

Based on that I've set up a test database on PostgreSQL, which supports this
very set operation, and rewrote the query to look like this:

select u.id
from users u
except
select a.user_id
from activities a
where a.activity = 'email_sent'
and a.email_id = 100;

The fact that postgresql knows how to subtract sets brought this query down
to only 4 seconds.

My question then is: since this is a somewhat common query in our system,
are there any workarounds I could use in mysql to improve things?

I did find one myself; it's a bit convoluted and might not perform well
under load, but the following sql script gives me similar performance in
mysql:

create temporary table email_sent_100
select a.user_id
from user_activity_events a
where a.activity = 'email_sent'
and a.email_id = 100;

create index user_id_idx on email_sent_100(user_id);   -- this could
potentially bring the runtime down in the case of a large temp table.

select count(u.id)
from users u
left outer join email_sent_100 s
  on u.id = s.user_id
where s.user_id is null;

A lot more lines and a lot more complex, but does the job in this example.
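For reference, the two anti-join shapes discussed in this thread (the LEFT JOIN ... IS NULL form and an equivalent NOT EXISTS subquery) return the same rows. Here is a small sketch using Python's sqlite3 standard library; the table and column names follow the thread, but the data is made up purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE activities (user_id INTEGER, activity TEXT, email_id INTEGER);
    INSERT INTO users (id) VALUES (1), (2), (3), (4);
    INSERT INTO activities VALUES (1, 'email_sent', 100),
                                  (3, 'email_sent', 100),
                                  (2, 'email_open', 100);
""")

# LEFT JOIN anti-join: keep users with no matching 'email_sent' row.
# Note the NULL test belongs in WHERE, not in the ON clause.
left_join = conn.execute("""
    SELECT u.id FROM users u
    LEFT OUTER JOIN activities a
      ON u.id = a.user_id AND a.activity = 'email_sent' AND a.email_id = 100
    WHERE a.user_id IS NULL
    ORDER BY u.id
""").fetchall()

# Equivalent NOT EXISTS form, another common way to express the complement.
not_exists = conn.execute("""
    SELECT u.id FROM users u
    WHERE NOT EXISTS (SELECT 1 FROM activities a
                      WHERE a.user_id = u.id
                        AND a.activity = 'email_sent' AND a.email_id = 100)
    ORDER BY u.id
""").fetchall()

assert left_join == not_exists == [(2,), (4,)]
```

Whether NOT EXISTS is faster than the outer join depends on the optimizer and version, so it is worth testing both against real data.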

I'd appreciate your thoughts.

Cheers,
Leonardo Borges
www.leonardoborges.com


Lock wait timeout

2011-06-15 Thread Johnny Withers
I'm getting a Lock wait timeout exceeded error and I'm not sure why. I've
been trying to read this SHOW ENGINE INNODB STATUS output, but I don't
understand what it's trying to tell me.

Can someone give me a hand here?

---TRANSACTION 1942A27B, ACTIVE 124 sec, process no 4849, OS thread id
1311476032 starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 992511, query id 903933959 192.168.100.41 ecash Updating
UPDATE trans_item SET is_void=1,void_bus_date='2011-06-15
00:00:00',void_actual_dt='2011-06-15
19:46:57',void_store_id=0,void_user_id=1,void_drawer_id=0,void_hdr_new_status_id=12
WHERE company_id=30 AND id=47077
Trx read view will not see trx with id >= 1942A27C, sees < 19428B97
--- TRX HAS BEEN WAITING 124 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 233406 page no 2296 n bits 136 index `PRIMARY` of
table `745cash_ecash`.`trans_item` trx id 1942A27B lock_mode X locks rec but
not gap waiting
Record lock, heap no 63 PHYSICAL RECORD: n_fields 45; compact format; info
bits 0

This is the very first statement in a series of statements, so I'm not quite
sure why it would fail. I could understand a later statement failing due to
this one having the table locked or something.

MySQL v5.5.9-log

-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: LEFT JOIN and WHERE: identical or not and what is better?

2011-04-25 Thread Johnny Withers
The only difference once MySQL parses these two queries is the first one is
a LEFT JOIN, which will produce all records from the blogs table even if
there is no matching record in the users table. The second query produces an
INNER JOIN which means only rows with matching records in both tables will
be returned.

Which one is faster? Probably the second since NULLs do not have to be
considered -- probably not much faster though.
Which one is better? That'll depend on your needs, if you only need records
from both tables that have a matching row in the other, the second is
better. If you need all blogs, even those without a matching user (can that
even occur?), the first one is better.

I prefer to write the INNER JOIN out though because it leaves my WHERE
clause to do filtering.

JW
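The difference is easy to see against a toy dataset. This sketch uses Python's sqlite3 module; the Blogs/Users schema is assumed from the question, with one orphan blog added deliberately to show where the two forms diverge:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users (Id INTEGER PRIMARY KEY);
    CREATE TABLE Blogs (Id INTEGER PRIMARY KEY, UserId INTEGER);
    INSERT INTO Users (Id) VALUES (1), (2);
    -- blog 12 references a user that does not exist
    INSERT INTO Blogs (Id, UserId) VALUES (10, 1), (11, 2), (12, 99);
""")

# LEFT JOIN: every blog survives, matched or not.
left = conn.execute("""
    SELECT Blogs.Id FROM Blogs
    LEFT JOIN Users ON Blogs.UserId = Users.Id
    ORDER BY Blogs.Id
""").fetchall()

# Comma-join plus WHERE: effectively an INNER JOIN, so orphans are dropped.
inner = conn.execute("""
    SELECT Blogs.Id FROM Blogs, Users
    WHERE Blogs.UserId = Users.Id
    ORDER BY Blogs.Id
""").fetchall()

assert left == [(10,), (11,), (12,)]   # orphan blog kept
assert inner == [(10,), (11,)]         # orphan blog dropped
```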


On Mon, Apr 25, 2011 at 9:42 AM, Andre Polykanine an...@oire.org wrote:

 Hello everyone,
 Sorry for my beginner question. Actually I have been using MySql for a
 long  time  but  I  just  start  using some advanced things (earlier I
 accomplished  those  tasks  with  PHP),  so  I  will  be asking stupid
 questions, please bear with me.
 Here is the first one.
 We have two queries:
 SELECT `blogs`.* FROM `Blogs` LEFT JOIN `Users` ON
 `Blogs`.`UserId`=`Users`.`Id`;
 and the following one:
 SELECT   `Blogs`.*  FROM  `Blogs`,  `Users`  WHERE
 `Blogs`.`UserId`=`Users`.`Id`;

 1. Are they identical?
 2.  Which is better (faster, more optimal, more kosher, I mean, better
 style...)?
 Thanks!


 --
 With best regards from Ukraine,
 Andre
 Skype: Francophile
 Twitter: http://twitter.com/m_elensule
 Facebook: http://facebook.com/menelion


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


InnoDB and Memory Allocation

2011-04-19 Thread Johnny Withers
I hope someone can help me out here. I'm having trouble with some new
servers and memory allocation.

Some basic specs on the servers:
32GB total mem
2GB swap
64-bit RHEL
64-bit mysqld
overcommit_memory=2

mysql fails to start with 14GB innodb_buffer_pool_size
mysql will start with 12GB buffer pool setting
When overcommit_memory is set to 0, mysql starts with 26GB buffer pool.

I'm trying to overcome a problem where running with overcommit_memory=0
causes the oom-killer to eventually take over and start killing things. This
usually leaves the entire box unresponsive, and it has to be rebooted
from the console. I've read that setting overcommit_memory=2 will cause
malloc() to fail and the offending program to crash. I would rather this
happen than the oom-killer take over. Why can't mysql allocate more than
~14GB when this is set to 2 though? Does anyone else have any experience
with this?


Some more detailed specs about the machine and mysql:

free -m reports 28GB free (32 GB +/- buffers)

[root@p2383075 ~]# ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 270336
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 270336
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

/usr/libexec/mysqld: ELF 64-bit LSB executable, AMD x86-64, version 1
(SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for
GNU/Linux 2.6.9, stripped

mysql> show variables like '%version%'\G
*** 2. row ***
Variable_name: version
Value: 5.0.77-log
*** 4. row ***
Variable_name: version_comment
Value: Source distribution
*** 5. row ***
Variable_name: version_compile_machine
Value: x86_64
*** 6. row ***
Variable_name: version_compile_os
Value: redhat-linux-gnu

Thanks!


-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Convert unix time to 12:00 AM of that same day

2011-03-01 Thread Johnny Withers
You could use:

CONCAT(DATE_FORMAT(FROM_UNIXTIME(1298999201),'%Y-%m-%d'),' 00:00:00')

JW
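Note that "12:00 AM" is midnight, i.e. 00:00:00 in 24-hour notation. For the pure-arithmetic approach from the earlier thread, truncating with % 86400 yields midnight UTC; landing on local midnight also needs the zone offset. A quick sketch in Python (the fixed UTC-6 offset is an assumption chosen to match the 1298959200 target in the question; production code should consult real timezone rules):

```python
DAY = 86400
ts = 1298999201                 # 2011-03-01 11:06:41 AM in a UTC-6 zone

utc_midnight = ts - ts % DAY    # truncates to midnight UTC...
assert utc_midnight == 1298937600

# ...which is 6 hours off from local midnight. Shift into local time,
# truncate, then shift back (assumed fixed UTC-6 offset, no DST handling).
offset = -6 * 3600
local_midnight = (ts + offset) - (ts + offset) % DAY - offset
assert local_midnight == 1298959200   # 3/1/2011 12:00:00 AM local time
```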

On Tue, Mar 1, 2011 at 11:58 AM, Bryan Cantwell bcantw...@firescope.comwrote:

 It was of course a typo, and even with the correct number isn't the answer


 On 03/01/2011 11:47 AM, Claudio Nanni wrote:


 You can start by using 60*60*24=86400
 ;)

 On Mar 1, 2011 6:17 PM, Bryan Cantwell bcantw...@firescope.com wrote:
  I'd asked before how to convert a unix timestamp to the hour that it is
  in (and got the perfect answer) :
  1298999201 = 3/1/2011 11:06:41 AM
  (1298999201 - (1298999201 % 3600)) = 3/1/2011 11:00:00 AM
 
  Now getting the timestamp converted to midnight of that same day isn't
  as simple as:
  1298999201 - (1298999201 % 85400)
  That just gives me a unix time from yesterday...
 
  How can I convert 1298999201 (3/1/2011 11:06:41 AM) to 1298959200
  (3/1/2011 12:00:00 AM)?
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/mysql?unsub=claudio.na...@gmail.com
 






-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: centos server's mysql version's problem

2011-02-27 Thread Johnny Withers
I think the version CentOS (5.5) wants to use is 5.0.77-log. I don't think
CentOS will be updating the repository 5.5 uses for updates to any later
version of mysql.

I just completed two MySQL 5.5 installs on some RHEL machines by using the
remi repository. I followed this guide here:

http://www.if-not-true-then-false.com/2010/install-mysql-on-fedora-centos-red-hat-rhel/

Install
went well, server runs great. I'd suggest going with 5.5.

JW

On Sun, Feb 27, 2011 at 11:30 AM, Yang Yang dapiy...@gmail.com wrote:

 hi, I am new at a company and will set up 1-3 servers as a mysql
 datacenter

 I have 2 questions and hope for a response

 1. on a centos or linux server, should I use the 5.0 or 5.1 version? which
 version is better and can handle heavy php+mysql traffic?

 2. how should I configure my.cnf, and where can I find some information?

 thanks all, hope to receive a response




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: centos server's mysql version's problem

2011-02-27 Thread Johnny Withers
The only 5.1 version I tried was Percona's. I only tested it in
development. Can't speak for MySQL's version. 5.5 is GA now; it's supposed
to have many improvements for multiprocessor machines over 5.0 and 5.1.
This will be my first production install of 5.5, real traffic has not hit
yet, but it performed well in testing. Our 5.0 installs are handling about
25gb of data and 1200qps during peak hours.

On Feb 27, 2011 2:29 PM, Yang Yang dapiy...@gmail.com wrote:


thanks johnny

what about 5.1, does it perform better than 5.0 on centos under large
traffic?

5.1 is newer, but I know newer is not always better


thanks again
2011/2/28 Johnny Withers joh...@pixelated.net



 I think the version CentOS (5.5) wants to use is 5.0.77-log. I don't think
CentOS will be updat...


Re: Converting INNODB to file-per-table?

2011-02-11 Thread Johnny Withers
Dump the entire DB, drop the DB, restore the DB.

On Fri, Feb 11, 2011 at 11:53 AM, Jan Steinman j...@bytesmiths.com wrote:

 Our incremental backups seem to be filling with instances of ib_logfile1,
 ib_logfile2, and ibdata1.

 I know that changing a single byte in a single INNODB table causes these
 files to be touched.

 I put innodb_file_per_table in /etc/my.cnf, but apparently, that only
 causes new databases to be file per table, and it is older databases that
 are being touched in a minor way daily, causing gigabytes to be backed up
 needlessly.

 Some time ago, someone posted a way to convert existing INNODB tables to
 file per table, but I am unable to find that.

 Can someone please post that procedure again?

 (I also welcome any you shouldn't be doing it that way comments, as long
 as they show a better way... :-)

 This is for a fairly low-volume server, running on a Mac Mini with two
 500GB disks.

 Thanks!

 
 In summary, the idea is to give all of the information to help others to
 judge the value of your contribution; not just the information that leads to
 judgement in one particular direction or another. -- Richard P. Feynman
  Jan Steinman, EcoReality Co-op 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: High disk usage

2011-02-10 Thread Johnny Withers
Do your queries stay stuck in any particular status? (Writing to net,
Opening/Closing tables, Copying to temp table?)
What kind of disk sub system do you have? What other hardware do you have?
What is the primary engine type?

JW

On Thu, Feb 10, 2011 at 7:39 AM, Santiago Soares
santiagosoa...@gmail.comwrote:

 I'm not sure I made myself clear:
 The problem is not disk usage, but CPU time waiting for I/O, which is very
 high.

 Any ideas?

 Santiago Soares
 Fone: (41) 8488-0537


 On Thu, Feb 10, 2011 at 11:15 AM, Santiago Soares
 santiagosoa...@gmail.comwrote:

  Hello,
 
  We have a database with about 160k tables. This database is causing very
  high disk usage.
  I'd like to know if there is anything we can do to optimize the database,
  in order to reduce disk usage.
 
  With a show global status I see a strange behavior:
  | Open_files| 286   |
  | Opened_files  | 1050743   |
 
  At this time the database has just started (about 10 minutes).
 
  Here is our my.cnf:
 
  [mysqld2]
  socket=/var/lib/mysql/mysql2.sock
  port   = 3306
  bind-address = 172.31.235.52
  pid-file   = /var/lib/mysql/mysql2.pid
  datadir= /var/lib/mysql/DB2
  language   = /usr/share/mysql/english
  user=mysql
  old_passwords=1
 
  log-slow-queries = /var/log/mysql2-slow.log
  innodb_file_per_table
  query_cache_size = 128M
  join_buffer_size = 3M
  tmp_table_size = 256M
  max_heap_table_size = 256M
  thread_cache_size = 4
  table_cache = 6608
  innodb_buffer_pool_size = 2G
  long_query_time = 10
  log-error=/var/log/mysqld2.log
 
  tmpdir  = /tmp/tmpdir
 
  Is there anything I can do?
 
  Thank you.
 
  Santiago
 
 
 




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Incorrect key file for table

2011-01-14 Thread Johnny Withers
The result of your query without the join probably exceeded your
tmp_table_size variable. When this occurs, MySQL starts writing the temp
table to disk, which can produce the incorrect key file error. (I think.)

JW


On Fri, Jan 14, 2011 at 3:48 PM, Jørn Dahl-Stamnes
sq...@dahl-stamnes.netwrote:

 On Friday 14 January 2011 14:03, Johan De Meersman wrote:
  Check your free diskspace on your temp location.

 About 900+ Mb free.

 But I don't think that a full filesystem was the problem (I don't think the
 mysqld server was able to fill the disk with 900 Mb in 1-2 seconds). After
 some debugging I found that it was an error in the SQL statment:

 mysql> describe SELECT images.* FROM images, albums, accesses WHERE
 images.IMAGE_CATEGORY_ID=22 AND albums.ACCESS_ID=accesses.ID;

 +----+-------------+----------+-------+---------------+-------------+---------+-----------------------+------+--------------------------------+
 | id | select_type | table    | type  | possible_keys | key         | key_len | ref                   | rows | Extra                          |
 +----+-------------+----------+-------+---------------+-------------+---------+-----------------------+------+--------------------------------+
 |  1 | SIMPLE      | accesses | index | PRIMARY       | PRIMARY     | 4       | NULL                  |    3 | Using index                    |
 |  1 | SIMPLE      | albums   | ref   | albums_FI_4   | albums_FI_4 | 4       | photo_dev.accesses.id |   68 | Using index                    |
 |  1 | SIMPLE      | images   | ALL   | images_FI_2   | NULL        | NULL    | NULL                  | 9712 | Using where; Using join buffer |
 +----+-------------+----------+-------+---------------+-------------+---------+-----------------------+------+--------------------------------+

 A join was missing. Strange that this passed the syntax check, because the
 select statement does not make sense (in the application).

 It should contain an AND images.ALBUM_ID=albums.ID. When I fixed this, it
 worked :)

 Thanx anyway.

 --
 Jørn Dahl-Stamnes
 homepage: http://www.dahl-stamnes.net/dahls/

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: I/O read performance

2011-01-13 Thread Johnny Withers
(Assuming you are on Linux)

Take a look at the output of: iostat -xk 60

The output will update every 60 secs. Take a look at the value in the %util
column after a few updates, if it's around 90% you have become disk bound
and you'll need to figure out a way to decrease some load on the disk. You
may need to increase innodb_buffer_pool_size if your DB is mostly
innodb tables. Add more RAM to the server if needed. You may also want to
add more disks to your disk array (assuming you have one). Move logs from
this disk array to another (binary logs, query logs, etc).

You can also look at your rkB/s (reads) and wkB/s (writes). If you are read
heavy, you may need to change the kind of RAID you are using for one that
performs better when reading. Same goes for write heavy, pick the best RAID
for your use.

HTH




On Thu, Jan 13, 2011 at 1:44 PM, Steve Staples sstap...@mnsi.net wrote:

 Hello,

 I've been noticing a little lag in my application lately,  it seems as
 if 1 table in 1 database is getting slower to read from.   Mind you,
 that table is being accessed a LOT of times per second every hour of
 every day, and then the application searches on this same table too.

 In my sandbox, it is fast to search (as there is no other reads/queries
 on that table), so i don't think it is disk i/o (but you never know
 right?).  I've also double checked all the indexing, to insure indexes
 are used.

 What I was wondering is, are the reads/queries simultaneous, or are they
 sequential?  would symlinking the file to another db make any difference
 (or is that even possible)?

 any insight would be appreciated, or even any ideas on what I may do to
 increase the performance, or even how to measure where the issue could
 be would help as well.

 Steve


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Back-up Plan for Large Database

2010-12-27 Thread Johnny Withers
Might want to check out LVM snapshots:

http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

Using
a slave to pull backups from is something I would not do. You have no
guarantee the data on the slave matches the master 100%.



On Mon, Dec 27, 2010 at 5:19 AM, Adarsh Sharma adarsh.sha...@orkash.comwrote:

 Dear all,

 Back-up is the most important thing that need special attention. We have a
 production Mysql Server of near about 200 GB data and expect to grow @ 50 GB
 per month.

 Our application continuously writes data in Mysql tables.

 I followed some Links but want some more thoughts to choose best option.

 http://www.zmanda.com/mysql-backup-considerations.html

 I also read about RAID and some other features too.

 But I want to know what is the best back up plan for 24/7 running Large
 Production Mysql Cluster.





 Thanks  Regards

 Adarsh Sharma

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Back-up Plan for Large Database

2010-12-27 Thread Johnny Withers
You could do that, if you have 5.5+

Still no guarantee that the slave matches the master 100%, even says so in
the manual. In my opinion, you shouldn't pull backups from the slave. YMMV.


On Mon, Dec 27, 2010 at 11:14 AM, Wagner Bianchi
wagnerbianch...@gmail.comwrote:

 *You have no guarantee the data on the slave matches the master 100%.*
 *
 *
 Try it with *semi-synchronous* replication.

 Best regards.
 --
 Wagner Bianchi


 2010/12/27 Johnny Withers joh...@pixelated.net

 Might want to check out LVM snapshots:


 http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

 
 Using
 a slave to pull backups from is something I would not do. You have no
 guarantee the data on the slave matches the master 100%.



 On Mon, Dec 27, 2010 at 5:19 AM, Adarsh Sharma adarsh.sha...@orkash.com
 wrote:

  Dear all,
 
  Back-up is the most important thing that need special attention. We have
 a
  production Mysql Server of near about 200 GB data and expect to grow @
 50 GB
  per month.
 
  Our application continuously writes data in Mysql tables.
 
  I followed some Links but want some more thoughts to choose best option.
 
  http://www.zmanda.com/mysql-backup-considerations.html
 
  I also read about RAID and some other features too.
 
  But I want to know what is the best back up plan for 24/7 running Large
  Production Mysql Cluster.
 
 
 
 
 
  Thanks  Regards
 
  Adarsh Sharma
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/mysql?unsub=joh...@pixelated.net
 
 


 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: odd problem with select as statement

2010-12-20 Thread Johnny Withers
I can't tell you 'why' it is occurring when the field name begins with 4E5,
but you can solve your problem by enclosing all your field names
in backticks ( ` ).

IE:

SELECT field AS `4E5664736F400E8B482EA7AA67853D13`

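For what it's worth, the likely cause is that MySQL's lexer treats a token of digits followed by E and more digits as a number in scientific notation, so an unquoted alias beginning with 4E5... starts to scan as a numeric literal, while 4E alone cannot be a number and scans as an identifier. Python's float parser follows the same exponent convention, which makes for a quick illustration (the analogy to MySQL's lexer, not its actual code, is the assumption here):

```python
# "4E5" is valid scientific notation (4 * 10**5), so a lexer that accepts
# exponent-form numeric literals consumes it as a number, not a column name.
assert float("4E5") == 400000.0

def is_numeric_literal(token):
    """Return True if the token parses as a floating-point literal."""
    try:
        float(token)
        return True
    except ValueError:
        return False

# Matches the observation in the thread: the alias 4E worked unquoted
# (not a valid number), while anything starting with 4E5 failed.
assert is_numeric_literal("4E5")
assert not is_numeric_literal("4E")
```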


On Mon, Dec 20, 2010 at 11:43 AM, Ramsey, Robert L
robert-ram...@uiowa.eduwrote:

 I am having the hardest time getting a select as statement right.  Here is
 the full query:

 select
 SUM(IF(image='EE0840D00E2ED8A317E5FA9899C48C19',1,0)) as
 EE0840D00E2ED8A317E5FA9899C48C19,
 SUM(IF(image='235C7987796D5B7CEBF56FBDA2BF7815',1,0)) as
 235C7987796D5B7CEBF56FBDA2BF7815,
 SUM(IF(image='96DC0562ED6E6F7FE789A18E09BC5889',1,0)) as
 96DC0562ED6E6F7FE789A18E09BC5889,
 SUM(IF(image='D8B0EA710D2EF408391132F451AE724A',1,0)) as
 D8B0EA710D2EF408391132F451AE724A,
 SUM(IF(image='018C4DB7229D7D2BEB040D241739B784',1,0)) as
 018C4DB7229D7D2BEB040D241739B784,
 SUM(IF(image='98DE1FCA50AC9CE6E0FEA25BAB0177FE',1,0)) as
 98DE1FCA50AC9CE6E0FEA25BAB0177FE,
 SUM(IF(image='4E5664736F400E8B482EA7AA67853D13',1,0)) as
 4E5664736F400E8B482EA7AA67853D13,   --offending line
 SUM(IF(image='FEB810A43A1B275605BD6B69F444700C',1,0)) as
 FEB810A43A1B275605BD6B69F444700C
 from dsrssfeed

 If I remove that one line, the query works fine.  If I do:

 select  SUM(IF(image='4E5664736F400E8B482EA7AA67853D13',1,0)) as 4E from
 dsrssfeed ;

 it works.  But these fail:

 select  SUM(IF(image='4E5664736F400E8B482EA7AA67853D13',1,0)) as
 4E5664736F400E8B482EA7AA67853D13 from dsrssfeed ;
 select  SUM(IF(image='4E5664736F400E8B482EA7AA67853D13',1,0)) as 4E5 from
 dsrssfeed ;

 It can't be field name length, since even 4E5 fails, the field name can
 start with a number since 4E succeeds.

 Any ideas?

 The goal is to see what arbitrary images have information associated with
 them.  The table has two fields:  image is a UID that is the primary key,
 and caption which is a varchar(255) that has information about the image.
  Images are added and deleted from the table as they are changed on a web
 page.  The UID is generated by a third party program that I have to
 interface with and have no control over.  An array of image UIDs is sent to
 the php script and the script needs to determine which UIDs are present in
 the table.  Rather than make N number of individual queries as I iterate
 through the array, I iterate through the array and build the query on the
 fly to make one query.  Then I iterate through the array again and check the
 value in the field.  1 means the UID has an entry, 0 means it doesn't.  I
 thought doing 1 mysql call would be more efficient than lots of calls as I
 iterate through the array.  But since there will probably never be more than
 100 images in the table at any one time, it may not make any difference.
  But now I'm just curious as to why this is happening.

 Thanks,

 Bob







-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Problem with WHERE .. IN

2010-12-20 Thread Johnny Withers
The sub-select only returns a single row, so IN (...) is only looking at a
single string value; it doesn't expand into IN (5,7,11).

On Mon, Dec 20, 2010 at 11:52 AM, muhammad subair msub...@gmail.com wrote:

 I have table post (id INT and parent VARCHAR)

 +--+-+
 | id   | parent  |
 +--+-+
 |1 | 0   |
 |2 | 0   |
 |3 | 1   |
 |4 | 0   |
 |5 | 1   |
 |6 | 0   |
 |7 | 1,5 |
 |8 | 1,5 |
 |9 | 1,5 |
 |   10 | 5,7,11  |
 |   11 | 1,5,7,10|
 |   12 | 1,5,7,10,11 |
 +--+-+

 SELECT id FROM post WHERE id IN (SELECT parent FROM post WHERE id =
 10);
 +--+
 | id   |
 +--+
 |5 |
 +--+

 whereas the results I want is

 +--+
 | id   |
 +--+
 |5 |
 |7 |
 |   11 |
 +--+

 Please tell me, where is wrong

 Thanks  Regards
 --
 Muhammad Subair
 +62 8176583311




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Records not being displayed

2010-12-19 Thread Johnny Withers
Your very first mysql_fetch_array consumes the first result.
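The same pattern can be reproduced with any cursor API: fetching one row before the display loop makes the loop start at the second row. A minimal sketch in Python's sqlite3 (the beer table is reduced to one column purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE beer (beername TEXT)")
conn.executemany("INSERT INTO beer VALUES (?)",
                 [("Asahi",), ("Corona",), ("Guinness",)])

cur = conn.execute("SELECT beername FROM beer ORDER BY beername")
first = cur.fetchone()            # consumes the first row, as in the PHP code
shown = [row[0] for row in cur]   # the "display loop" now starts at row two

assert first == ("Asahi",)
assert shown == ["Corona", "Guinness"]   # 'Asahi' never reaches the output
```

The fix is the same in both languages: either drop the extra fetch before the loop, or use the already-fetched row instead of discarding it.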

On Dec 19, 2010 8:19 AM, Gary gp...@paulgdesigns.com wrote:

I have an issue that the first record in a query is not being displayed.  It
seems that the first row in alphabetical order is not being brought to the
screen.

I have a MySQL DB that lists beers.  I have a column for 'type' of beer
(imported, domestic, craft, light). The queries:

$result = MySQL_query(SELECT * FROM beer WHERE type = 'imported' AND stock
= 'YES' ORDER by beername );

When I run the query

if (mysql_num_rows($result) == !'0') {
   $row = mysql_fetch_array($result);

 echo '<h3>Imported Beers</h3>';
 echo '<table width=100% border=0 cellspacing=1 cellpadding=1
id=tableone summary="">

 <th>Beer</th>
 <th>Maker</th>
 <th>Type</th>
 <th>Singles</th>
 <th>6-Packs</th>
 <th>Cans</th>
 <th>Bottles</th>
 <th>Draft</th>
 <th>Size</th>
 <th>Description</th>';

 while ($row = mysql_fetch_array($result)) {

 echo '<tr> <td>' . $row['beername'].'</td>';
 echo '<td>' . $row['manu'] . '</td>';
 echo '<td>' . $row['type'] . '</td>';
 echo '<td width=40>' . $row['singles'] . '</td>';
 echo '<td width=20>' . $row['six'] . '</td>';
 echo '<td width=40>' . $row['can'] . '</td>';
 echo '<td width=20>' . $row['bottles'] . '</td>';
 echo '<td width=40>' . $row['tap'] . '</td>';
 echo '<td>' . $row['size'] . '</td>';
 echo '<td>' . $row['descrip'] . '</td>';
'</tr>';
   }
 echo '</table><br />';

}

All but the first row in alphabetical order are displayed properly.

Can anyone tell me where I am going wrong?
--
Gary








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net


Re: a query not using index

2010-11-09 Thread Johnny Withers
Would a compound index on both startnum and endnum be a better choice?

JW

On Tuesday, November 9, 2010, Aveek Misra ave...@yahoo-inc.com wrote:
 Probably indexes need to be rebuilt using myisamchk after you changed the 
 data type of the index columns. Apart from that I can't see why your query is 
 not using the indexes. Is it possible that the cardinality of the column 
 values is so low that indexes are not being used? You could try and run an
 ANALYZE TABLE (or myisamchk -a for MyISAM tables) and then a SHOW INDEX to
 see the cardinality information for these key columns.

 Thanks
 Aveek

 On Nov 9, 2010, at 3:43 PM, wroxdb wrote:

 Thanks for the idea.
 I have changed the datatype to bigint, the result is not changed.

 mysql> desc select * from ip_test where startNum <= 3061579775 and
 endNum >= 3061579775;
 +----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+
 | id | select_type | table   | type | possible_keys   | key  | key_len | ref  | rows   | Extra       |
 +----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+
 |  1 | SIMPLE      | ip_test | ALL  | startNum,endNum | NULL | NULL    | NULL | 396528 | Using where |
 +----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+


 CREATE TABLE `ip_test` (
  `startNum` bigint(20) NOT NULL,
  `endNum` bigint(20) NOT NULL,
  `country` varchar(50) NOT NULL default '',
  `province` varchar(50) NOT NULL default '',
  `city` varchar(50) NOT NULL default '',
  `isp` varchar(100) default NULL,
  KEY `startNum` (`startNum`),
  KEY `endNum` (`endNum`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8 |




 On Nov 9, 2010 at 5:20 PM, Aveek Misra ave...@yahoo-inc.com wrote:
 I don't see how BETWEEN is not equivalent to (startNum <= and endNum >=). 
 Of course please try and let us know if that resolves the issue. But if it 
 doesn't, I suspect it is because the indexes are created on columns which 
 are floating point data type. That's because floating point numbers are 
 approximate and not stored as exact values. Attempts to treat double values 
 as exact in comparison may lead to the kind of issues that you are getting. 
 I could be wrong though; but if Johan's trick does not work, you might try 
 and change the data type to DECIMAL to see if it helps (or BIGINT if your 
 numbers are not using any digits after the decimal since BIGINT and DOUBLE 
 both use 8 bytes for storage).



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net



-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net




Re: How do I GRANT SELECT to mysql.time_zone_name for ANYONE?!

2010-10-15 Thread Johnny Withers
I think this is one of those times you would update the mysql.user table
directly, then flush privileges.
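A sketch of what that direct edit might look like. Note that table-level privileges actually live in mysql.tables_priv rather than mysql.user; the column names follow the 5.x grant tables, and the grantor value is an assumption:

```sql
-- Hedged sketch: give the anonymous user ''@'%' SELECT on the one table.
INSERT INTO mysql.tables_priv
  (Host, Db, User, Table_name, Grantor, Table_priv)
VALUES
  ('%', 'mysql', '', 'time_zone_name', 'root@localhost', 'Select');
FLUSH PRIVILEGES;
```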

JW


On Thu, Oct 14, 2010 at 6:46 PM, Daevid Vincent dae...@daevid.com wrote:

 I'm pulling my hair out. How do I GRANT the SELECT ability to ANY USER for
 the very specific mysql.time_zone_name table?? I don't want to GRANT it to
 every individual user manually, I want one single GRANT that encompasses
 every user simultaneously.

 I've tried all of these, and they all are valid in mySQL but none of them
 actually have the desired result.

 GRANT SELECT ON `mysql`.`time_zone_name` TO '%';
 GRANT SELECT ON `mysql`.`time_zone_name` TO '%'@'%';
 GRANT SELECT ON `mysql`.`time_zone_name` TO ''@'%';
 GRANT SELECT ON `mysql`.`time_zone_name` TO '';
 GRANT SELECT ON `mysql`.`time_zone_name`; (this one fails)

 Here are the results:

 SELECT * FROM mysql.time_zone_name LIMIT 0, 5000

 Error Code : 1142
 SELECT command denied to user 'daevid'@'mycompany.com' for table
 'time_zone_name'


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Can this be done with a single query?

2010-10-12 Thread Johnny Withers
I would try:

SELECT DISTINCT(e.src_ip)
FROM event AS e
LEFT JOIN mappings AS m ON e.src_ip=m.src_ip
WHERE e.timestamp BETWEEN '2010-10-11 00:00:00' AND '2010-10-12 00:00:00'
AND INET_NTOA(e.src_ip) NOT LIKE '10.%.%.%'
AND INET_NTOA(e.src_ip) NOT LIKE '172.16.%.%'
AND INET_NTOA(e.src_ip) NOT LIKE '192.168.%.%'
AND m.src_ip IS NULL
;

I would also modify the where clause to use:

AND src_ip NOT BETWEEN INET_ATON('10.0.0.0') AND INET_ATON('10.255.255.255')
AND src_ip NOT BETWEEN INET_ATON('172.16.0.0') AND INET_ATON('172.16.255.255')
AND src_ip NOT BETWEEN INET_ATON('192.168.0.0') AND
INET_ATON('192.168.255.255')

instead of

AND INET_NTOA(src_ip) NOT LIKE '10.%.%.%'
AND INET_NTOA(src_ip) NOT LIKE '172.16.%.%'
AND INET_NTOA(src_ip) NOT LIKE '192.168.%.%'

You should also ensure there is an index on src_ip in events and mappings
tables.

Using the INET_NTOA() function on the src_ip column will prevent index usage
during the query.
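Putting those suggestions together, the whole query might read as follows (an untested sketch; note that the full RFC 1918 range for 172.16/12 actually runs through 172.31.255.255, wider than the original pattern):

```sql
SELECT DISTINCT e.src_ip
FROM event AS e
LEFT JOIN mappings AS m ON e.src_ip = m.src_ip
WHERE e.timestamp BETWEEN '2010-10-11 00:00:00' AND '2010-10-12 00:00:00'
  AND e.src_ip NOT BETWEEN INET_ATON('10.0.0.0')    AND INET_ATON('10.255.255.255')
  AND e.src_ip NOT BETWEEN INET_ATON('172.16.0.0')  AND INET_ATON('172.31.255.255')
  AND e.src_ip NOT BETWEEN INET_ATON('192.168.0.0') AND INET_ATON('192.168.255.255')
  AND m.src_ip IS NULL;  -- keep only rows with no match in mappings
```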

-JW


On Tue, Oct 12, 2010 at 5:19 AM, Paul Halliday paul.halli...@gmail.com wrote:

 I have 2 tables: events and mappings.

 what I want to do is something like:

 SELECT DISTINCT(src_ip) FROM event WHERE timestamp BETWEEN '2010-10-11
 00:00:00' AND '2010-10-12 00:00:00' AND INET_NTOA(src_ip) NOT LIKE
 '10.%.%.%' AND INET_NTOA(src_ip) NOT LIKE '172.16.%.%' AND
 INET_NTOA(src_ip)
 NOT LIKE '192.168.%.%';

 but, within that somewhere also check to see if src_ip exists in mappings.
 If it does, do not return it.

 Is this possible?

 Thanks.
 --
 Paul Halliday
 Ideation | Individualization | Learner | Achiever | Analytical
 http://www.pintumbler.org




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: How to get hanging 1:M table rows as single column in main query?

2010-09-29 Thread Johnny Withers
GROUP_CONCAT() ?

And group by id_fmr ?
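Spelled out against the schema below, that might look like this (a sketch, untested):

```sql
SELECT f.id_fmr, f.fmr_number, f.fmr_system, f.fmr_station, f.created_ts,
       GROUP_CONCAT(s.seat ORDER BY s.seat) AS seats  -- e.g. 35C,35D,35E
FROM fmr AS f
JOIN fmr_has_seat USING (id_fmr)
JOIN dim_seat AS s USING (id_dim_seat)
WHERE f.id_fmr = 3
GROUP BY f.id_fmr;
```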

JW


On Wed, Sep 29, 2010 at 2:38 PM, Daevid Vincent dae...@daevid.com wrote:

 Given three basic tables. An fmr table which has Field Maintenance
 Reports, a Seat table and a hanging or glue table to map Seats to FMRs.
 [See below]

 How do I get all the Seats to be in a single row with the FMR data?

 If I make this kind of query, they come in as separate rows:

 SELECT
`id_fmr`,
`fmr_number`,
`fmr_system`,
`fmr_station`,
`created_ts`,
 `seat`
 FROM `fmr`
  JOIN `fmr_has_seat` USING (id_fmr)
  JOIN `dim_seat` USING (id_dim_seat)
 WHERE id_fmr = 3;

 id_fmr  fmr_number  fmr_system  fmr_station   created_ts  seat
 --  --  --  ---  ---  
 3  320237274 2333  JFK  2010-09-24 04:35:31  35C
 3  320237274 2333  JFK  2010-09-24 04:35:31  35D
 3  320237274 2333  JFK  2010-09-24 04:35:31  35E

 I want something more like:

 id_fmr  fmr_number  fmr_system  fmr_station   created_ts  seat
 --  --  --  ---  ---
 ---
 3  320237274 2333  JFK  2010-09-24 04:35:31
 35C,35D,35E


 Now, I'm going to be showing a few thousand FMR rows (and ideally their
 seats).

 What I do now is use PHP to pull the FMR records that match a certain
 criteria/filter.
 Then I pull in the entire dim_seats as an array and store it in a session
 since it's not going to change ever. Then I loop over all the id_fmr that I
 have pulled and look up in the fmr_has_seat table by id_fmr and implode()
 the seats from the session array. It saves me a few joins and gets the job
 done, but I keep feeling like there's a better way to do it.

 I'm thinking there's some magic with a subselect and concat or something in
 SQL, but then I wonder if that's any more efficient as mySQL still has to
 do two SELECTs per FMR row. This feels to me like a common problem and
 there must be an optimal mySQL way of doing it. Hanging tables of 1:M
 relationships are used everywhere.

 ===
 ==

 CREATE TABLE `fmr` (
  `id_fmr` int(11) NOT NULL auto_increment,
  `fmr_number` varchar(32) NOT NULL default '',
  `fmr_system` smallint(6) default NULL,
  `fmr_station` varchar(4) NOT NULL default '',
  `created_ts` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id_fmr`)
 ) ENGINE=InnoDB

 id_fmr  fmr_number  fmr_system  fmr_station created_ts
 --  --  --  ---  ---
 1  319235F2A 2333  JFK  2010-09-24 04:35:31
 2  319235F29 2333  JFK  2010-09-24 04:35:31
 3  320237274 2333  JFK  2010-09-24 04:35:31  ---
 4  32023726D 2333  JFK  2010-09-24 04:35:31
 5  32023725A 2333  JFK  2010-09-24 04:35:31
 6  32023724F 2333  JFK  2010-09-24 04:35:31
 7  320237241 2333  LAX  2010-09-24 04:35:31
 8  32023723A 2333  LAX  2010-09-24 04:35:31
 9  320237232 2333  JFK  2010-09-24 04:35:31
10  320237230 2333  JFK  2010-09-24 04:35:31
 ....   ..   ..   ..

 CREATE TABLE `fmr_has_seat` (
  `id_fmr` int(11) NOT NULL auto_increment,
  `id_dim_seat` int(10) unsigned NOT NULL,
  PRIMARY KEY  (`id_fmr`,`id_dim_seat`),
  KEY `id_dim_seat` (`id_dim_seat`),
  CONSTRAINT `fmr_has_seat_ibfk_1` FOREIGN KEY (`id_fmr`) REFERENCES `fmr`
 (`id_fmr`) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT `fmr_has_seat_ibfk_2` FOREIGN KEY (`id_dim_seat`) REFERENCES
 `dim_seat` (`id_dim_seat`) ON DELETE CASCADE ON UPDATE CASCADE
 ) ENGINE=InnoDB

 id_fmr  id_dim_seat
 --  ---
 3  888 ---
 3  889 ---
 3  890 ---
 4  422
 4  423
 4  551
 4  552
 4  553
 5  420
 5  550
 5  628
 5  629
 5  706
 5  707
 5  811
  ...   ...

 CREATE TABLE `dim_seat` (
  `id_dim_seat` int(10) unsigned NOT NULL auto_increment,
  `seat` varchar(4) default NULL,
  PRIMARY KEY  (`id_dim_seat`),
  KEY `seat` (`seat`)
 ) ENGINE=InnoDB

 id_dim_seat  seat
 ---  --
  ...  ...
888  35C  ---
889  35D  ---
890  35E  ---
891  35F
892  35G
...  ...


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Howto optimize Distinct query over 1.3mil rows?

2010-09-28 Thread Johnny Withers
;

  
 +----+-------------+------------------+-------+-----------------------------------------------+----------------+---------+------+---------+--------------------------+
 | id | select_type | table            | type  | possible_keys                                 | key            | key_len | ref  | rows    | Extra                    |
 +----+-------------+------------------+-------+-----------------------------------------------+----------------+---------+------+---------+--------------------------+
 |  1 | SIMPLE      | Newspaper_Issues | index | BIB_ID,BIB_ID_Issue_Date,BIB_ID_call_no_short | call_no_bib_id | 227     | NULL | 1333298 | Using where; Using index |
 +----+-------------+------------------+-------+-----------------------------------------------+----------------+---------+------+---------+--------------------------+
 1 row in set (0.00 sec)

 Interestingly enough, I get much better performance if I just drop the
 WHERE clause, but that doesn't help me get what I want though.  *grin*

 mysql> explain select distinct(Call_No) from Newspaper_Issues;
 +----+-------------+------------------+-------+---------------+---------+---------+------+------+--------------------------+
 | id | select_type | table            | type  | possible_keys | key     | key_len | ref  | rows | Extra                    |
 +----+-------------+------------------+-------+---------------+---------+---------+------+------+--------------------------+
 |  1 | SIMPLE      | Newspaper_Issues | range | NULL          | Call_No | 111     | NULL |  928 | Using index for group-by |
 +----+-------------+------------------+-------+---------------+---------+---------+------+------+--------------------------+
 1 row in set (0.00 sec)



 Would it make sense to split the Call_No data off into its own table, and
 put in a proper numeric ID, instead of the current VARCHAR(36) it uses?  So
 in Newspaper_Issues I'd just have a Call_No_ID and a new Call_No table with:

  Call_No_IDINT
  Call_No   Char(36)

 which would hopefully index better?  I only have 928 distinct Call_No
 strings, so I'm not afraid of wasting space or anything.

 Thanks,
 John

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net
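The split John proposes might be sketched like this (table and column names are assumed from his description):

```sql
CREATE TABLE Call_No_Lookup (
  Call_No_ID INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  Call_No    CHAR(36)     NOT NULL,
  UNIQUE KEY (Call_No)
) ENGINE=MyISAM;

-- Newspaper_Issues then carries Call_No_ID instead of the VARCHAR,
-- and the distinct list becomes a trivial scan of ~928 rows:
SELECT Call_No FROM Call_No_Lookup ORDER BY Call_No;
```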


Re: ORDER BY with field alias issue

2010-09-28 Thread Johnny Withers
Order by reservation.time
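In full, the idea is that qualifying the column with the table name sidesteps the alias, so the sort happens on the raw TIME value rather than the formatted string (sketch):

```sql
SELECT DATE_FORMAT(`Time`, '%h:%i%p') AS `Time`
FROM `reservation`
ORDER BY `reservation`.`Time`;
```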

JW

On Tuesday, September 28, 2010, Chris W 4rfv...@cox.net wrote:
  I have the following query that is giving me problems.

 SELECT DATE_FORMAT(`Time`, '%h:%i%p') as `Time`
 FROM `reservation`
 ORDER BY `Time`

 Problem is it sorts wrong because of the date format function output with am 
 and pm.  I guess I should have named things differently but I would rather 
 not do that.  Is there a standard way to get around this and have it sort by 
 the non-formatted time value?


 Chris W

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=joh...@pixelated.net



-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net




Re: document for mysql performance improvement

2010-09-23 Thread Johnny Withers
If this is a dedicated MySQL box, I would increase the InnoDB buffer pool
setting in my.cnf to at least 8GB; 12GB would be even better (you did say you
have 16GB of RAM in the machine, I believe).

Also, what is the output of:

show status like '%tmp%';
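As a my.cnf sketch (the size is a starting point for a dedicated 16GB box, not a rule; changing the buffer pool size requires a server restart on 5.x):

```ini
[mysqld]
# Leave headroom for the OS and per-connection buffers.
innodb_buffer_pool_size = 12G
```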

JW

On Wed, Sep 22, 2010 at 8:01 PM, Vokern vok...@gmail.com wrote:

 2010/9/23 Johnny Withers joh...@pixelated.net
 
  Can you show us the output of: show status like '%innodb%'
  JW
 


 Sure.

 mysql> show status like '%innodb%';
 +---++
 | Variable_name | Value  |
 +---++
 | Innodb_buffer_pool_pages_data | 262143 |
 | Innodb_buffer_pool_pages_dirty| 7219   |
 | Innodb_buffer_pool_pages_flushed  | 376090524  |
 | Innodb_buffer_pool_pages_free | 0  |
 | Innodb_buffer_pool_pages_misc | 1  |
 | Innodb_buffer_pool_pages_total| 262144 |
 | Innodb_buffer_pool_read_ahead_rnd | 385466 |
 | Innodb_buffer_pool_read_ahead_seq | 1304599|
 | Innodb_buffer_pool_read_requests  | 19253892075|
 | Innodb_buffer_pool_reads  | 142749467  |
 | Innodb_buffer_pool_wait_free  | 0  |
 | Innodb_buffer_pool_write_requests | 3491971805 |
 | Innodb_data_fsyncs| 32809939   |
 | Innodb_data_pending_fsyncs| 0  |
 | Innodb_data_pending_reads | 0  |
 | Innodb_data_pending_writes| 0  |
 | Innodb_data_read  | 4013196644352  |
 | Innodb_data_reads | 147753642  |
 | Innodb_data_writes| 440467519  |
 | Innodb_data_written   | 12643997136896 |
 | Innodb_dblwr_pages_written| 376090524  |
 | Innodb_dblwr_writes   | 5464581|
 | Innodb_log_waits  | 6599   |
 | Innodb_log_write_requests | 490350909  |
 | Innodb_log_writes | 201315186  |
 | Innodb_os_log_fsyncs  | 13605257   |
 | Innodb_os_log_pending_fsyncs  | 0  |
 | Innodb_os_log_pending_writes  | 0  |
 | Innodb_os_log_written | 319623115776   |
 | Innodb_page_size  | 16384  |
 | Innodb_pages_created  | 6050545|
 | Innodb_pages_read | 244945432  |
 | Innodb_pages_written  | 376090524  |
 | Innodb_row_lock_current_waits | 0  |
 | Innodb_row_lock_time  | 594325 |
 | Innodb_row_lock_time_avg  | 154|
 | Innodb_row_lock_time_max  | 27414  |
 | Innodb_row_lock_waits | 3857   |
 | Innodb_rows_deleted   | 2170086|
 | Innodb_rows_inserted  | 550876090  |
 | Innodb_rows_read  | 15529216710|
 | Innodb_rows_updated   | 142880071  |
 +---++
 42 rows in set (0.00 sec)




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: document for mysql performance improvement

2010-09-22 Thread Johnny Withers
Can you show us the output of: show status like '%innodb%'

JW


On Tue, Sep 21, 2010 at 10:11 PM, vokern vok...@gmail.com wrote:

 And this is the innodb file size, does this matter for performance?

 $ du -h ibdata*
 11G ibdata1
 11G ibdata2
 11G ibdata3
 59G ibdata4



 2010/9/22 vokern vok...@gmail.com:
  This is piece of the setting in my.cnf:
 
  set-variable = innodb_buffer_pool_size=4G
  set-variable = innodb_additional_mem_pool_size=20M
  set-variable = innodb_flush_log_at_trx_commit=2
  set-variable =
 innodb_data_file_path=ibdata1:10G;ibdata2:10G;ibdata3:10G;ibdata4:10G:autoextend
 
  key_buffer  = 1024M
  sort_buffer = 1M
  read_buffer = 1M
  max_allowed_packet  = 1M
  thread_stack= 192K
  thread_cache_size   = 8
  max_heap_table_size = 64M
  myisam-recover = BACKUP
  max_connections= 800
  query_cache_limit   = 1M
  query_cache_size= 16M
 
 
  the disk:
 
  # fdisk -l
 
  Disk /dev/sda: 598.0 GB, 597998698496 bytes
  255 heads, 63 sectors/track, 72702 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk identifier: 0x0004158f
 
 
 
  from iostat -x:
 
  Device: rrqm/s   wrqm/s r/s w/s   rsec/s   wsec/s
  avgrq-sz avgqu-sz   await  svctm  %util
  sda   0.79   309.57   31.06   50.98  1306.74  2860.71
  50.80 0.293.59   0.97   7.93
  dm-0  0.00 0.000.560.42 4.49 3.40
  8.00 0.33  338.96   1.14   0.11
 
 
  The db is still slow. Thanks for the future helps.
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Swap data in columns

2010-09-22 Thread Johnny Withers
Couldn't you just rename the columns?
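A rename-based swap might look like this (a sketch; MySQL applies the CHANGE clauses against the original table definition, so swapping names in one ALTER is expected to work, but test on a copy first). The declared widths travel with their columns:

```sql
ALTER TABLE mydata
  CHANGE column1 column2 CHAR(1),
  CHANGE column2 column1 CHAR(2);
```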

JW


On Wed, Sep 22, 2010 at 11:53 AM, Rolando Edwards
redwa...@logicworks.net wrote:

 I ran these commands:

 use test
 DROP TABLE IF EXISTS mydata;
 CREATE TABLE mydata (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,column1
 CHAR(1),column2 CHAR(2));
 INSERT INTO mydata (column1,column2) VALUES ('a','z'), ('b','y'),
 ('c','x'), ('d','w'), ('e','v');
 SELECT * FROM mydata;
 UPDATE mydata A LEFT JOIN mydata B USING (id) SET
 A.column1=B.column2,A.column2=B.column1;
 SELECT * FROM mydata;

 I got this output:

 lwdba@ (DB test) :: use test
 Database changed
 lwdba@ (DB test) :: DROP TABLE IF EXISTS mydata;
 Query OK, 0 rows affected (0.00 sec)

 lwdba@ (DB test) :: CREATE TABLE mydata (id INT NOT NULL AUTO_INCREMENT
 PRIMARY KEY,column1 CHAR(1),column2 CHAR(2));
 Query OK, 0 rows affected (0.05 sec)

 lwdba@ (DB test) :: INSERT INTO mydata (column1,column2) VALUES ('a','z'),
 ('b','y'), ('c','x'), ('d','w'), ('e','v');
 Query OK, 5 rows affected (0.00 sec)
 Records: 5  Duplicates: 0  Warnings: 0

 lwdba@ (DB test) :: SELECT * FROM mydata;
 +----+---------+---------+
 | id | column1 | column2 |
 +----+---------+---------+
 |  1 | a       | z       |
 |  2 | b       | y       |
 |  3 | c       | x       |
 |  4 | d       | w       |
 |  5 | e       | v       |
 +----+---------+---------+
 5 rows in set (0.00 sec)

 lwdba@ (DB test) :: UPDATE mydata A LEFT JOIN mydata B USING (id) SET
 A.column1=B.column2,A.column2=B.column1;
 Query OK, 5 rows affected (0.03 sec)
 Rows matched: 5  Changed: 5  Warnings: 0

 lwdba@ (DB test) :: SELECT * FROM mydata;
 +----+---------+---------+
 | id | column1 | column2 |
 +----+---------+---------+
 |  1 | z       | a       |
 |  2 | y       | b       |
 |  3 | x       | c       |
 |  4 | w       | d       |
 |  5 | v       | e       |
 +----+---------+---------+
 5 rows in set (0.00 sec)

 GIVE IT A TRY !!!

 Rolando A. Edwards
 MySQL DBA (CMDBA)

 155 Avenue of the Americas, Fifth Floor
 New York, NY 10013
 212-625-5307 (Work)
 AIM  Skype : RolandoLogicWorx
 redwa...@logicworks.net
 http://www.linkedin.com/in/rolandoedwards


 -Original Message-
 From: nixofortune [mailto:nixofort...@googlemail.com]
 Sent: Wednesday, September 22, 2010 12:30 PM
 To: mysql@lists.mysql.com
 Subject: Swap data in columns

 Hi all.

 Sorry for very simple question, just can't figure out the solution.
 I need to swap data in column1 with data in column2.


 +----+---------+---------+
 | id | column1 | column2 |
 +----+---------+---------+
 |  1 | a       | z       |
 |  2 | b       | y       |
 |  3 | c       | x       |
 |  4 | d       | w       |
 |  5 | e       | v       |
 +----+---------+---------+

 Can you achieve this with a simple query?
 so for id 1 column1 = 'z' and column2 = 'a' and so on.

 Thanks guys,
 Igor

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Swap data in columns

2010-09-22 Thread Johnny Withers
This is even better!

JW


On Wed, Sep 22, 2010 at 2:27 PM, Steve Staples sstap...@mnsi.net wrote:

 What about:
 select `id`, `column1` as 'column2', `column2` as 'column1' from mydata;

 Steve



 On Wed, 2010-09-22 at 13:06 -0500, Johnny Withers wrote:
  Couldn't you just rename the columns?
 
  JW
 
 
  On Wed, Sep 22, 2010 at 11:53 AM, Rolando Edwards
  redwa...@logicworks.netwrote:
 
   I ran these commands:
  
   use test
   DROP TABLE IF EXISTS mydata;
   CREATE TABLE mydata (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,column1
   CHAR(1),column2 CHAR(2));
   INSERT INTO mydata (column1,column2) VALUES ('a','z'), ('b','y'),
   ('c','x'), ('d','w'), ('e','v');
   SELECT * FROM mydata;
   UPDATE mydata A LEFT JOIN mydata B USING (id) SET
   A.column1=B.column2,A.column2=B.column1;
   SELECT * FROM mydata;
  
   I got this output:
  
   lwdba@ (DB test) :: use test
   Database changed
   lwdba@ (DB test) :: DROP TABLE IF EXISTS mydata;
   Query OK, 0 rows affected (0.00 sec)
  
   lwdba@ (DB test) :: CREATE TABLE mydata (id INT NOT NULL
 AUTO_INCREMENT
   PRIMARY KEY,column1 CHAR(1),column2 CHAR(2));
   Query OK, 0 rows affected (0.05 sec)
  
   lwdba@ (DB test) :: INSERT INTO mydata (column1,column2) VALUES
 ('a','z'),
   ('b','y'), ('c','x'), ('d','w'), ('e','v');
   Query OK, 5 rows affected (0.00 sec)
   Records: 5  Duplicates: 0  Warnings: 0
  
   lwdba@ (DB test) :: SELECT * FROM mydata;
   ++-+-+
   | id | column1 | column2 |
   ++-+-+
   |  1 | a   | z   |
   |  2 | b   | y   |
   |  3 | c   | x   |
   |  4 | d   | w   |
   |  5 | e   | v   |
   ++-+-+
   5 rows in set (0.00 sec)
  
   lwdba@ (DB test) :: UPDATE mydata A LEFT JOIN mydata B USING (id) SET
   A.column1=B.column2,A.column2=B.column1;
   Query OK, 5 rows affected (0.03 sec)
   Rows matched: 5  Changed: 5  Warnings: 0
  
   lwdba@ (DB test) :: SELECT * FROM mydata;
   ++-+-+
   | id | column1 | column2 |
   ++-+-+
   |  1 | z   | a   |
   |  2 | y   | b   |
   |  3 | x   | c   |
   |  4 | w   | d   |
   |  5 | v   | e   |
   ++-+-+
   5 rows in set (0.00 sec)
  
   GIVE IT A TRY !!!
  
   Rolando A. Edwards
   MySQL DBA (CMDBA)
  
   155 Avenue of the Americas, Fifth Floor
   New York, NY 10013
   212-625-5307 (Work)
   AIM  Skype : RolandoLogicWorx
   redwa...@logicworks.net
   http://www.linkedin.com/in/rolandoedwards
  
  
   -Original Message-
   From: nixofortune [mailto:nixofort...@googlemail.com]
   Sent: Wednesday, September 22, 2010 12:30 PM
   To: mysql@lists.mysql.com
   Subject: Swap data in columns
  
   Hi all.
  
   Sorry for very simple question, just can't figure out the solution.
   I need to swap data in column1 with data in column2.
  
  
   ++-+-+
   | id | column1 | column2 |
   ++-+-+
   |  1 | a   | z   |
   |  2 | b   | y   |
   |  3 | c   | x   |
   |  4 | d   | w   |
   |  5 | e   | v   |
   ++-+-+
  
   Can you achieve this with a simple query?
   so for id 1 column1 = 'z' and column2 = 'a' and so on.
  
   Thanks guys,
   Igor
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
 http://lists.mysql.com/mysql?unsub=joh...@pixelated.net
  
  
 
 



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Unique ID's across multiple databases

2010-09-14 Thread Johnny Withers
I may have missed what you are trying to do here. NoSQL is really a bad name
and should really be renamed to NoREL instead. NoSQL implementations are not
used just because of the limitations of traditional RDBMSs when it comes to
sheer traffic volume; they are also used because they scale horizontally very
well. When I was reading through all these emails, it seemed to me you would
have 200+ nodes here with the SAME data. Maybe that's not what you are
trying to do?

I also did not know the software for maintaining the data was already in
place.


JW

On Tue, Sep 14, 2010 at 12:57 AM, Johan De Meersman vegiv...@tuxera.be wrote:



 On Mon, Sep 13, 2010 at 8:59 PM, Johnny Withers joh...@pixelated.net wrote:


 This sounds like a good job for a 'NoSQL' system. Maybe?


 I can't help but blink at that. How exactly is NoSQL going to fix issues
 that are related to topology, not inherent SQL limitations ? Which
 particular incarnation of NoSQL are you thinking of ?

 Also, I suspect rewriting all 200 clients' services is not going to be an
 option :-)

 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Unique ID's across multiple databases

2010-09-13 Thread Johnny Withers
This sounds like a good job for a 'NoSQL' system. Maybe?

JW


On Mon, Sep 13, 2010 at 1:46 PM, Jerry Schwartz je...@gii.co.jp wrote:

 -Original Message-
 From: Kiss Dániel [mailto:n...@dinagon.com]
 Sent: Monday, September 13, 2010 11:49 AM
 To: Jerry Schwartz
 Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com;
 replicat...@lists.mysql.com
 Subject: Re: Unique ID's across multiple databases
 
 Well, not exactly.
 
 I do not own all the databases. Some of them are placed at customers, some
 of them are at my data warehouse. So, neither NAS or Fibre Channel is a
 solution in this case.
 
 [JS] Then you have a mess on your hands.

 Are you going to be mirroring these databases separately for each customer?

 I wish you well.

 Regards,

 Jerry Schwartz
 Global Information Incorporated
 195 Farmington Ave.
 Farmington, CT 06032

 860.674.8796 / FAX: 860.674.8341
 E-mail: je...@gii.co.jp
 Web site: www.the-infoshop.com




 On Mon, Sep 13, 2010 at 4:30 PM, Jerry Schwartz je...@gii.co.jp wrote:
 
  -Original Message-
  From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of
 Johan
  De
  Meersman
  Sent: Monday, September 13, 2010 7:27 AM
  To: Kiss Dániel
  Cc: Max Schubert; mysql@lists.mysql.com; replicat...@lists.mysql.com
  Subject: Re: Unique ID's across multiple databases
  
  Hmm, that's a very interesting scenario, indeed.
  
  One bad connection will break the chain, though, so in effect you'll be
  multiplying the disconnecting rate...
  
  I think you'd be better of with a star topology, but MySQL
 unfortunately
  only allows ring-types. This is gonna require some good thinking on
 your
  part :-)
  
  [JS] It sounds like you are trying to protect against a regional
 disaster.
 
  This is precisely the type of scenario for which NAS or FibreChannel is
  used.
  You let the storage medium take care of replication. Typically you'd
 only
  need
  two units, perhaps on opposite sides of the country, using FibreChannel
  over
  IP.
 
  I've been out of this market (sales/support side) for many years, so I
  don't
  know what the current technology costs, but if you can afford it that is
  the
  way to go. It will make your life much simpler.
 
 
  Regards,
 
  Jerry Schwartz
  Global Information Incorporated
  195 Farmington Ave.
  Farmington, CT 06032
 
  860.674.8796 / FAX: 860.674.8341
  E-mail: je...@gii.co.jp
  Web site: www.the-infoshop.com
 
 
 
  On Mon, Sep 13, 2010 at 12:28 PM, Kiss Dániel n...@dinagon.com
 wrote:
  
   This is actually more for failover scenarios where databases are
 spread
  in
   multiple locations with unreliable internet connections. But you want
 to
   keep every single location working even when they are cut off from
 the
   other
   databases. The primary purpose is not load distribution.
  
   On Mon, Sep 13, 2010 at 12:03 PM, Johan De Meersman 
 vegiv...@tuxera.be
   wrote:
  
   
   
On Sun, Sep 12, 2010 at 9:45 PM, Kiss Dániel n...@dinagon.com
  wrote:
   
offset + increment thingy is good if you know in advance that
 you'll
   have
a
limited number of servers. But if you have no idea that you will
 have
  2,
20,
or 200 servers in your array in the future, you just can't pick an
   optimal
   
   
What benefit do you think you will reap from that many masters ?
 Don't
forget that every write still has to be done on every server, so
  you're
   not
actually distributing that load; while for reads you only need
 simple
slaves.
   
   
--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel
   
  
  
  
  
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 
 
 
 




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net
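For reference, the offset-and-increment scheme mentioned in the thread is configured per server; for a two-master pair it might be set like this (sketch):

```sql
-- On server 1:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 1;
-- On server 2:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;
-- Server 1 then generates 1, 3, 5, ... and server 2 generates 2, 4, 6, ...
```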


Re: hard disk crash: how to discover the db?

2010-09-10 Thread Johnny Withers
I agree with everyone else here.

This is NOT a backup job listed here. It is a RESTORE job that is restoring
data previously backed up into a file named 'file.tar' (should probably be
file.tar.gz or file.tgz anyway).

This sequence of commands should never be run on a server that you are
wanting to backup (development, testing, stage and/or production).

 service mysql stop
 cd /var/lib/mysql
 *rm -rf * (this will delete all your MySQL data)*
 tar zxvf file.tar
 rm -rf ib_logfile*
 chown -R mysql.mysql
 service mysql start



-JW



On Fri, Sep 10, 2010 at 9:51 AM, Steve Staples sstap...@mnsi.net wrote:

 did I miss the joke here?   Why does this backup script look SO wrong,
 and very malicious?

 service mysql stop -- stopping mysql... that's fine.
 cd /var/lib/mysql -- changing directories to the /var/lib/mysql, ok...
 fine
 rm -rf * -- WHAT?  WHY ARE YOU REMOVING RECURSIVELY and FORCING DELETES on
 all the MySQL files??   are you insane?  this is going to make a bad day
 for a lot of people
 tar zxvf file.tar -- Wait, you just blew away all the files in
 the /var/lib/mysql directory, how can you extract a tar file, when it
 does not exist, since you just deleted everything?

 anyway, I hope I missed the joke here, or missed something...

 Steve.


 On Thu, 2010-09-09 at 18:02 -0400, George Larson wrote:
  We do nightly backups at work just by tarring the mysql directory.  In
  my environment, that is /var/lib/mysql.
 
  Like this:
 
  service mysql stop
  cd /var/lib/mysql
  rm -rf *
  tar zxvf file.tar
  rm -rf ib_logfile*
  chown -R mysql.mysql
  service mysql start
 
  Something similar might work for you.  Somebody with more MySQL
  expertise than me can probably help you customize the process to your
  environment.
 
  Good luck!
  G
 
  On 9 September 2010 17:08, Uwe Brauer o...@mat.ucm.es wrote:
   andrew.2.mo...@nokia.com wrote:
  
   Try using the failed hdd as a slave in a Linux machine.
  
   You might find that the hdd won't boot to OS but may have enough in it
 to
   access the file system.
  
   I have done that already and I have access. But I don't know how to
 extract
   the db (via dump) since the corresponding mysql server software is not
   running. how can i tell linux to use the mysql db of the Mac?
  
   Uwe Brauer
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
http://lists.mysql.com/mysql?unsub=george.g.lar...@gmail.com
  
  
 


 --

 Steve Staples
 Web Application Developer
 519.258.2333 x8414


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: hard disk crash: how to discover the db?

2010-09-10 Thread Johnny Withers
If you still have the mysqld binary and you _think_ it is version
5.3.something, you could try this:

strings /usr/libexec/mysqld | grep '5\.3'

The version # should be somewhere near the top of the output. (Is on my copy
anyway, of 5.0.x)


  --

*Johnny Withers*
jwith...@ecashsoftware.com
601.919.2275 x112


On Fri, Sep 10, 2010 at 12:00 PM, george larson
george.g.lar...@gmail.com wrote:



 Shawn Green (MySQL) wrote:
  On 9/10/2010 10:01 AM, george larson wrote:
 
  Uwe Brauer wrote:
  ...
  The only one I know of, for my environment, is /etc/my.cnf.  I believe
  that it can be located elsewhere but you could just use 'find' to find
  it.  I've broken my dev. MySQL many, many times and that's the only file
  I know about outside of my data directory.  :)
 
  I don't have any good ideas about discerning precisely what version of
  MySQL was running, though.
 
 
  The error log will have the version information. Each successful
  startup includes something similar to
 
  100910  7:50:30 [Note] mysqld: ready for connections.
  Version: '5.1.48-enterprise-gpl-advanced'  socket: ''  port: 3306
  MySQL Enterprise Server - Advanced Edition (GPL)
 
  For more information on how to locate the error log:
  http://dev.mysql.com/doc/refman/5.1/en/error-log.html
 
 That's a neat trick and I don't mean to steal the thread but that didn't
 work for me:

 ---
 :/var/lib/mysql # head mysqld.log
 100910 12:50:09 mysqld_safe Starting mysqld daemon with databases from
 /var/lib/mysql
 100910 12:50:09 [Warning] option 'table_cache': unsigned value 536870912
 adjusted to 524288
 100910 12:50:09 [Warning] The syntax '--log_slow_queries' is deprecated
 and will be removed in MySQL 7.0. Please use
 '--slow_query_log'/'--slow_query_log_file' instead.
 100910 12:50:09 [Warning] Changed limits: max_open_files: 65535
 max_connections: 50  table_cache: 32737
 100910 12:50:09 [Note] Plugin 'ndbcluster' is disabled.
 100910 12:50:09 [Note] Plugin 'FEDERATED' is disabled.
 ---

 I'm going to comb through my 'my.cnf' to see if it might have somehow
 been disabled.





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Replaying the mysqld.log file from production onto QA???

2010-09-10 Thread Johnny Withers
Would something like mk-log-player work?
http://www.maatkit.org/doc/mk-log-player.html

I know the timings won't be the same as how the queries actually came in,
but it could be a start.

JW


On Fri, Sep 10, 2010 at 8:38 AM, Johan De Meersman vegiv...@tuxera.be wrote:

 On Fri, Sep 10, 2010 at 1:56 PM, Shawn Green (MySQL) 
 shawn.l.gr...@oracle.com wrote:

  He already did! Those are the logs he needs to replay. He has the logs
  already but needs tools to extract the commands and repeat them as a load
  test.
 
  Do you have any techniques you can share?
 

 Alas, no. I suspect the rather expensive Avalanches we got might be able
 to,
 but that's not particularly helpful to him :-)

 I wrote a little thing many years ago that pulled Apache logs from
 production over SSH and multithreaded them at the staging server, but SQL
 stuff would be much more time-critical - can't select before the insert :-)



 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
Ok, so I'm stumped?

What kind of hardware is behind this thing?

-JW

On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi Johnny,

 mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 | id | select_type | table      | type  | possible_keys | key     | key_len | ref   | rows | Extra |
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 |  1 | SIMPLE      | clientinfo | const | PRIMARY       | PRIMARY | 23      | const |    1 |       |
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ





Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
What does

SHOW TABLE STATUS LIKE 'table_name'

Say about this table?

-JW

On Fri, Sep 3, 2010 at 8:59 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
 makes a little difference but not enough for the application to run in real
 time processing.

 It's a Sun Fire V240 2x 1.5ghz UltraSparc IIIi with 2GB of RAM.

 MySQL is eating 179MB of RAM and 5,4% of CPU.

   PID USERNAME  SIZE   RSS STATE  PRI NICE  TIME  CPU PROCESS/NLWP
   6229 mysql 455M  179M sleep   580   0:03.11 5,4% mysqld/68

 The machine has ~1GB of free memory. MySQL and InnoDB has free pages to
 grow and we have ~50% of free CPU time.

 Currently I can't use the replication server since the application running
 on top of this DB can only talk to 1 data source.
 At the moment it's also not possible to change the application in order to
 make it use the DB more wisely.

 Basically we have a table with lots of selects, lots of update, lots of
 inserts and deletes. Data manipulation is random, doesn't follow any
 specific pattern. All working concurrently.

 A big bottleneck is:

 8 queries inside InnoDB, 28 queries in queue

 1 read views open inside InnoDB

 Increasing innodb_thread_concurrency might help without causing any
 problems to the overall performance.

 Makes total sense if you read:
 http://peter-zaitsev.livejournal.com/9138.html
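 A minimal sketch of that tuning step (the value 16 is an illustrative
 assumption; on 5.0-era servers the variable is dynamic, so no restart is
 needed):

 ```sql
 -- Check the current ticket limit, then raise it and re-measure the
 -- "queries inside InnoDB / queries in queue" numbers.
 SHOW VARIABLES LIKE 'innodb_thread_concurrency';
 SET GLOBAL innodb_thread_concurrency = 16;  -- illustrative value
 ```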

 Thanks in advance.

 BR
 AJ


 On Fri, Sep 3, 2010 at 2:31 PM, Johnny Withers joh...@pixelated.net wrote:

 Ok, so I'm stumped?

 What kind of hardware is behind this thing?

 -JW

 On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi Johnny,

 mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 | id | select_type | table      | type  | possible_keys | key     | key_len | ref   | rows | Extra |
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 |  1 | SIMPLE      | clientinfo | const | PRIMARY       | PRIMARY | 23      | const |    1 |       |
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ





 --
 Alexandre Vieira - nul...@gmail.com




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
Very confusing...

Why is index_length zero ?

On top of that, there's only 500K rows in the table with a data size of
41MB. Maybe InnoDB is flushing to disk too often?

What's the output of iostat -dxk 60? (Run it for a minute or more to get two
output grids.)



  --

*Johnny Withers*
jwith...@ecashsoftware.com
601.919.2275 x112


On Fri, Sep 3, 2010 at 9:20 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 mysql> SHOW TABLE STATUS LIKE 'clientinfo'\G
 *************************** 1. row ***************************
            Name: clientinfo
          Engine: InnoDB
         Version: 10
      Row_format: Compact
            Rows: 504762
  Avg_row_length: 82
     Data_length: 41500672
 Max_data_length: 0
    Index_length: 0
       Data_free: 0
  Auto_increment: NULL
     Create_time: 2010-09-01 03:21:36
     Update_time: NULL
      Check_time: NULL
       Collation: latin1_swedish_ci
        Checksum: NULL
  Create_options:
         Comment: InnoDB free: 276480 kB
 1 row in set (0.02 sec)

 BR
 AJ


 On Fri, Sep 3, 2010 at 3:18 PM, Johnny Withers joh...@pixelated.net wrote:

 What does

 SHOW TABLE STATUS LIKE 'table_name'

 Say about this table?

 -JW


 On Fri, Sep 3, 2010 at 8:59 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and
 it makes a little difference but not enough for the application to run in
 real time processing.

 It's a Sun Fire V240 2x 1.5ghz UltraSparc IIIi with 2GB of RAM.

 MySQL is eating 179MB of RAM and 5,4% of CPU.

   PID USERNAME  SIZE   RSS STATE  PRI NICE  TIME  CPU PROCESS/NLWP
   6229 mysql 455M  179M sleep   580   0:03.11 5,4% mysqld/68

 The machine has ~1GB of free memory. MySQL and InnoDB has free pages to
 grow and we have ~50% of free CPU time.

 Currently I can't use the replication server since the application
 running on top of this DB can only talk to 1 data source.
 At the moment it's also not possible to change the application in order
 to make it use the DB more wisely.

 Basically we have a table with lots of selects, lots of update, lots of
 inserts and deletes. Data manipulation is random, doesn't follow any
 specific pattern. All working concurrently.

 A big bottleneck is:

 8 queries inside InnoDB, 28 queries in queue

 1 read views open inside InnoDB

 Increasing innodb_thread_concurrency might help without causing any
 problems to the overall performance.

 Makes total sense if you read:
 http://peter-zaitsev.livejournal.com/9138.html

 Thanks in advance.

 BR
 AJ


 On Fri, Sep 3, 2010 at 2:31 PM, Johnny Withers joh...@pixelated.net wrote:

 Ok, so I'm stumped?

 What kind of hardware is behind this thing?

 -JW

 On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi Johnny,

 mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 | id | select_type | table      | type  | possible_keys | key     | key_len | ref   | rows | Extra |
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 |  1 | SIMPLE      | clientinfo | const | PRIMARY       | PRIMARY | 23      | const |    1 |       |
 +----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ





 --
 Alexandre Vieira - nul...@gmail.com




 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




 --
 Alexandre Vieira - nul...@gmail.com




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
I think your MySQL instance is disk bound.

If you look at your iostats, md2, 12 and 22 have a ~10ms wait time before a
request can be processed. iostat is also reporting those disks are 75%+
utilized which means they are doing about all they can do.

Any way you can add more disks? Or add faster disks?

I'm not an iostat expert, use my research and recommendations with caution
=)


  --

*Johnny Withers*
jwith...@ecashsoftware.com
601.919.2275 x112


On Fri, Sep 3, 2010 at 10:37 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 The DB is working on /var, which is md2 / md12 / md22.

   extended device statistics
 device   r/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b

 md2  0.1   80.00.4  471.4  0.0  1.0   12.2   0  94
 md10 0.05.70.0   78.8  0.0  0.1   19.7   0   9
 md11 0.00.00.00.0  0.0  0.00.0   0   0
 md12 0.0   80.00.3  471.4  0.0  0.89.9   0  76
 md20 0.05.70.0   78.8  0.0  0.1   21.1   0   9
 md21 0.00.00.00.0  0.0  0.00.0   0   0
 md22 0.0   80.00.1  471.3  0.0  0.8   10.6   0  81
 sd0  0.2   86.80.3  550.5  0.0  0.9   10.6   0  78
 sd1  0.2   86.80.2  550.4  0.0  1.0   11.3   0  83
 sd30 0.00.00.00.0  0.0  0.00.0   0   0
 nfs1 0.00.00.00.0  0.0  0.00.0   0   0
   extended device statistics
 device   r/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b
 md0  0.05.60.0   83.2  0.0  0.2   28.0   0  10
 md1  0.00.00.00.0  0.0  0.00.0   0   0
 md2  0.1   84.20.7  527.2  0.0  1.0   11.8   0  93
 md10 0.05.60.0   83.2  0.0  0.1   19.0   0   8
 md11 0.00.00.00.0  0.0  0.00.0   0   0
 md12 0.0   84.20.3  527.2  0.0  0.89.7   0  77
 md20 0.05.60.0   83.2  0.0  0.1   19.9   0   8
 md21 0.00.00.00.0  0.0  0.00.0   0   0
 md22 0.0   84.10.4  527.2  0.0  0.9   10.3   0  82
 sd0  0.2   91.10.3  610.7  0.0  0.9   10.4   0  79
 sd1  0.2   91.00.4  610.7  0.0  1.0   11.0   0  84
 sd30 0.00.00.00.0  0.0  0.00.0   0   0
 nfs1 0.00.00.00.0  0.0  0.00.0   0   0

 I really can't say why Index_Length is 0... It might be something with the
 index?


 mysql> SHOW INDEX FROM clientinfo;
 +------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
 | Table      | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
 +------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
 | clientinfo |          0 | PRIMARY  |            1 | userid      | A         |      503836 |     NULL | NULL   |      | BTREE      |         |
 +------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
 1 row in set (0.00 sec)


 mysql> SHOW TABLE STATUS LIKE 'clientinfo'\G
 *************************** 1. row ***************************
            Name: clientinfo
          Engine: InnoDB
         Version: 10
      Row_format: Compact
            Rows: 508170
  Avg_row_length: 81
     Data_length: 41500672
 Max_data_length: 0
    Index_length: 0
       Data_free: 0
  Auto_increment: NULL
     Create_time: 2010-09-01 03:21:36
     Update_time: NULL
      Check_time: NULL
       Collation: latin1_swedish_ci
        Checksum: NULL
  Create_options:
         Comment: InnoDB free: 276480 kB
 1 row in set (0.00 sec)

 BR
 AJ


 On Fri, Sep 3, 2010 at 3:46 PM, Johnny Withers joh...@pixelated.net wrote:

 Very confusing...

 Why is index_length zero ?

 On top of that, there's only 500K rows in the table with a data size of
 41MB. Maybe InnoDB is flushing to disk too often?

 What's the output

Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
It seems that when your index is PRIMARY on InnoDB tables, it is the
clustered index: the primary key is stored with the data itself, so it is
not included in the index_length field.

I have never noticed this. I don't think adding a new index will make a
difference.

You could try moving your log files to a different disk array than where
your data is. If you have binary and query logging enabled, it's probably a
good idea.
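A hedged my.cnf sketch of that separation (the paths are illustrative
assumptions, not taken from the thread):

```
# Put the InnoDB redo logs and the binary log on a different array
# than the data files, so log writes don't compete with data I/O.
datadir                   = /data/mysql
innodb_log_group_home_dir = /logs/mysql
log-bin                   = /logs/mysql/binlog
```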



  --

*Johnny Withers*
jwith...@ecashsoftware.com
601.919.2275 x112


On Fri, Sep 3, 2010 at 12:45 PM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 When creating a table in MySQL with a PK it automatically creates an INDEX,
 correct?

 The Index_Length: 0 is rather strange... I've created a new INDEX on top of
 my PK column on my test system and Index_Length shows a big value different
 from 0. Do you think this might have any impact?

 mysql> show index from gwtraffic.clientinfo;
 +------------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
 | Table      | Non_unique | Key_name   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
 +------------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
 | clientinfo |          0 | PRIMARY    |            1 | userid      | A         |      548216 |     NULL | NULL   |      | BTREE      |         |
 | clientinfo |          1 | userid_idx |            1 | userid      | A         |      548216 |     NULL | NULL   |      | BTREE      |         |
 +------------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
 2 rows in set (0.01 sec)

 mysql> show table status LIKE 'clientinfo'\G
 *************************** 1. row ***************************
            Name: clientinfo
          Engine: InnoDB
         Version: 10
      Row_format: Compact
            Rows: 548216
  Avg_row_length: 62
     Data_length: 34144256
 Max_data_length: 0
    Index_length: 5783552
       Data_free: 0
  Auto_increment: NULL
     Create_time: 2010-09-03 17:38:16
     Update_time: NULL
      Check_time: NULL
       Collation: latin1_swedish_ci
        Checksum: NULL
  Create_options:
         Comment: InnoDB free: 1214464 kB
 1 row in set (0.00 sec)

 I'm trying to stress my test DB but can't measure any different results
 with or without the second INDEX.

 Regarding the disks... the DB is updated 20+ times every second. Writing the
 log, checkpointing to disk, etc. can cause that much load?

 BR
 AJ





Re: Performance problems on MySQL

2010-09-02 Thread Johnny Withers
 id 12973170 localhost root
 show innodb status
 ---TRANSACTION 0 1953324057, ACTIVE 0 sec, OS thread id 4127, thread
 declared inside InnoDB 498
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 276140, query id 12973085 192.168.87.6 myuser_adm update
 INSERT
 ---TRANSACTION 0 1953324055, ACTIVE 0 sec, OS thread id 4144, thread
 declared inside InnoDB 498
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 276147, query id 12973082 192.168.87.6 myuser_adm update
 INSERT
 ---TRANSACTION 0 1953324053, ACTIVE 0 sec, OS thread id 4141, thread
 declared inside InnoDB 499
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 275720, query id 12973078 192.168.87.6 myuser_adm end
 UPDATE
 ---TRANSACTION 0 1953324051, ACTIVE 0 sec, OS thread id 4029, thread
 declared inside InnoDB 499
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 273472, query id 12973075 192.168.87.6 myuser_adm end
 UPDATE
 ---TRANSACTION 0 1953324049, ACTIVE 1 sec, OS thread id 4109, thread
 declared inside InnoDB 499
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 274973, query id 12973084 192.168.87.6 myuser_adm end
 UPDATE
 ---TRANSACTION 0 1953324046, ACTIVE 1 sec, OS thread id 4126, thread
 declared inside InnoDB 498
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 276146, query id 12973081 192.168.87.6 myuser_adm update
 INSERT
 ---TRANSACTION 0 1953324043, ACTIVE 1 sec, OS thread id 4122, thread
 declared inside InnoDB 499
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 274989, query id 12973069 192.168.87.6 myuser_adm end
 UPDATE
 ---TRANSACTION 0 1953324041, ACTIVE 1 sec, OS thread id 4099, thread
 declared inside InnoDB 498
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 276145, query id 12973066 192.168.87.6 myuser_adm update
 INSERT
 ---TRANSACTION 0 1953324038, ACTIVE (PREPARED) 1 sec, OS thread id 4077
 preparing
 mysql tables in use 1, locked 1
 2 lock struct(s), heap size 320, undo log entries 1
 MySQL thread id 275321, query id 12973067 192.168.87.6 myuser_adm end
 UPDATE
 
 FILE I/O
 
 I/O thread 0 state: waiting for i/o request (insert buffer thread)
 I/O thread 1 state: waiting for i/o request (log thread)
 I/O thread 2 state: waiting for i/o request (read thread)
 I/O thread 3 state: waiting for i/o request (write thread)
 Pending normal aio reads: 0, aio writes: 0,
  ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
 Pending flushes (fsync) log: 1; buffer pool: 0
 3514 OS file reads, 4497412 OS file writes, 4320374 OS fsyncs
 0.00 reads/s, 0 avg bytes/read, 41.05 writes/s, 40.73 fsyncs/s
 -
 INSERT BUFFER AND ADAPTIVE HASH INDEX
 -
 Ibuf: size 1, free list len 0, seg size 2,
 0 inserts, 0 merged recs, 0 merges
 Hash table size 1452727, used cells 496505, node heap has 597 buffer(s)
 31.26 hash searches/s, 15.31 non-hash searches/s
 ---
 LOG
 ---
 Log sequence number 61 3783563525
 Log flushed up to   61 3783563173
 Last checkpoint at  61 3778869606
 1 pending log writes, 0 pending chkp writes
 4297652 log i/o's done, 40.63 log i/o's/second
 --
 BUFFER POOL AND MEMORY
 --
 Total memory allocated 419047082; in additional pool allocated 2578048
 Buffer pool size   22400
 Free buffers       15784
 Database pages     6019
 Modified db pages  1895
 Pending reads 0
 Pending writes: LRU 0, flush list 0, single page 0
 Pages read 3543, created 2476, written 594057
 0.00 reads/s, 0.00 creates/s, 6.47 writes/s
 Buffer pool hit rate 1000 / 1000
 --
 ROW OPERATIONS
 --
 8 queries inside InnoDB, 29 queries in queue
 1 read views open inside InnoDB
 Main thread id 11, state: sleeping
 Number of rows inserted 530873, updated 2100423, deleted 31904, read 7886015
 0.21 inserts/s, 19.95 updates/s, 0.00 deletes/s, 40.05 reads/s
 
 END OF INNODB MONITOR OUTPUT
 

 ###


-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net




Re: Performance Tunning

2010-08-31 Thread Johnny Withers
So, it's not just me that is stuck in this infinite loop? I thought I had
gone mad!

-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net

 On Tue, Aug 31, 2010 at 5:23 AM, Johan De Meersman vegiv...@tuxera.be wrote:

 1. Find out what is slow
 2. Fix it
 3. GOTO 1

 On Tue, Aug 31, 2010 at 11:13 AM, kranthi kiran
  kranthikiran@gmail.com wrote:

   Hi All,
    In performance tuning, what are the steps to follow? Please help
   me.
 
   Thanks & Regards,
  Kranthi kiran
 



 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: InnoDB Tablespace

2010-08-03 Thread Johnny Withers
About the above - it is saying 6144 KB so it is 6.1 GB.

Are you sure? I would think 6144 kB = 6 MB, i.e. 6144 * 1024 = 6,291,456
bytes.

I think since InnoDB, by default, extends the table space by 8MB increments,
this is reporting the free space in this increment. How can I tell total
remaining space so I can adjust and/or add new table space before it runs
out of space next time?

I have another server with a different config line, however, the last innodb
file specified is also max 16G and when i run show table status on it, it
reports 3983360 kB free, which I would assume is 3.9 GB? Could this be
because it's filling up space in one of the files before the last
auto-extending file, since those files are fixed sizes?

Both servers are 5.0.77-log. One server is Cent OS, the other is RHEL.

I'm very confused here.

JW

On Mon, Aug 2, 2010 at 1:53 PM, Suresh Kuna sureshkumar...@gmail.com wrote:

 Hey John,

 Yes, you can add it, but it is safer to keep auto-extend at the end and
 monitor the disk space as well.


 Now when i run the same show table status command, the comment field says:
 InnoDB free: 6144 kB

 Is that telling me that I only have 6MB of storage left even though I
 increased the table space by 8GB?

 About the above - it is saying 6144 KB so it is 6.1 GB.



 On Tue, Aug 3, 2010 at 12:05 AM, Johnny Withers joh...@pixelated.net wrote:

 I recently ran out of table space on a production server that had the
 following configuration line:


 innodb_data_file_path=ibdata1:1G;ibdata2:1G;ibdata3:1G;ibdata4:1G;ibdata5:1G;ibdata6:2G:autoextend:max:8G

 Before I changed this line and restarted the server, I ran SHOW TABLE
 STATUS LIKE 'table' on one of the databases and the comment field said:
 InnoDB Free: 3NNN kB (I don't remember the exact number, but I know it
 started with 3 and had 4 digits).

 I modified the configuration line above to:


 innodb_data_file_path=ibdata1:1G;ibdata2:1G;ibdata3:1G;ibdata4:1G;ibdata5:1G;ibdata6:2G:autoextend:max:16G

 Now when I run the same show table status command, the comment field says:
 InnoDB free: 6144 kB

 Is that telling me that I only have 6MB of storage left even though I
 increased the table space by 8GB?

 Also, if I wanted to add another file to this file_path variable, can I
 just add it to the end like so:


 innodb_data_file_path=ibdata1:1G;ibdata2:1G;ibdata3:1G;ibdata4:1G;ibdata5:1G;ibdata6:16G;
 *ibdata7:16G*

 Or will that cause MySQL to complain the file size isn't correct the next
 time it starts?


 Thanks for any help!


 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




 --
 Thanks
 Suresh Kuna
 MySQL DBA




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


InnoDB Tablespace

2010-08-02 Thread Johnny Withers
I recently ran out of table space on a production server that had the
following configuration line:

innodb_data_file_path=ibdata1:1G;ibdata2:1G;ibdata3:1G;ibdata4:1G;ibdata5:1G;ibdata6:2G:autoextend:max:8G

Before I changed this line and restarted the server, I ran SHOW TABLE STATUS
LIKE 'table' on one of the databases and the comment field said:
InnoDB Free: 3NNN kB (I don't remember the exact number, but I know it
started with 3 and had 4 digits).

I modified the configuration line above to:

innodb_data_file_path=ibdata1:1G;ibdata2:1G;ibdata3:1G;ibdata4:1G;ibdata5:1G;ibdata6:2G:autoextend:max:16G

Now when I run the same show table status command, the comment field says:
InnoDB free: 6144 kB

Is that telling me that I only have 6MB of storage left even though I
increased the table space by 8GB?

Also, if I wanted to add another file to this file_path variable, can I just
add it to the end like so:

innodb_data_file_path=ibdata1:1G;ibdata2:1G;ibdata3:1G;ibdata4:1G;ibdata5:1G;ibdata6:16G;
*ibdata7:16G*

Or will that cause MySQL to complain the file size isn't correct the next
time it starts?


Thanks for any help!
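For the last question, a sketch of how the line would usually change. The
key rule (my addition, not from the thread): every file except the last must
be listed at a fixed size matching its actual size on disk, and only the
final file may carry autoextend. The 16G figure below assumes ibdata6 has
actually grown to that size:

```
# ibdata6 pinned at its current on-disk size (16G here is an assumption);
# the new ibdata7 becomes the only auto-extending file
innodb_data_file_path=ibdata1:1G;ibdata2:1G;ibdata3:1G;ibdata4:1G;ibdata5:1G;ibdata6:16G;ibdata7:1G:autoextend
```

If a listed size does not match the file on disk, the server refuses to
start, which is exactly the complaint the question anticipates.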


-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Security issues

2010-05-22 Thread Johnny Withers
You could use CVE. Postgres's security page doesn't seem to sync with their
CVE entries, even though they reference CVE entries on their comprehensive
security page.

http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=postgresql

http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=mysql

JW

On Sat, May 22, 2010 at 10:51 AM, Rob Wultsch wult...@gmail.com wrote:

 On Sat, May 22, 2010 at 5:44 AM, Martin Gainty mgai...@hotmail.com
 wrote:
  Good Morning Rob-
 
  one vulnerability (with UDFs)
  http://dev.mysql.com/tech-resources/articles/security_alert.html
 
  a manager considering a enterprise-wide security solution may want
  to consider Oracle Identity Manager (with Glassfish 3.2)
 
 http://under-linux.org/en/content/oracle-introduces-schedule-for-glassfish-556/
 
  Does this help?
  Martin Gainty

 Martin,

 Thank you for the reply.

 The guys across the street have a single page with cliff notes about
 every vulnerability affecting every supported version*. The page I
 noted was comprehensive. Martin, what you listed was a page with a
 single vuln and a page which looks like a product.

 The grass is looking pretty darn green on the other side of the street.

 *And they support all the way back to 7.4, which is equivalent to 4.1
 era. 2005 is not that long ago.
 --
 Rob Wultsch
 wult...@gmail.com





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Database Quotas

2010-05-21 Thread Johnny Withers
I ran out of space on a large, busy production database just a few weeks
ago. All tables are InnoDB and I experienced zero data loss.

A review of the log file showed it had actually been running out of space
for almost 2 weeks. As temp files were deleted, transactions were able to
continue until almost zero bytes of the disk were available.

I think MySQL did a fantastic job handling the problem.

JW

On Fri, May 21, 2010 at 9:21 AM, Tim Gustafson t...@soe.ucsc.edu wrote:

  if MYSQL attempts to insert more bytes than what is available
  on disk you will get 28 ENOSPC No space left on device
  http://dev.mysql.com/doc/refman/5.0/en/operating-system-error-codes.html

 Does it figure that out before it tries to write a record?  So, if I have
 2KB left on the device and I write a 4KB record, does the first 2KB get
 written and then the error occurs, or does the error occur before the write
 is attempted?

 I guess what I'm asking is will the tables be marked as crashed when an
 ENOSPC happens, or will the tables still be in good health?

 If they're still in good health, then I suppose that I could use ZFS file
 systems to allocate space for databases...it just seems that this ought to
 be a feature of the database.  :)

 Tim Gustafson
 Baskin School of Engineering
 UC Santa Cruz
 t...@soe.ucsc.edu
 831-459-5354





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Where to index - over 15m records and growing

2010-05-07 Thread Johnny Withers
You could be running into this:

http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html



On Fri, May 7, 2010 at 10:05 AM, Peter Brawley
peter.braw...@earthlink.net wrote:

 something tells me I need to rethink this yes.


 If you were to add a computed column yearmonth, you could write WHERE
 yearmonth=201004.

 PB
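 A sketch of that computed-column idea (the column name and type are
 assumptions; on a 5.0-era server the column has to be maintained by the
 application or a trigger, since generated columns arrived much later):

 ```sql
 ALTER TABLE IVAOData
   ADD COLUMN yearmonth MEDIUMINT UNSIGNED NOT NULL DEFAULT 0,
   ADD INDEX idx_yearmonth (yearmonth);

 -- Backfill once, then keep it set on every insert/update:
 UPDATE IVAOData
    SET yearmonth = DATE_FORMAT(FROM_UNIXTIME(TrackerTime), '%Y%m');

 -- The month-range scan then becomes an equality lookup:
 --   WHERE yearmonth = 201004
 ```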

 -

 Chris Knipe wrote:

 On Fri, May 7, 2010 at 10:42 AM, Rob Wultsch wult...@gmail.com wrote:



 Second thing:
 How selective is
  WHERE IVAOData.TrackerTime >= UNIX_TIMESTAMP('2010-04-01 00:00:00')
    AND IVAOData.TrackerTime <= UNIX_TIMESTAMP('2010-04-30 23:59:50')

 Test by running
 SELECT COUNT(*)
 FROM  IVAOData
  WHERE IVAOData.TrackerTime >= UNIX_TIMESTAMP('2010-04-01 00:00:00')
    AND IVAOData.TrackerTime <= UNIX_TIMESTAMP('2010-04-30 23:59:50')

 If this is a large proportion of the row count then you are probably
 in store for pain. It sounds like you are matching half the table. Big
 (intermediate) result sets often end in pain.




 At this stage, you are correct.  We have roughly 2 months' worth of data in
 the table and are selecting about half (one month's worth), thus about 50%.
 With 6 months' worth of data in the table and selecting one month's worth of
 data, that's roughly 16% of the data - but it will still be a fairly large
 result... Hmmm, something tells me I need to rethink this, yes.





 Third thing:
 My (rather sleepy) gut thinks your best bet is a a composite index on
 the table IVAOData on the columns TrackerTime and FlightID. This will
 make all access to the table in this query hit a covering index.




 Took over 12 hours to create the index on TrackerTime, and you're right -
 I
 should have seen and realised this.  I will drop the index on TrackerTime
 and re-create it using both colums as I should have done in the first
 place.





 Fourth thing:
 What do you intend to ask the database with this query. This query
 smells overly broad




 The idea is to get a count of the number of entries from Dep to Des during
 the last month.  I.E.  How many records are there where Dep and Des are
 the
 same during the last month.  With some changes to the application that
 captures the data in the first place, I should be able to drop the need
 for
 this query completely.  You have made me think a bit here and you're
 right.
 This is not the way to do it.

 I'll rethink this a bit more and come up with something better.

 PS - Started the query before my first email was even posted, it's still
 running... 3948 Seconds the last time I checked...



  








-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Speeding up inserts in InnoDB

2010-04-22 Thread Johnny Withers
I'm a little confused... are the inserts slow, or are the updates slow?

It sounds like you mean the updates were going at about 50 updates/sec. You
could speed up the update by adding an index on phoneticcallsign.CallSign.

JW
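Another technique often cited for this pattern (my addition, not something
either poster suggested): by default InnoDB flushes its log on every commit,
so one-statement autocommit transactions pay an fsync per row. Committing
the updates in batches removes most of that cost. A sketch with illustrative
values:

```sql
-- One log flush per batch instead of one per UPDATE
SET autocommit = 0;
UPDATE phoneticcallsign SET PhoneticCallSign = 'K5XYZ' WHERE CallSign = 'K5ABC';
-- ... a few hundred more updates ...
COMMIT;
```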

On Thu, Apr 22, 2010 at 10:13 AM, Chris W 4rfv...@cox.net wrote:

 I have a very simple table.

 CREATE TABLE  `hams`.`phoneticcallsign` (
  `CallSign` char(6) NOT NULL,
  `PhoneticCallSign` char(6) NOT NULL,
  PRIMARY KEY (`CallSign`),
  KEY `PhoneticIdx` (`PhoneticCallSign`) USING BTREE
 )
 I inserted a little over 1 million records with CallSign set to a value from
 another table and PhoneticCallSign blank.  Then I used the following simple
 PHP script to set the value of PhoneticCallSign.

   $query  = "SELECT `CallSign`\n";
   $query .= "FROM `phoneticcallsign`\n";
   $query .= "WHERE `PhoneticCallSign` = ''\n";
   $result = mysql_query($query) or die("DB error $query" . mysql_error());
   while (($row = mysql_fetch_row($result))) {
     $CallSign = $row[0];
     $PhoneticCallSign = SoundsLike($CallSign);
     $query  = "UPDATE `phoneticcallsign`\n";
     $query .= "SET `PhoneticCallSign` = '$PhoneticCallSign'\n";
     $query .= "WHERE `CallSign` = '$CallSign'\n";
     $Uresult = mysql_query($query) or die("DB error $query" . mysql_error());
   }

 This was running very slow and I was getting only about 50 inserts per
 second.  I noticed that the table was InnoDB so I decided to change it to
 MyISAM and try again.  With MyISAM I was getting around 10,000 inserts per
 second.  Surely there is some way to make InnoDB faster.

 Any ideas?

 Chris W






-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Populating dataset

2010-03-25 Thread Johnny Withers
Wouldn't you want to handle that on the application side? Otherwise, you
would have to have a table of all possible dates to do some kind of join to
find missing values. (I would think).

JW
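A sketch of that calendar-table approach (the calendar table and its column
name are assumptions): one row per date, outer-joined to bookings so the
dates with no match fall out.

```sql
-- calendar holds one row per date; the anti-join exposes missing dates
SELECT c.d AS missing_date
FROM calendar AS c
LEFT JOIN bookings AS b ON b.check_in_date = c.d
WHERE c.d BETWEEN '2010-03-20' AND '2010-03-30'
  AND b.check_in_date IS NULL;   -- e.g. 2010-03-29
```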

On Thu, Mar 25, 2010 at 11:50 AM, Tompkins Neil 
neil.tompk...@googlemail.com wrote:

 Hi,

 We have a table which contains the following sample of information

 2010-03-20
 2010-03-21
 2010-03-22
 2010-03-23
 2010-03-26
 2010-03-27
 2010-03-28
 2010-03-30

 As you can see, the dates listed are not contiguous, e.g. 2010-03-29 is
 missing.  For a query like SELECT check_in_date FROM bookings, how would
 I know to automatically insert a missing date like 2010-03-29 within the
 date range I search?

 Thanks,
 Neil




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: SELECT and INSERT if no row is returned

2010-03-24 Thread Johnny Withers
Make parent_id a unique key. Doing a select first and inserting if no
result will work 99.9% of the time; however, consider 2 rows being
inserted at the same time.

JW
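A sketch of that unique-key approach: once parent_id itself is unique, the
race collapses into a single atomic statement, and INSERT IGNORE (or ON
DUPLICATE KEY UPDATE) becomes workable. The key name and value are
illustrative assumptions:

```sql
ALTER TABLE some_table ADD UNIQUE KEY uk_parent (parent_id);

-- Either of these is race-free once the unique key exists:
INSERT IGNORE INTO some_table (parent_id) VALUES (42);

INSERT INTO some_table (parent_id) VALUES (42)
  ON DUPLICATE KEY UPDATE parent_id = parent_id;  -- no-op when the row exists
```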

On Tuesday, March 23, 2010, Kyong Kim kykim...@gmail.com wrote:
 I need to INSERT a row into a table only if it does not exist to
 insure that there won't be duplicate rows.
 Due to the usage case and the design of the primary key,
 non-unique id + auto increment, I don't think insert ignore is an
 option.

 What would be simplest and cheapest way to make sure that given the
 following schema

 create table some_table
 ( parent_id int,  -- non-unique
   seq_id int auto_increment ) ENGINE=INNODB

 that no row with the same parent_id can be inserted?

 Kyong




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net




Re: UDF - Sequence Numbers

2010-03-09 Thread Johnny Withers
After some tinkering around on one of my test database servers with this
sequence table design; I see what you mean.

Defining the table with store_id and seq_id columns allows me to UPDATE
table SET seq_id=LAST_INSERT_ID(seq_id+1) WHERE store_id=N and will block
any further updates on store_id=N to that table until the entire transaction
block is COMMIT'd. However, it does allow me to issue the same statement
with store_id=X without being blocked.

I thought this design would allow for gaps in the sequence numbering, but it
works great.
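Putting Gavin's sequence table and JW's observation together, the full allocation flow looks roughly like this (trans_adv columns beyond the two shown are omitted):

```sql
-- Sketch: the UPDATE row-locks store_id=1 in store_seq until COMMIT,
-- serializing sequence allocation per store while other stores proceed.
START TRANSACTION;
UPDATE store_seq SET nextid = LAST_INSERT_ID(nextid + 1) WHERE store_id = 1;
INSERT INTO trans_adv (store_id, store_seq_num) VALUES (1, LAST_INSERT_ID());
COMMIT;
```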

Thanks!



On Mon, Mar 8, 2010 at 4:10 PM, Gavin Towey gto...@ffn.com wrote:

 Others may correct me if I'm wrong, but if you want to maintain your own
 auto increment without duplicates like that, then you need to serialize
 inserts to your table.  Which means either doing an explicit table lock by
 the process that's inserting, or using innodb with transactions in the
 SERIALIZABLE tx isolation mode.

 If I were you, I would maintain your incrementing sequence in a separate
 table.

 Prep the table:
 CREATE TABLE store_seq ( store_id int unsigned not null, nextid int
 unsigned not null );
 insert into store_seq values ( 1, 1 );

 Get next id:
 update store_seq set nextid=LAST_INSERT_ID(nextid+1) where store_id=1;
 select LAST_INSERT_ID();

 This will give you the next id in an atomic way, and avoid replication
 problems.  It means your insert will take two queries, but that's a small
 price to pay for correctness.

 Regards,
 Gavin Towey

 -Original Message-
 From: Johnny Withers [mailto:joh...@pixelated.net]
 Sent: Monday, March 08, 2010 1:31 PM
 To: MySQL General List
 Subject: UDF - Sequence Numbers

 I have two servers, both running 5.0.77-log, one is setup as a master, the
 other as a replication slave.

 The database contains a table that holds records of loans for financial
 lending stores. This table has an ID column that is defined as auto
 increment. There is another column called store_seq_num that holds the
 sequence number for each loan done in each store. This column needs to work
 like the auto-increment field; however, its value is dependent upon which
 store created the loan. Currently there is a UDF called fnNextStoreSeqNum
 that returns the next sequence number for the new loan for the given store.
 It does this by executing:

 SELECT MAX(store_seq_num)+1 AS NextId FROM trans_adv WHERE
 trans_adv.store_id=N;

 It uses the store_seq_num key and explain says "Select tables optimized
 away." in the extra column.

 The INSERT statement for this table looks something like this:

 INSERT INTO trans_adv(store_id,store_seq_num,...)
 VALUES(10,fnNextStoreSeqNum(10),...);

 The problem comes in on the replication server. Sometimes the sequence
 numbers do not match the master. The root cause of the problem seems to be
 when two loans are created in the same store at the same time (same second
 -- or even 2 seconds apart sometimes). The master duplicates the sequence
 number and the slave writes the correct sequence numbers. This seems to
 happen when the server is under heavy load (600+ queries per sec). I have a
 feeling it's due to the loan being created in a single transaction;
 therefore the sequence number for the first loan really didn't exist to
 any other connections until COMMIT was issued.

 Is there a better way to do these sequence numbers? Should the key be
 defined as UNIQUE? If it is defined as UNIQUE how can this key be added to
 the existing table that has duplicate sequence numbers?

 A partial create table statement is below for the trans_adv table.

 CREATE TABLE `trans_adv` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `store_id` int(10) unsigned NOT NULL default '0',
  `store_seq_num` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `store_key` (`store_id`),
  KEY `store_seq_num_key` (`company_id`,`store_id`,`store_seq_num`)
 ) ENGINE=InnoDB AUTO_INCREMENT=3049363 DEFAULT CHARSET=latin1
 ;



 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


UDF - Sequence Numbers

2010-03-08 Thread Johnny Withers
I have two servers, both running 5.0.77-log, one is setup as a master, the
other as a replication slave.

The database contains a table that holds records of loans for financial
lending stores. This table has an ID column that is defined as auto
increment. There is another column called store_seq_num that holds the
sequence number for each loan done in each store. This column needs to work
like the auto-increment field; however, its value is dependent upon which
store created the loan. Currently there is a UDF called fnNextStoreSeqNum
that returns the next sequence number for the new loan for the given store.
It does this by executing:

SELECT MAX(store_seq_num)+1 AS NextId FROM trans_adv WHERE
trans_adv.store_id=N;

It uses the store_seq_num key and explain says "Select tables optimized
away." in the extra column.

The INSERT statement for this table looks something like this:

INSERT INTO trans_adv(store_id,store_seq_num,...)
VALUES(10,fnNextStoreSeqNum(10),...);

The problem comes in on the replication server. Sometimes the sequence
numbers do not match the master. The root cause of the problem seems to be
when two loans are created in the same store at the same time (same second
-- or even 2 seconds apart sometimes). The master duplicates the sequence
number and the slave writes the correct sequence numbers. This seems to
happen when the server is under heavy load (600+ queries per sec). I have a
feeling it's due to the loan being created in a single transaction;
therefore the sequence number for the first loan really didn't exist to
any other connections until COMMIT was issued.

Is there a better way to do these sequence numbers? Should the key be
defined as UNIQUE? If it is defined as UNIQUE how can this key be added to
the existing table that has duplicate sequence numbers?

A partial create table statement is below for the trans_adv table.

CREATE TABLE `trans_adv` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `store_id` int(10) unsigned NOT NULL default '0',
  `store_seq_num` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `store_key` (`store_id`),
  KEY `store_seq_num_key` (`company_id`,`store_id`,`store_seq_num`)
) ENGINE=InnoDB AUTO_INCREMENT=3049363 DEFAULT CHARSET=latin1
;



-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Master/Slave - truncate master table

2010-03-05 Thread Johnny Withers
Does START SLAVE fail?

If so, what is the output of SHOW SLAVE STATUS?

JW

On Fri, Mar 5, 2010 at 10:11 AM, Steven Staples sstap...@mnsi.net wrote:

 Good day everyone...

 I am working with a master/slave set up, and of course, we started it once
 the database tables were a combined size of 60gb+

 We've got it running and in sync now, (stopped the master, deleted the bin
 files, rsync'd the database tables and restarted the master) and all has
 been fine for the last week... until today.

 We stopped the mysql process on the slave, and did a backup (due to size,
 we
 just rsync and backup the rsync) of the table files.  Took about 40 minutes
 to do (still not sure why it was that long, but anyway), and then we
 restarted the slave mysql.  We're getting an error now.  Apparently one of
 the tables is out of sync now? What appears to have happened is that when
 the slave was down, I truncated a table on the master.

 I've read that this sometimes causes errors. Is this a bug? Or is there
 something I should do differently?


 Steven Staples







-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Index not being used

2010-03-05 Thread Johnny Withers
I don't have your data, so I do not really know if my results are correct.

I created this table on my system and added two records:

INSERT INTO form_fields_items(siteid,fieldid,`value`,sortorder) VALUES
(1234,5678,'test1',0)
;
INSERT INTO form_fields_items(siteid,fieldid,`value`,sortorder) VALUES
(4321,8765,'test2',0)
;

Both of your sample selects use the sitefieldsort key with a key_len of 8.
The extra part of explain's output reads "Using where".

MySQL Server version 5.0.77-log.

Maybe you need to run ANALYZE TABLE on this table?
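Beyond ANALYZE TABLE, an index hint is a generic way (not something suggested in the thread) to test whether the composite index avoids the filesort:

```sql
-- Sketch: force the composite index so rows come back already ordered by
-- sortorder within (siteid, fieldid); verify with EXPLAIN before relying on it.
SELECT * FROM form_fields_items FORCE INDEX (sitefieldsort)
WHERE siteid = 1234 AND fieldid = 5678
ORDER BY sortorder;
```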

JW

On Fri, Mar 5, 2010 at 10:42 AM, Simon Kimber si...@internetstuff.ltd.uk wrote:

 Hi Everyone,

 I have the following table:

 CREATE TABLE `form_fields_items` (
  `ID` int(11) NOT NULL auto_increment,
  `siteid` int(11) NOT NULL default '0',
  `fieldid` int(11) NOT NULL default '0',
  `value` varchar(150) NOT NULL default '',
  `sortorder` int(11) NOT NULL default '0',
  PRIMARY KEY  (`ID`),
  KEY `siteid` (`siteid`),
  KEY `fieldid` (`fieldid`),
  KEY `sortorder` (`sortorder`),
  KEY `sitefieldsort` (`siteid`,`fieldid`,`sortorder`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=61219 ;

 And I am running the following query:

 SELECT * FROM form_fields_items WHERE siteid = 1234 AND fieldid = 5678
 ORDER BY sortorder

 And an explain returns the following:

 id  select_type  table              type  possible_keys                 key      key_len  ref    rows  Extra
 1   SIMPLE       form_fields_items  ref   siteid,fieldid,sitefieldsort  fieldid  4        const  9     Using where; Using filesort


 Can anyone tell me why this is not using the sitefieldsort index?

 If I change the query to something that returns no rows, such as:

 SELECT * FROM form_fields_items WHERE siteid = 1 AND fieldid = 1 ORDER
 BY sortorder

 An explain shows it using the correct index.

 Thanks for your time!

 Simon





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: 50 things to know before migrating from Oracle to MySQL

2010-01-28 Thread Johnny Withers
Doesn't Google run MySQL?

Hmmm


On Thu, Jan 28, 2010 at 4:56 PM, Carl c...@etrak-plus.com wrote:

 A quick Google turned up


 http://www.xaprb.com/blog/2009/03/13/50-things-to-know-before-migrating-oracle-to-mysql/

 Man, I love Google.

 Thanks,

 Carl
 - Original Message - From: Daevid Vincent dae...@daevid.com

 To: mysql@lists.mysql.com
 Cc: 'changuno ' chang...@rediffmail.com
 Sent: Thursday, January 28, 2010 5:49 PM
 Subject: RE: 50 things to know before migrating from Oracle to MySQL


  -Original Message-
 From: John Meyer [mailto:johnme...@pueblocomputing.com]
 Sent: Thursday, January 28, 2010 2:16 PM
 To: mysql@lists.mysql.com

 On 1/28/2010 3:21 AM, changuno wrote:
  Read a blog which states 50 things to know before migrating
  from Oracle to MySQL. Any comments on this?
 
 would it have been too much to just link to it?


 Thought the same thing.

 Not only that, it would have been PREFERRED,
 so I can BOOKMARK it and SHARE it with my other colleagues.










-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Best way to synchronize two database schemas

2010-01-22 Thread Johnny Withers
 of the two database, compare, and generate the
  necessary script
   to apply to db_prod.
  
   Thanks,
   Randall Price
  
  
 
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 

  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/mysql?unsub=dae...@daevid.com
 








 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: tmp tables

2010-01-13 Thread Johnny Withers
No one designs a shopping cart system this way.

http://www.google.com/search?q=shopping+cart+database+table+design

If you are dead set on this crazy design it doesn't matter if you put the
temp tables in the main database or another database. It'll be just as silly
either way.

JW



On Wed, Jan 13, 2010 at 12:38 PM, Victor Subervi victorsube...@gmail.com wrote:

 On Mon, Jan 11, 2010 at 3:21 PM, mos mo...@fastmail.fm wrote:

  At 09:56 AM 1/11/2010, Johnny Withers wrote:
 
  Victor,
 
  The temporary table solution is not a good one. Use a single table and
  store
  each item put into a cart identified by the session ID of the user. A
  process should clean out this table periodically since there are
  shoppers
  that abandon carts from time to time.
 
  The design of this table could be quite simple:
 
  id - int auto_inc
  session_id - varchar
  added_dt - datetime
  item_id - int
  quantity - int
 
  You could get more complicated depending in your need an item
 requirements
  (colors, sizes, etc).
 
  -JW
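The column list JW gives translates to roughly this DDL; the table name, column sizes, and the session index are assumptions, not from the thread:

```sql
-- Sketch of the single cart table JW outlines: one row per item in a cart,
-- keyed by the shopper's session ID; a periodic job deletes abandoned rows.
CREATE TABLE cart_items (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
  session_id VARCHAR(64)  NOT NULL,
  added_dt   DATETIME     NOT NULL,
  item_id    INT UNSIGNED NOT NULL,
  quantity   INT UNSIGNED NOT NULL DEFAULT 1,
  PRIMARY KEY (id),
  KEY session_key (session_id)
) ENGINE=InnoDB;
```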
 
 
  I also agree with JW, a single table is better. Don't forget shoppers may
  spend 30 minutes or more filling up their shopping cart and may lose the
  connection to the site or have to exit without completing the order. You
  need to retain the registered user's shopping cart so he can go back to
 it
  later. Only delete it if there is no activity on it for 7-10 days.  If
 you
  use temporary tables and throw out the shopping cart contents if the
 session
  is lost, then you will *iss off a lot of customers.
 

 I don't see how a single table is a solution. Tables are not 3D data
 structures. If they were, you'd be right! I need 2D to describe the
 customer's data: cols (ID, quantity, options, etc) X products. How can I
 add
 a third axis of custID??? No! Not possible.

 I am using cookies and committing data to the temp tables every time the
 customer puts something in the shopping cart.

 Can someone please answer my question as to which is better, putting the
 temp tables in a separate database or together with the 'workhorse' tables?
 TIA,
 V




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: file per table performance

2010-01-12 Thread Johnny Withers
There are a few articles on this at MySQL Performance Blog:

http://www.mysqlperformanceblog.com/?s=innodb_file_per_table+performance



On Tue, Jan 12, 2010 at 3:05 PM, Bryan Cantwell bcantw...@firescope.com wrote:

 Anyone have information they can provide on the performance hit of using
 innodb_file_per_table?
 I'd assume that since there are many individual tables that this would slow
 performance, but perhaps not.
 In a huge database, is this not a good idea, or a better one?






-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net

