Re: Query two different databases for differences

2008-02-11 Thread BJ Swope
On Feb 11, 2008 7:27 PM, James Eaton [EMAIL PROTECTED] wrote:

 SELECT
  MIN(TableName) as TableName, id, col1, col2, col3, ...
 FROM (
  SELECT 'Table a' as TableName, a.id, a.col1, a.col2, a.col3, ...
  FROM a
  UNION ALL
  SELECT 'Table b' as TableName, b.id, b.col1, b.col2, b.col3, ...
  FROM b
 ) AS tmp
 GROUP BY id, col1, col2, col3, ...
 HAVING COUNT(*) = 1
 ORDER BY ID;


 This finds common rows.  That doesn't help when the tables have about
 20,000 rows and most are the same.  How do I invert the query so that I
 can find the 'uncommon' rows?  Second, the primary key ('id' in the
 example) values do not match, so how/where in the query can you specify
 how to match rows from the two tables using other columns?


SELECT A.message_number
FROM message_table A
LEFT JOIN delivery_table B
ON  A.message_number = B.message_number
WHERE B.message_number IS NULL
AND A.deleted = 0;

This will select every message_number that appears in A but not in B, where
A's deleted field equals 0.
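
For the original question, where the id values in the two tables do not line
up, a minimal sketch of the same anti-join pattern matches on the data
columns instead of the keys (column names assumed from the example):

SELECT a.id, a.col1, a.col2, a.col3
FROM a
LEFT JOIN b
    ON  a.col1 = b.col1
    AND a.col2 = b.col2
    AND a.col3 = b.col3
WHERE b.col1 IS NULL;

Swapping a and b in the join lists the rows unique to b. Note that the match
columns should not contain NULLs, since NULL = NULL is never true in the ON
clause.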

-- 
We are all slave to our own paradigm. -- Joshua Williams

If the letters PhD appear after a person's name, that person will remain
outdoors even after it's started raining. -- Jeff Kay


Re: Mass insert on InnoDB

2008-01-29 Thread BJ Swope
Drop the indexes for the conversion, then rebuild the indexes after the
tables are converted.
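
A minimal sketch of that approach, assuming the secondary indexes described
in the question are named idx_a and idx_b on columns A and B (all names
hypothetical):

ALTER TABLE big_table DROP INDEX idx_a, DROP INDEX idx_b;        -- drop secondary indexes first
ALTER TABLE big_table ENGINE=InnoDB;                             -- convert; only the clustered PK is maintained
ALTER TABLE big_table ADD INDEX idx_a (A), ADD INDEX idx_b (B);  -- rebuild both in one pass at the end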

On Jan 29, 2008 4:08 PM, David Schneider-Joseph [EMAIL PROTECTED] wrote:

 Hi all,

 I am attempting to convert a very large table (~23 million rows) from
 MyISAM to InnoDB.  If I do it in chunks of one million at a time, the
 first million are very fast (approx. 3 minutes or so), and then it
 gets progressively worse, until by the time I get even to the fourth
 chunk, it's taking 15-20 minutes, and continuing to worsen.  This is
 much worse degradation than the O(log N) that you would expect.

 I have determined that this is in part due to the indexes on the
 table.  I have an index on two columns, call them A and B.  There is
 no relationship between the ordering in A and the ordering in B.  If I
 create the InnoDB table with only index A, and insert the data into it
 in the order of A, then almost no gradual degradation in performance
 can be observed between chunks.  Similarly, if I create the InnoDB
 table with only index B, and insert the data ordered by B, it also
 maintains its performance.

 However, I have not been able to find a way to insert the data with
 *both* indexes, A and B, without suffering this gradual degradation in
 performance.  I have tried all sorts of methods to convert the table,
 such as dumping to a file and importing back into an InnoDB table,
 using "ALTER TABLE big_table ENGINE=InnoDB;", importing the data
 without the second index and adding it after the fact, and in all
 cases it is just a game of whack-a-mole, with the overhead being moved
 elsewhere.  (Note: my primary key is an integer column, so that is not
 the issue here.)

 This problem can even be reproduced in a very simple test case, where
 I continuously insert approximately 1 million rows into a table, with
 random data.  `big_table` can be any table with approximately one
 million rows in the id range 1 through 1000000 (we're not actually using
 any data from it):

  mysql> create table test (id int NOT NULL auto_increment, x int NOT
  NULL, primary key (id), key (x)) ENGINE=InnoDB;
  Query OK, 0 rows affected (0.08 sec)

  mysql> insert into test (x) select cast(rand()*1000 as unsigned)
  from big_table where id between 1 and 1000000;
  Query OK, 981734 rows affected (22.23 sec)
  Records: 981734  Duplicates: 0  Warnings: 0

  mysql> insert into test (x) select cast(rand()*1000 as unsigned)
  from big_table where id between 1 and 1000000;
  Query OK, 981734 rows affected (37.03 sec)
  Records: 981734  Duplicates: 0  Warnings: 0

  mysql> insert into test (x) select cast(rand()*1000 as unsigned)
  from big_table where id between 1 and 1000000;
  Query OK, 981734 rows affected (56.41 sec)
  Records: 981734  Duplicates: 0  Warnings: 0

  mysql> insert into test (x) select cast(rand()*1000 as unsigned)
  from big_table where id between 1 and 1000000;
  Query OK, 981734 rows affected (1 min 8.47 sec)
  Records: 981734  Duplicates: 0  Warnings: 0

  mysql> insert into test (x) select cast(rand()*1000 as unsigned)
  from big_table where id between 1 and 1000000;
  Query OK, 981734 rows affected (1 min 27.67 sec)
  Records: 981734  Duplicates: 0  Warnings: 0

  mysql> insert into test (x) select cast(rand()*1000 as unsigned)
  from big_table where id between 1 and 1000000;
  Query OK, 981734 rows affected (1 min 57.93 sec)
  Records: 981734  Duplicates: 0  Warnings: 0

 Any ideas, anyone?

 Thanks,
 David





-- 
We are all slave to our own paradigm. -- Joshua Williams

If the letters PhD appear after a person's name, that person will remain
outdoors even after it's started raining. -- Jeff Kay


Re: Bash script array from MySQL query - HELP Please!!!

2007-05-22 Thread BJ Swope

I would look at the 15th URL to see if there are special characters in there
that are breaking the array somehow.
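
Another thing worth checking (a sketch, not part of the original reply): in
bash, ${#HN} expands to the string length of the first element HN[0], not
the number of elements, so the loop bound in the script below stops early.
The [@] forms give the real element count:

echo ${#HN[@]}                      # number of elements in the array
for ((i=0; i<${#HN[@]}; i++)); do
    echo "${HN[$i]}"                # value of each element
done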

On 5/22/07, Ben Benson [EMAIL PROTECTED] wrote:

 I'm having problems getting a monitoring script to run.

 I've put the troublesome bit in a separate script just to test, and it goes
 like this:

 declare -a HN    # declares the array HN

 HN=(`echo SELECT url FROM hosts | mysql --user=netmon --password=n3tm0n --skip-column-names check_http`)    # runs the query and assigns each record to a new element in the array

 echo ${#HN}    # echo's number of elements in array

 for ((i=0; i<${#HN}; i++)); do
     echo ${HN[${i}]}    # echo value of each element
 done

 Seems simple enough, yeah?! Well, if I run echo SELECT url FROM hosts |
 mysql --user=user --password=pass --skip-column-names check_http at the
 command line, I get all of the records - 32. If I run the script above, it
 simply refuses to put more than 14 elements in the array.

 Then, to confuse it even more, if I sort the query, it gives a different
 count depending on what it's sorted by!! For example, if I sort it by 'url'
 it seems to generate 569 elements!

 Can anyone please spot the undoubtedly obvious error I've made here?! I've
 been scratching my head for days, to no avail!

 Many thanks in advance,

 Ben Benson







--
We are all slave to our own paradigm. -- Joshua Williams

If the letters PhD appear after a person's name, that person will remain
outdoors even after it's started raining. -- Jeff Kay


Re: secure port 3306

2007-05-07 Thread BJ Swope

On 5/7/07, Steven Buehler [EMAIL PROTECTED] wrote:


Thank you so much.  I will have to try this later today.  I have
never done a port forward in iptables before.  I knew that I could, but just
never had a need or tried it, so it slipped my mind.
The thing is... I need to do this securely.  Here is the setup:
Desktop -> secure connection to Server 1 -> secure connection to Server 2.
So I am assuming that what I need to do is have the Desktop SSH into
Server 1, which will have the iptables setup to tunnel to Server 2, and then
use a tunnel from SecureCRT (or PuTTY) all the way to Server 2 through
Server 1?  Server 2 can only be accessed with SSH from Server 1.

Again, Thank You.
Steve




You can forward a port on your local computer, through server1 to server2
using ssh tunnelling.

ssh -L 3306:server2_ip_address:3306  [EMAIL PROTECTED]


Now you can point your SQL client at localhost port 3306 and the traffic
will be carried across your ssh session to server1.  Once it gets to
server1, server1 will forward that traffic to server2 on port 3306.
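
As a usage sketch (the user name is hypothetical), the mysql client then
connects to the local end of the tunnel; 127.0.0.1 forces a TCP connection
rather than the local unix socket:

mysql -h 127.0.0.1 -P 3306 -u dbuser -p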

PuTTY, a well-known Windows-based ssh client, can do this under Connection >
SSH > Tunnels.

Source Port is the port on the local machine that will be forwarded across
the ssh connection.
Destination is the IP:Port to which the tunneled traffic is delivered.
Choose Local as the type of tunnel, and do not forget to click the Add
button to add the tunnel to the ssh connection you are configuring.


Re: Help with this query. How to do a group by on a datetime just the month/day/year

2006-05-03 Thread BJ Swope

select count(*), substring(timeofclick,1,7) from MTracking where
mallarea=1001 group by 2;
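
That groups by year and month, since the first 7 characters of a DATETIME
are 'YYYY-MM'. If the grouping should be per calendar day, as the subject
line asks, a minimal variant uses DATE() to truncate the DATETIME to its
date part:

select count(*), date(timeofclick) from MTracking where mallarea=1001
group by 2;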



On 5/3/06, Randy Paries [EMAIL PROTECTED] wrote:


Hello,

Not sure if I can do this.

I have a table with a datetime column

I would like to group by the day of the month.

If I do something like

select count(*) from MTracking where mallarea=1001 group by timeofclick

every row ends up in its own group because of the time portion.

So is this possible?

Thanks
Randy
