Re: slave status: vague documentation of Seconds_Behind_Master

2007-05-02 Thread Ofer Inbar
Mathieu Bruneau <[EMAIL PROTECTED]> wrote:
> > In the section on Seconds_Behind_Master, first it says:
> > 
> >   When the slave SQL thread is actively running (processing updates),
> >   this field is the number of seconds that have elapsed since the
> >   timestamp of the most recent event on the master executed by that thread.
> 
> This is generally true and accepted; the important part to read here is
> "actively running", which leads to the other paragraph:
[...]
> So if the SQL thread isn't executing anything, it will report 0 in
> Seconds_Behind_Master. If you think about it, this makes sense, because
> it has no "master" timestamp to base its calculation on. So the warning
> applies here: a SQL thread reporting 0 seconds behind master doesn't
> mean it has caught up with the master; it simply means it has caught up
> with the I/O thread. If the I/O thread is lagging, there's currently no
> way of knowing it.

Ahah.  So "processing updates" does *not* include monitoring the
relay log for new material, and if it sees nothing new in the relay
log it reports 0.  Thanks.
  -- Cos

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: slave status: vague documentation of Seconds_Behind_Master

2007-05-02 Thread Mathieu Bruneau
Ofer Inbar wrote:
> I'm confused by a bit of the documentation here:
>   http://dev.mysql.com/doc/refman/5.0/en/show-slave-status.html
> 
> In the section on Seconds_Behind_Master, first it says:
> 
>   When the slave SQL thread is actively running (processing updates),
>   this field is the number of seconds that have elapsed since the
>   timestamp of the most recent event on the master executed by that thread.
> 

This is generally true and accepted; the important part to read here is
"actively running", which leads to the other paragraph:

> ... but later it says:
> 
>  If the network connection between master and slave is fast, the slave
>  I/O thread is very close to the master, so this field is a good
>  approximation of how late the slave SQL thread is compared to the
>  master. If the network is slow, this is not a good approximation; the
>  slave SQL thread may quite often be caught up with the slow-reading
>  slave I/O thread, so Seconds_Behind_Master often shows a value of 0,
>  even if the I/O thread is late compared to the master. In other
>  words, this column is useful only for fast networks.
> 

So if the SQL thread isn't executing anything, it will report 0 in
Seconds_Behind_Master. If you think about it, this makes sense, because
it has no "master" timestamp to base its calculation on. So the warning
applies here: a SQL thread reporting 0 seconds behind master doesn't
mean it has caught up with the master; it simply means it has caught up
with the I/O thread. If the I/O thread is lagging, there's currently no
way of knowing it.


> These two sections seem contradictory to me.  If Seconds_Behind_Master
> actually works the way it is first defined, then it should reflect any
> delays caused by the network as well as delays in the SQL thread.
> 
> Since each event in the binary log is timestamped by the master, we
> know when the operation happened on the master.  If we compare that
> timestamp to the current time, we know how long ago it happened - so
> if we look at the timestamp of the most recent event executed by the
> slave SQL thread we see when it happened on the master, and can tell
> how much time has elapsed since then.
> 
> Two problems with this approach would be:
> 
>  1. If the local clocks on the master and slave aren't in sync,
> the timestamp comparison to current time would be off.
> 
>  2. If no writes to the master database have happened in a while, this
> would report the slave as "behind" because the most recent operation
> on the master has a timestamp from some time ago.
> 
> Both of these lead me to suspect that Seconds_Behind_Master does *not*
> actually work the way the first paragraph implies; if so, then the
> second paragraph I quoted from the doc could very well be true.
> 
> But if so, what exactly does Seconds_Behind_Master show?
>   -- Cos
> 

Problem 1 is real, but ... you use NTP, right? Seconds_Behind_Master only
has one-second resolution anyway...

Problem 2 isn't true, because if the SQL thread doesn't have anything to
replay in its relay log it will show 0, so you won't think it is behind
the master. But that leads to the problem described in the second
paragraph you pasted from the manual: if the network is what's making it
lag, you may think the slave is up to date when it really isn't!

So basically, if you want to know whether your slave is really up to
date, you have to make sure the I/O thread isn't still transferring and
that the SQL thread has finished replaying everything in the relay log!
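
In practice, that check looks like this (a sketch; comparing these
coordinate pairs is the usual rule of thumb, not an official API):

```sql
SHOW SLAVE STATUS\G
-- The slave is genuinely caught up only when:
--   Slave_IO_Running = Yes  and  Slave_SQL_Running = Yes
--   Master_Log_File = Relay_Master_Log_File     (both threads on the same binlog)
--   Read_Master_Log_Pos = Exec_Master_Log_Pos   (SQL thread has replayed it all)
-- Seconds_Behind_Master = 0 by itself only says the SQL thread has
-- caught up with the I/O thread, not with the master.
```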

Maybe separate SQL_THREAD_LAG and IO_THREAD_LAG columns would be
possible (feature request?), but I don't know whether the I/O thread
parses enough of each event to read the timestamp in it.

Hope that helps!

-- 
Mathieu Bruneau
aka ROunofF

===
GPG keys available @ http://rounoff.darktech.org




a script to archive binary logs

2007-05-02 Thread Ofer Inbar
For disaster recovery, it's good to have copies of your database dumps
that you can easily & conveniently access, that are outside the data
center where the database lives.  Since we do a weekly full dump and
use binary logs for "incrementals", I also wanted copies of our binary
logs in the same place.

However, binary logs for an active server can be very big.  It'd be
nice to gzip them them for faster transfer, lower bandwidth charges,
less disk space used on the backup host, etc.  And they compress well:
in my experience, usually to about 1/10th of original size.

Unfortunately, if you gzip the destination, you can't easily use rsync
to make the backups, since it won't correctly identify which files
need to be copied or deleted.  So I wrote this script, which syncs one
directory to another, gzip'ing the resulting files - but only files
whose name matches a regex you set at the beginning.  It knows that
each file in the source dir corresponds to a file with the same name
with .gz appended in the destination dir, and correctly figures out
which ones to copy over and which ones to delete.

I posted the generic version at: http://thwip.sysadmin.org/dirsyncgz

Here it is, with variables set for typical mysql binary log use:

--
#!/usr/bin/perl
#
# $Id: dirsyncgz,v 1.1 2007/05/03 04:15:35 cos Exp $
#
# syncs files w/names matching a regex from srcdir to destdir, and gzips
#
# only files whose modification time is more recent than the
# corresponding gzip'ed file will be copied, and if a file has been
# deleted from the srcdir, the corresponding gzip'ed file will be
# deleted from the destdir

my $srcdir = "/var/lib/mysql";
my $destdir = "/backup/mysqllogs";
my $basename = qr/^binlog\.\d+(\.gz)?$/;  # qr//, not "...": double quotes mangle \d; also match .gz copies

opendir SRCDIR, $srcdir or die "$0: can't open directory $srcdir: $!\n";

foreach $file
  ( sort grep { /$basename/ && -f "$srcdir/$_" } readdir(SRCDIR) )

{ next unless ((stat("$srcdir/$file"))[9] > ((stat("$destdir/$file.gz"))[9] || 0));
  print "Copying $srcdir/$file to $destdir\n";
  
  system("cp", "-p", "$srcdir/$file", $destdir) == 0
    or warn "$0: cp -p $srcdir/$file $destdir failed: $?\n"
    and next;
  system("gzip", "-f", "$destdir/$file") == 0
    or warn "$0: gzip -f $destdir/$file failed: $?\n";
}

# now delete from the backup dir any logs deleted from the srcdir

opendir DESTDIR, $destdir or die "$0: can't open directory $destdir: $!\n";

foreach $savedfile
  ( sort grep { /$basename/ && -f "$destdir/$_" } readdir(DESTDIR) )
{ $savedfile =~ s/\.gz$//;
  next if -f "$srcdir/$savedfile";

  print "Deleting $savedfile from $destdir\n";
  unlink "$destdir/${savedfile}.gz"
or unlink "$destdir/$savedfile"
or warn "$0: error deleting $savedfile: $!\n";
}
--

You can sync the logs to a remotely mounted filesystem and/or use its
destination directory as a source directory for your rsync.

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
  It's been said that if a sysadmin does his job perfectly, he's the
  fellow that people wonder what he does and why the company needs him,
  until he goes on vacation.  -- comp.unix.admin FAQ




Re: expire_logs_days

2007-05-02 Thread Paul DuBois

At 8:46 PM -0400 5/2/07, Baron Schwartz wrote:

Ofer Inbar wrote:

Mark Leith <[EMAIL PROTECTED]> wrote:
Do keep in mind that expire_logs_days only gets triggered at a) server
startup, or b) the time a binary log has to roll over.

If your binary logs do not roll over for long stretches (i.e., on
lower-load systems that still stay up for long periods), you might not
see a log expired for some time.


That's a good point, though probably a minor one: At most you would
end up with one binary logfile that's "old" and not deleted.  As soon
as you create a new one, that one would be deleted (if this feature works).

In our case, we flush logs nightly.  (but hardly ever restart mysqld)
  -- Cos



We roll many logs every day, but never restart unless we have to. 
So for us, it looked like it genuinely wasn't working on roll; I 
have no idea about restart.


I have a 4.1.13 server that's been up for 100 days.  It has expire_logs_days
set, and I have 7 binlog files.  I do flush my logs once a day to force the
logs to rotate.

So that's one confirmation that it works, at least in 4.1.13. :-)
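
The working setup described here amounts to the following (a sketch; the
7-day value and the nightly cron scheduling are assumptions):

```sql
-- in my.cnf under [mysqld]:  expire_logs_days = 7
-- Expiry is only checked at server startup and at binlog rotation,
-- so force a rotation periodically (e.g. from a nightly cron job):
FLUSH LOGS;
SHOW BINARY LOGS;   -- verify that logs older than the window are gone
```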

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com




Re: expire_logs_days

2007-05-02 Thread Baron Schwartz

Ofer Inbar wrote:

Mark Leith <[EMAIL PROTECTED]> wrote:
Do keep in mind that expire_logs_days only gets triggered at a) server
startup, or b) the time a binary log has to roll over.

If your binary logs do not roll over for long stretches (i.e., on
lower-load systems that still stay up for long periods), you might not
see a log expired for some time.


That's a good point, though probably a minor one: At most you would
end up with one binary logfile that's "old" and not deleted.  As soon
as you create a new one, that one would be deleted (if this feature works).

In our case, we flush logs nightly.  (but hardly ever restart mysqld)
  -- Cos



We roll many logs every day, but never restart unless we have to.  So for us, it 
looked like it genuinely wasn't working on roll; I have no idea about restart.


Baron




Re: expire_logs_days

2007-05-02 Thread Ofer Inbar
Mark Leith <[EMAIL PROTECTED]> wrote:
> Do keep in mind that expire_logs_days only gets triggered at a) server
> startup, or b) the time a binary log has to roll over.
> 
> If your binary logs do not roll over for long stretches (i.e., on
> lower-load systems that still stay up for long periods), you might not
> see a log expired for some time.

That's a good point, though probably a minor one: At most you would
end up with one binary logfile that's "old" and not deleted.  As soon
as you create a new one, that one would be deleted (if this feature works).

In our case, we flush logs nightly.  (but hardly ever restart mysqld)
  -- Cos




Re: best filesystem for mysql

2007-05-02 Thread Jeremy Cole

Hi Jeff,

There isn't really a clear winner in any case, but the tests done in the 
article linked to are highly suspect.  It would be much more interesting 
to see real tests done with real hardware...


Nonetheless, the usual answer I would have is to lean towards ease of 
administration and good understanding, rather than getting 1% better 
performance.  That usually means ext3.


Regards,

Jeremy

Jeff Pang wrote:

hello list,

I saw this article for the suitable filesystem for mysql.
http://www.bullopensource.org/ext4/sqlbench/


From what I saw, the best filesystem for MyISAM is ext3, and the best
filesystem for InnoDB is ReiserFS.

What are your thoughts on it? Thanks.







Re: expire_logs_days

2007-05-02 Thread Mark Leith

Mark Leith wrote:

Juan Eduardo Moreno wrote:

Hi,

I'm having problems using expire_logs_days; it doesn't work. I set this
parameter in the cnf file, and when the time is up (for example, 5
days), nothing gets deleted.

In my experience, this parameter doesn't work (5.0.27).


I am testing this now (on 5.0.40) and will see how it works out. 


OK, initial testing of this has shown no problems with the log expiry on
the rollover that happens when the server starts/stops:


medusa:/usr/local/mysql/data root# ls -l
total 41024
-rw-rw----   1 mysql  wheel   5242880 May  5 22:34 ib_logfile0
-rw-rw----   1 mysql  wheel   5242880 May  2 22:27 ib_logfile1
-rw-rw----   1 mysql  wheel  10485760 May  5 22:34 ibdata1
-rw-rw----   1 mysql  wheel       117 May  5 22:33 medusa-bin.02
-rw-rw----   1 mysql  wheel       117 May  5 22:34 medusa-bin.03
-rw-rw----   1 mysql  wheel       117 May  5 22:34 medusa-bin.04
-rw-rw----   1 mysql  wheel        98 May  5 22:34 medusa-bin.05
-rw-rw----   1 mysql  wheel       160 May  5 22:34 medusa-bin.index
-rw-rw----   1 mysql  wheel      6051 May  5 22:34 medusa.err
-rw-rw----   1 mysql  wheel         5 May  5 22:34 medusa.pid
drwxr-x---  53 mysql  wheel      1802 May  2 22:26 mysql
drwxr-x---   2 mysql  wheel        68 Apr 21 06:09 test
medusa:/usr/local/mysql/data root# mysqladmin -u root shutdown
medusa:/usr/local/mysql/data root# date
Sat May  5 22:35:07 BST 2007
medusa:/usr/local/mysql/data root# date
Wed May  9 22:35:24 BST 2007
medusa:/usr/local/mysql/data root# ../bin/mysqld --user=mysql &
[1] 1867
medusa:/usr/local/mysql/data root# 070509 22:35:53 [Warning] Setting 
lower_case_table_names=2 because file system for 
/usr/local/mysql-enterprise-gpl-5.0.40-osx10.4-i686/data/ is case 
insensitive
070509 22:35:53 [Warning] No argument was provided to --log-bin, and 
--log-bin-index was not used; so replication may break when this MySQL 
server acts as a master and has his hostname changed!! Please use 
'--log-bin=/usr/local/mysql-enterprise-gpl-5.0.40-osx10.4-i686/data/medusa-bin' 
to avoid this problem.

070509 22:35:53  InnoDB: Started; log sequence number 0 43655
070509 22:35:54 [Note] ../bin/mysqld: ready for connections.
Version: '5.0.40-enterprise-gpl-log'  socket: '/tmp/mysql.sock'  port: 
3306  MySQL Enterprise Server (GPL)


medusa:/usr/local/mysql/data root# ls -l
total 41000
-rw-rw----   1 mysql  wheel   5242880 May  9 22:35 ib_logfile0
-rw-rw----   1 mysql  wheel   5242880 May  2 22:27 ib_logfile1
-rw-rw----   1 mysql  wheel  10485760 May  5 22:35 ibdata1
-rw-rw----   1 mysql  wheel        98 May  9 22:35 medusa-bin.06
-rw-rw----   1 mysql  wheel        75 May  9 22:35 medusa-bin.index
-rw-rw----   1 mysql  wheel      6341 May  5 22:35 medusa.err
-rw-rw----   1 mysql  wheel         5 May  9 22:35 medusa.pid
drwxr-x---  53 mysql  wheel      1802 May  2 22:26 mysql
drwxr-x---   2 mysql  wheel        68 Apr 21 06:09 test

The only thing left to test now is that this also happens when a binary
log rolls over.


Do keep in mind that expire_logs_days only gets triggered at a) server
startup, or b) the time a binary log has to roll over.

If your binary logs do not roll over for long stretches (i.e., on
lower-load systems that still stay up for long periods), you might not
see a log expired for some time.


Anyway, I'll still check out binary log rollover as well..

Cheers,

Mark

--
Mark Leith, Support Engineer
MySQL AB, Worcester, England, www.mysql.com
Are you MySQL certified?  www.mysql.com/certification





Re: [PHP] [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Brian Dunning
Thanks to everyone who answered, think I've got enough info now to  
handle it.  :)





Re: expire_logs_days

2007-05-02 Thread Mark Leith

Juan Eduardo Moreno wrote:

Hi,

I'm having problems using expire_logs_days; it doesn't work. I set this
parameter in the cnf file, and when the time is up (for example, 5
days), nothing gets deleted.

In my experience, this parameter doesn't work (5.0.27).


I am testing this now (on 5.0.40) and will see how it works out.

Cheers!

Mark

--
Mark Leith, Support Engineer
MySQL AB, Worcester, England, www.mysql.com
Are you MySQL certified?  www.mysql.com/certification





Re: ROLLUP and Text on total lines

2007-05-02 Thread Dan Nelson
In the last episode (May 02), Jesse said:
>  Using ROLLUP in my GROUP BY, I have been able to get MySQL to insert 
>  sub-total lines, which is great.  However, it would be even better if I 
>  could determine the text on those subtotal lines.  Here's my query:
> 
>  SELECT Sc.State, St.Description, Count(*)
>  FROM InvHead I
>JOIN Schools Sc on Sc.ID=I.ChapterID
>JOIN Participants P ON P.InvNo=I.InvNo
>JOIN StatusCodes St ON P.Status=St.Code
>  GROUP BY Sc.State, St.Description WITH ROLLUP
> 
>  I have seen examples that use (I believe) MS SQL and the Grouping()
>  function to determine if the row is an inserted subtotal or total
>  row.  Is there a way to determine if the row is a subtotal or total
>  row, and change the text to indicate which it is?

The total rows should have some of the grouped fields set to NULL, at
least according to the examples in the manual.  You should be able to
test those in an IF() expression that returns a special string if State
and/or Description are NULL.
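
Applied to Jesse's query, that might look like the following sketch. It
wraps the rollup in a derived table so the NULL tests see the finished
rows, and it assumes the real data never contains NULL State or
Description values (those would be indistinguishable from the rollup
markers):

```sql
SELECT
  COALESCE(t.State, '') AS State,
  CASE
    WHEN t.State IS NULL       THEN 'Grand total'
    WHEN t.Description IS NULL THEN CONCAT('Subtotal for ', t.State)
    ELSE t.Description
  END AS Description,
  t.Cnt
FROM (
  SELECT Sc.State, St.Description, COUNT(*) AS Cnt
  FROM InvHead I
    JOIN Schools Sc ON Sc.ID = I.ChapterID
    JOIN Participants P ON P.InvNo = I.InvNo
    JOIN StatusCodes St ON P.Status = St.Code
  GROUP BY Sc.State, St.Description WITH ROLLUP
) AS t;
```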
 
-- 
Dan Nelson
[EMAIL PROTECTED]




Re: how to enable logging, MySQL 5.0.37 FreeBSD 6.2

2007-05-02 Thread Ray
I seem to have problems receiving from this list.
I hadn't seen this answer come through until I was searching google again 
today.  :)
I just can't get my head around the MySQL config system. If you can provide me 
with instructions, it would be greatly appreciated.
Ray
  
>There is a SQL log that you can enable in the config file.

>If you look at the config file now, it will most likely only have one log
>file. Look at the MySQL manual on logging.

>I don't have the setting in front of me at the moment.

>If you cannot find it, give a yell.
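
The setting being referred to is most likely the general query log. A
sketch of what that looks like in MySQL 5.0 (the log path is an
assumption; in 5.0 this requires a server restart, since SET GLOBAL
general_log only arrived in 5.1):

```ini
# /etc/my.cnf (assumed location)
[mysqld]
# Log every statement the server receives. For temporary debugging only;
# this file grows quickly on a busy server.
log = /var/db/mysql/query.log
```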




On Wednesday 25 April 2007 10:06:19 Ray wrote:
>> I am trying to debug another application and I need to enable logging of
>> all queries (temporary only) to MySQL (5.0.37). OS is FreeBSD 6.2. MySQL
>> installed from ports.

>-- 
>Regards
>Michael Cole





RE: basic architecture review?

2007-05-02 Thread Jerry Schwartz
You might benefit from drawing your layout as a picture, if you haven't
already. Use arrows to connect the fields in each table with the fields in
other tables that they will "hook" to. That will give you an idea of which
fields to index and JOIN on. You JOIN on those fields, and you index the
ones that have many different values.
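
For the "query the lot" question below, a sketch of how the picture
translates to SQL (the table and column names are hypothetical stand-ins
for Michael's actual schema):

```sql
-- index each child table on claim_id, as suggested earlier in the thread
ALTER TABLE documents ADD INDEX claim_id_idx (claim_id);

-- fetch all non-blob fields for one claim in a single statement;
-- LEFT JOINs keep the claim even when a child table has no rows for it
SELECT o.claim_id, d.doc_name, e.subject, p.image_id
FROM overview o
  LEFT JOIN documents d ON d.claim_id = o.claim_id
  LEFT JOIN emails    e ON e.claim_id = o.claim_id
  LEFT JOIN pictures  p ON p.claim_id = o.claim_id
WHERE o.claim_id = 42;
```

Note the catch Dan alluded to: joining several one-to-many tables at once
produces a cross product per claim (every document paired with every
email), which is one reason a handful of per-table queries is usually
cleaner.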

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


> -Original Message-
> From: Michael Higgins [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, May 02, 2007 3:20 PM
> To: 'Dan Buettner'
> Cc: mysql@lists.mysql.com
> Subject: RE: basic architecture review?
>
> > -Original Message-
> > From: Dan Buettner [mailto:[EMAIL PROTECTED]
> >
> > Michael, this looks pretty decent overall.
> >
> > I'm a big fan of fully descriptive table and column names,
>
> [8<] (All good suggestions, thanks.)
>
> > One performance suggestion: add an index on each table for
> > the claim_id column.  This will greatly speed retrieval of
> > material related to a given claim/overview.  ALTER TABLE x
> > ADD INDEX claim_id_idx (claim_id)
>
> Okay, this was exactly the kind of question I had. So, if I
> do that, then
> mysql will just access that INDEX information internally? IOW, I don't
> actually query on that field, or ever have to think about it
> again, right?
>
> >
> > When you say "query the lot", what do you mean?  Get all
> > related "stuff" in a single SQL statement?  Possible, but
> > maybe a bit messy, and not as easy to maintain as a handful
> > of routines that each get documents, emails, pictures.  As
> > you add more tables holding related material the SQL would
> > become unwieldy and you'd likely break it down later anyway.
> >
>
> Yeah, I get that... but what I'm looking for is to select
> (all non-blob
> fields) from (all the tables) where claim_id = x .
> [what do I do
> here? some kind of a 'join'?]
>
> This way, I'd be able to get access to each record associated
> with that
> claim_id from one, say, webpage.
>
> Like, having retrieved an image_id from the monolithic query,
> I could then
> retrieve the associated image blob with another query.
>
> Anyway, I'm sure this will all become clearer to me eventually... ;-)
>
> Thanks a bunch,
>
> Michael Higgins
>
>
>







Re: expire_logs_days

2007-05-02 Thread Baron Schwartz

Ofer Inbar wrote:

Baron Schwartz <[EMAIL PROTECTED]> wrote:

Actually, the manual does mention the variable, but it doesn't work
for us.  We run a nightly cron job that just runs [purge master logs]


When you say "it doesn't work for us" do you mean that you tried it?
In what way did it not work?


The logs stayed in the directory.  They did not get purged.



Tim Lucia <[EMAIL PROTECTED]> wrote:

We do the same thing, based on the "rumors" I read at the time I set it up.
(Where "rumors" means that googling for expire_logs_days reveals many with
problems and not much good news.)


Has anyone here had direct experience with expire_logs_days either
working or not working?  What happened?

(note: I'm running 5.0.24)


We are upgrading now, but at the time, were running 5.0.26.

There is no bug report related to this at the moment, which is kind of amazing given 
how many people have said it doesn't work, but a support engineer is looking into it, 
and so am I in my copious spare time ;-)
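
For anyone using the cron-job workaround mentioned above, the statement
involved is just the following (the 7-day window is an assumption):

```sql
-- keep roughly a week of binary logs; run nightly from cron
PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);
```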


Baron





slave status: vague documentation of Seconds_Behind_Master

2007-05-02 Thread Ofer Inbar
I'm confused by a bit of the documentation here:
  http://dev.mysql.com/doc/refman/5.0/en/show-slave-status.html

In the section on Seconds_Behind_Master, first it says:

  When the slave SQL thread is actively running (processing updates),
  this field is the number of seconds that have elapsed since the
  timestamp of the most recent event on the master executed by that thread.

... but later it says:

 If the network connection between master and slave is fast, the slave
 I/O thread is very close to the master, so this field is a good
 approximation of how late the slave SQL thread is compared to the
 master. If the network is slow, this is not a good approximation; the
 slave SQL thread may quite often be caught up with the slow-reading
 slave I/O thread, so Seconds_Behind_Master often shows a value of 0,
 even if the I/O thread is late compared to the master. In other
 words, this column is useful only for fast networks.

These two sections seem contradictory to me.  If Seconds_Behind_Master
actually works the way it is first defined, then it should reflect any
delays caused by the network as well as delays in the SQL thread.

Since each event in the binary log is timestamped by the master, we
know when the operation happened on the master.  If we compare that
timestamp to the current time, we know how long ago it happened - so
if we look at the timestamp of the most recent event executed by the
slave SQL thread we see when it happened on the master, and can tell
how much time has elapsed since then.

Two problems with this approach would be:

 1. If the local clocks on the master and slave aren't in sync,
the timestamp comparison to current time would be off.

 2. If no writes to the master database have happened in a while, this
would report the slave as "behind" because the most recent operation
on the master has a timestamp from some time ago.

Both of these lead me to suspect that Seconds_Behind_Master does *not*
actually work the way the first paragraph implies; if so, then the
second paragraph I quoted from the doc could very well be true.

But if so, what exactly does Seconds_Behind_Master show?
  -- Cos




Refman 5.0.30?

2007-05-02 Thread Walter Tuvell
Does anybody have a copy of the MySQL Reference Manual,
refman-5.0-en.pdf, for the *exact* version 5.0.30 (as stated on p. 2 of
the doc), that they could please send me?  MySQL itself says it doesn't
archive docs.

 

The reason is, 5.0.30 is the version of the MySQL software we're
shipping bundled with our product, and we have this customer who's picky
about details of this kind ...



Re: expire_logs_days

2007-05-02 Thread Juan Eduardo Moreno

Hi,

I'm having problems using expire_logs_days; it doesn't work. I set this
parameter in the cnf file, and when the time is up (for example, 5 days),
nothing gets deleted.

In my experience, this parameter doesn't work (5.0.27).

Regards
Juan

On 5/2/07, Ofer Inbar <[EMAIL PROTECTED]> wrote:


Baron Schwartz <[EMAIL PROTECTED]> wrote:
> Actually, the manual does mention the variable, but it doesn't work
> for us.  We run a nightly cron job that just runs [purge master logs]

When you say "it doesn't work for us" do you mean that you tried it?
In what way did it not work?

Tim Lucia <[EMAIL PROTECTED]> wrote:
> We do the same thing, based on the "rumors" I read at the time I set it up.
> (Where "rumors" means that googling for expire_logs_days reveals many with
> problems and not much good news.)

Has anyone here had direct experience with expire_logs_days either
working or not working?  What happened?

(note: I'm running 5.0.24)

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
   "OSI is a beautiful dream, and TCP/IP is living it!"
 -- Einar Stefferud <[EMAIL PROTECTED]>, IETF mailing list, 12 May 1992





Re: View select results

2007-05-02 Thread Peter Brawley

>One last thing: you set, at first, a parameter called @prev with Null
>(' ') value: right?

No, I set it to a string containing one space char. Use anything that 
does not occur as data in the column.


>And then, instead of an IF ... ELSE statement, you use another syntax: is
>it the ternary operator? If so, why isn't it in the online MySQL manual?

I used the IF() function; see 'Control Flow Functions' under 'Functions 
and Operators' in the manual.


PB

--

spacemarc wrote:

2007/5/2, Peter Brawley <[EMAIL PROTECTED]>:
Right, give the computed column an alias differeing from the column 
name, eg


SET @prev='';
SELECT
  IF(area = @prev, '', @prev := area) AS AreaHdr,
  text,amount
FROM products
ORDER BY area DESC;


ok, now it works! thanks!

One last thing: at first you set a parameter called @prev with a Null
(' ') value: right?

And then, instead of an IF ... ELSE statement, you use another syntax: is
it the ternary operator? If so, why isn't it in the online MySQL manual?






Re: expire_logs_days

2007-05-02 Thread Ofer Inbar
Baron Schwartz <[EMAIL PROTECTED]> wrote:
> Actually, the manual does mention the variable, but it doesn't work
> for us.  We run a nightly cron job that just runs [purge master logs]

When you say "it doesn't work for us" do you mean that you tried it?
In what way did it not work?

Tim Lucia <[EMAIL PROTECTED]> wrote:
> We do the same thing, based on the "rumors" I read at the time I set it up.
> (Where "rumors" means that googling for expire_logs_days reveals many with
> problems and not much good news.)

Has anyone here had direct experience with expire_logs_days either
working or not working?  What happened?

(note: I'm running 5.0.24)

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
   "OSI is a beautiful dream, and TCP/IP is living it!"
 -- Einar Stefferud <[EMAIL PROTECTED]>, IETF mailing list, 12 May 1992




Re: View select results

2007-05-02 Thread spacemarc

2007/5/2, Peter Brawley <[EMAIL PROTECTED]>:

Right, give the computed column an alias differeing from the column name, eg

SET @prev='';
SELECT
  IF(area = @prev, '', @prev := area) AS AreaHdr,
  text,amount
FROM products
ORDER BY area DESC;


ok, now it works! thanks!

One last thing: at first you set a parameter called @prev with a Null
(' ') value: right?

And then, instead of an IF ... ELSE statement, you use another syntax: is
it the ternary operator? If so, why isn't it in the online MySQL manual?

--
http://www.spacemarc.it




ROLLUP and Text on total lines

2007-05-02 Thread Jesse
Using ROLLUP in my GROUP BY, I have been able to get MySQL to insert 
sub-total lines, which is great.  However, it would be even better if I 
could determine the text on those subtotal lines.  Here's my query:


SELECT Sc.State, St.Description, Count(*)
FROM InvHead I
  JOIN Schools Sc on Sc.ID=I.ChapterID
  JOIN Participants P ON P.InvNo=I.InvNo
  JOIN StatusCodes St ON P.Status=St.Code
GROUP BY Sc.State, St.Description WITH ROLLUP

I have seen examples that use (I believe) MS SQL and the Grouping() function 
to determine if the row is an inserted subtotal or total row.  Is there a 
way to determine if the row is a subtotal or total row, and change the text 
to indicate which it is?


Jesse 






Re: secure port 3306

2007-05-02 Thread Mogens Melander
On Linux, one could do a port forward:

EXTIF=eth0 # Or whatever the interface that faces internet is called.

# CLIENT_IP, EXT_IP and DB_IP below are placeholders for the remote
# client's address, this gateway's public address, and the internal
# MySQL server's address.
iptables -A FORWARD -i $EXTIF -p tcp -s $CLIENT_IP --dport 3306 -j ACCEPT
iptables -A PREROUTING -t nat -p tcp -s $CLIENT_IP \
    -d $EXT_IP --dport 3306 -j DNAT --to $DB_IP:3306
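
Port forwarding alone doesn't encrypt the traffic, though. If the server
was built with SSL support, another option is to require an encrypted
connection for the remote account (a sketch; the user, host, and
database names are hypothetical):

```sql
-- check whether the server supports SSL
SHOW VARIABLES LIKE 'have_openssl';

-- require SSL for this account
GRANT ALL ON mydb.* TO 'remoteuser'@'client.example.com'
  IDENTIFIED BY 'secret' REQUIRE SSL;
```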

On Wed, May 2, 2007 17:03, Steven Buehler wrote:
> I have a client that needs to be able to remotely connect to port 3306
> securely.  I have tried to suggest an SSH Tunnel, but they do not want
> their
> clients to have SSH access.  Another problem is that even if we do tunnel,
> it needs to go thru one server that is connected to the Internet and into
> the MySQL server which is NOT accessible from the Internet.
>
> Any suggestions?
>
> Thanks
> Steve


-- 
Later

Mogens Melander
+45 40 85 71 38
+66 870 133 224








Re: REGEXP Character Classes

2007-05-02 Thread Paul DuBois

At 5:33 AM -0400 5/2/07, John Kebbel wrote:

I went to the MySQL documentation pages and read up on using COLLATE. I
knew SELECT was case-insensitive, but I was sort of surprised that using
a character class didn't override that. Anyway, I next tried the
"status" command to see if it gave me any characterset information.

Client characterset:latin1
Server characterset:latin1

Once I thought I understood what was going on with COLLATE and case
sensitivity, I tried this command...

SELECT id, pswd, division, department, title, classification FROM pswds
WHERE pswd REGEXP '[:lower:]' COLLATE latin1_bin;

It seemed to work fine. I searched the column to see if I could find any
instances of all caps value, but did not find any. (They do exist; I
created the data for this table from a Perl script solely to practice
using  character class regular expressions.)

Then I tried this command. It should not have found any instances of all
lower case passwords, but it did.

SELECT id, pswd, division, department, title, classification FROM pswds
WHERE pswd REGEXP '[:upper:]' COLLATE latin1_bin;
+------+----------+------------------+------------+----------+----------------+
| id   | pswd     | division         | department | title    | classification |
+------+----------+------------------+------------+----------+----------------+
|    8 | euwsrbwm | Customer Service | Accounting | Clerical | 0f1b12         |
|   13 | mejccvoz | Customer Service | Receiving  | Clerical | 437113         |
|   18 | kwkheprh | Customer Service | Purchasing | Clerical | 29652          |
|   20 | qpvxvqhz | Customer Service | Accounting | Clerical | bcb244         |
+------+----------+------------------+------------+----------+----------------+
Is there something obvious that I'm missing here?


Ah.  Yes.  I should have noticed this in your earlier message, sorry:
The character class names *include* the square brackets, so to use
them as part of a range, you need another set of square brackets,
i.e., [[:upper:]].

What your statement looks for is any values containing :, u, p, e, or r,
and that's why you see the values returned that you do.
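A quick illustration of the difference (BINARY forces a case-sensitive match,
in the same spirit as the COLLATE latin1_bin used above):

```sql
-- '[:upper:]' is read as a bracket expression over the characters :, u, p, e, r
SELECT 'euwsrbwm' REGEXP '[:upper:]';            -- 1 (matches 'u', 'e', 'r')
-- '[[:upper:]]' is the actual upper-case character class
SELECT 'euwsrbwm' REGEXP BINARY '[[:upper:]]';   -- 0
SELECT 'Euwsrbwm' REGEXP BINARY '[[:upper:]]';   -- 1
```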

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com




RE: basic architecture review?

2007-05-02 Thread Michael Higgins
> -Original Message-
> From: Dan Buettner [mailto:[EMAIL PROTECTED] 
> 
> Michael, this looks pretty decent overall.
> 
> I'm a big fan of fully descriptive table and column names, 

[8<] (All good suggestions, thanks.)

> One performance suggestion: add an index on each table for 
> the claim_id column.  This will greatly speed retrieval of 
> material related to a given claim/overview.  ALTER TABLE x 
> ADD INDEX claim_id_idx (claim_id) 

Okay, this was exactly the kind of question I had. So, if I do that, then
mysql will just access that INDEX information internally? IOW, I don't
actually query on that field, or ever have to think about it again, right?
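For what it's worth, a sketch of how to confirm this, using the emails table
posted below: once the index exists the optimizer picks it up automatically
for queries filtering on claim_id, and EXPLAIN shows whether it was used; no
query needs rewriting.

```sql
ALTER TABLE emails ADD INDEX claim_id_idx (claim_id);

-- The 'key' column of the EXPLAIN output should show claim_id_idx.
EXPLAIN SELECT email_id FROM emails WHERE claim_id = 42;
```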

> 
> When you say "query the lot", what do you mean?  Get all 
> related "stuff" in a single SQL statement?  Possible, but 
> maybe a bit messy, and not as easy to maintain as a handful 
> of routines that each get documents, emails, pictures.  As 
> you add more tables holding related material the SQL would 
> become unwieldy and you'd likely break it down later anyway. 
> 

Yeah, I get that... but what I'm looking for is to select (all non-blob
fields) from (all the tables) where claim_id = x . [what do I do
here? some kind of a 'join'?] 

This way, I'd be able to get access to each record associated with that
claim_id from one, say, webpage. 

Like, having retrieved an image_id from the monolithic query, I could then
retrieve the associated image blob with another query.
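A sketch of the kind of single query being described, using the table names
posted elsewhere in the thread. LEFT JOINs keep the overview row even when a
related table has nothing yet, but multiple matches per table multiply the
result rows, which is one reason separate per-table queries are often cleaner:

```sql
SELECT o.claim_id, o.claimant, o.claim_amt,
       e.email_id, p.image_id, p.image_note
FROM overview o
  LEFT JOIN emails   e ON e.claim_id = o.claim_id
  LEFT JOIN pictures p ON p.claim_id = o.claim_id
WHERE o.claim_id = 42;
```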

Anyway, I'm sure this will all become clearer to me eventually... ;-)

Thanks a bunch,

Michael Higgins






Re: REGEXP Character Classes

2007-05-02 Thread John Kebbel
I was experimenting with Character Classes because they were covered in
MySQL Crash Course. There are probably substitutes for all the character
classes--such as ^[a-z] for [:lower:]--that I probably should stick with
instead of wandering off into foreign territory.

Fooling with Character Classes did pay off, however, because I heard
about COLLATE and declaring columns as binary from other responses. 


On Wed, 2007-05-02 at 08:08 -0400, [EMAIL PROTECTED] wrote:
> 
> Wouldn't the regular expression be ^[a-z].* 
> 
> ^ = start of string 
> [ a-z] = class range for lower case 
> . = any character 
> * = mods last to grab anything after that... 
> 
> 
> actually you should just be able to get by with ^[a-z] 
> 
> 
> 
> John Kebbel <[EMAIL PROTECTED]> wrote on 05/02/2007 05:33 AM
> (To: MySQL; Subject: Re: REGEXP Character Classes):
>
> I went to the MySQL documentation pages and read up on using COLLATE.
> I
> knew SELECT was case-insensitive, but I was sort of surprised that
> using
> a character class didn't override that. Anyway, I next tried the
> "status" command to see if it gave me any characterset information.
> 
> Client characterset:latin1
> Server characterset:latin1
> 
> Once I thought I understood what was going on with COLLATE and case
> sensitivity, I tried this command...
> 
> SELECT id, pswd, division, department, title, classification FROM
> pswds
> WHERE pswd REGEXP '[:lower:]' COLLATE latin1_bin;
> 
> It seemed to work fine. I searched the column to see if I could find
> any
> instances of all caps value, but did not find any. (They do exist; I
> created the data for this table from a Perl script solely to practice
> using  character class regular expressions.)
> 
> Then I tried this command. It should not have found any instances of
> all
> lower case passwords, but it did.
> 
> SELECT id, pswd, division, department, title, classification FROM
> pswds
> WHERE pswd REGEXP '[:upper:]' COLLATE latin1_bin;
> +------+----------+------------------+------------+----------+----------------+
> | id   | pswd     | division         | department | title    | classification |
> +------+----------+------------------+------------+----------+----------------+
> |    8 | euwsrbwm | Customer Service | Accounting | Clerical | 0f1b12         |
> |   13 | mejccvoz | Customer Service | Receiving  | Clerical | 437113         |
> |   18 | kwkheprh | Customer Service | Purchasing | Clerical | 29652          |
> |   20 | qpvxvqhz | Customer Service | Accounting | Clerical | bcb244         |
> +------+----------+------------------+------------+----------+----------------+
> 
> 
> Is there something obvious that I'm missing here?
> 
> 





Re: View select results

2007-05-02 Thread Peter Brawley

Right, give the computed column an alias differing from the column name, e.g.

SET @prev='';
SELECT
  IF(area = @prev, '', @prev := area) AS AreaHdr,
  text, amount
FROM products
ORDER BY area DESC;

PB

spacemarc wrote:

2007/5/2, Peter Brawley <[EMAIL PROTECTED]>:

Works for me. Please post a CREATE TABLE stmt & enough INSERTs to
demonstrate the problem.


This is the dump (MySQL: 5.0.38): the table is not final version, just
to test the query.

CREATE TABLE `products` (
 `area` varchar(25) NOT NULL,
 `text` varchar(25) NOT NULL,
 `amount` int(3) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO `products` (`area`, `text`, `amount`) VALUES
('area1', 'some text', 12),
('area1', 'other text here', 13),
('area3', 'example...', 22),
('area2', 'things', 123),
('area1', 'bla bla...', 24),
('area2', 'others again', 231),
('area1', 'english language..', 44),
('area1', 'server database', 53),
('area3', 'php language...', 22),
('area2', 'linux box', 951),
('area1', 'developer tools', 4),
('area2', 'others words', 1);







Re: View select results

2007-05-02 Thread spacemarc

2007/5/2, Peter Brawley <[EMAIL PROTECTED]>:

Works for me. Please post a CREATE TABLE stmt & enough INSERTs to
demonstrate the problem.


This is the dump (MySQL: 5.0.38): the table is not final version, just
to test the query.

CREATE TABLE `products` (
`area` varchar(25) NOT NULL,
`text` varchar(25) NOT NULL,
`amount` int(3) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO `products` (`area`, `text`, `amount`) VALUES
('area1', 'some text', 12),
('area1', 'other text here', 13),
('area3', 'example...', 22),
('area2', 'things', 123),
('area1', 'bla bla...', 24),
('area2', 'others again', 231),
('area1', 'english language..', 44),
('area1', 'server database', 53),
('area3', 'php language...', 22),
('area2', 'linux box', 951),
('area1', 'developer tools', 4),
('area2', 'others words', 1);



--
http://www.spacemarc.it




Re: basic architecture review?

2007-05-02 Thread Dan Buettner

Michael, this looks pretty decent overall.

I'm a big fan of fully descriptive table and column names, avoiding
abbreviations except where truly needed, so I personally would spell out
"claim" and "claimant" for example.  I also like to separate words in table
and column names with underscores, which you're already doing in most cases.
And finally, I like to have the table name be plural and the primary key be
singular_id.  So, "users" table will have "user_id", for example (how I wish
Ruby on Rails did that).  I'd either rename your overview table to claims,
or change the id column to overview_id

In the carrdocs table (two r's) you have a column named cardoc_id (one r).
No biggie but you'll scratch your head more than once as you write SQL that
doesn't work the first time.

One performance suggestion: add an index on each table for the claim_id
column.  This will greatly speed retrieval of material related to a given
claim/overview.  ALTER TABLE x ADD INDEX claim_id_idx (claim_id)

When you say "query the lot", what do you mean?  Get all related "stuff" in
a single SQL statement?  Possible, but maybe a bit messy, and not as easy to
maintain as a handful of routines that each get documents, emails,
pictures.  As you add more tables holding related material the SQL would
become unwieldy and you'd likely break it down later anyway.

HTH,
Dan



On 5/2/07, Michael Higgins <[EMAIL PROTECTED]> wrote:


Hello, list --

No problem, yet. ;-)

Wondering if anyone would have a suggestion to ensure better performance,
or
could point out any likely errors in the database outlined below.

Basically, I have digital pictures, scanned text/forms and emails that all
relate to information indexed in a separate DB with "shipment_id". I don't
have any real experience with DB design, so any suggestions or things to
consider would be appreciated.

My thinking is that I create an overview with an id and store that id in
the
other tables so I can get all related documents. (My next question will be
how to query the lot in a single statement...)

All the tables are ENGINE=MyISAM and DEFAULT CHARSET=utf8. I don't know
squat about configuration parameters but an error in putting up the images
led me to change this line in my.cnf:

max_allowed_packet  = 4M

... because I don't know how to put up a binary in chunks, I guess. (I'm
using DBD::mysql and CGI in perl and inserting the uploaded file with a
placeholder in my SQL...)

Thanks in advance for any helpful suggestions, corrections or
clarifications. ;-)

Cheers,

Michael Higgins

# db info ###

+--+
| Tables_in_claims |
+--+
| carrdocs |
| claimsubs|
| emails   |
| overview |
| pictures |
+--+


mysql> describe carrdocs;
+---++--+-+-++
| Field | Type   | Null | Key | Default | Extra  |
+---++--+-+-++
| cardoc_id | int(11)| NO   | PRI | NULL| auto_increment |
| claim_id  | int(11)| NO   | | ||
| carr_doc  | mediumblob | YES  | | NULL||
| carr_doctype  | tinytext   | YES  | | NULL||
| carr_mimetype | tinytext   | YES  | | NULL||
+---++--+-+-++
5 rows in set (0.13 sec)

mysql> describe claimsubs;
+--++--+-+-++
| Field| Type   | Null | Key | Default | Extra  |
+--++--+-+-++
| claimsub_id  | int(11)| NO   | PRI | NULL| auto_increment |
| claim_id | int(11)| NO   | | ||
| claim_doc| mediumblob | YES  | | NULL||
| clm_doctype  | tinytext   | YES  | | NULL||
| clm_mimetype | tinytext   | YES  | | NULL||
| clmdoc_name  | tinytext   | YES  | | NULL||
+--++--+-+-++
6 rows in set (0.01 sec)

mysql> describe emails;
+--+-+--+-+-++
| Field| Type| Null | Key | Default | Extra  |
+--+-+--+-+-++
| email_id | int(11) | NO   | PRI | NULL| auto_increment |
| claim_id | int(11) | NO   | | ||
| email| text| YES  | | NULL||
+--+-+--+-+-++
3 rows in set (0.00 sec)

mysql> describe overview;
+-+-+--+-+-++
| Field   | Type| Null | Key | Default | Extra  |
+-+-+--+-+-++
| cla

Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Dan Buettner

MyISAM does table level locking, which is to say that read (select) and
write (insert/update/delete) cannot happen at the same time.  One will wait
for the other.

If your select takes 10 seconds, then any write operations will block for
those 10 seconds.  Other read processes should be unaffected, though perhaps
slightly slower depending on resources you are consuming.

InnoDB avoids the problem described above by implementing transactions and
row-level locking, so that reads can proceed while writes are happening in
many cases.  InnoDB does have disadvantages compared to MyISAM so it's not
always a no-brain switch.

Dan



On 5/2/07, Brian Dunning <[EMAIL PROTECTED]> wrote:


The table is MyISAM, does that matter?


On May 2, 2007, at 7:28 AM, Dan Buettner wrote:

> A few observations:
>
> 1 - if the table is in the InnoDB format, you aren't going to lock
> up their server, as InnoDB doesn't do table locking.  SHOW TABLE
> STATUS LIKE 'tablename' will tell you.
>
> 2 - Ryan's mysqldump script looks useful - also, there's a little-
> used option with mysqldump that lets you specify a where clause to
> get just the records you want into the SQL file.
>
> 3 - since you're not operating on the server itself, but
> transferring over the net, the time for the transfer could become a
> problem, especially if you're not using InnoDB.  You could copy the
> data into a temp table and then work with that to your heart's
> content, without tying up production tables.  Something like this:
> DROP TABLE IF EXISTS _tmp_tablename;
> CREATE TABLE _tmp_tablename LIKE tablename;
> INSERT INTO _tmp_tablename SELECT * FROM tablename WHERE whatiwant
> blah blah
> then use select into outfile, mysqldump, php etc. on the
> _tmp_tablename table.
> While this does involve copying lots of records, in my experience,
> this sort of thing can be very fast, since it's all self-contained
> in the database software.  Not having any indices on your temp
> table will help too.
>
> HTH,
> Dan
>
>
> On 5/2/07, Brian Dunning <[EMAIL PROTECTED]> wrote:
> I have a huge MySQL table, 2.1 million records, 200MB. Once a week I
> need to dump it in CSV format and zip the file.
>
> This is not on my server, and it's in production, so I don't want to
> risk testing different methods and possibly hanging up their server
> for a period of time, so I wanted to seek advice here first to find
> what's the best way to proceed.
>
> I can easily use PHP to query the table for the results I want and
> write a file line by line and then zip it, but I'm worried that might
> take too long and hang up the machine. The other way to go is some
> kind of sql dump command, which I guess would be faster, but not sure
> how much control I'd have over the exact format of the file. Any
> suggestions which way I should proceed? Not hanging up their server
> is my prime concern.
>
>




basic architecture review?

2007-05-02 Thread Michael Higgins
Hello, list --

No problem, yet. ;-)

Wondering if anyone would have a suggestion to ensure better performance, or
could point out any likely errors in the database outlined below.

Basically, I have digital pictures, scanned text/forms and emails that all
relate to information indexed in a separate DB with "shipment_id". I don't
have any real experience with DB design, so any suggestions or things to
consider would be appreciated.

My thinking is that I create an overview with an id and store that id in the
other tables so I can get all related documents. (My next question will be
how to query the lot in a single statement...)

All the tables are ENGINE=MyISAM and DEFAULT CHARSET=utf8. I don't know
squat about configuration parameters but an error in putting up the images
led me to change this line in my.cnf:

max_allowed_packet  = 4M

... because I don't know how to put up a binary in chunks, I guess. (I'm
using DBD::mysql and CGI in perl and inserting the uploaded file with a
placeholder in my SQL...)

Thanks in advance for any helpful suggestions, corrections or
clarifications. ;-) 

Cheers,
 
Michael Higgins

# db info ###

+--+
| Tables_in_claims |
+--+
| carrdocs |
| claimsubs|
| emails   |
| overview |
| pictures |
+--+


mysql> describe carrdocs;
+---++--+-+-++
| Field | Type   | Null | Key | Default | Extra  |
+---++--+-+-++
| cardoc_id | int(11)| NO   | PRI | NULL| auto_increment |
| claim_id  | int(11)| NO   | | ||
| carr_doc  | mediumblob | YES  | | NULL||
| carr_doctype  | tinytext   | YES  | | NULL||
| carr_mimetype | tinytext   | YES  | | NULL||
+---++--+-+-++
5 rows in set (0.13 sec)

mysql> describe claimsubs;
+--++--+-+-++
| Field| Type   | Null | Key | Default | Extra  |
+--++--+-+-++
| claimsub_id  | int(11)| NO   | PRI | NULL| auto_increment |
| claim_id | int(11)| NO   | | ||
| claim_doc| mediumblob | YES  | | NULL||
| clm_doctype  | tinytext   | YES  | | NULL||
| clm_mimetype | tinytext   | YES  | | NULL||
| clmdoc_name  | tinytext   | YES  | | NULL||
+--++--+-+-++
6 rows in set (0.01 sec)

mysql> describe emails;
+--+-+--+-+-++
| Field| Type| Null | Key | Default | Extra  |
+--+-+--+-+-++
| email_id | int(11) | NO   | PRI | NULL| auto_increment |
| claim_id | int(11) | NO   | | ||
| email| text| YES  | | NULL||
+--+-+--+-+-++
3 rows in set (0.00 sec)

mysql> describe overview;
+-+-+--+-+-++
| Field   | Type| Null | Key | Default | Extra  |
+-+-+--+-+-++
| claim_id| int(11) | NO   | PRI | NULL| auto_increment |
| shipment_id | int(11) | NO   | UNI | ||
| claimant| varchar(60) | YES  | | NULL||
| clmnt_email | varchar(60) | YES  | | NULL||
| claim_rep   | varchar(60) | YES  | | NULL||
| rep_email   | varchar(60) | YES  | | NULL||
| carr_clm_no | varchar(30) | YES  | | NULL||
| pro_number  | varchar(30) | YES  | | NULL||
| carrier | varchar(60) | YES  | | NULL||
| claim_amt   | varchar(10) | YES  | | NULL||
| claim_notes | text| YES  | | NULL||
+-+-+--+-+-++
11 rows in set (0.00 sec)

mysql> describe pictures;
+--++--+-+-++
| Field| Type   | Null | Key | Default | Extra  |
+--++--+-+-++
| image_id | int(11)| NO   | PRI | NULL| auto_increment |
| claim_id | int(11)| NO   | | ||
| image_note   | text   | YES  | | NULL||
| image| mediumblob | YES  | | NULL||
| img_mimetype | tinytext   | YES  | | 

Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Brian Dunning

The table is MyISAM, does that matter?


On May 2, 2007, at 7:28 AM, Dan Buettner wrote:


A few observations:

1 - if the table is in the InnoDB format, you aren't going to lock  
up their server, as InnoDB doesn't do table locking.  SHOW TABLE  
STATUS LIKE 'tablename' will tell you.


2 - Ryan's mysqldump script looks useful - also, there's a little- 
used option with mysqldump that lets you specify a where clause to  
get just the records you want into the SQL file.


3 - since you're not operating on the server itself, but  
transferring over the net, the time for the transfer could become a  
problem, especially if you're not using InnoDB.  You could copy the  
data into a temp table and then work with that to your heart's  
content, without tying up production tables.  Something like this:

DROP TABLE IF EXISTS _tmp_tablename;
CREATE TABLE _tmp_tablename LIKE tablename;
INSERT INTO _tmp_tablename SELECT * FROM tablename WHERE whatiwant  
blah blah
then use select into outfile, mysqldump, php etc. on the  
_tmp_tablename table.
While this does involve copying lots of records, in my experience,  
this sort of thing can be very fast, since it's all self-contained  
in the database software.  Not having any indices on your temp  
table will help too.


HTH,
Dan


On 5/2/07, Brian Dunning <[EMAIL PROTECTED]> wrote:
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I
need to dump it in CSV format and zip the file.

This is not on my server, and it's in production, so I don't want to
risk testing different methods and possibly hanging up their server
for a period of time, so I wanted to seek advice here first to find
what's the best way to proceed.

I can easily use PHP to query the table for the results I want and
write a file line by line and then zip it, but I'm worried that might
take too long and hang up the machine. The other way to go is some
kind of sql dump command, which I guess would be faster, but not sure
how much control I'd have over the exact format of the file. Any
suggestions which way I should proceed? Not hanging up their server
is my prime concern.








RE: secure port 3306

2007-05-02 Thread Jay Blanchard
[snip]
I have a client that needs to be able to remotely connect to port 3306
securely.  I have tried to suggest an SSH Tunnel, but they do not want
their
clients to have SSH access.  Another problem is that even if we do
tunnel,
it needs to go thru one server that is connected to the Internet and
into
the MySQL server which is NOT accessible from the Internet.

Any suggestions?
[/snip]

IPSec tunnel
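Another possibility, assuming the server was built with SSL support: MySQL's
native SSL connections, which can be enforced per account so the client never
connects in cleartext. A sketch (paths and names are placeholders):

```sql
-- my.cnf ([mysqld] section) must point at the certificates, e.g.:
--   ssl-ca=/path/to/ca.pem
--   ssl-cert=/path/to/server-cert.pem
--   ssl-key=/path/to/server-key.pem

-- Then refuse unencrypted connections for the remote account:
GRANT SELECT ON mydb.* TO 'client'@'%' IDENTIFIED BY 'secret' REQUIRE SSL;
```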




secure port 3306

2007-05-02 Thread Steven Buehler
I have a client that needs to be able to remotely connect to port 3306
securely.  I have tried to suggest an SSH Tunnel, but they do not want their
clients to have SSH access.  Another problem is that even if we do tunnel,
it needs to go thru one server that is connected to the Internet and into
the MySQL server which is NOT accessible from the Internet.

Any suggestions?

Thanks
Steve





Re: View select results

2007-05-02 Thread Peter Brawley

>can I obtain a recordset like this?
>area1, value_one, thing_one
>  //,   value_two, thing_32
>  //,   value_three, thing_ dd
>area2, value_ten, thing_6w
>  //,   value_ff, thing_l

SET @prev='';
SELECT
 IF(area = @prev, '', @prev := area) AS area,
 ... other columns ...
FROM &c ...

PB


spacemarc wrote:

Hi
my table have three fields that, if selected, are shown like:

area1, value_one, thing_one
area1, value_two, thing_32
area1, value_three, thing_ dd
area2, value_ten, thing_6w
area2, value_ff, thing_l



can I obtain a recordset like this?

area1, value_one, thing_one
  //,   value_two, thing_32
  //,   value_three, thing_ dd
area2, value_ten, thing_6w
  //,   value_ff, thing_l

So, do not repeat more times the value of the first column (area1, 
area2...)







Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Dan Buettner

A few observations:

1 - if the table is in the InnoDB format, you aren't going to lock up their
server, as InnoDB doesn't do table locking.  SHOW TABLE STATUS LIKE
'tablename' will tell you.

2 - Ryan's mysqldump script looks useful - also, there's a little-used
option with mysqldump that lets you specify a where clause to get just the
records you want into the SQL file.
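The option referred to is mysqldump's --where; a sketch (database, table, and
column names are placeholders):

```shell
# Dump only the rows matching the WHERE clause body.
mysqldump -u someuser -p mydatabase tablename \
  --where="created >= '2007-04-25'" > subset.sql
```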

3 - since you're not operating on the server itself, but transferring over
the net, the time for the transfer could become a problem, especially if
you're not using InnoDB.  You could copy the data into a temp table and then
work with that to your heart's content, without tying up production tables.
Something like this:
DROP TABLE IF EXISTS _tmp_tablename;
CREATE TABLE _tmp_tablename LIKE tablename;
INSERT INTO _tmp_tablename SELECT * FROM tablename WHERE whatiwant blah blah
then use select into outfile, mysqldump, php etc. on the _tmp_tablename
table.
While this does involve copying lots of records, in my experience, this sort
of thing can be very fast, since it's all self-contained in the database
software.  Not having any indices on your temp table will help too.

HTH,
Dan


On 5/2/07, Brian Dunning <[EMAIL PROTECTED]> wrote:


I have a huge MySQL table, 2.1 million records, 200MB. Once a week I
need to dump it in CSV format and zip the file.

This is not on my server, and it's in production, so I don't want to
risk testing different methods and possibly hanging up their server
for a period of time, so I wanted to seek advice here first to find
what's the best way to proceed.

I can easily use PHP to query the table for the results I want and
write a file line by line and then zip it, but I'm worried that might
take too long and hang up the machine. The other way to go is some
kind of sql dump command, which I guess would be faster, but not sure
how much control I'd have over the exact format of the file. Any
suggestions which way I should proceed? Not hanging up their server
is my prime concern.





Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Ryan Stille

I use a script like this:

  #!/bin/bash
  DATE=`date +%A`

  DESTFILE="/home/mysql-backups/mysql-dump-$DATE"

  /usr/bin/mysqldump --skip-extended-insert -uroot -p"password" \
    mydatabase > $DESTFILE.sql

  /usr/bin/zip -qjTm $DESTFILE.zip $DESTFILE.sql

I end up with:
mysql-dump-Friday.zip
mysql-dump-Saturday.zip
etc.

Modify to suit your needs.

-Ryan

Brian Dunning wrote:
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I 
need to dump it in CSV format and zip the file.


This is not on my server, and it's in production, so I don't want to 
risk testing different methods and possibly hanging up their server 
for a period of time, so I wanted to seek advice here first to find 
what's the best way to proceed.


I can easily use PHP to query the table for the results I want and 
write a file line by line and then zip it, but I'm worried that might 
take too long and hang up the machine. The other way to go is some 
kind of sql dump command, which I guess would be faster, but not sure 
how much control I'd have over the exact format of the file. Any 
suggestions which way I should proceed? Not hanging up their server is 
my prime concern.











Re: MySQL Workbench

2007-05-02 Thread Martijn Tonies
>I'd be interested to hear other options though...

Edward, we develop a commercial Windows tool named "Database Workbench" that
supports MySQL.

It does not yet do diagramming, but that is being worked on for v3, along
with many other features.

Have a look at www.upscene.com

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, NexusDB, Oracle &
MS SQL Server
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





RE: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Jay Blanchard
[snip]
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I  
need to dump it in CSV format and zip the file.

This is not on my server, and it's in production, so I don't want to  
risk testing different methods and possibly hanging up their server  
for a period of time, so I wanted to seek advice here first to find  
what's the best way to proceed.

I can easily use PHP to query the table for the results I want and  
write a file line by line and then zip it, but I'm worried that might  
take too long and hang up the machine. The other way to go is some  
kind of sql dump command, which I guess would be faster, but not sure  
how much control I'd have over the exact format of the file. Any  
suggestions which way I should proceed? Not hanging up their server  
is my prime concern.
[/snip]

SELECT * INTO OUTFILE '/directory/myfile.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
FROM table;
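One caveat: INTO OUTFILE writes the file on the database server itself, never
on the client. When the dump has to land on another machine, piping a
batch-mode query through the client is an alternative; a sketch (host and
names are placeholders):

```shell
# -B produces tab-separated batch output on the client side.
mysql -h dbhost -u someuser -p -B \
  -e "SELECT * FROM mydatabase.tablename" | gzip > dump.tsv.gz
```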




[X-POST] Fastest way to dump this huge table

2007-05-02 Thread Brian Dunning
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I  
need to dump it in CSV format and zip the file.


This is not on my server, and it's in production, so I don't want to  
risk testing different methods and possibly hanging up their server  
for a period of time, so I wanted to seek advice here first to find  
what's the best way to proceed.


I can easily use PHP to query the table for the results I want and  
write a file line by line and then zip it, but I'm worried that might  
take too long and hang up the machine. The other way to go is some  
kind of sql dump command, which I guess would be faster, but not sure  
how much control I'd have over the exact format of the file. Any  
suggestions which way I should proceed? Not hanging up their server  
is my prime concern.





RE: expire_logs_days

2007-05-02 Thread Tim Lucia
> -Original Message-
> From: Baron Schwartz [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, May 02, 2007 7:55 AM
> To: Ofer Inbar
> Cc: mysql@lists.mysql.com
> Subject: Re: expire_logs_days
> 
> Hi,
> 
> Ofer Inbar wrote:
> > There's a system variable called expire_logs_days that lets you set a
> > number of days to keep binary logs, and automatically delete logs
> > older than that.  I've heard rumors that using this feature is
> > problematic.  I notice that in the MySQL documentation about binary
> > logging, it tells you to use "purge master logs" to delete old logs,
> > and does not mention the expire_logs_days variable as another option.
> > Is there a reason for this omission, or is it safe to use?
> >   -- Cos
> >
> 
> Actually, the manual does mention the variable, but it doesn't work for us.
> We run a nightly cron job that just runs
> 
> /usr/bin/mysql -e "PURGE MASTER LOGS BEFORE DATE_SUB(CURRENT_DATE, INTERVAL 9 DAY)"
> 

We do the same thing, based on the "rumors" I read at the time I set it up.
(Where "rumors" means that googling for expire_logs_days turns up many reports
of problems and not much good news.)

I would recommend you figure out the value for "n days" above based on
keeping your binlog partition about 50% full. This lets you go back more
days in your backup scheme if something goes wrong with your backups, and
it leaves room for large import operations which would otherwise fill the
/binlog partition. 21 days is about right for our usage model with a 32G
binlog partition, but YMMV.
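The 50%-full rule amounts to simple arithmetic; a sketch in Python, where the daily growth figure is an assumption (measure your own from successive df runs on the binlog partition):

```python
# Sizing the retention window: keep the binlog partition about half full.
partition_gb = 32.0      # /binlogs partition size, as in the df output below
daily_binlog_gb = 0.75   # assumed average binlog growth per day (hypothetical)
target_fill = 0.5        # aim for roughly 50% full

retention_days = int(partition_gb * target_fill / daily_binlog_gb)
print(retention_days)  # 21 with these numbers
```

With these assumed numbers the rule lands on the same 21-day figure used in the cron job.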

# df -h /binlogs /data
FilesystemSize  Used Avail Use% Mounted on
/dev/sdb1  32G   15G   16G  48% /binlogs
/dev/sdb2 237G   50G  176G  22% /data

# cat /etc/cron.mysql/20-purgemasterlogs
#!/bin/sh
/usr/bin/mysql --defaults-file=/root/.my.cnf \
  -e 'show master logs; purge master logs before date_sub(now(), interval 21 day); show master logs;' \
  >/var/log/20-purgemasterlogs.log 2>&1

Tim

> Baron
> 







RE: MySQL Workbench

2007-05-02 Thread Edward Kay
> -----Original Message-----
> From: Mogens Melander [mailto:[EMAIL PROTECTED]
> Sent: 01 May 2007 21:44

> On Tue, May 1, 2007 21:36, Afan Pasalic wrote:
> > Hi,
> > I'm looking for a database modeling tool for MySQL. Has anybody used the
> > MySQL Workbench? I know the Workbench is in alpha, but are there any
> > "hints"?
> >
> > How about DB Designer?
> >
> > Thanks for any respond.
> >
> > -afan
> >
> 
> 
> I've had a lot of fun using Toad on Oracle. Quest has a freeware version
> for MySQL that I never got around to trying, but have a look at:
> 
> http://www.quest.com/toad-for-mysql/

I've used DB Designer for a few years now for creating my initial schema. I 
find it very useful for graphically displaying table relationships and for creating 
foreign key constraints. It isn't perfect though and has several annoying bugs. 
Last time I looked, it seemed that development had stopped.

For everyday queries/maintenance I use a combination of MySQL Control Centre, 
MySQL Administrator and MySQL Query Browser since each of these does certain 
things better in my opinion.

I've just downloaded Toad, and I have to say it looks pretty good. It doesn't 
appear to have the graphical representation of DB Designer, but it may well be a 
good replacement for the three MySQL AB tools mentioned above.

I'd be interested to hear about other options, though...

Edward





Re: expire_logs_days

2007-05-02 Thread Baron Schwartz

Hi,

Ofer Inbar wrote:

There's a system variable called expire_logs_days that lets you set a
number of days to keep binary logs, and automatically delete logs
older than that.  I've heard rumors that using this feature is
problematic.  I notice that in the MySQL documentation about binary
logging, it tells you to use "purge master logs" to delete old logs,
and does not mention the expire_logs_days variable as another option.
Is there a reason for this omission, or is it safe to use?
  -- Cos



Actually, the manual does mention the variable, but it doesn't work for us.
We run a nightly cron job that just runs

/usr/bin/mysql -e "PURGE MASTER LOGS BEFORE DATE_SUB(CURRENT_DATE, INTERVAL 9 DAY)"

Baron




Re: View select results

2007-05-02 Thread Baron Schwartz

Hi,

spacemarc wrote:

Hi
my table has three fields that, when selected, are shown like:

area1, value_one, thing_one
area1, value_two, thing_32
area1, value_three, thing_ dd
area2, value_ten, thing_6w
area2, value_ff, thing_l



can I obtain a recordset like this?

area1, value_one, thing_one
  //,   value_two, thing_32
  //,   value_three, thing_ dd
area2, value_ten, thing_6w
  //,   value_ff, thing_l

That is, do not repeat the value of the first column (area1, 
area2...)


Giuseppe Maxia wrote a great article on this some time ago.  The technique is called 
cross-tabulation or pivot tables.  Here is a link:

http://www.onlamp.com/pub/a/onlamp/2003/12/04/crosstabs.html
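Cross-tabulation covers fuller pivots; the literal output asked for here (blanking a repeated first column) can also be done client-side after an ordered SELECT. A minimal sketch in Python, using the sample rows from the post:

```python
# Sample rows from the post, as an ordered result set would return them.
rows = [
    ("area1", "value_one", "thing_one"),
    ("area1", "value_two", "thing_32"),
    ("area1", "value_three", "thing_dd"),
    ("area2", "value_ten", "thing_6w"),
    ("area2", "value_ff", "thing_l"),
]

# Replace the first column with "//" whenever it repeats the previous row.
out = []
prev = None
for area, value, thing in rows:
    out.append(("//" if area == prev else area, value, thing))
    prev = area

for row in out:
    print(", ".join(row))
```

This keeps the suppression logic in the client and leaves the query itself untouched.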

Baron




View select results

2007-05-02 Thread spacemarc

Hi
my table has three fields that, when selected, are shown like:

area1, value_one, thing_one
area1, value_two, thing_32
area1, value_three, thing_ dd
area2, value_ten, thing_6w
area2, value_ff, thing_l



can I obtain a recordset like this?

area1, value_one, thing_one
  //,   value_two, thing_32
  //,   value_three, thing_ dd
area2, value_ten, thing_6w
  //,   value_ff, thing_l

That is, do not repeat the value of the first column (area1, area2...)

--
http://www.spacemarc.it




Re: REGEXP Character Classes

2007-05-02 Thread John Kebbel
I went to the MySQL documentation pages and read up on using COLLATE. I
knew SELECT was case-insensitive, but I was sort of surprised that using
a character class didn't override that. Anyway, I next tried the
"status" command to see if it gave me any characterset information.

Client characterset:    latin1
Server characterset:    latin1

Once I thought I understood what was going on with COLLATE and case
sensitivity, I tried this command...

SELECT id, pswd, division, department, title, classification FROM pswds
WHERE pswd REGEXP '[:lower:]' COLLATE latin1_bin;

It seemed to work fine. I searched the column to see if I could find any
instances of all-caps values, but did not find any. (They do exist; I
created the data for this table from a Perl script solely to practice
using character class regular expressions.)

Then I tried this command. It should not have found any all-lowercase
passwords, but it did.

SELECT id, pswd, division, department, title, classification FROM pswds
WHERE pswd REGEXP '[:upper:]' COLLATE latin1_bin;
+------+----------+------------------+------------+----------+----------------+
| id   | pswd     | division         | department | title    | classification |
+------+----------+------------------+------------+----------+----------------+
|    8 | euwsrbwm | Customer Service | Accounting | Clerical | 0f1b12         |
|   13 | mejccvoz | Customer Service | Receiving  | Clerical | 437113         |
|   18 | kwkheprh | Customer Service | Purchasing | Clerical | 29652          |
|   20 | qpvxvqhz | Customer Service | Accounting | Clerical | bcb244         |
+------+----------+------------------+------------+----------+----------------+


Is there something obvious that I'm missing here?
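One likely culprit (an educated guess, since the thread doesn't resolve it): standalone, `[:upper:]` is not the POSIX class at all, but a plain character class over the literal characters `:`, `u`, `p`, `e`, `r`; the class form only exists nested inside a bracket expression, i.e. `[[:upper:]]`. Python's `re` module (which has no POSIX classes) parses the pattern the same way, so the mistake is easy to demonstrate:

```python
import re

# As written in the post: NOT the POSIX upper-case class, just a
# character class matching any of the literals : u p e r.
wrong = re.compile(r"[:upper:]")
intended = re.compile(r"[A-Z]")  # ASCII stand-in for [[:upper:]]

assert wrong.search("euwsrbwm")        # all-lowercase, but contains e/u/r
assert not intended.search("euwsrbwm") # the intended pattern would not match
```

The same reasoning explains why `'[:lower:]'` only *seemed* to work: lowercase passwords happen to contain those literal characters too.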





Re: batch mode

2007-05-02 Thread Mogens Melander

On Wed, May 2, 2007 08:55, Mogens Melander wrote:
>
> On Tue, May 1, 2007 22:15, Brown, Charles wrote:
>>
>> Because I am running in batch mode, I'm trying to direct my
>> session output to a file -- meaning stdout. But I'm having a problem:
>> for instance, the input "use test_db" gave me no output, but the input
>> "show tables" gave me output.
>>
>> What is missing? What's the trick? Help me, y'all.
>
> You could try something like:
>
> mysql -u xxx -pyyy -e 'select * from dbname;'

Argh, that should have been:

mysql -s -u xxx -pyyy -e 'show tables from dbname;'

> mysql -u xxx -pyyy -e 'select * from dbname.tblname;'

And:

mysql -s -u xxx -pyyy -e 'select * from dbname.tblname;'

>
> --
> Later
>
> Mogens Melander
> +45 40 85 71 38
> +66 870 133 224
>
>
>
>


-- 
Later

Mogens Melander
+45 40 85 71 38
+66 870 133 224



-- 
This message has been scanned for viruses and
dangerous content by OpenProtect(http://www.openprotect.com), and is
believed to be clean.





Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE)

2007-05-02 Thread Sharique uddin Ahmed Farooqui

I have upgraded a website from Drupal 4.7.4 to Drupal 5.1 (I have also moved the
website from one server to another), and now when I try to log in I'm getting the
following error:


user warning: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and
(utf8_general_ci,COERCIBLE) for operation '=' query: user_is_blocked SELECT
name FROM users WHERE status = 0 AND name = LOWER('user1') in
/www/mysite/includes/database.mysql.inc on line 172.

user warning: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and
(utf8_general_ci,COERCIBLE) for operation '=' query: user_load SELECT * FROM
users u WHERE LOWER(name) = LOWER('user1') AND pass =
'7b063a8b8aa2219449cb35f4e415295f' AND status = 1 in
/www/mysite/includes/database.mysql.inc on line 172.

Sorry, unrecognized username or password. Have you forgotten your password?

I have also posted this in the Drupal forum: http://drupal.org/node/140303.
What I've done is export the data from the old db via phpMyAdmin and
import it into the new database.
Since it's related to MySQL, I'm posting here.
--
Sharique uddin Ahmed Farooqui
(C++/C# Developer, IT Consultant)
http://www.managefolio.net/news/
A revolution is about to begin.
A world is about to change.
And you and me are "the initiator".