Re: loading scripts to mysql

2007-11-09 Thread Michael Gargiullo
On Fri, 2007-11-09 at 13:22 +0100, Pau Marc Munoz Torres wrote:

 Hi everybody
 
 I'm writing a function script in a flat file using vim; now I would like to
 load it into MySQL. Is there a command to do it, similar to LOAD DATA
 for filling tables?
 
 thanks
 


Sure,
From command line:
mysql -u username -p database < file-containing-sql
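
One gotcha if the file defines a stored function: change the client delimiter so the semicolons inside the body don't terminate the CREATE statement early. A hypothetical example file:

```sql
-- my_functions.sql (hypothetical example)
DELIMITER //
CREATE FUNCTION add_tax(price DECIMAL(10,2)) RETURNS DECIMAL(10,2)
  DETERMINISTIC
BEGIN
  RETURN price * 1.07;
END//
DELIMITER ;
```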

-Mike


Updating 5.1.11 to 5.1.16 (or latest)

2007-04-09 Thread Michael Gargiullo
Has anyone seen any issues updating from 5.1.11 to a later version?

 

We use partitioning extensively and were wondering how badly we’re going to get 
hosed upgrading.

We’re on Fedora Core and used the RPMs to install. I need to be able to turn 
off general_log; it kills my DB every few days when it grows to 30GB and fills 
the partition it’s on.

 

 

I plan on using mysqldump to dump all the data to a flat file (for full backup 
purposes), upgrade to 5.1.16 (or the latest available), and restart the DB.

 

 

Is this a sound idea?

 

-Mike




RE: Querying large table

2007-03-29 Thread Michael Gargiullo
Do you need a count for every category, and are you running them one at a time?
If so, try: SELECT ctg AS category, COUNT(*) AS count FROM items GROUP BY ctg.
It will take a little while to run, but it will return all of your counts.

Does the items table have an index on ctg? 
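
If it doesn't, adding one is the first thing to try (sketch; the index name is made up and 'books' is a placeholder category):

```sql
ALTER TABLE items ADD INDEX idx_ctg (ctg);

-- the per-category count can then be resolved from the index
SELECT COUNT(*) FROM items WHERE ctg = 'books';
```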


Sent by Good Messaging (www.good.com)


 -Original Message-
From:   Shadow [mailto:[EMAIL PROTECTED]
Sent:   Thursday, March 29, 2007 07:00 PM Eastern Standard Time
To: mysql@lists.mysql.com
Subject:Querying large table

Hey, guys.

I have 2 tables: categories and items.
COUNT(*) categories = 63 833
COUNT(*) items = 742 993
I need to get number of items in a specific category, so I use
SELECT COUNT(*) FROM items WHERE ctg='ctg'

But each query takes ~10 seconds.
It's really slow.

Can anybody propose some optimization?

Thanks.

 


RE: Help optimizing this query?

2007-01-08 Thread Michael Gargiullo


-Original Message-
From: Brian Dunning [mailto:[EMAIL PROTECTED] 
Sent: Sunday, January 07, 2007 1:12 PM
To: mysql
Subject: Help optimizing this query?

This is the query that's killing me in the slow query log, usually taking
around 20 seconds:

select count(ip) as counted,stamp from ip_addr where stamp >= NOW()-
interval 14 day and source='sometext' group by stamp order by stamp
desc;

Here is the table:

CREATE TABLE `ip_addr` (
   `ip` int(10) unsigned NOT NULL default '0',
   `stamp` date NOT NULL default '0000-00-00',
   `country` char(2) NOT NULL default '',
   `source` varchar(20) NOT NULL default '',
   PRIMARY KEY  (`ip`),
   KEY `country-source` (`country`,`source`),
   KEY `stamp-source` (`stamp`,`source`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

Any help please?   :)

---

Just a thought: put a normal index on source and another on stamp (not
combined).
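
Spelled out, that suggestion would be something like (sketch against the table definition above; index names are made up):

```sql
ALTER TABLE ip_addr
  ADD INDEX idx_source (source),
  ADD INDEX idx_stamp  (stamp);
```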




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Loading data using infile

2006-11-03 Thread Michael Gargiullo
I have a script that builds a data file. The data looks like this:

62527427012682984, 191151, 177628526, 3634025281, 1, 58400,
80, 1899, , 2006/10/02 23:15:02, 19,
47,,2006-11-02-231557-cust.txt, 0, 0, IKE, ESP: AES-256 +
SHA1, cx-ccb_vpn, Cx-CCB, , 
62527427012682983, 191150, 177628526, 3634025281, 2, 4163,
0, 1899, , 2006/10/02 23:14:59, -1,
50,,2006-11-02-231557-cust.txt, 0, 0, IKE, ESP: AES-256 +
SHA1, cx-ccb_vpn, Cx-CCB, , 

When I run this load data command like so: 

LOAD DATA CONCURRENT LOCAL INFILE '/db/cust/tmp/firewall/cust.txt.ctl'
INTO TABLE LogData FIELDS TERMINATED BY ',' ENCLOSED BY '' LINES
TERMINATED BY '\n'

I don't get any errors, but get strange data inserted:

| 62527427012682983 |  0 |  0 |  0 |  0 |  0 |  0 |  0 | 0 | 0000-00-00
00:00:00 | 0 | 0 | 0000-00-00 00:00:00 | 2006-11-02-231557-cust.txt |
0 | 0 | IKE | ESP: AES-256 + SHA1 |  cx-ccb_vpn  | Cx-CCB |
 |   |
| 62527427012682984 |  0 |  0 |  0 |  0 |  0 |  0 |  0 | 0 | 0000-00-00
00:00:00 | 0 | 0 | 0000-00-00 00:00:00 | 2006-11-02-231557-cust.txt |
0 | 0 | IKE | ESP: AES-256 + SHA1 |  cx-ccb_vpn  | Cx-CCB |
 |   |

If I try one row it works:

insert into LogData values (62527427012682984, 191151, 177628526,
3634025281, 1, 58400, 80, 1899, 0, '2006/10/02 23:15:02', 19,
47,'','2006-11-02-231557-cust.txt', '0', '0', 'IKE', 'ESP: AES-256 +
SHA1', 'cx-ccb_vpn', 'Cx-CCB', '', '');
Query OK, 1 row affected, 1 warning (0.00 sec)

| 52527427012682984 | 191151 | 177628526 | 3634025281 |  1 |  58400 |
80 | 1899 | 0 | 2006-10-02 23:15:02 | 19 |  47 | 0000-00-00 00:00:00 |
2006-11-02-231557-cust.txt | 0 | 0 | IKE | ESP: AES-256 + SHA1 |
cx-ccb_vpn | Cx-CCB |   |   |


The only difference I can see is the use of single quotes on the command
line.  What's very puzzling is that this same load file line works on my
other machine.  The only difference I can see between them is that this
table has a few more integer fields.


The table is mostly ints:

+-----------------------+----------------------+------+-----+---------+-------+
| Field                 | Type                 | Null | Key | Default | Extra |
+-----------------------+----------------------+------+-----+---------+-------+
| event_id              | bigint(1) unsigned   | NO   | PRI |         |       |
| dev_rec_num           | int(1) unsigned      | YES  |     | NULL    |       |
| src                   | int(1) unsigned      | NO   | MUL |         |       |
| dst                   | int(1) unsigned      | NO   | MUL |         |       |
| protocol_id_i         | tinyint(1) unsigned  | YES  |     | NULL    |       |
| src_port_id_i         | smallint(1) unsigned | YES  |     | NULL    |       |
| dst_port_id_i         | smallint(1) unsigned | YES  |     | NULL    |       |
| origin_id_i           | smallint(1) unsigned | YES  | MUL | NULL    |       |
| collection_point_id_i | smallint(1) unsigned | YES  |     | NULL    |       |
| date_time             | datetime             | NO   | MUL |         |       |
| rule_id_i             | smallint(1)          | YES  | MUL | NULL    |       |
| action_id_i           | smallint(1) unsigned | YES  |     | NULL    |       |
| date_entered          | datetime             | NO   |     |         |       |
| filename              | varchar(35)          | YES  |     | NULL    |       |
| user_def_1            | varchar(50)          | YES  |     | NULL    |       |
| user_def_2            | varchar(255)         | YES  |     | NULL    |       |
| user_def_3            | varchar(50)          | YES  |     | NULL    |       |
| user_def_4            | varchar(50)          | YES  |     | NULL    |       |
| user_def_5            | varchar(50)          | YES  |     | NULL    |       |
| user_def_6            | varchar(50)          | YES  |     | NULL    |       |
| user_def_7            | varchar(50)          | YES  |     | NULL    |       |
| user_def_8            | varchar(50)          | YES  |     | NULL    |       |
+-----------------------+----------------------+------+-----+---------+-------+


Any ideas?

-Mike




RE: Loading data using infile

2006-11-03 Thread Michael Gargiullo
OK, disregard: the spaces I added after the commas for troubleshooting
fubar'd it.



RE: Partition Help

2006-10-02 Thread Michael Gargiullo


[snip]


Thanks for the advice.

We've got 12GB of RAM, I'll increase the key_buffer_size. Unfortunately
I can't turn off indexes, then index after. At these rates, I'd never
catch up.

I don't agree. It takes longer to build the index than to load the data
if you have indexes active when loading. But if you disable the index,
or have no indexes on the table during the Load Data, then re-enable the
index later, MySQL will build the index at least 10x faster if you have
a large key_buffer_size, because it does it all in memory. I've had Load
Data go from 24 hours to 40 minutes just by adding more memory to
key_buffer_size and disabling the index and re-enabling it later.

I'd recommend using at least 6000M for key_buffer_size as a start. You
want 
to try and get as much of the index in memory as possible.


I had hoped I could use partitions like in Oracle: 1 partition every
hour (or 3). I don't think the merge tables will work, however. We
currently only keep 15 days of data and that fills the array. If a merge
table uses disk space, it won't work for us.

A Merge Table can be built in just milliseconds. It is a logical join
between the tables and does *not* occupy more disk space. Think of it as
a view that joins tables of similar schema together vertically so it
looks like one large table.

Mike


Ah, very cool.

Thanks again.




Loading 500,000 rows with 200M rows in the DB with indexes on takes 22
minutes.

Loading 500,000 rows with 200M rows in the DB with indexes turned off,
then building the indexes after the load, took over 75 minutes. This
would probably work if we inserted only 40-80 million rows a day total,
or had a few hours where data was not being inserted.

Daily partitions are created, then sub-partitioned across 6 data disks
and 6 index disks.

We attempted to build a new table per hour and merge them after 3 hours.
We killed the processes after 2 hours; 1 hour of data is approx 18GB,
and the server has only 12GB of RAM.

I wish we could partition down to TO_HOUR instead of TO_DAY.







Partitioning to_hour

2006-09-28 Thread Michael Gargiullo
I know we can partition to_day using 5.1.  Are there plans to implement
range partitioning to_hour as well?

I'm in need of this granularity. I'm currently partitioned to_day, then
sub-partitioned x6, with the data and indexes split to different HDs for
disk speed.

Starting with an empty table, we were able to bulk insert 101 million
rows in about 2 hours. It then took the next 12 hours to load the next
125 million rows. It's progressively getting slower; we're now at 22
minutes to add 15 rows. (Which I kind of expected; we index 5 columns.)

Does anyone have any ideas?  I've tried a bunch of things, including
killing the indexes, loading the data, then rebuilding the index
(horrible idea: increased the load to 45 min plus for 15 rows).

We're attempting to validate that MySQL can handle the load. I'm in a
catch-22: I'd buy a support contract if I could prove MySQL can work,
but I can't prove MySQL can work without speaking to an engineer, which
they won't do (and rightly so) without a contract.




Partition Help

2006-09-26 Thread Michael Gargiullo
I'm working on a project in which we'd like to convert from Oracle to
MySQL. We need to partition our data for speed concerns.  Currently in
Oracle I create 8 three-hour partitions for each day (currently running
450M-750M record inserts/day). I was looking for matching functionality
in MySQL, but it seems daily partitions are as close as I'm going to come.

 

We're running 5.1.10 and I'm having a bit of trouble creating partitions
in both new tables and altering old tables.  Below is one example of
what I've tried.

 

Can anyone shed some light on this subject?

 

 

-Mike

 

create table t1 (c1 int default NULL, c2 varchar(30) default NULL, c3
datetime default NULL) engine=myisam PARTITION BY RANGE(to_days(c3)) (

  PARTITION p0 VALUES LESS THAN (to_days('2006-09-24')) (
    SUBPARTITION s0a
      DATA DIRECTORY = '/FW_data1'
      INDEX DIRECTORY = '/FW_indx1'
  ),

  PARTITION p1 VALUES LESS THAN (to_days('2006-09-26')) (
    SUBPARTITION s1a
      DATA DIRECTORY = '/FW_data2'
      INDEX DIRECTORY = '/FW_indx2'
  ),

  PARTITION p2 VALUES LESS THAN (to_days('2006-09-28')) (
    SUBPARTITION s2a
      DATA DIRECTORY = '/FW_data3'
      INDEX DIRECTORY = '/FW_indx3'
  )
);



RE: Partition Help

2006-09-26 Thread Michael Gargiullo


-Original Message-
From: mos [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 26, 2006 3:40 PM
To: mysql@lists.mysql.com
Subject: Re: Partition Help

At 02:03 PM 9/26/2006, you wrote:
I'm working on a project in which we'd like to convert from Oracle to
MySQL. We need to partition our data for speed concerns.  Currently in
Oracle I create 8 three-hour partitions for each day (currently running
450M-750M record inserts/day). I was looking for matching functionality
in MySQL, but it seems daily partitions are as close as I'm going to
come.



We're running 5.1.10 and I'm having a bit of trouble creating partitions
in both new tables and altering old tables.  Below is one example of
what I've tried.



Can anyone shed some light on this subject?



-Mike


Mike,
 How is this table being updated?

a) From one source like a batch job?
b) Or from hundreds of users concurrently?

If a), then why not just create 1 table per day (or 3 tables per day),
and when you want to reference (the entire day or) a week, just create a
Merge Table?
http://dev.mysql.com/doc/refman/5.1/en/merge-storage-engine.html

If b), then you need to use InnoDb tables because that has row locks 
compared to MyISAM's table locks.

Mike


We're using the LOAD DATA INFILE function to load the data generated by
another process. We do not do updates, but occasionally need to either
walk the table or run a query against it. On Oracle, we currently need
3-hour partitions to keep the 5 indexes timely.

This system handles 450-750 million inserted rows per day, with 5 fields
being indexed. This number will be closer to 2 billion records/day by
spring 2007, we've been told.

For example, I diverted the full flow of data to MySQL for 15 minutes
and inserted 9 million records with a backlog of loader files.  I need
to speed this up. Unfortunately, table structure and indexes are static
and cannot be changed. 

-Mike



create table t1 (c1 int default NULL, c2 varchar(30) default NULL, c3
datetime default NULL) engine=myisam PARTITION BY RANGE(to_days(c3)) (

   PARTITION p0 VALUES LESS THAN (to_days('2006-09-24')) (
     SUBPARTITION s0a
       DATA DIRECTORY = '/FW_data1'
       INDEX DIRECTORY = '/FW_indx1'
   ),

   PARTITION p1 VALUES LESS THAN (to_days('2006-09-26')) (
     SUBPARTITION s1a
       DATA DIRECTORY = '/FW_data2'
       INDEX DIRECTORY = '/FW_indx2'
   ),

   PARTITION p2 VALUES LESS THAN (to_days('2006-09-28')) (
     SUBPARTITION s2a
       DATA DIRECTORY = '/FW_data3'
       INDEX DIRECTORY = '/FW_indx3'
   )
);




RE: Partition Help

2006-09-26 Thread Michael Gargiullo

Mike


We're using the LOAD DATA INFILE function to load the data generated by
another process. We do not do updates, but occasionally need to either
walk the table or run a query against it. On Oracle, we currently need
3-hour partitions to keep the 5 indexes timely.

This system handles 450-750 million inserted rows per day, with 5 fields
being indexed. This number will be closer to 2 billion records/day by
spring 2007, we've been told.

For example, I diverted the full flow of data to MySQL for 15 minutes
and inserted 9 million records with a backlog of loader files.  I need
to speed this up. Unfortunately, table structure and indexes are static
and cannot be changed.

-Mike


Mike,
 I've done a lot of Load Data with large tables, and as you no doubt
discovered, as the number of rows in the table increases, the insert
speed decreases. This is due to the extra effort involved in maintaining
the index as the rows are being loaded. As the index grows in size, it
takes longer to maintain the index. This is true of any database. MyISAM
tables are going to be faster than InnoDB in this case.

You can speed it up by:
1) Add as much memory as possible to the machine, because building the
index will be much faster if it has lots of RAM.
2) Modify your my.cnf file so key_buffer_size=1500M or more (assuming
you have 3GB or more installed). This allocates memory for building the
index.
3) If the table is empty before you add any rows to it, Load Data will
run much faster because it will build the index *after* all rows have
been loaded. But if you have as few as 1 row in the table before running
Load Data, the index will have to be maintained as the rows are
inserted, and this slows down the Load Data considerably.
4) Try throwing an exclusive lock on the table before loading the data.
I'm not sure, but this might help.
5) If your table already has rows in it before running Load Data, and
the table has indexes defined, it is much faster if you disable the
table's indexes before running Load Data, then enable them after Load
Data has completed. See ALTER TABLE ... ENABLE/DISABLE KEYS for more
info.
6) If you are using Alter Table to add indexes after the table has data,
make sure you are adding all indexes in one Alter Table statement,
because MySQL will copy the table each time the Alter Table is run.
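
Item 5 above can be sketched like this (table name and LOAD DATA options taken from the earlier infile thread; the file path is just an example):

```sql
ALTER TABLE LogData DISABLE KEYS;

LOAD DATA CONCURRENT LOCAL INFILE '/db/cust/tmp/firewall/cust.txt.ctl'
  INTO TABLE LogData
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- rebuilds the non-unique indexes in one pass, using key_buffer_size
ALTER TABLE LogData ENABLE KEYS;
```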

If you are going to be adding 2 billion rows per day, you might want to
try 1 table per hour, which will reduce the number of rows to < 100
million per table, which may be more manageable (assuming a 24-hour
day). You can then create a merge table on the 24 tables so you can
traverse them. You can of course create a merge table just for the
morning hours, afternoon hours, evening hours, etc. Name each table
like 20060925_1400 for 2 PM on 9/25/2006. Of course you may also want
to summarize this data into a table so you don't need all of this raw
data lying around.

Hope this helps.

Mike


Thanks for the advice.

We've got 12GB of RAM, I'll increase the key_buffer_size.  Unfortunately
I can't turn off indexes, then index after. At these rates, I'd never
catch up.

I had hoped I could use partitions like in Oracle: 1 partition every
hour (or 3). I don't think the merge tables will work, however. We
currently only keep 15 days of data and that fills the array. If a merge
table uses disk space, it won't work for us.

I'll check out the key buffer size though.  Thanks.

-Mike




RE: Partition Help

2006-09-26 Thread Michael Gargiullo


-Original Message-
From: mos [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 26, 2006 5:27 PM
To: mysql@lists.mysql.com
Subject: RE: Partition Help

At 03:37 PM 9/26/2006, you wrote:
 
 Mike
 
 
 We're using the LOAD DATA INFILE function to load the data generated by
 another process. We do not do updates, but occasionally need to either
 walk the table or run a query against it. On Oracle, we currently need
 3-hour partitions to keep the 5 indexes timely.
 
 This system handles 450-750 million inserted rows per day, with 5 fields
 being indexed. This number will be closer to 2 billion records/day by
 spring 2007, we've been told.
 
 For example, I diverted the full flow of data to MySQL for 15 minutes
 and inserted 9 million records with a backlog of loader files.  I need
 to speed this up. Unfortunately, table structure and indexes are static
 and cannot be changed.
 
 -Mike


Mike,
  I've done a lot of Load Data with large tables, and as you no doubt
discovered, as the number of rows in the table increases, the insert
speed decreases. This is due to the extra effort involved in maintaining
the index as the rows are being loaded. As the index grows in size, it
takes longer to maintain the index. This is true of any database. MyISAM
tables are going to be faster than InnoDB in this case.

You can speed it up by:
1) Add as much memory as possible to the machine, because building the
index will be much faster if it has lots of RAM.
2) Modify your my.cnf file so key_buffer_size=1500M or more (assuming
you have 3GB or more installed). This allocates memory for building the
index.
3) If the table is empty before you add any rows to it, Load Data will
run much faster because it will build the index *after* all rows have
been loaded. But if you have as few as 1 row in the table before running
Load Data, the index will have to be maintained as the rows are
inserted, and this slows down the Load Data considerably.
4) Try throwing an exclusive lock on the table before loading the data.
I'm not sure, but this might help.
5) If your table already has rows in it before running Load Data, and
the table has indexes defined, it is much faster if you disable the
table's indexes before running Load Data, then enable them after Load
Data has completed. See ALTER TABLE ... ENABLE/DISABLE KEYS for more
info.
6) If you are using Alter Table to add indexes after the table has data,
make sure you are adding all indexes in one Alter Table statement,
because MySQL will copy the table each time the Alter Table is run.

If you are going to be adding 2 billion rows per day, you might want to
try 1 table per hour, which will reduce the number of rows to < 100
million per table, which may be more manageable (assuming a 24-hour
day). You can then create a merge table on the 24 tables so you can
traverse them. You can of course create a merge table just for the
morning hours, afternoon hours, evening hours, etc. Name each table
like 20060925_1400 for 2 PM on 9/25/2006. Of course you may also want
to summarize this data into a table so you don't need all of this raw
data lying around.

Hope this helps.

Mike


Thanks for the advice.

We've got 12GB of RAM, I'll increase the key_buffer_size. Unfortunately
I can't turn off indexes, then index after. At these rates, I'd never
catch up.

I don't agree. It takes longer to build the index than to load the data
if you have indexes active when loading. But if you disable the index,
or have no indexes on the table during the Load Data, then re-enable the
index later, MySQL will build the index at least 10x faster if you have
a large key_buffer_size, because it does it all in memory. I've had Load
Data go from 24 hours to 40 minutes just by adding more memory to
key_buffer_size and disabling the index and re-enabling it later.

I'd recommend using at least 6000M for key_buffer_size as a start. You
want 
to try and get as much of the index in memory as possible.
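
On a 12GB box, that advice translates to a my.cnf fragment along these lines (the 6000M figure is the one suggested above; tune it to your workload):

```ini
# /etc/my.cnf (fragment)
[mysqld]
key_buffer_size = 6000M
```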


I had hoped I could use partitions like in Oracle: 1 partition every
hour (or 3). I don't think the merge tables will work, however. We
currently only keep 15 days of data and that fills the array. If a merge
table uses disk space, it won't work for us.

A Merge Table can be built in just milliseconds. It is a logical join
between the tables and does *not* occupy more disk space. Think of it as
a view that joins tables of similar schema together vertically so it
looks like one large table.
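
The hourly naming scheme discussed in this thread might be sketched like so (all table and column names here are hypothetical; MERGE requires the underlying tables to be identical MyISAM tables):

```sql
CREATE TABLE log_20060925_1300 (
  event_id  BIGINT UNSIGNED,
  date_time DATETIME,
  KEY (date_time)
) ENGINE=MyISAM;

CREATE TABLE log_20060925_1400 LIKE log_20060925_1300;

-- logical union of the two hours; no extra copy of the data is stored
CREATE TABLE log_20060925_afternoon (
  event_id  BIGINT UNSIGNED,
  date_time DATETIME,
  KEY (date_time)
) ENGINE=MERGE UNION=(log_20060925_1300, log_20060925_1400)
  INSERT_METHOD=LAST;
```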

Mike


Ah, very cool.

Thanks again.





Re: lost connection....

2002-11-06 Thread Michael Gargiullo
Flavio,

If you're running Red Hat, check
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=75128

Off-topic question, Flavio: did you ever work in New York City?

-Mike

On Wed, 2002-11-06 at 10:09, Flavio wrote:
 I tried to update my Linux to the next version. It was running a previous
 version of mysql, and when I updated to version 3.23.52-3 it worked correctly,
 but when I try to connect from a Windows workstation to the server, it breaks with
 this message in mysql.log:
 
 Number of processes running now: 1
 mysqld process hanging, pid 30326 - killed
 021106 11:59:08  mysqld restarted
 /usr/libexec/mysqld: ready for connections
 
  and in workstation shows: Lost connection to MySql server during query
 
 Anybody know what's wrong?
 
 []´s flavio
 
 
 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)
 
 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
 







Re: access denied for user: root@localhost

2002-11-05 Thread Michael Gargiullo
use: mysqladmin -u root -p

It will prompt you for the password

On Tue, 2002-11-05 at 10:03, Jack Chen wrote:
 Dear all,
 
 When I ran the following command:
 
 $ mysqladmin -u root password PASSWORD
 mysqladmin: connect to server at 'localhost' failed
 error: 'Access denied for user: 'root@localhost' (Using password: NO)'
 
 Could you inform me why?
 
 Thanks,
 
 Jack
 
 Jack Chen, Stein Lab, Cold Spring Harbor Labs
 1 Bungtown Road, Cold Spring Harbor, NY, 11724 
 Tel: 1 516 3676904; e-mail: [EMAIL PROTECTED]
 
 
 
 







Re: [QueryAnalyzer]

2002-11-05 Thread Michael Gargiullo
Ask your question on a Microsoft mailing list.  I don't mean to be rude,
but this is a MySQL (a different product) mailing list.  Try Microsoft's
newsgroups.

On Tue, 2002-11-05 at 10:06, [EMAIL PROTECTED] wrote:
 Hi,
 
 I have just installed SQL Server 7 after uninstalling SQL 2000, and am trying to 
 launch the Query Analyzer, but it doesn't seem to want to launch. Is there any 
 suggestion for me?
 
 Sam
 
 







Re: access denied for user: root@localhost

2002-11-05 Thread Michael Gargiullo
Right, sorry, I could have been more specific.

Like so: mysqladmin -u root -p command, command...

Example: mysqladmin -u root -p create mytestdatabase

On Tue, 2002-11-05 at 10:33, Jack Chen wrote:
 Strangely, it gave me a page of *information* like this: [any idea?
 thanks!]
 
 mysqladmin  Ver 8.23 Distrib 3.23.49, for redhat-linux-gnu on i386
 Copyright (C) 2000 MySQL AB  MySQL Finland AB  TCX DataKonsult AB
 This software comes with ABSOLUTELY NO WARRANTY. This is free software,
 and you are welcome to modify and redistribute it under the GPL license
 
 Administration program for the mysqld daemon.
 Usage: mysqladmin [OPTIONS] command command
 
   -#, --debug=...   Output debug log. Often this is 'd:t:o,filename`
   -f, --force Don't ask for confirmation on drop database; with
   multiple commands, continue even if an error
 occurs
   -?, --help  Display this help and exit
   --character-sets-dir=...
 Set the character set directory
   -C, --compressUse compression in server/client protocol
   -h, --host=#Connect to host
   -p, --password[=...]Password to use when connecting to server
   If password is not given it's asked from the tty
   -P  --port=...  Port number to use for connection
   -i, --sleep=sec Execute commands again and again with a sleep
 between
   -r, --relativeShow difference between current and previous
 values
 when used with -i. Currently works only with
 extended-status
   -E, --verticalPrint output vertically. Is similar to --relative,
 but prints output vertically.
   -s, --silentSilently exit if one can't connect to server
   -S, --socket=...Socket file to use for connection
   -u, --user=#User for login if not current user
   -v, --verbose Write more information
   -V, --version   Output version information and exit
   -w, --wait[=retries]  Wait and retry if connection is down
 
 
 
 Jack Chen, Stein Lab, Cold Spring Harbor Labs
 1 Bungtown Road, Cold Spring Harbor, NY, 11724 
 Tel: 1 516 3676904; e-mail: [EMAIL PROTECTED]
 
 
 On 5 Nov 2002, Michael Gargiullo wrote:
 
  use: mysqladmin -u root -p
  
  It will prompt you for the password
  
  On Tue, 2002-11-05 at 10:03, Jack Chen wrote:
   Dear all,
   
   When I ran the follwing command:
   
   $ mysqladmin -u root password PASSWORD
   mysqladmin: connect to server at 'localhost' failed
   error: 'Access denied for user: 'root@localhost' (Using password: NO)'
   
   Could you inform me why?
   
   Thanks,
   
   Jack
   
   Jack Chen, Stein Lab, Cold Spring Harbor Labs
   1 Bungtown Road, Cold Spring Harbor, NY, 11724 
   Tel: 1 516 3676904; e-mail: [EMAIL PROTECTED]
   
   
   
  
  
  
 







Re: access denied for user: root@localhost

2002-11-05 Thread Michael Gargiullo
Just a question: did you ever change your password for MySQL?  If not,
don't use the -p option.

mysqladmin -u root create testdb
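
For completeness, the usual sequence when no root password has been set yet (the password value is a placeholder; these commands need a running server):

```shell
# set the initial root password; no -p needed because none exists yet
mysqladmin -u root password 'NEWPASS'

# from then on, -p prompts for the password
mysqladmin -u root -p create testdb
```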



On Tue, 2002-11-05 at 11:08, Jack Chen wrote:
 Thanks, but I still got complaints that it can't access the server:
 
 'Access denied for user: 'root@localhost' (Using password: YES)
 
 Is it possible that for some reason the password is wrong?
 
 Your input is greatly appreciated,
 
 Puzzled newbie,
 
 Jack
 
 
 Jack Chen, Stein Lab, Cold Spring Harbor Labs
 1 Bungtown Road, Cold Spring Harbor, NY, 11724 
 Tel: 1 516 3676904; e-mail: [EMAIL PROTECTED]
 
 
 On 5 Nov 2002, Michael Gargiullo wrote:
 
  right, sorry, I could have been more specific.
  
  like so: mysqladmin -u root -p command, command...
  
  example: mysqladmin -u root -p create mytestdatabase
  
  On Tue, 2002-11-05 at 10:33, Jack Chen wrote:
  Strangely, it gave me a page of *information* like this: [any idea?
   thanks!]
   
   mysqladmin  Ver 8.23 Distrib 3.23.49, for redhat-linux-gnu on i386
   Copyright (C) 2000 MySQL AB  MySQL Finland AB  TCX DataKonsult AB
   This software comes with ABSOLUTELY NO WARRANTY. This is free software,
   and you are welcome to modify and redistribute it under the GPL license
   
   Administration program for the mysqld daemon.
   Usage: mysqladmin [OPTIONS] command command
   
  -#, --debug=...       Output debug log. Often this is 'd:t:o,filename`
  -f, --force           Don't ask for confirmation on drop database; with
                        multiple commands, continue even if an error occurs
  -?, --help            Display this help and exit
  --character-sets-dir=...
                        Set the character set directory
  -C, --compress        Use compression in server/client protocol
  -h, --host=#          Connect to host
  -p, --password[=...]  Password to use when connecting to server
                        If password is not given it's asked from the tty
  -P  --port=...        Port number to use for connection
  -i, --sleep=sec       Execute commands again and again with a sleep between
  -r, --relative        Show difference between current and previous values
                        when used with -i. Currently works only with
                        extended-status
  -E, --vertical        Print output vertically. Is similar to --relative,
                        but prints output vertically.
  -s, --silent          Silently exit if one can't connect to server
  -S, --socket=...      Socket file to use for connection
  -u, --user=#          User for login if not current user
  -v, --verbose         Write more information
  -V, --version         Output version information and exit
  -w, --wait[=retries]  Wait and retry if connection is down
   
   
   
   Jack Chen, Stein Lab, Cold Spring Harbor Labs
   1 Bungtown Road, Cold Spring Harbor, NY, 11724 
   Tel: 1 516 3676904; e-mail: [EMAIL PROTECTED]
   
   
   On 5 Nov 2002, Michael Gargiullo wrote:
   
use: mysqladmin -u root -p

It will prompt you for the password

On Tue, 2002-11-05 at 10:03, Jack Chen wrote:
 Dear all,
 
 When I ran the follwing command:
 
 $ mysqladmin -u root password PASSWORD
 mysqladmin: connect to server at 'localhost' failed
 error: 'Access denied for user: 'root@localhost' (Using password: NO)'
 
 Could you inform me why?
 
 Thanks,
 
 Jack
 
 Jack Chen, Stein Lab, Cold Spring Harbor Labs
 1 Bungtown Road, Cold Spring Harbor, NY, 11724 
 Tel: 1 516 3676904; e-mail: [EMAIL PROTECTED]
 
 
 
 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)
 
 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail 
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
 



   
  
  
  
  
  
 
 
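The fix in the exchange above is that mysqladmin needs at least one command after its options; run bare, it just prints the usage page shown. A minimal sketch of a guard for that, not from the original thread — the function name and the dry-run echo (printing the command instead of executing it) are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: refuse to invoke mysqladmin with no command, since
# `mysqladmin -u root -p` alone only prints the usage/help page.
mysqladmin_cmd() {
  if [ "$#" -eq 0 ]; then
    echo "usage: mysqladmin_cmd command [command ...]" >&2
    return 1
  fi
  # Dry run for illustration: print the invocation instead of executing it.
  echo "mysqladmin -u root -p $*"
}

mysqladmin_cmd create mytestdatabase
```

In real use you would drop the echo and call mysqladmin directly; it will then prompt for the password before running the command.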

connecting to mysql server from other machines

2002-11-04 Thread Michael Gargiullo
I have a huge problem.  I have several remote machines that connect to a
mysql server.  On Friday, the mysql server started returning ERROR 2013:
Lost connection to MySQL server during query

Any ideas?


Michael Gargiullo
Network Administrator
Warp Drive Networks
590 Valley Health Plaza
Paramus NJ 07650

1-201-576-9292 x260
1-201-225-0007 - F

www.warpdrive.net
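ERROR 2013 generally means the server or the network dropped the connection mid-query (a server restart, an idle-timeout, or an oversized transfer are common culprits). One way for the remote clients to cope is to retry the query a few times. A minimal sketch only, not a diagnosis of this particular outage — `run_query` here just echoes what it would execute, and the real `mysql` invocation in the comment is an assumption:

```shell
#!/bin/sh
# Sketch: retry a query a few times when the connection drops.
# run_query is a hypothetical stand-in; in real use it would be something like
#   mysql -h "$HOST" -u "$USER" -p -e "$1" "$DB"
run_query() {
  echo "would run: $1"
}

query="SELECT 1"
attempt=1
max_attempts=3
while [ "$attempt" -le "$max_attempts" ]; do
  if run_query "$query"; then
    break          # query succeeded; stop retrying
  fi
  attempt=$((attempt + 1))
  sleep 2          # give the server a moment before retrying
done
```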






RE: newbie: SQL question

2002-11-04 Thread Michael Gargiullo
http://hotwired.lycos.com/webmonkey/backend/databases/

might be a good place. I haven't read it myself, but I send people who want to learn
HTML to webmonkey, and they liked it.

 -Original Message-
 From: Admin-Stress [mailto:meerkapot;yahoo.com]
 Sent: Monday, November 04, 2002 10:50 AM
 To: [EMAIL PROTECTED]
 Subject: newbie: SQL question


 Hi,

 I am just a starter. Anyone can suggest me good web resources for
 learning SQL command that I can
 use (compatible) with mySQL ?

 I read from www.mysql.com documentation, but it's not complete ...

 Well, if you have collection for beginner, please :)

 Thanks,

 kapot

 __
 Do you Yahoo!?
 HotJobs - Search new jobs daily now
 http://hotjobs.yahoo.com/









RE: (beginner) mysql connection problem!

2002-11-04 Thread Michael Gargiullo
Sounds like the mysql service hasn't started yet.

 -Original Message-
 From: David Wu [mailto:dwu;stepup.ca]
 Sent: Monday, November 04, 2002 2:59 PM
 To: [EMAIL PROTECTED]
 Subject: (beginner) mysql connection problem!
 
 
 Hi everyone,
 
 On my local machine I had mysql installed, and I was able to log in and 
 run a test on it.
 But today when I try to log in using mysql or mysql -u root -p, I 
 get the error message: ERROR 2002: Can't connect to local MySQL 
 server through socket '/tmp/mysql.sock' (2). I have not touched the mysql 
 settings at all. What should I do to solve this problem? Thank you 
 all for your help.
 
 
 
 

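ERROR 2002 means the client could not find the server's Unix socket at all, which almost always means mysqld is not running (or is listening on a different socket path). A quick check before touching any credentials — a minimal sketch; the default socket path and the `mysqld_safe` hint in the message are assumptions, adjust for your installation:

```shell
#!/bin/sh
# Sketch: report whether the MySQL server's Unix socket exists,
# which distinguishes "server not running" from a credentials problem.
check_socket() {
  socket="${1:-/tmp/mysql.sock}"
  if [ -S "$socket" ]; then
    echo "$socket exists; the server appears to be running"
  else
    echo "$socket missing; start the server first (e.g. mysqld_safe &)"
  fi
}

check_socket /tmp/mysql.sock
```

If the socket is present but the error persists, check that the client's socket setting (in my.cnf or via -S) points at the same path the server created.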