Re: Possible bug in mysqldump?

2008-08-05 Thread Mark Maunder
Thanks Rolando,

I'm using InnoDB tables. According to the docs, the single-transaction
option:

 Creates a consistent snapshot by dumping all tables in a
  single transaction. Works ONLY for tables stored in
  storage engines which support multiversioning (currently
  only InnoDB does); the dump is NOT guaranteed to be
  consistent for other storage engines. Option
  automatically turns off --lock-tables.

That seems to contradict what you're saying. I think the key is that InnoDB
supports multiversioning and that single-transaction creates a snapshot
"version" of the db by briefly locking all tables. That has the same effect
as locking MyISAM tables for the duration of the dump - as I understand it.
Can anyone confirm this? Either way, this still doesn't explain the different
behaviour between pipe and redirect that I'm seeing.

Regards,

Mark.


On Tue, Aug 5, 2008 at 11:55 AM, Rolando Edwards <[EMAIL PROTECTED]> wrote:

>  This is an excerpt from
> http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html#option_mysqldump_master-data
>
> The --master-data option automatically turns off --lock-tables. It also
> turns on --lock-all-tables, unless *--single-transaction* also is
> specified, in which case, *a global read lock is acquired only for a short
> time at the beginning of the dump* (see the description for
> --single-transaction). *In all cases, any action on logs happens at the
> exact moment of the dump*. (Bold Italics mine)
>
> According to the preceding statement, the option *"--single-transaction"* WILL
> NOT HOLD OFF ANY NEW TRANSACTIONS FOR THE DURATION OF THE mysqldump.
> Consequently, somewhere in the middle of the dump process, table locks are
> released prematurely by design.
>
>
>
> This is why I suggested locking all tables with FLUSH TABLES WITH READ LOCK
> on the master so no new transactions would sneak in during the pipe-fed
> mysql load from mysqldump.
>
>
>
> Locking the master with FLUSH TABLES WITH READ LOCK should be done even if
> you are dumping to a text file in order to have a perfect snapshot of the
> data.
>
>
>
> Additionally, the option *"--single-transaction"* WILL NOT PROTECT MyISAM
> tables from live changes being written to the dump file since you cannot run
> ACID compliant transactions against MyISAM, only InnoDB.
>
>
>
> Doing FLUSH TABLES WITH READ LOCK on the master prior to the mysqldump will
> guarantee that no transactions, regardless of whether it is for MyISAM or
> InnoDB, will come through during a mysqldump.
>
>
>  --
>
> *From:* Mark Maunder [mailto:[EMAIL PROTECTED]
> *Sent:* Tuesday, August 05, 2008 12:17 PM
> *To:* Rolando Edwards
> *Cc:* mysql@lists.mysql.com
> *Subject:* Re: Possible bug in mysqldump?
>
>
>
> Thanks for the reply Rolando.
>
> In both the examples I provided (pipe and text file) the CHANGE MASTER
> command appears at the top of the data import and is uncommented and
> therefore executes before the data is imported. I don't think this is a
> problem because the slave only starts replicating from the master once I run
> the "start slave" command. That command is only run after all data is
> imported.
>
> Unless the slave does some kind of processing before I run "start slave" I
> don't see this is the explanation.
>
> Thanks again - and please let me know your thoughts on this because I could
> be wrong.
>
> Mark.
>
> On Tue, Aug 5, 2008 at 8:47 AM, Rolando Edwards <[EMAIL PROTECTED]>
> wrote:
>
> When you use --master-data=1, it executes the CHANGE MASTER command first
> before adding data.
>
> Do the following to verify this:
>
> Run 'mysqldump --single-transaction --master-data=1 -h... -u... -p... >
> DataDump1.sql'
> Run 'mysqldump --single-transaction --master-data=2 -h... -u... -p... >
> DataDump2.sql'
>
> Run 'head -30 DataDump1.sql'
> You will see the CHANGE MASTER command before all CREATE TABLEs and
> INSERTs.
> Therefore, it will execute.
>
> Run 'head -30 DataDump2.sql'
> You will see the CHANGE MASTER command before all CREATE TABLEs and
> INSERTs.
> However, the command is commented Out !!!
> Therefore, it will not execute.
>
> After loading DataDump2.sql, you can then use the replication coordinates
> (log file name and log position) from the commented-out CHANGE MASTER
> command.
>
> In theory, it is a paradigm bug because the CHANGE MASTER command when
> using --master-data=1 should appear on the bottom of the mysqldump and not
> at the top. Yet, it is at the top and executes immediately and then tries to
> load your data and read from the master's binary logs at the same time,
> guaranteeing duplicate key collision.

Re: Possible bug in mysqldump?

2008-08-05 Thread Mark Maunder
Thanks for the reply Rolando.

In both the examples I provided (pipe and text file) the CHANGE MASTER
command appears at the top of the data import and is uncommented and
therefore executes before the data is imported. I don't think this is a
problem because the slave only starts replicating from the master once I run
the "start slave" command. That command is only run after all data is
imported.

Unless the slave does some kind of processing before I run "start slave" I
don't see this is the explanation.

Thanks again - and please let me know your thoughts on this because I could
be wrong.

Mark.

On Tue, Aug 5, 2008 at 8:47 AM, Rolando Edwards <[EMAIL PROTECTED]> wrote:

> When you use --master-data=1, it executes the CHANGE MASTER command first
> before adding data.
>
> Do the following to verify this:
>
> Run 'mysqldump --single-transaction --master-data=1 -h... -u... -p... >
> DataDump1.sql'
> Run 'mysqldump --single-transaction --master-data=2 -h... -u... -p... >
> DataDump2.sql'
>
> Run 'head -30 DataDump1.sql'
> You will see the CHANGE MASTER command before all CREATE TABLEs and
> INSERTs.
> Therefore, it will execute.
>
> Run 'head -30 DataDump2.sql'
> You will see the CHANGE MASTER command before all CREATE TABLEs and
> INSERTs.
> However, the command is commented Out !!!
> Therefore, it will not execute.
>
> After loading DataDump2.sql, you can then use the replication coordinates
> (log file name and log position) from the commented-out CHANGE MASTER
> command.
>
> In theory, it is a paradigm bug because the CHANGE MASTER command when
> using --master-data=1 should appear on the bottom of the mysqldump and not
> at the top. Yet, it is at the top and executes immediately and then tries to
> load your data and read from the master's binary logs at the same time,
> guaranteeing duplicate key collision.
>
> This is why importing mysqldump straight to mysql via a pipe produces the
> error you are experiencing.
>
> Try this:
>
> 1) In mysql session 1, Run FLUSH TABLES WITH READ LOCK on the master.
>
> 2) In mysql session 1, run SHOW MASTER STATUS.
>
> 3) Record the log file and position from mysql session 1.
>
> 4) In mysql session 2, run 'STOP SLAVE;'
>
> 5) Run 'mysqldump --single-transaction -u root -pmypass -h masterHost
> dbName | mysql -u root -pmypass -h slaveHost dbName'. Let it run to
> completion.
>
> Notice I did not use --master-data in the mysqldump.
>
> 6) In mysql session 2, run "CHANGE MASTER TO MASTER_LOG_FILE='<log file
> from SHOW MASTER STATUS>', MASTER_LOG_POS=<log position from SHOW MASTER
> STATUS>;"
>
> 7) In mysql session 2, run 'START SLAVE'.
>
> 8) In mysql session 1, run 'UNLOCK TABLES'.
>
> Give it a try !!!
>
> -Original Message-
> From: Mark Maunder [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, August 05, 2008 3:02 AM
> To: mysql@lists.mysql.com
> Subject: Possible bug in mysqldump?
>
> Hi all,
>
> I'm busy setting up replication and have encountered what looks like a bug
> in mysqldump. The following commands work perfectly:
>
> Running the following commands in the mysql client on the slave:
> stop slave;
> reset slave;
> create database dbName;
> CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_USER='root',
> MASTER_PASSWORD='mypass';
>
> Then running the following on the command line on the slave:
>
> mysqldump --single-transaction --master-data=1 -u root -pmypass -h
> masterHost dbName >masterDB.sql ;
>
> mysql -u root -pmypass -h slaveHost dbName < masterDB.sql
>
> Then running the following in the mysql client on the slave:
>
> start slave;
>
> At this point the slave comes up perfectly and is in sync with the master.
>
> However, if I do exactly the same thing, but import the data using a pipe
> command:
>
> mysqldump --single-transaction --master-data=1 -u root -pmypass -h
> masterHost dbName | mysql -u root -pmypass -h slaveHost dbName
>
> When I start the slave I get a duplicate key error. In other words, the
> slave is trying to execute entries in the master's log that have already
> been run.
>
> I can't figure out why this is a problem and this has forced me to store
> data on disk as a file as an intermediate step when setting up slaves.
>
> The only difference between the two methods is that in the first case the
> data is stored on disk and then imported via the client and in the second
> case it's piped directly to the client. In both cases the data that
> mysqldump produces is the same. Both include the CHANGE MASTER command that
> sets the log file and position.
>
> Is this a bug in mysqldump, or am I missing something?
>
> Thanks in advance,
>
> Mark.
>



-- 
Mark Maunder <[EMAIL PROTECTED]>
http://markmaunder.com/
+1-206-6978723
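As an aside to the thread above: the coordinates that --master-data=2 writes as a comment near the top of the dump can be pulled out programmatically before running CHANGE MASTER by hand. A minimal sketch (Python, against a made-up dump header; the exact surrounding comment lines vary by mysqldump version):

```python
import re

# Hypothetical first lines of a dump produced with --master-data=2;
# in that mode mysqldump emits CHANGE MASTER as an SQL comment.
dump_header = """-- MySQL dump 10.11
--
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=1234;
"""

# Extract the binary log file name and position from the commented statement.
match = re.search(
    r"CHANGE MASTER TO MASTER_LOG_FILE='([^']+)',\s*MASTER_LOG_POS=(\d+)",
    dump_header,
)
log_file, log_pos = match.group(1), int(match.group(2))
print(log_file, log_pos)  # mysql-bin.000042 1234
```

With the coordinates in hand, the CHANGE MASTER can be issued on the slave only after the data load finishes, which avoids the premature-execution problem described in the thread.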


Possible bug in mysqldump?

2008-08-05 Thread Mark Maunder
Hi all,

I'm busy setting up replication and have encountered what looks like a bug
in mysqldump. The following commands work perfectly:

Running the following commands in the mysql client on the slave:
stop slave;
reset slave;
create database dbName;
CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_USER='root',
MASTER_PASSWORD='mypass';

Then running the following on the command line on the slave:

mysqldump --single-transaction --master-data=1 -u root -pmypass -h
masterHost dbName >masterDB.sql ;

mysql -u root -pmypass -h slaveHost dbName < masterDB.sql

Then running the following in the mysql client on the slave:

start slave;

At this point the slave comes up perfectly and is in sync with the master.

However, if I do exactly the same thing, but import the data using a pipe
command:

mysqldump --single-transaction --master-data=1 -u root -pmypass -h
masterHost dbName | mysql -u root -pmypass -h slaveHost dbName

When I start the slave I get a duplicate key error. In other words, the
slave is trying to execute entries in the master's log that have already been
run.

I can't figure out why this is a problem and this has forced me to store
data on disk as a file as an intermediate step when setting up slaves.

The only difference between the two methods is that in the first case the
data is stored on disk and then imported via the client and in the second
case it's piped directly to the client. In both cases the data that
mysqldump produces is the same. Both include the CHANGE MASTER command that
sets the log file and position.

Is this a bug in mysqldump, or am I missing something?

Thanks in advance,

Mark.


Re: newbie needs help

2006-07-21 Thread Mark Maunder

While this is offtopic, just a side note:

You probably want to store your images on disk with the filename in
the database rather than the actual image binary data in the db.
Filesystems are very good at storing and retrieving chunks of binary.
Databases do it because... well... I'm not really sure why.

Mark.

On 7/21/06, Scott Haneda <[EMAIL PROTECTED]> wrote:

> I'm going throught some tutorial about uploading and displaying
> images files.  But the display script isn't working.  Here's what I have:

I think you may want to bring this to a php based list, not a mysql one.
--
-
Scott Haneda    Tel: 415.898.2602
<http://www.newgeo.com>    Novato, CA U.S.A.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]





--
Mark Maunder <[EMAIL PROTECTED]>
http://www.markmaunder.com/
+1-206-6978723




Re: Finding a point inside a polygon

2006-07-20 Thread Mark Maunder

Thanks, but according to the documentation the Contains() function is
the same as the MBRContains() function which only tests if the point
is inside the minimum bounding rectangle of the polygon, not the
actual polygon.

See the following:
http://dev.mysql.com/doc/refman/5.0/en/functions-that-test-spatial-relationships-between-geometries.html


From the manual:


"Currently, MySQL does not implement these functions according to the
specification. Those that are implemented return the same result as
the corresponding MBR-based functions."




On 7/21/06, ViSolve DB Team <[EMAIL PROTECTED]> wrote:

Hello Mark,

You can locate a point, whether inside or outside a polygon area using the
query below:

 SELECT contains(geomfromtext(@poly),geomfromtext(@p));

where @poly,@p are variables.
Eg: set @p='Point(3 2)';
    set @poly='Polygon((0 0,0 3,3 0,0 0),(1 1,1 2,2 1,1 1))';

If the return value of the select statement is
0  -> Outside the polygon
1  -> Inside the polygon

Thanks,
ViSolve MySQL Support Team.
- Original Message -
From: "Mark Maunder" <[EMAIL PROTECTED]>
To: 
Sent: Friday, July 21, 2006 4:40 AM
Subject: Finding a point inside a polygon


> I'd like to test whether a point is truly inside a polygon, not just
> inside the minimum bounding rectangle. Is there a way to do this in
> MySQL with the spatial extensions? I love mysql but I'm forced to
> consider migrating to postgresql (ugh!) because it has built in
> support for testing spatial relationships between polygons and points.
>
> Thanks.
>
> Mark.
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
>
>





--
Mark Maunder <[EMAIL PROTECTED]>
http://www.markmaunder.com/
+1-206-6978723




Finding a point inside a polygon

2006-07-20 Thread Mark Maunder

I'd like to test whether a point is truly inside a polygon, not just
inside the minimum bounding rectangle. Is there a way to do this in
MySQL with the spatial extensions? I love mysql but I'm forced to
consider migrating to postgresql (ugh!) because it has built in
support for testing spatial relationships between polygons and points.

Thanks.

Mark.
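Since Contains() was MBR-only in MySQL at the time, a true point-in-polygon test had to be done on the client side. A minimal ray-casting sketch (Python, outer ring only; holes and points exactly on an edge are ignored):

```python
def point_in_polygon(x, y, ring):
    """Ray casting: count how many edges a rightward ray from (x, y) crosses.

    An odd crossing count means the point is inside the ring.
    """
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Does this edge straddle the horizontal line at y?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# Triangle (0,0) (0,3) (3,0): point (1,1) is inside, (3,3) is not.
print(point_in_polygon(1, 1, [(0, 0), (0, 3), (3, 0)]))  # True
print(point_in_polygon(3, 3, [(0, 0), (0, 3), (3, 0)]))  # False
```

In practice you would let MySQL's MBRContains() narrow the candidates with the spatial index, then apply a test like this to the survivors.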




Re: database back up

2006-07-20 Thread Mark Maunder

/var/lib/mysql/
is a common location. If you're on unix try the following command:

find /var -name "mysql"


On 7/20/06, Martin Jespersen <[EMAIL PROTECTED]> wrote:

You can usually find the database files under the "var" subdirectory
under your installation, unless another datadir was specified at
compiletime.

;) Martin

Joko Siswanto wrote:
> Dear All
>
> if myqsl service can't start, where can i found the file and back up it?
> [under windows and linux]
>
> Thanks,
> Joko Siswanto
>








Re: mysql and large integers

2004-12-03 Thread Mark Maunder
Thanks very much Paul. My day has just improved. 

On Fri, 2004-12-03 at 16:53, Paul DuBois wrote:
> At 16:34 -0800 12/3/04, Mark Maunder wrote:
> >It looks like when mysql coerces character strings into integers, it
> >turns them into signed int's. Obviously if the column is unsigned, this
> >is a problem. Don't use quotes you say. Problem is that the perl DBI API
> >seems to put quotes around everything. So when I grab a really really
> >large integer from the db using the perl api, and then try to get a
> >child record referencing the same large integer ID, the DB doesn't give
> >me anything because it's coercing a large chunk of text into a signed
> >integer and truncating it.
> 
> You don't indicate when it is that DBI is putting "quotes around
> everything", but if what you mean is that values bound to placeholders
> get quoted, you can suppress that.  perldoc DBI shows this information:
> 
> Data Types for Placeholders
> 
> The "\%attr" parameter can be used to hint at the data type the
> placeholder should have. Typically, the driver is only interested
> in knowing if the placeholder should be bound as a number or a
> string.
>
>   $sth->bind_param(1, $value, { TYPE => SQL_INTEGER });
> 
> As a short-cut for the common case, the data type can be passed
> directly, in place of the "\%attr" hash reference. This example is
> equivalent to the one above:
>
>   $sth->bind_param(1, $value, SQL_INTEGER);
>
> The "TYPE" value indicates the standard (non-driver-specific) type
> for this parameter. To specify the driver-specific type, the driver
> may support a driver-specific attribute, such as "{ ora_type => 97
> }".
>  
> The SQL_INTEGER and other related constants can be imported using
>   use DBI qw(:sql_types);
> 
> See "DBI Constants" for more information.
> 
> 
> >
> >Another not-really-a-bug but definitely a pitfall. And it sucks because
> >after not being able to use md5 hashes to index my records using
> >BINARY(16) because binary isn't really binary because it cuts off
> >spaces, I'm losing a digit of my next-best-thing thanks to unsigned
> >integers which are actually signed.
> >
> >Don't make me go spend my life savings on Oracle!
> >
> >Here's an example in case you're really bored. The problem below exists
> >because 9358082631434058695 > 2^63
> >
> >##First with no quotes around the large integer:
> >mysql> select job_id from wordbarrel_9a where
> >job_id=9358082631434058695;
> >+---------------------+
> >| job_id              |
> >+---------------------+
> >| 9358082631434058695 |
> >+---------------------+
> >1 row in set (0.00 sec)
> >
> >##Then with quotes:
> >mysql> select job_id from wordbarrel_9a where
> >job_id='9358082631434058695';
> >Empty set (0.00 sec)
> >
> >mysql> desc wordbarrel_9a;
> >+--------+---------------------+------+-----+---------+-------+
> >| Field  | Type                | Null | Key | Default | Extra |
> >+--------+---------------------+------+-----+---------+-------+
> >| job_id | bigint(20) unsigned |      | PRI | 0       |       |
> >+--------+---------------------+------+-----+---------+-------+
-- 
Mark D. Maunder <[EMAIL PROTECTED]>
http://www.workzoo.com/
"The Best jobs from the Best Job Sites."





mysql and large integers

2004-12-03 Thread Mark Maunder
It looks like when mysql coerces character strings into integers, it
turns them into signed int's. Obviously if the column is unsigned, this
is a problem. Don't use quotes you say. Problem is that the perl DBI API
seems to put quotes around everything. So when I grab a really really
large integer from the db using the perl api, and then try to get a
child record referencing the same large integer ID, the DB doesn't give
me anything because it's coercing a large chunk of text into a signed
integer and truncating it. 

Another not-really-a-bug but definitely a pitfall. And it sucks because
after not being able to use md5 hashes to index my records using
BINARY(16) because binary isn't really binary because it cuts off
spaces, I'm losing a digit of my next-best-thing thanks to unsigned
integers which are actually signed. 

Don't make me go spend my life savings on Oracle! 

Here's an example in case you're really bored. The problem below exists
because 9358082631434058695 > 2^63

##First with no quotes around the large integer:
mysql> select job_id from wordbarrel_9a where
job_id=9358082631434058695;
+---------------------+
| job_id              |
+---------------------+
| 9358082631434058695 |
+---------------------+
1 row in set (0.00 sec)

##Then with quotes:
mysql> select job_id from wordbarrel_9a where
job_id='9358082631434058695';
Empty set (0.00 sec)

mysql> desc wordbarrel_9a;
+--------+---------------------+------+-----+---------+-------+
| Field  | Type                | Null | Key | Default | Extra |
+--------+---------------------+------+-----+---------+-------+
| job_id | bigint(20) unsigned |      | PRI | 0       |       |
+--------+---------------------+------+-----+---------+-------+
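The truncation described above is reproducible outside MySQL: reinterpreting the unsigned 64-bit value as signed flips it negative, which is roughly what happens when the quoted string is coerced through a signed conversion. A sketch:

```python
import struct

job_id = 9358082631434058695   # > 2**63, so it only fits in unsigned 64-bit
assert job_id > 2**63

# Reinterpret the same 8 bytes as a signed 64-bit integer.
as_signed = struct.unpack('<q', struct.pack('<Q', job_id))[0]
print(as_signed)  # -9088661442275492921
```

Any value above 2**63 - 1 comes out negative under this reinterpretation, so a signed coercion of the quoted string can never match the stored unsigned value.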








Re: if last binary byte is space (ascii 32) mysql drops it

2004-12-03 Thread Mark Maunder
So what you're saying is that BINARY isn't binary because it chomps
spaces off the end, thereby corrupting the binary data. Sounds like a
bug. Should I report it?

On Fri, 2004-12-03 at 12:30, Paul DuBois wrote:
> I agree about using the TINYBLOB to avoid trailing space truncation, but
> BINARY and VARBINARY are MySQL data types now.
> 
> http://dev.mysql.com/doc/mysql/en/BINARY_VARBINARY.html






Re: if last binary byte is space (ascii 32) mysql drops it

2004-12-03 Thread Mark Maunder
Thing is I don't want a dynamic table for performance reasons. I'm
storing an MD5 hash which is 16 bytes. As a workaround I'm only using 8
bytes of the hash and storing it in a bigint(20) column for now. So I
guess eighteen quintillion, four hundred forty-six quadrillion, seven
hundred forty-four trillion, seventy-three billion, seven hundred nine
million, five hundred fifty-two thousand, six hundred sixteen possible
combinations will have to be unique enough for now.

This turned out to be a very hard little issue to debug. Perhaps others
will be luckier. I'd like to see it fixed asap.
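The workaround described above (keeping only the first 8 bytes of the MD5 digest so it fits in an unsigned BIGINT) can be sketched like this; the function name is made up:

```python
import hashlib

def key64(s):
    """First 8 bytes of the MD5 digest, as an unsigned 64-bit integer."""
    digest = hashlib.md5(s.encode()).digest()   # 16 bytes
    return int.from_bytes(digest[:8], 'big')    # keep half -> fits BIGINT

k = key64('some primary key material')
assert 0 <= k < 2**64   # always storable in bigint unsigned
print(k)
```

Note the birthday bound: with a 64-bit key, collisions become likely after on the order of 2**32 distinct values, so as the post says this is a stopgap rather than a real substitute for the full 16-byte hash.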

On Fri, 2004-12-03 at 12:10, Dan Nelson wrote:
> In the last episode (Dec 03), Mark Maunder said:
> > This all started when one of the 16 byte binary primary keys kicked out
> > a duplicate key error. It seems mysql does not store the last byte of
> > the binary value if it is a space. That is, ascii 32 or hex 20. 
> > 
> > How do I force it to store the space? Thanks!
> > 
> > create table testtable ( id binary(16) NOT NULL PRIMARY KEY ) ENGINE=MyISAM;
> 
> There is no native mysql BINARY type; it's mapped to CHAR(n) BINARY.
> CHARS and VARCHARS trim trailing blanks (A known issue, but low
> priority I think).  Try using a TINYBLOB column type instead.
-- 
Mark D. Maunder <[EMAIL PROTECTED]>
http://www.workzoo.com/
"The Best jobs from the Best Job Sites."





if last binary byte is space (ascii 32) mysql drops it

2004-12-03 Thread Mark Maunder
This all started when one of the 16 byte binary primary keys kicked out
a duplicate key error. It seems mysql does not store the last byte of
the binary value if it is a space. That is, ascii 32 or hex 20. 

How do I force it to store the space? Thanks!

create table testtable ( id binary(16) NOT NULL PRIMARY KEY )
ENGINE=MyISAM;

insert into testtable ( id ) values
(0x3b3331105ee3f0779ad5f041e75f9420);

select hex(id) from testtable;
#HEX value retrieved is 3B3331105EE3F0779AD5F041E75F94 and has length 30

select hex(id) from testtable where
id=0x3b3331105ee3f0779ad5f041e75f9420;
#nothing found
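The behaviour is easy to model outside MySQL: the stored value is the original 16 bytes with trailing spaces (0x20) stripped, which is exactly why the retrieved hex string above is 30 characters instead of 32. A sketch:

```python
value = bytes.fromhex('3b3331105ee3f0779ad5f041e75f9420')
assert len(value) == 16 and value.endswith(b' ')   # 0x20 is ASCII space

stored = value.rstrip(b' ')     # what CHAR(16) BINARY effectively keeps
print(stored.hex().upper())     # 3B3331105EE3F0779AD5F041E75F94
print(len(stored.hex()))        # 30
```

Any digest whose final byte happens to be 0x20 loses that byte on storage, so two distinct 16-byte hashes differing only in a trailing space collide, matching the duplicate-key error reported here.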







Re: Prepared statement for MySQL 4.1

2004-11-11 Thread Mark Maunder
Scott,

http://dev.mysql.com/tech-resources/articles/4.1/prepared-statements.html

regards,

Mark.

On Thu, 2004-11-11 at 15:38, Scott Hamm wrote:
> I've read the article about 'prepared statement' found in MySQL 4.1, and am
> not sure if I understood what 'prepared statement' does and how can it
> benefit us.  Can anyone elaborate on what 'prepared statement' could do with
> examples where possible?
> 
> Thanks,
> 
> 
> The Newbie Scott






Re: Specific issues with MySQL 4.0.16 on Linux or Windows 2000/3 related to tmp tables?

2004-11-10 Thread Mark Maunder
ed_log_groups  1
> innodb_max_dirty_pages_pct  90
> interactive_timeout 28800
> join_buffer_size131072
> key_buffer_size 134217728
> language   
> /opt/mysql-pro-4.0.16/share/mysql/english/
> large_files_support ON
> local_infileON
> locked_in_memoryOFF
> log ON
> log_update  OFF
> log_bin OFF
> log_slave_updates   OFF
> log_slow_queriesOFF
> log_warningsOFF
> long_query_time 10
> low_priority_updatesOFF
> lower_case_table_names  OFF
> max_allowed_packet  1048576
> max_binlog_cache_size   4294967295
> max_binlog_size 1073741824
> max_connections 100
> max_connect_errors  10
> max_delayed_threads 20
> max_heap_table_size 16777216
> max_join_size   4294967295
> max_relay_log_size  0
> max_seeks_for_key   4294967295
> max_sort_length 1024
> max_user_connections0
> max_tmp_tables  32
> max_write_lock_count4294967295
> myisam_max_extra_sort_file_size 268435456
> myisam_max_sort_file_size   2147483647
> myisam_repair_threads   1
> myisam_recover_options  OFF
> myisam_sort_buffer_size 8388608
> net_buffer_length   16384
> net_read_timeout30
> net_retry_count 10
> net_write_timeout   60
> new OFF
> open_files_limit1024
> pid_file   
> /opt/mysql-pro-4.0.16/data/testsystem.pid
> log_error
> port3306
> protocol_version10
> query_alloc_block_size  8192
> query_cache_limit   1048576
> query_cache_size0
> query_cache_typeON
> query_prealloc_size 8192
> range_alloc_block_size  2048
> read_buffer_size131072
> read_only   OFF
> read_rnd_buffer_size262144
> rpl_recovery_rank   0
> server_id   0
> slave_net_timeout   3600
> skip_external_locking   ON
> skip_networking OFF
> skip_show_database  OFF
> slow_launch_time2
> socket  /tmp/mysql.sock
> sort_buffer_size2097144
> sql_mode0
> table_cache 64
> table_type  MYISAM
> thread_cache_size   0
> thread_stack126976
> tx_isolationREPEATABLE-READ
> timezonePST
> tmp_table_size  209715200
> tmpdir  /tmp/
> transaction_alloc_block_size8192
> transaction_prealloc_size   4096
> version 4.0.16-pro-log
> wait_timeout28800
> 
> --- Mark Maunder <[EMAIL PROTECTED]> wrote:
> 
> > Please include the full query you're running, the
> > table structure, and
> > the number of rows in the table. A dump of 'show
> > variables;' would be
> > helpful too. 
> > 
> > On Wed, 2004-11-10 at 21:44, foo bar wrote:
> > > Hi Everyone,
> > > 
> > > I've been Googling unsuccessfully for specific
> > issues
> > > relating to queries run on MySQL version 4.0.16
> > > against "tmp" tables.  I have witnessed several
> > > occurrences where queries running on various
> > platforms
> > > hang in a "Copying to tmp table" state for hours
> > or
> > > days at a time.  When the same query is manually
> > run
> > > from the MySQL command line client, the query
> > returns,
> > > even on very large tables.  Could someone please
> > give
> > > me a hint as to possible tuning (or upgrade?)
> > ideas to
> > > fix this situation?  I can't find anything else on
> > > each problem system that would give me a hint as
> > to
> > > why this problem randomly occurs.  I've made
> > attempts
> > > at adjusting the "tmp_table_size" limits to have
> > the
> > > system try to do "order by" queries in memory, but
> > > this does not seem to help the situation.
> > > 
> > > Any help would be appreciated!
> > > 
> > > -Kevin
> > > 
> > > "show full processlist" output (notice that the
> > query
> > > has been running for 7.9 days!!!):
> > > | 33 | someuser | localhost:34329 | sometable |
> > Query 
> > >  | 687465 
> > > | Copying to tmp table | select
> > >   date_add( date_format(time, '%Y-%c-%d
> > 00:00:00'),
> > > INTERVAL 0 HOUR) time,
> > >   ...
> > > group by 1
> > > order by 1
> > > 
> > > 
> > >   
> > > __ 
> > > Do you Yahoo!? 
> > > Check out the new Yahoo! Front Page. 
> > > www.yahoo.com
> > 
> > 
> > 
> 
> 
> 
>   
> __ 
> Do you Yahoo!? 
> Check out the new Yahoo! Front Page. 
> www.yahoo.com
-- 
Mark D. Maunder <[EMAIL PROTECTED]>
http://www.workzoo.com/
"The Best jobs from the Best Job Sites."





Memory used by each open table?

2004-11-10 Thread Mark Maunder
Hi,

If I set the table cache to 20,000, how much memory will it consume? And
how much latency is there when mysql has to open a table before
executing a query?

Some background:

I have a database with around 1000 tables. I'll have roughly 20
concurrent connections to the DB. And in my queries I'll have a max of
10 tables in a join. So according to
http://dev.mysql.com/doc/mysql/en/Table_cache.html
I should set my table_cache to 20*10 as a minimum. This db is going to
be hit very hard though and I'd like to avoid the cost of
opening/closing tables. So I'd like to have each mysql thread have a
full cache of all tables. That's 1000*20=20,000.

Thanks,

Mark.
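The sizing arithmetic from the manual page above can be sketched directly (numbers from this post):

```python
connections = 20           # concurrent connections
max_tables_per_join = 10   # widest join in the workload
total_tables = 1000        # tables in the database

# Manual's suggested minimum: connections * max tables in a join.
minimum_table_cache = connections * max_tables_per_join
# Worst case asked about here: every connection caching every table.
full_cache = total_tables * connections

print(minimum_table_cache)  # 200
print(full_cache)           # 20000
```

The open question in the post is the per-entry overhead: the memory cost of 20,000 cache entries (plus one file descriptor per open MyISAM table) is what determines whether the full-cache setting is affordable.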





Re: Specific issues with MySQL 4.0.16 on Linux or Windows 2000/3 related to tmp tables?

2004-11-10 Thread Mark Maunder
Please include the full query you're running, the table structure, and
the number of rows in the table. A dump of 'show variables;' would be
helpful too. 

On Wed, 2004-11-10 at 21:44, foo bar wrote:
> Hi Everyone,
> 
> I've been Googling unsuccessfully for specific issues
> relating to queries run on MySQL version 4.0.16
> against "tmp" tables.  I have witnessed several
> occurrences where queries running on various platforms
> hang in a "Copying to tmp table" state for hours or
> days at a time.  When the same query is manually run
> from the MySQL command line client, the query returns,
> even on very large tables.  Could someone please give
> me a hint as to possible tuning (or upgrade?) ideas to
> fix this situation?  I can't find anything else on
> each problem system that would give me a hint as to
> why this problem randomly occurs.  I've made attempts
> at adjusting the "tmp_table_size" limits to have the
> system try to do "order by" queries in memory, but
> this does not seem to help the situation.
> 
> Any help would be appreciated!
> 
> -Kevin
> 
> "show full processlist" output (notice that the query
> has been running for 7.9 days!!!):
> | 33 | someuser | localhost:34329 | sometable | Query 
>  | 687465 
> | Copying to tmp table | select
>   date_add( date_format(time, '%Y-%c-%d 00:00:00'),
> INTERVAL 0 HOUR) time,
>   ...
> group by 1
> order by 1
> 
> 
>   
> __ 
> Do you Yahoo!? 
> Check out the new Yahoo! Front Page. 
> www.yahoo.com






Re: Posting Question

2004-11-09 Thread Mark Maunder
Post it, I'll help. 

On Tue, 2004-11-09 at 19:21, Lewick, Taylor wrote:
> I am asking before I post so I don't anger everyone...
> 
> Is this list okay to post a specific question regarding multiple row
> inserts..  
> 
> I am doing this in perl, and I need some help with the perl part...
> 
>  
> 
> Thanks,
> Taylor
-- 
Mark D. Maunder <[EMAIL PROTECTED]>
http://www.workzoo.com/
"The Best jobs from the Best Job Sites."





Re: massive fulltext indexes - what hardware?

2004-11-07 Thread Mark Maunder
We won't be serving concurrent queries. 

On Sun, 2004-11-07 at 10:41, Michael J. Pawlowsky wrote:
> Another thing to consider is how many transactions per minute/second you 
> will need to serve.
> 
> Mark Maunder wrote:
> > I'm busy building an application that will have 10 million records, each
> > with a chunk of text - about 500 words each, on average. Does anyone
> > have any benchmarks they can share with mysql's fulltext search
> > performance on indexes of this size?
> 
> 
-- 
Mark D. Maunder <[EMAIL PROTECTED]>
http://www.workzoo.com/
"The Best jobs from the Best Job Sites."





massive fulltext indexes - what hardware?

2004-11-06 Thread Mark Maunder
I'm busy building an application that will have 10 million records, each
with a chunk of text - about 500 words each, on average. Does anyone
have any benchmarks they can share with mysql's fulltext search
performance on indexes of this size?

What I'd like to know is what size server I need to run this app on. How
much RAM, how much CPU and what sort of disk channel performance I need
to provide?

I'd like to get sub 1 second responses, and all fulltext queries will be
boolean using the 'IN BOOLEAN MODE' modifier.
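For example, a typical query will look something like the following (table and column names are placeholders, not my actual schema):

SELECT id, title
FROM documents
WHERE MATCH (body) AGAINST ('+mysql +fulltext -oracle' IN BOOLEAN MODE)
LIMIT 20;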

Thanks,

Mark.





Re: load index into cache not working

2004-11-05 Thread Mark Maunder
This bug is a problem with the reporting when sending a SIGHUP or the
command mysqladmin debug. What I'm seeing is the process simply isn't
growing in memory. I'm looking at the process size in 'top'. 

I do notice that it grows once I start hitting it with queries. I'd
expect it to grow as soon as I preload the index. Isn't that the point
of preloading?
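For reference, what I'm doing is roughly this (key_buffer_size is set in my.cnf; shown here as a SET for clarity):

SET GLOBAL key_buffer_size = 1073741824;   -- 1 Gig
LOAD INDEX INTO CACHE fttest;
-- or, to preload only the non-leaf index blocks (needs far less memory):
LOAD INDEX INTO CACHE fttest IGNORE LEAVES;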

On Fri, 2004-11-05 at 05:25, Gleb Paharenko wrote:
> Hi.
> 
> There is a bug: http://bugs.mysql.com/bug.php?id=4285.
> 
> 
> 
> Mark Maunder <[EMAIL PROTECTED]> wrote:
> > I have a large fulltext index (the MYI file is about 750 Megs) and I've
> > set my key_buffer_size to 1 Gig. I do:
> > load index into cache fttest;
> > and I watch the Mysql process in memory, and it doesn't grow. It just
> > hangs around 250Megs. Why isn't the index loading into memory?
> > 
> > Thanks,
> > 
> > Mark.
> > 
> > 
> 
> 
> -- 
> For technical support contracts, goto https://order.mysql.com/?ref=ensita
> This email is sponsored by Ensita.NET http://www.ensita.net/
>__  ___ ___   __
>   /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
>  / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
> /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
><___/   www.mysql.com
> 
-- 
Mark D. Maunder <[EMAIL PROTECTED]>
http://www.workzoo.com/
"The Best jobs from the Best Job Sites."





Re: "Indexes use different block sizes" error with preloading fulltext indexes.

2004-11-04 Thread Mark Maunder
OK, thanks. I've reported this as a bug, and another, below. I must
admit, this production release seems flaky:
http://bugs.mysql.com/bug.php?id=6447

On Fri, 2004-11-05 at 04:03, Haitao Jiang wrote:
> Mark
> 
> It is a known problem with this feature. So far I am not aware of any
> solution to it. Just want to let you know that you are not alone
> having this problem.
> 
> Haitao
> 
> 
> On Fri, 05 Nov 2004 02:16:49 +, Mark Maunder <[EMAIL PROTECTED]> wrote:
> > I keep getting this error when trying to preload a fulltext index. I've
> > checked the block size of the fulltext index using myisamchk (is there
> > an easier way to find out block size?) and it is 2048. The block size of
> > the primary key on the same table is 1024. Is that what it means by
> > "Indexes use different block sizes"?
> > 
> > As you can see from below, I've tried to only load the fulltext index,
> > and the error persists. I have also tried setting the global
> > key_cache_block_size to 2048 and that didn't work. I have also tried
> > creating a separate key cache with its own 2048 block size and
> > preloading the index into that, and that didn't work either.
> > 
> > Any help is appreciated.
> > 
> > mysql> load index into cache fttest INDEX (ft);
> > +----------------+--------------+----------+-----------------------------------+
> > | Table          | Op           | Msg_type | Msg_text                          |
> > +----------------+--------------+----------+-----------------------------------+
> > | workzoo.fttest | preload_keys | error    | Indexes use different block sizes |
> > | workzoo.fttest | preload_keys | status   | Operation failed                  |
> > +----------------+--------------+----------+-----------------------------------+
> > 2 rows in set (0.00 sec)
> > 
> > Mark.
> > 
-- 
Mark D. Maunder <[EMAIL PROTECTED]>
http://www.workzoo.com/
"The Best jobs from the Best Job Sites."





load index into cache not working

2004-11-04 Thread Mark Maunder
I have a large fulltext index (the MYI file is about 750 Megs) and I've
set my key_buffer_size to 1 Gig. I do:
load index into cache fttest;
and I watch the Mysql process in memory, and it doesn't grow. It just
hangs around 250Megs. Why isn't the index loading into memory?

Thanks,

Mark.





"Indexes use different block sizes" error with preloading fulltext indexes.

2004-11-04 Thread Mark Maunder
I keep getting this error when trying to preload a fulltext index. I've
checked the block size of the fulltext index using myisamchk (is there
an easier way to find out block size?) and it is 2048. The block size of
the primary key on the same table is 1024. Is that what it means by
"Indexes use different block sizes"? 

As you can see from below, I've tried to load only the fulltext index,
and the error persists. I have also tried setting the global
key_cache_block_size to 2048, and that didn't work. I have also tried
creating a separate key cache with its own 2048 block size and
preloading the index into that, and that didn't work either.
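The separate key cache attempt looked roughly like this (cache name and size are arbitrary):

SET GLOBAL ft_cache.key_buffer_size = 268435456;    -- 256M
SET GLOBAL ft_cache.key_cache_block_size = 2048;    -- match the fulltext index block size
CACHE INDEX fttest INDEX (ft) IN ft_cache;
LOAD INDEX INTO CACHE fttest INDEX (ft);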

Any help is appreciated. 

mysql> load index into cache fttest INDEX (ft);
+----------------+--------------+----------+-----------------------------------+
| Table          | Op           | Msg_type | Msg_text                          |
+----------------+--------------+----------+-----------------------------------+
| workzoo.fttest | preload_keys | error    | Indexes use different block sizes |
| workzoo.fttest | preload_keys | status   | Operation failed                  |
+----------------+--------------+----------+-----------------------------------+
2 rows in set (0.00 sec)

Mark.






Leasing time on a superfast mysql box

2004-09-27 Thread Mark Maunder
I have a large database of zip codes with the longitude and latitude of
each, and I periodically generate a lookup table for each zip showing
all zip codes within various radii. The process takes a day on my poor
workstation's 2.5GHz CPU, but I need to do it faster. Does anyone know of
somewhere I can borrow or lease some time on a very fast mysql server to
do this? All I need is mysql and perl on the machine. All processing,
including the trig, is done by mysql.
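For the curious, the distance calculation is essentially a great-circle formula along these lines (table and column names are placeholders; 3959 is roughly the Earth's radius in miles):

SELECT z2.zip,
       3959 * ACOS(COS(RADIANS(z1.lat)) * COS(RADIANS(z2.lat)) *
                   COS(RADIANS(z2.lon) - RADIANS(z1.lon)) +
                   SIN(RADIANS(z1.lat)) * SIN(RADIANS(z2.lat))) AS miles
FROM zips z1 CROSS JOIN zips z2
WHERE z1.zip = '90210'
HAVING miles <= 25;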

Thanks,

Mark





Re: HEAP tables vs MYISAM on ramdisk

2004-02-23 Thread Mark Maunder
4.1.1 is packed with features I'm dying to have on my production server,
but I had it on my dev box, and I got some table corruption which,
admittedly, I was too lazy to try to reproduce. So I've downgraded to
production 4.0 again. I have a heavily updated fulltext index which may be
the root of the evil.

The last 3 paragraphs of this doc give me the impression that key
caching helps with updates too:
http://www.mysql.com/doc/en/MyISAM_key_cache.html

The strange thing is that I have a 128 meg key_buffer on my server, but
I still get a huge speed increase by moving the MYI file of my heavily
utilized fulltext index table to a ramdisk. I suppose a cache by
definition can't be as efficient as if the entire index were being
accessed directly from memory. But the MYI file is only 14 megs, so the
entire thing should be cacheable.

On Mon, 2004-02-23 at 19:47, Eric B. wrote:
> Index caches are new to 4.1.x, but key caches have been around for a while.
> Definitely in 4.0, can't remember about 3.x.  Either way though, I don't see
> either helping with inserts or updates.  Only with queries.
> 
> MySQL does suggest using a separate key cache for temporary tables though:
> http://www.mysql.com/doc/en/Multiple_key_caches.html
> 
> I haven't tried 4.1.1 yet (I'm just about to d/l it), but would expect it to
> be pretty stable.
> 
> Eric
> 
> 
> "Mark Maunder" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> > The table I'm using is non-critical data, so it's not really an issue
> > for me. But I was browsing through the mysql manual looking for a way to
> > rebuild an MYI file from the .frm and MYD file (is there a way?) when I
> > came across this:
> >
> > http://www.mysql.com/doc/en/CACHE_INDEX.html
> >
> > Index caches are only available in MySQL 4.1.1 unfortunately, so perhaps
> > my ramdisk idea is a workaround for index caching until 4.1.1 is stable?
> >
> > On Mon, 2004-02-23 at 15:34, Eric B. wrote:
> > > How are you ensuring synchronization between the ram disk and the HD?  Is
> > > there a writeback / writethrough mechanism for ram disks?  Are you not
> > > risking major data loss if ever you have a power failure or PC failure?
> > >
> > > Thanks for the info!
> > >
> > > Eric
> > >
> > > "Mark Maunder" <[EMAIL PROTECTED]> wrote in message
> > > news:[EMAIL PROTECTED]
> > > > Since HEAP tables don't support fulltext indexes, is moving MYISAM
> > > > tables to ramdisk an acceptable workaround?
> > > >
> > > > On Sat, 2004-02-21 at 18:35, Mark Maunder wrote:
> > > > > I've noticed a 4 times insert speed improvement by moving the MYI index
> > > > > file of a myisam table to a ramdisk. The MYD file is still on a physical
> > > > > disk, and I benchmarked the difference between moving just the index
> > > > > file, or moving both, and it was only a 10% difference in speed. The
> > > > > table has a large fulltext index.
> > > > >
> > > > > Has anyone else played with moving MYI files to ramdisk for performance?
> > > > > Any caveats that you know of, besides running out of ramdisk space?
> > > > >
> > > > >
> >
> >
> >
> >
> 







Re: HEAP tables vs MYISAM on ramdisk

2004-02-23 Thread Mark Maunder
The table I'm using is non-critical data, so it's not really an issue
for me. But I was browsing through the mysql manual looking for a way to
rebuild an MYI file from the .frm and MYD file (is there a way?) when I
came across this:

http://www.mysql.com/doc/en/CACHE_INDEX.html

Index caches are only available in MySQL 4.1.1 unfortunately, so perhaps
my ramdisk idea is a workaround for index caching until 4.1.1 is stable?

On Mon, 2004-02-23 at 15:34, Eric B. wrote:
> How are you ensuring synchronization between the ram disk and the HD?  Is
> there a writeback / writethrough mechanism for ram disks?  Are you not
> risking major data loss if ever you have a power failure or PC failure?
> 
> Thanks for the info!
> 
> Eric
> 
> "Mark Maunder" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> > Since HEAP tables don't support fulltext indexes, is moving MYISAM
> > tables to ramdisk an acceptable workaround?
> >
> > On Sat, 2004-02-21 at 18:35, Mark Maunder wrote:
> > > I've noticed a 4 times insert speed improvement by moving the MYI index
> > > file of a myisam table to a ramdisk. The MYD file is still on a physical
> > > disk, and I benchmarked the difference between moving just the index
> > > file, or moving both, and it was only a 10% difference in speed. The
> > > table has a large fulltext index.
> > >
> > > Has anyone else played with moving MYI files to ramdisk for performance?
> > > Any caveats that you know of, besides running out of ramdisk space?
> > >
> > >






HEAP tables vs MYISAM on ramdisk

2004-02-22 Thread Mark Maunder
Since HEAP tables don't support fulltext indexes, is moving MYISAM
tables to ramdisk an acceptable workaround?

On Sat, 2004-02-21 at 18:35, Mark Maunder wrote:
> I've noticed a 4 times insert speed improvement by moving the MYI index
> file of a myisam table to a ramdisk. The MYD file is still on a physical
> disk, and I benchmarked the difference between moving just the index
> file, or moving both, and it was only a 10% difference in speed. The
> table has a large fulltext index.
> 
> Has anyone else played with moving MYI files to ramdisk for performance?
> Any caveats that you know of, besides running out of ramdisk space? 
> 
> 
-- 
Mark Maunder <[EMAIL PROTECTED]>
ZipTree.com





moving MYI's to ramdisk

2004-02-21 Thread Mark Maunder
I've noticed a 4 times insert speed improvement by moving the MYI index
file of a myisam table to a ramdisk. The MYD file is still on a physical
disk, and I benchmarked the difference between moving just the index
file, or moving both, and it was only a 10% difference in speed. The
table has a large fulltext index.

Has anyone else played with moving MYI files to ramdisk for performance?
Any caveats that you know of, besides running out of ramdisk space? 








Re: Last inserted id

2004-02-19 Thread Mark Maunder
'connection ID' below should be 'last insert id'. Sorry, it's 2am here
and I'm fresh out of coffee. BTW the information you want is here:
http://www.mysql.com/doc/en/Information_functions.html#IDX1409

"The last ID that was generated is maintained in the server on a
per-connection basis. This means the value the function returns to a
given client is the most recent AUTO_INCREMENT value generated by that
client. The value cannot be affected by other clients, even if they
generate AUTO_INCREMENT values of their own. This behavior ensures that
you can retrieve your own ID without concern for the activity of other
clients, and without the need for locks or transactions."
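A minimal illustration of the per-connection behavior (table name is made up):

-- connection A
INSERT INTO orders (item) VALUES ('book');
SELECT LAST_INSERT_ID();   -- returns A's id

-- connection B, inserting concurrently
INSERT INTO orders (item) VALUES ('pen');
SELECT LAST_INSERT_ID();   -- returns B's id; A's value is unaffected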


On Thu, 2004-02-19 at 09:26, Mark Maunder wrote:
> The two simultaneous insert statements will have separate connections
> to the database, and last_insert_id() is connection specific. So if
> you're running apache, and you're worried about two different apache
> child processes getting the same connection ID, don't. Because those two
> children will have separate connections to the DB.
> 
> If you're forking or threading and using the same connection, it becomes
> a bit more complex.
> 
> Mark.
> 
> On Thu, 2004-02-19 at 09:17, Binay wrote:
> > Hi 
> > 
> > I have a php script which insert a row in one of my table. Now i want the 
> > auto_generated id produced by this insert query. I know i can use mysql_insert_id 
> > function to fetch that auto_generated id. But my question is say two or more 
> > person visiting the same page/script causes a insert operation in the table at the 
> > same time. so there are chances of getting wrong auto_generated ids for different 
> > visitors. why am i saying this can be clear from below example.
> > 
> > Say one insert operation is in the progress and by the time control switches/call 
> > to mysql_insert_id function another insert operation starts .. so ultimately 
> > mysql_insert_id will fetch 2nd insert operation id which should not be the case. 
> > How to resolve this case??
> > 
> > Thanks
> > 
> > Binay
-- 
Mark Maunder <[EMAIL PROTECTED]>
ZipTree.com





Re: Last inserted id

2004-02-19 Thread Mark Maunder
The two simultaneous insert statements will have separate connections
to the database, and last_insert_id() is connection specific. So if
you're running apache, and you're worried about two different apache
child processes getting the same connection ID, don't. Because those two
children will have separate connections to the DB.

If you're forking or threading and using the same connection, it becomes
a bit more complex.

Mark.

On Thu, 2004-02-19 at 09:17, Binay wrote:
> Hi 
> 
> I have a php script which insert a row in one of my table. Now i want the 
> auto_generated id produced by this insert query. I know i can use mysql_insert_id 
> function to fetch that auto_generated id. But my question is say two or more person 
> visiting the same page/script causes a insert operation in the table at the same 
> time. so there are chances of getting wrong auto_generated ids for different 
> visitors. why am i saying this can be clear from below example.
> 
> Say one insert operation is in the progress and by the time control switches/call to 
> mysql_insert_id function another insert operation starts .. so ultimately 
> mysql_insert_id will fetch 2nd insert operation id which should not be the case. How 
> to resolve this case??
> 
> Thanks
> 
> Binay






Fulltext performance and RAM upgrade

2004-02-11 Thread Mark Maunder
I am considering an upgrade on our server from 512 megs of RAM to 1 or
possibly 1.5 gigs, and would like to know if I'm going to get a
significant performance boost. Any suggestions or information is much
appreciated. Our configuration is as follows:

The table has around 100,000 records (but will grow up to 1 million
records) with a single fulltext index on two fields (both varchar). The
MYI file for the table is about 20 megs. 

The OS is redhat linux kernel version 2.4.18-27. The web server is
Apache. The processor is an AMD Duron 1 GigaHZ. The table is hit hard by
the webserver. It does huge multi-row inserts (multiple inserts in a
single statement) of up to 2000 records, and then queries the same table
immediately expecting the index to be updated with the new data which
has been added to the existing 100,000 records. We get multiple
concurrent requests like this hitting the DB very hard. 

I run my web server and mysql server on the same machine. I have the
following settings in my.cnf:

key_buffer=128M
table_cache=256
sort_buffer=1M
read_buffer_size=1M

Also, I have ft_min_word_len=2 (we have to match state abbreviations)
and have disabled stopwords. 

The fulltext queries are a combination of boolean and non-boolean (if
that helps).

I am hoping that with the extra RAM I can increase the key_buffer to
around 700 Megs (Will that help?) and get better file caching from the
operating system. Running `free` on the OS gives me the following:

             total       used       free     shared    buffers     cached
Mem:        505940     498268       7672          0       9352     194856
-/+ buffers/cache:     294060     211880
Swap:      1052248       8060    1044188

The web server takes about half of available RAM. The slow queries are
the selects on the fulltext index after the huge inserts. I am also
getting some slow insert statements on other tables, but less so. I
suspect it's a side effect of heavy load on the DB from the big
insert/select statements on the fulltext index.

Any help or suggestions are appreciated. Thanks,

Mark.











slow inserts and selects for fulltext indexes (mysql4)

2002-03-20 Thread Mark Maunder

Hi,

Is there a way to speed up inserts on a table with three fulltext
indexes? I'm using the multiple value insert format like:
insert into blah (field1, field2) values ('val1', 'val2'), ('val2',
'val3'), etc..

Perhaps this is a bug in the current mysql4 bk snapshot, but inserts and
selects on a table with three fulltext indexes are taking up to 23
seconds!

On average it's still taking up to 2.5 seconds to insert 50 records into
a table with 15000 records already (Which I think is kinda slow). But
every now and then it'll really slow down. I just logged 9 seconds for a
fulltext select on a table with 15000 records, and 23 seconds for an
insert of 106 rows on the same table. The machine has 400 megs of RAM
and is a PIII 750 with IDE drives (as opposed to SCSI), running MySQL
version 4 (the latest bk snapshot).

Here are the entries from the 'slow.log':
##
SET timestamp=1016613725;
select SQL_CACHE
id,search_category,search_jobtype,joblink,title,d_date_posted,n_date_posted,jobsite,jobsite_home,jobsite_con

/usr/local/mysql/libexec/mysqld, Version: 4.0.2-alpha-log, started with:

Tcp port: 3306  Unix socket: /tmp/mysql.sock
Time Id CommandArgument
# Time: 020320  9:01:40
# User@Host: root[root] @ localhost []
# Query_time: 9  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
use workzoo;
SET timestamp=1016614900;
insert into search_cache (ctime,
search_category,search_jobtype,joblink,title,d_date_posted,n_date_posted,jobsite,jobsite_home

# Time: 020320  9:02:04
# User@Host: root[root] @ localhost []
# Query_time: 23  Lock_time: 0  Rows_sent: 106  Rows_examined: 13081
#

My my.cnf is included below. Based on the config above, particularly the
amount of RAM, is this the ideal config for my machine? (considering
that I do large inserts, updates, and selects on all three fulltext
indexes on the same table)

btw, I'm getting great performance on multiple sequential updates (over
200 records take under 0.1 seconds) by locking the table.
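The locking trick, for anyone searching the archives, is just this (with the batch of updates run in between):

LOCK TABLES search_cache WRITE;
-- ... run the sequential UPDATE statements here ...
UNLOCK TABLES;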

[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking

memlock

set-variable= key_buffer=100M
set-variable= max_allowed_packet=5M
set-variable= table_cache=256
set-variable= record_buffer=1M
set-variable= sort_buffer=20M
set-variable= net_buffer_length=8K
set-variable= myisam_sort_buffer_size=20M
set-variable= long_query_time=1
set-variable= ft_min_word_len=1
#Query cache configured for on demand only
set-variable= query_cache_limit=2M
set-variable= query_cache_size=2M
set-variable= query_cache_startup_type=2

bind-address=10.1.1.1

log-bin
server-id   = 1

# Point the following paths to different dedicated disks
#tmpdir = /tmp/
#log-update = /path-to-dedicated-directory/hostname
log = /var/log/mysql/general.log
log-slow-queries = /var/log/mysql/slow.log

Thanks,

Mark.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: mysql.sock

2002-03-17 Thread Mark Maunder

Art,

Do you mean, whenever you try to run the 'mysql' client you get that error? That
usually happens because you are trying to connect to the mysql server using the
mysql client running on the same machine i.e. localhost. It's because the mysql
client uses a socket file when connecting locally and the mysqld server is not
storing the file in the default location where the client expects it. Fix it by
putting this entry in the file /etc/my.cnf under the [client] section.
socket  = /tmp/mysql.sock
or whatever the location of your socket file is. If you can't find it do a
find / -name "mysql.sock"

Hope that helps,

~mark.
http://www.workzoo.com/


Art Fore wrote:

> When I try to run mysqld or safe_mysqld, I get "cannot connect to local
> MySQL server through socket '/var/lib/mysql/mysql.sock' (111)". Check that
> mysql is running and that the socket '/var/lib/mysql/mysql.sock' exists!
>
> This file does not exist. Where does it come from or how do you create it? I
> have done a search on the website, but no results. /etc/hosts file is also
> correct.
>
> Art
>






production site running mysql 4.0

2002-03-16 Thread Mark Maunder

Just in case you were wondering what the state of mysql version 4 is.
We're running a production site with a reasonably loaded MySQL 4.0
back-end. You can visit the site at http://www.workzoo.com/

The main motivation for mysql4 was the enhanced fulltext index support
which is awesome and rapidly evolving.

Good job Monty and the rest of the gang. (and thanks for all the support
via the lists).

~mark.






Re: new user questions

2002-02-12 Thread Mark Maunder

Jim,

I'd like to help, but I don't answer questions directly. Please post your
question to the mysql mailing list in future. I have crossposted this to the
list.

kind regards,

Mark Maunder.

Jim Chivas wrote:

> Mark:
>
> I work in a School where the teachers want to use mysql.
>
> A while back you referred me to the web page on
> www.mysql.com/doc/U/s/user_Account_Management.html as a reference for the
> following questions:
>
> Some were answered but some were not. Could you supply the syntax to do
> the following:
>
> 1. Once I have created a Teacher what is the 'grant syntax to allow this
> Teacher to create/modify their new databases/tables but not to by default
> use any other tables.
>
> 2. Once I have setup their students, what grant syntax would I use to
> allow the students to add/modify but NOT delete the database?
>
> 3. Once the school year is over, how do I remove any databases these
> Teachers have created?
>
> 4. Where are these mysql databases stored? In the user's $HOME area, or ??
>
> Any help would be appreciated.
>
> Thanks
>
> Jim
>
> -- -
>
> Jim Chivas  email:  [EMAIL PROTECTED]
> Information And Computing services  fax:(604) 323-5349
> Langara College Voice:  (604) 323-5390
> 100 West 49th  Avenue   http://www.langara.bc.ca
> Vancouver, B.C., Canada
> V5Y 2Z6












MySQL on the Playstation2?

2002-01-30 Thread Mark Maunder

Linux for PS2 is being released in Europe in May this year:
http://www.scee.com/corporate/pressreleases.jhtml

Who's going to be the first to get MySQL to compile on PS2. ;-)

~mark.






Re: MYSQL : XML storage

2002-01-09 Thread Mark Maunder

Clive Bredenkamp wrote:

> Hi All,
>
> I have about 15GB of xml files, each ranging from about 400 bytes to 4k (some
> exceptions being up to a few MB, but mainly small), and am planning to stick
> these files in a database for better maintenance.
>
> Does anyone have advice on the best way in which to import so many files or
> any advice on storage structure.
>

Howzit,

You would get slightly better performance if you stored the files on disk and used the
database to index them. Alternatively, you can use the BLOB datatype to store
binary data up to 64K (or the TEXT type if it is not binary). Also, check out
MEDIUMBLOB (up to 16 megs) and LONGBLOB (up to 4 gigs) for larger files.
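A possible schema for the BLOB approach (made up, adjust to taste):

CREATE TABLE xml_files (
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    filename VARCHAR(255) NOT NULL UNIQUE,
    content  MEDIUMBLOB NOT NULL
);

INSERT INTO xml_files (filename, content)
VALUES ('order-1001.xml', LOAD_FILE('/data/xml/order-1001.xml'));

Note that LOAD_FILE requires the FILE privilege and reads the file on the server host.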

As far as import options go, you can either use the command line client like so:

mysql -h mark -u root -pblah db_name < input_sql_file.sql

Or you can write yourself a neat little perl script to do it too.

~mark






Re: temp workaround for fulltext 50% occurence threshold

2001-12-20 Thread Mark Maunder

Sergei Golubchik wrote:

>
> > > Hmm, then 4.0.1 (with IN BOOLEAN MODE) won't help either :-(
> > > It uses the same scoring scheme as the above query in 4.0.0
> >
> > Where can I get 4.0.1? I dont see it on the website.
>
> Because it's not officially out yet :-)
> (you can always use it before "official" release, fetching our
> source tree - see the manual - but as far as I understand you'd
> rather not to compile mysql yourself)
>
> We expect 4.0.1 to be out this week.
>
> > OK the scoring isn't that important. Will IN BOOLEAN MODE without using '+' or '-'
> > operators give me a list ordered by best match first without the 50% threshold?
>
> Without 50% threshhold, yes.
> Best match - hmm, it has very simple scoring,
> for query "aaa bbb" (OR-type query), the row that contain two words is
> always scored higher than the row with only one word.
> There's no complex statistics involved.

That's exactly what I need. Thanks very much for all your help!
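So a plain OR-type query without the +/- operators, something like the following (using the table from my earlier example), should do it:

SELECT id, body,
       MATCH (title, body) AGAINST ('test phrase' IN BOOLEAN MODE) AS score
FROM articles
WHERE MATCH (title, body) AGAINST ('test phrase' IN BOOLEAN MODE)
ORDER BY score DESC;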






Re: temp workaround for fulltext 50% occurrence threshold

2001-12-20 Thread Mark Maunder

Sergei Golubchik wrote:

> Mark, there's no point in discussing how things could be
> done in the 3.23 branch - nothing can be changed there;
> this is exactly the reason we call it "stable".
>
> How to get rid of the 50% threshold is explained in the manual -
> for MySQL-3.23.x the only way is to modify the source (one #define)
> and rebuild. Boolean search in MySQL-4.0 doesn't have the 50% threshold.

I'm using version 4.

I tried using 'IN BOOLEAN MODE' under version 4.0.0 but it doesn't recognize
it. I'm using the MySQL 4.0.0-alpha Server (i386) (6.2M)  binary RPM.

Should I compile from source? I would like to but according to the website:
"If the compiler reports version 2.96, then there is a problem (this is the
case, for example on RH 7.x series or Mandrake 8.x). In this case, you should
not try to compile your own binary before downgrading to one of the compilers
mentioned above.".
I'd rather not downgrade my compiler.

The manual says "Since version 4.0.1 MySQL can also perform boolean fulltext
searches using IN BOOLEAN MODE modifier." but 4.0.1 is nowhere to be found.

I tried using the + operator (making all words required), hoping that it would
put the database into boolean mode and disregard the 50% threshold,
but I can't get it to return the score correctly with a query like:
mysql> SELECT id, body, MATCH title,body AGAINST (
-> '+test +phrase') AS score
-> FROM articles WHERE MATCH (title,body) AGAINST
-> ('+test +phrase');

The scores returned are all equal to each other - and the query of course does a
logical AND, which is not what I want. I just want the results ordered by 'best
match first' with a point score returned.

kind regards,

Mark.






temp workaround for fulltext 50% occurrence threshold

2001-12-20 Thread Mark Maunder

I've figured out a temp workaround for the problem/feature of words that
appear in more than 50% of records in a fulltext index being considered
stopwords. I just added as many dummy records as there are real records
in the table. A fulltext search will now not disregard any words based
on their frequency.

For performance I added a column called dummy with a flag set indicating
if the record is real or dummy. I added an index on the dummy column and
include a 'where dummy=1' clause in my SQL when doing fulltext searches.
I also have a cron job that runs a report every 20 minutes that makes
sure that 51% of the database is populated with dummy records. (*yuck!*)

Clumsy, yet effective. If anyone has a better solution out there, I
would very much like to hear from you.
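
A sketch of the scheme described above (table name and filler text are made up;
this assumes dummy=1 marks the real records, which is how the WHERE clause above
reads):

```sql
-- Flag column plus index; real rows carry dummy=1.
ALTER TABLE articles ADD COLUMN dummy TINYINT NOT NULL DEFAULT 1;
ALTER TABLE articles ADD INDEX idx_dummy (dummy);

-- Pad the table with filler rows so no real word crosses the 50% line.
INSERT INTO articles (title, dummy) VALUES ('xfillerx', 0);

-- Search only the real rows.
SELECT id, title FROM articles
WHERE dummy = 1 AND MATCH (title, body) AGAINST ('some words');
```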

I agree with your logic that words occurring more frequently should carry a
lesser weight - it makes a lot of natural-language sense. But there
should be a way to either disable the '50% occurrence = zero weight'
setting or perhaps disable word weighting altogether for small datasets.

kind regards,

Mark.






4.0.1 bug reports. Has it been released yet?

2001-12-20 Thread Mark Maunder

I've seen various bug reports for mysql 4.0.1 but no sign of it on the
site. Has it been released yet? I think the 'IN BOOLEAN MODE' modifier
for a fulltext search may solve all my problems.

kind regards,

Mark.






mysql 4.0.0 fulltext stopwords and word weighting

2001-12-19 Thread Mark Maunder

Hi,

Is there a way to prevent the 50% occurrence threshold in mysql's
fulltext search logic, which causes words that appear in more than 50% of
records to be treated as stopwords? I have a table with fewer than
1000 records and would like to do a fulltext search on two columns and
have the results returned in plain old 'nearest match' order, rather than
'weighted by word occurrence' order.

From the manual:
"Word MySQL is present in more than half of rows, and as such, is
effectively treated as a stopword (that is, with semantical value
zero). It is, really, the desired behavior - a natural language query
should not return every second row in 1GB table."

I don't desire this behaviour. I don't mind every second row in the
table, as long as they're ordered by best match first, and no words are
ignored, regardless of how frequently they appear. I saw mention of the
'boolean search' that seems to disregard the 50% threshold, but that's
only in version 4.0.1 which isn't released yet, and I'm not sure if it
will order by best match first.

thanks for all your help,

Mark.







MySQL 4.0 updated?

2001-11-08 Thread Mark Maunder

Hi,

Is MySQL 4.0 Alpha updated periodically with bugfixes? i.e. Is it worth
periodically re-downloading and re-installing MySQL 4.0 to ensure I have
the most stable version?

tnx,

~Mark.






Re: mysql question

2001-11-02 Thread Mark Maunder

Jim Chivas wrote:

> Greetings:
>
> I saw a reply you sent to a mysql user about setting up new users to
> mysql. I am hoping you can clarify some questions for me.
>
> I referenced the url you gave out at the mysql.com/documentation/. It
> was supposed to show how to set up users but does not give actual examples.
>
> Can you offer me the correct syntax to:
>
> 1. create a new user.
>
> 2. Allow this new user the capability to create and/or change a
> database they create.
>
> 3. How to remove any databases a particular user has created.
>
> 4. How to remove a user from using mysql (delete them off the mysql
> system)
>
> I am asking these questions from a teaching point of view. My faculty want
> to teach mysql so either I or they must be able to create their student
> users and remove them after the class has completed.
>
> Is it possible to allow each instructor to create and delete users and
> their databases or must the 'root' user do it?
>
> If so how would I create an mysql instructor id to perform this creation
> and deletion functions?

Hi Jim,

Please CC the mysql list (address above) on questions like this so that the
question/answer is archived and the rest of the world can gain from your
problem and the potential solutions.

Check out the following URL:
http://www.mysql.com/doc/U/s/User_Account_Management.html
It contains specific examples on creating users etc. (4.3.5 Adding new users
to MySQL)
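
As a rough sketch of the kind of statements that section covers (user and
database names here are invented; syntax as of the 3.23/4.0 era):

```sql
-- 1 & 2: create a user and let them create and modify their own database
GRANT ALL PRIVILEGES ON student1_db.* TO student1@localhost
  IDENTIFIED BY 'some_password';

-- 3: remove a database a student created
DROP DATABASE student1_db;

-- 4: remove the user entirely (DROP USER does not exist yet in these
--    versions, so edit the grant tables directly and reload them)
DELETE FROM mysql.user WHERE User = 'student1' AND Host = 'localhost';
DELETE FROM mysql.db   WHERE User = 'student1';
FLUSH PRIVILEGES;
```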

If you still have questions, don't hesitate to ask.







[off topic] open source jobsite

2001-10-31 Thread Mark Maunder

Hi,

We have just launched a non-profit open source jobsite. Check it out at
http://www.freeusall.com/ It's built on MySQL 4.0 (Alpha), Perl and
Apache. We'd appreciate any feedback you might have.

kind regards,

Mark Maunder.






Re: EMERGENCY - Our production database is crashed

2001-10-23 Thread Mark Maunder

David Potter wrote:

> Dear list members,
>
> We are running Mysql 3.23 on Redhat Linux 7.1.
>
> We have an emergency. This is the first time we have ever had a problem.  Our
> production database suddenly crashed.  I have tried to repair the tables with
> myisamchk commands, -r, -o, etc and nothing works.
>
> Here is the output:
>
> myisamchk: error: 'journal.frm' doesn't have a correct index definition. You
> need to recreate it before you can do a repair
> myisamchk: error: 'journal.MYD' doesn't have a correct index definition. You
> need to recreate it before you can do a repair
>
> Our first choice would be to repair the above tables.  But if this is too hard,
> our next best option would be to just create a new database and restore the
> data from one of our backups.  So then I tried to create a new database.  Even
> that does not work.  I enter "drop database deptpros" and the system just
> hangs.  I then tried "create database prod" and the system just hangs again.

Filesystem corruption or bad memory? Is your filesystem full? (just throwing some 
ideas around)







Re: how to sub-select?..

2001-10-22 Thread Mark Maunder

Trond Eivind Glomsrød wrote:

> "Moshe Gurvich" <[EMAIL PROTECTED]> writes:
>
> > I'm trying to run:
> >
> > delete from followups where task_id not in (select task_id from tasks)
> >
> > but it gives me an error:
> > Error: 1064 - You have an error in your SQL syntax near 'select task_id from
> > tasks)' at line 1
> >
> > where's the problem and how to work around?
>
> MySQL doesn't support subselects.
>

You should RTFM; the manual covers your question in detail here:
http://www.mysql.com/doc/M/i/Missing_Sub-selects.html
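
For the specific DELETE above, the usual rewrite until subselects arrive is a
LEFT JOIN to find the orphaned rows; this is a sketch, untested:

```sql
-- Find followups whose task no longer exists (works in 3.23):
SELECT followups.task_id
FROM followups LEFT JOIN tasks ON followups.task_id = tasks.task_id
WHERE tasks.task_id IS NULL;

-- MySQL 4.0's multi-table DELETE can then do it in one statement:
DELETE followups
FROM followups LEFT JOIN tasks ON followups.task_id = tasks.task_id
WHERE tasks.task_id IS NULL;
```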








Re: just found out this list is being published on the web [OT]

2001-10-22 Thread Mark Maunder

Robert Alexander wrote:

> I just found out, while looking for other things, that what seems to be the
> entire content of the MySQL list is being published on the web.
>
> I, for one, really don't like this idea. I have a reasonable expectation that
> what I post here is for viewing by subscribers to this list, the MySQL
> community, not the whole world.

That's standard policy for the majority of mailing lists out there
(see http://groups.yahoo.com for examples). Not publishing them would remove most
of their value, since the archives are a massive searchable knowledge base (which
should be searched before posting a question). Use a spam filter or an alternative
email address if you're concerned about privacy.






Re: Proposed Guidelines for Posting to the MySQL list

2001-10-21 Thread Mark Maunder

Robert Alexander wrote:

>  - Replies are directed to the POSTER and not to the list. This keeps traffic
>and clutter down.
>
>  - Those who don't post a SUMMARY are likely to find future questions
>going unanswered.
>

So answers to questions go directly to the poster and are not cc'd to the list? I
suppose it's more efficient, because each discussion thread has only a question and
its summary, both posted by the same author. But won't this make the list a bit
sterile? Other authors won't be able to participate in a discussion and there won't
be the public aggregation of viewpoints - just a reliance on the original poster
(many of whom are newbies) to consolidate all the replies they receive and post a
coherent summary based on individual emails. I took a look at the SunManagers
archive and it seems that many of the questions don't have summary posts. I'm not a
subscriber though, so perhaps I missed something. I also find it useful sometimes
when browsing the archives to look at all posts in each thread, because sometimes
they provide insights into related issues.

I think summary posts are definitely a must, but users should be required to cc
the list when replying to posts. Also, descriptive tags in the subject are really
useful. For example the mod_perl (perl under Apache - http://perl.apache.org) list
uses the following:

<--snip-->
5.2.8.  It can be helpful if you use a tag [in square brackets] in the
"Subject:" line, as well as the brief description of your post.  It
does not matter whether you use [UPPER CASE] or [lower case] or even a
[Mixture Of Both] in the tag.

Some suggested tags are:

  ADMIN  Stuff about running the List.

  ADVOCACY Promoting the use of mod_perl, printing T-shirts, stuff like
  that.  Please don't start another discussion about whether we
  should put this on a different list, we've been there before.

  ANNOUNCE Announcements of new software tools, packages and updates.

  ASP  Joshua Chamas' implementation of Perl embedded in HTML.

  BENCHMARK Apache/mod_perl performance issues.

  BUG  Report of possible fault in mod_perl or associated software
  - it's better if you can send a patch instead!

  DBI  Stuff generally concerning Apache/mod_perl interaction
  with databases.

  FYI  For information only.

  JOB  Any post about mod_perl jobs is welcome as long as it is
  brief and to the point.  Note: Not "JOBS".

  MASON  Jonathan Swartz' implementation of Perl embedded in HTML.

  NEWS  Items of news likely to be interesting to mod_perlers.

  OffTopic Or [OT] Off-topic items, please try to keep traffic low.

  PATCH  Suggested fix for fault in mod_perl or associated software.

  QUESTION Questions about mod_perl which is not covered by one of the
  more specific headings.

  RareModules Occasional reminders about little-used modules on CPAN.

  RFC  Requests for comment from the mod_perl community.

  SITE  Stuff about running the Apache/mod_perl servers.

  SUMMARY After investigation and perhaps fixing a fault, and after an
  extended discussion of a specific topic, it is helpful if
  someone summarizes the thread.  Don't be shy, everyone will
  appreciate the effort.

If you can't find a tag which fits your subject, don't worry.  If you
have a very specific subject to discuss, feel free to choose your own
tag, for example [mod_proxy] or [Perl Sections] but remember that the
main reasons for the "Subject:" line are to save people time and to
improve the response to your posts.
<--snip-->








Re: fulltext not for me/alternatives

2001-10-21 Thread Mark Maunder

Ben Edwards wrote:

> I have a bit of a problem with using fulltext indexes because there are a
> LOT of important 3 letter words in my database, and as I am using shared
> hosting I do not have the option to recompile MySql.  Can't quite figure out
> why 3 is not the default (car, dog, cat, war, man, bed, ...).  Maybe so you
> would have to recompile to be able to find s_e_x ;).

Is there a way to get mysql to index 3-letter words, or even 2-letter words,
instead of ignoring them in fulltext indexes?
I'm running version 4 alpha. I checked the TODO and it's not listed. Is it
possible to make this a config.h option? A minimum of 4 letters for a word to be
included in a fulltext index seems a bit restrictive.

tnx!








Re: Utilizing the Database Server's Cache

2001-10-20 Thread Mark Maunder

Jeremy Zawodny wrote:

> On Sat, Oct 20, 2001 at 05:02:22PM +0100, Mark Maunder wrote:
> >
> > The only time you'll see a real performance increase is where you're
> > repeatedly calling execute() on the same statement handle with
> > different values for the placeholders - usually this occurs in a
> > loop. This will save you having to do a repeated prepare()
>
> Right.
>
> > so the database server can reuse the old execution plan. Let me know
> > if you want an example.
>
> That's true for some database servers but not MySQL (yet).

I wasn't aware of that - thanks Jeremy. Any ETA on when this might be
implemented? (part of version 4?) Also, just out of curiosity, how much of a
performance hit does mysql take in compiling an execution plan?

tnx!

ps: Here's that example anyway for future ref:

use DBI;

# Slurp the address file (one email per line; the filename here is
# just a placeholder).
open(FH, "<emails.txt") or die "open: $!";
my $content = do { local $/; <FH> };
close(FH);
$content =~ s/\r\n/\n/g;                 # in case it's a dos file
my @email_list = split("\n", $content);  # assumes 1 email addr per line

my $dbh = DBI->connect('DBI:mysql:dbname:hostname:3306', 'root', 'password');
my $sth = $dbh->prepare("select name, address1 from users where email=?");

foreach my $email (@email_list)
{
    $sth->execute($email);  # shorthand for bind_param, same effect
    my ($name, $addr1) = $sth->fetchrow();
    print "$name - $addr1\n";
}
$sth->finish();
$dbh->disconnect();

# Something like that anyway.
# Creating the initial DB connection has a higher performance impact than
# creating an execution plan, so something else (more significant) you should
# look at is optimising your code by using a global $dbh that everyone shares,
# or something like Apache::DBI for persistent connections under mod_perl if
# you're writing a web app - that will give you a major performance increase.






Re: Utilizing the Database Server's Cache

2001-10-20 Thread Mark Maunder

Scott Alexander wrote:

> I've been reading a document at
> http://www.saturn5.com/~jwb/dbi-performance.html
> by Jeffrey William Baker.
>
> And I have changed my perl code in one script to use placeholders and
> bound parameters.
>
> On my test server 500 mhz rh 7.1 128 MB I haven't noticed any speed
> differences. (/mysql/mysqladmin  Ver 8.21 Distrib 3.23.42, for
> pc-linux-gnu on i586)
>
> Can  mysql use placeholders ?
>
> In this script I have queries which only have one WHERE condition value,
> other queries have 3 WHERE condtion values. Is it better to
> use place holder in more complex queries?

Scott,

The only time you'll see a real performance increase is where you're repeatedly
calling execute() on the same statement handle with different values for the
placeholders - usually this occurs in a loop. This will save you having to do a
repeated prepare() - so the database server can reuse the old execution plan. Let
me know if you want an example.

~mark.








MySQL 4.0 table crash when updating record with fulltext index

2001-10-19 Thread Mark Maunder

>Description:
When doing the following update I get a table crash.
>How-To-Repeat:

CREATE TABLE tester (
  id int(11)  auto_increment,
  title varchar(100)  default '',
  PRIMARY KEY  (id),
#If you comment out the following index, then the table is briefly marked as crashed
# but appears to recover although the update does not succeed
  KEY ind5 (title),
#The fulltext index appears to be the root of this prob.  
  FULLTEXT KEY FT1 (title)
) TYPE=MyISAM;

insert into tester (title) values ('this is a test');
update tester set title='this is A test' where id=1;
check table tester;

>Fix:
No known workaround. Please advise if you are aware of one. Thanks.

>Submitter-Id:  
>Originator:Mark Maunder
>Organization: 
SwiftCamel Software LTD
>MySQL support: none
>Synopsis:  Table crash when doing update of record with fulltext index.
>Severity:  critical
>Priority:  medium 
>Category:  mysql
>Class: sw-bug 
>Release:   mysql-4.0.0-alpha (Source distribution)
>Server: /usr/bin/mysqladmin  Ver 8.22 Distrib 4.0.0-alpha, for pc-linux-gnu on i686
Copyright (C) 2000 MySQL AB & MySQL Finland AB & TCX DataKonsult AB
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL license

Server version  4.0.0-alpha-log
Protocol version10
Connection  Localhost via UNIX socket
UNIX socket /tmp/mysql.sock
Uptime: 8 hours 3 min 52 sec

Threads: 32  Questions: 18198  Slow queries: 295  Opens: 33  Flush tables: 1  Open 
tables: 12  Queries per second avg: 0.627
>Environment:

System: Linux mark.swiftcamel.com 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown
Architecture: i686

Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc /usr/bin/cc
GCC: Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs
gcc version 2.96 2731 (Red Hat Linux 7.1 2.96-81)
Compilation info: CC='gcc'  CFLAGS=''  CXX='c++'  CXXFLAGS=''  LDFLAGS=''
LIBC: 
lrwxrwxrwx1 root root   13 Jul 28 17:24 /lib/libc.so.6 -> libc-2.2.2.so
-rwxr-xr-x2 root root  1236396 Apr  6  2001 /lib/libc-2.2.2.so
-rw-r--r--1 root root 26350254 Apr  6  2001 /usr/lib/libc.a
-rw-r--r--1 root root  178 Apr  6  2001 /usr/lib/libc.so
lrwxrwxrwx1 root root   10 Jul 28 17:44 /usr/lib/libc-client.a -> 
c-client.a
Configure command: ./configure  --prefix=/usr/local/mysql4 
--localstatedir=/usr/local/mysql4/data






Re: MYSQL 4.0 bug with fulltext (case change) updates

2001-10-19 Thread Mark Maunder

Mark Maunder wrote:

> Hi,
>
> I think this is a bug. The script to recreate the problem is included
> below. This problem appears consistently as long as there's a fulltext
> index and a regular index on the same field and you do an update to
> change the case of a single char. It doesn't matter if the fulltext
> index includes other fields. If the regular index is removed, then after
> the insert (where the case of one char in a field is changed) the table
> is briefly marked as crashed, and then seems to automagically fix itself

Just an update to the original post: The table doesn't crash, but the update does
not succeed if the regular index is removed - so this isn't a workaround.






MYSQL 4.0 bug with fulltext (case change) updates

2001-10-19 Thread Mark Maunder

Hi,

I think this is a bug. The script to recreate the problem is included
below. This problem appears consistently as long as there's a fulltext
index and a regular index on the same field and you do an update to
change the case of a single char. It doesn't matter if the fulltext
index includes other fields. If the regular index is removed, then after
the insert (where the case of one char in a field is changed) the table
is briefly marked as crashed, and then seems to automagically fix itself
(which is worrying).

The bug also occurs when you change the case of a single char in a
varchar field and add a word to the sentence e.g.:
'experience with c required' changes to 'experience with C is required'
also causes the problem.

Thanks as always,

~mark
ps: If anyone knows of a workaround please let me know. thanks.

__BEGIN__
#This works fine on mysql 3 but crashes the table on 4 alpha
CREATE TABLE tester (
  id int(11)  auto_increment,
  title varchar(100)  default '',
  PRIMARY KEY  (id),
# If you comment out the following index, then the table is briefly
# marked as crashed but appears to recover.
  KEY ind5 (title),
#The fulltext index appears to be the root of this prob.
  FULLTEXT KEY FT1 (title)
) TYPE=MyISAM;

insert into tester (title) values ('this is a test');
update tester set title='this is A test' where id=1;
check table tester;
__END__

Gives us:
Table   Op  Msg_typeMsg_text
freeusall.testercheck   warning Table is marked as crashed
freeusall.testercheck   error   Checksum for key: 2 doesn't
match checksum for records
freeusall.testercheck   error   Corrupt

We're running MySQL 4.0 alpha on Linux kernel 2.4.2 intel. Using MyISAM
tables.









Re: secure web server to database server connection

2001-01-26 Thread Mark Maunder

Hi Patrick,

You can try something like:
ssh -L 6969:ukdb-qa:3306 root@ukdb-qa
Use the above on a remote host (let's call it wms-qa). This assumes ukdb-qa is
running mysqld on its default port, 3306. This command will cause wms-qa to
listen on port 6969 for connections and forward them all across a secure
connection to ukdb-qa, which then connects locally to port 3306. So you have a
secure channel.

When you test this on wms-qa you will use a command like
mysql -h 127.0.0.1 -u root -P 6969 test
If you try to use mysql -h localhost you will get a "can't find socket file"
error. I think that's because mysql sees 'localhost' and looks for a socket file
to connect via instead of going to the port. (a bug?)

I tested this on two linux boxes successfully (after I fixed that socket file
error thing I mentioned). You'll need to run an sshd on both of them, of course.

If you do any benchmarking with this setup let me know as I'm curious about
performance.

Mark Maunder.

Patrick Goetz wrote:

> Currently, every system I've set up is small enough so as to have the web
> server and the database server on the same machine.  Consequently, loss of
> security due to packet-sniffing can be completely controlled by using,
> say, apache-ssl, since communications between the web server and the
> database server take place inside a single machine.
>
> It just occurred to me, however, that this becomes a much bigger problem
> when the web server(s) and the database server are running on different
> machines.  Does anyone know if there is a canonical way of securing the
> data connection between the web and database servers or is this usually
> handled by simply putting the database server behind a firewall?
>






Re: Apache - MySQL - PHP (Auto-start Apache)

2001-01-25 Thread Mark Maunder

(quicky coz this is way off topic):
ln -s /usr/local/apache/bin/apachectl /etc/rc.d/init.d/httpd
ln -s /etc/rc.d/init.d/httpd /etc/rc.d/rc3.d/S90httpd
ln -s /etc/rc.d/init.d/httpd /etc/rc.d/rc5.d/S90httpd
ln -s /etc/rc.d/init.d/httpd /etc/rc.d/rc6.d/K90httpd

That should do it. It should bring up your httpd whether you're running
runlevel 3 (command line) or runlevel 5 (X windows) on redhat, whatever version
(I think). You can use the same scheme for starting mysqld too (excuse for
posting to the list).

*duck*

Freaked Personality wrote:

> You don't configure apache to do this, since apache has no idea when the
> server boots up...
>
> You have to edit your start-up scripts which are mostly in /etc/rc.d to
> start up apache. There's a huge number of ways to do this (to script
> it) ie with error checking/without error checking with mail on error etc
> etc depending on your distribution you then have to add files to run
> level directories (links) or have to do run level checking in the start-up
> scripts which isn't explained that easy. Knowing which distribution you
> use could prove very helpful :-)
>
> On Tue, 23 Jan 2001, John Halladay wrote:
>
> > I'm currently running Apache 1.3.12, MySQL 3.22.32-1, and PHP 4.0.3 together
> > on RedHat 7.0 and everything works fine, although every time I boot up I
> > have to manually start the Web Server with command
> > /usr/local/apache/bin/apachectl start.
> >
> > Does anyone know how to configure Apache so that it will start up
> > automatically when I boot up Linux?  (I know it's a little off the MySQL
> > subject, but I figured someone would have a similar setup.)
> >
> > Thanks,
> > John Halladay
> >
>






RE: Doing multiple updates

2001-01-24 Thread Mark Maunder

Sounds good. You should probably have a cleaner process of some kind just in
case one of your threads dies before it can release the lock. I think with
get_lock you have a timeout that protects you from that. I'm curious about
the internals of get_lock - perhaps it's more efficient to use a soft lock
like you've described. I dunno.
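
For comparison, the GET_LOCK approach would look roughly like this (the lock
name and timeout are arbitrary); the second argument is the timeout in seconds,
which is what protects against a client that dies while holding the lock:

```sql
SELECT GET_LOCK('account_42', 10);   -- returns 1 if acquired, 0 on timeout
-- ... read the balance and apply the spend here ...
SELECT RELEASE_LOCK('account_42');
```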

-Original Message-
From: Cal Evans [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 24, 2001 9:10 PM
To: Randy Johnson; [EMAIL PROTECTED]
Subject: RE: Doing multiple updates


Randy,

My recommendation (and there are probably many people who will disagree with
me) is to use a "soft-lock" schema.

In your account table, add a field named lockedBy. I usually add a field
named lockedAt also as a timestamp.

the basic flow is this:

1: Check whether the record already has something in lockedBy.
-NO-
2: Update the record, setting lockedBy to your userID.
3: Re-read the record and check that lockedBy now holds your userID.
-YES-
4: You have successfully soft-locked the record for update. Go ahead and
make your update.
5: Update the record to remove the lock.

WARNING: This system assumes that you have control over all processes that
access the database. If there is a chance that someone can modify the data
outside of your code, then they can bypass your checks and modify the data.

In many DBMSs (not sure yet about MySQL), record locks keep people from even
seeing the data, and some (M$ SQL) used to lock whole tables to do a single
update. (sux big time)
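As a minimal sketch of the flow above, here it is in Python against SQLite
purely so it runs standalone (MySQL would behave the same for this pattern).
The account schema beyond lockedBy/lockedAt is an assumption, and steps 1-3
are folded into the UPDATE's WHERE clause so the check-and-claim happens in
a single statement:

```python
import sqlite3

# Soft-lock sketch: claim lockedBy only when it is empty, then verify.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY,"
             " balance INTEGER, lockedBy TEXT, lockedAt INTEGER)")
conn.execute("INSERT INTO account (id, balance) VALUES (1, 100)")

def soft_lock(conn, account_id, user_id, now):
    # Steps 1-3: set lockedBy only if it is currently NULL, then re-read
    # to see whether our claim stuck.
    conn.execute("UPDATE account SET lockedBy = ?, lockedAt = ?"
                 " WHERE id = ? AND lockedBy IS NULL",
                 (user_id, now, account_id))
    row = conn.execute("SELECT lockedBy FROM account WHERE id = ?",
                       (account_id,)).fetchone()
    return row is not None and row[0] == user_id

def soft_unlock(conn, account_id, user_id):
    # Step 5: clear the lock, but only if we are the holder.
    conn.execute("UPDATE account SET lockedBy = NULL, lockedAt = NULL"
                 " WHERE id = ? AND lockedBy = ?", (account_id, user_id))

alice_locked = soft_lock(conn, 1, "alice", 980000000)  # True
bob_blocked = soft_lock(conn, 1, "bob", 980000001)     # False: alice holds it
soft_unlock(conn, 1, "alice")
bob_after = soft_lock(conn, 1, "bob", 980000002)       # True: lock was freed
```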

HTH,
Cal
http://www.calevans.com


-Original Message-
From: Randy Johnson [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 24, 2001 2:48 PM
To: [EMAIL PROTECTED]
Subject: Doing multiple updates


Hello,

I am creating a mock site with a money balance; people can log in and
pretend to pay for stuff online via my site.

I am using php with mysql to implement this.

How do I make sure that the balance for a particular account is locked, so
only one spend for that account can happen at a time?

For example, my php script grabs the balance from the payer account and the
receiver account. How do I make sure the balance stays locked between the
read and the update, so a person can't spend money that he/she doesn't have?

I hope I have provided enough explanation for you guys to point me in the
right direction.

thanks

randy






RE: Doing multiple updates

2001-01-24 Thread Mark Maunder

Sounds like you want a mutex and you can use get_lock and release_lock in
mysql for that.

http://www.mysql.com/doc/M/i/Miscellaneous_functions.html

GET_LOCK(str,timeout) 
Tries to obtain a lock with a name given by the string str, with a timeout
of timeout seconds. Returns 1 if the lock was obtained successfully, 0 if
the attempt timed out, or NULL if an error occurred (such as running out of
memory or the thread was killed with mysqladmin kill). A lock is released
when you execute RELEASE_LOCK(), execute a new GET_LOCK(), or the thread
terminates. This function can be used to implement application locks or to
simulate record locks. It blocks requests by other clients for locks with
the same name; clients that agree on a given lock string name can use the
string to perform cooperative advisory locking: 
mysql> select GET_LOCK("lock1",10);
-> 1
mysql> select GET_LOCK("lock2",10);
-> 1
mysql> select RELEASE_LOCK("lock2");
-> 1
mysql> select RELEASE_LOCK("lock1");
-> NULL

Note that the second RELEASE_LOCK() call returns NULL because the lock
"lock1" was automatically released by the second GET_LOCK() call. 

RELEASE_LOCK(str) 
Releases the lock named by the string str that was obtained with GET_LOCK().
Returns 1 if the lock was released, 0 if the lock wasn't locked by this
thread (in which case the lock is not released), and NULL if the named lock
didn't exist. The lock will not exist if it was never obtained by a call to
GET_LOCK() or if it already has been released. 
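For readers who want to experiment with these semantics without a MySQL
server handy, here is a rough in-process analogue in Python. It is only an
illustration of named advisory locks with a timeout, not MySQL's
implementation: GET_LOCK works across connections, while this toy works
across threads, and it does not reproduce the auto-release-on-new-GET_LOCK
behavior described above.

```python
import threading

# Toy analogue of GET_LOCK/RELEASE_LOCK: named advisory locks with a
# timeout, shared between threads instead of database connections.
_locks = {}
_registry_guard = threading.Lock()

def get_lock(name, timeout):
    with _registry_guard:
        lock = _locks.setdefault(name, threading.Lock())
    # 1 if the lock was obtained, 0 if the attempt timed out.
    return 1 if lock.acquire(timeout=timeout) else 0

def release_lock(name):
    with _registry_guard:
        lock = _locks.get(name)
    if lock is None:
        return None          # named lock never existed
    try:
        lock.release()
        return 1             # released
    except RuntimeError:
        return 0             # lock was not held

r1 = get_lock("lock1", 1)        # acquired: 1
blocked = []
t = threading.Thread(target=lambda: blocked.append(get_lock("lock1", 0.1)))
t.start(); t.join()              # a second "client" times out: 0
r2 = release_lock("lock1")       # released: 1
r3 = release_lock("lock1")       # already released: 0
```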

Mark.

-Original Message-
From: Randy Johnson [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 24, 2001 8:48 PM
To: [EMAIL PROTECTED]
Subject: Doing multiple updates


Hello,

I am creating a mock site with a money balance; people can log in and
pretend to pay for stuff online via my site.

I am using php with mysql to implement this.

How do I make sure that the balance for a particular account is locked, so
only one spend for that account can happen at a time?

For example, my php script grabs the balance from the payer account and the
receiver account. How do I make sure the balance stays locked between the
read and the update, so a person can't spend money that he/she doesn't have?

I hope I have provided enough explanation for you guys to point me in the
right direction.

thanks

randy






RE: MySQL-3.22.32 host.ISM not found. Plz HELP

2001-01-24 Thread Mark Maunder

It's one of MySQL's permission tables, which is usually in
/var/lib/mysql/mysql.
If the file is there, it may have the wrong permissions. It usually needs to
be owned by the mysql user and group, so try
cd /var/lib/mysql
chown -R mysql:mysql mysql
if that seems to be the problem.

The permission files in this dir are included with the installation, and I
think there's a script to set them up initially, so check the docs for that
'cause I can't remember what it's called at the moment.

Mark.

-Original Message-
From: Manuel Leos [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 24, 2001 7:43 PM
To: MySQL
Subject: MySQL-3.22.32 host.ISM not found. Plz HELP



Hi everybody, I'm trying to install MySQL-3.22.32
on RH 7.0. Everything looks normal, but the server
dies with this message:

mysqld: Can't find file: 'host.ISM' (errno: 2)

How do I fix this? Where can I get this file, or how
can I build it?
 
Thanks







RE: automation question: How do I copy the data from one table to another table with a time stamp every night?

2001-01-24 Thread Mark Maunder

Hey Chuck,

We replicate data on a regular basis from Oracle to MySQL and back using
perl cron scripts. Here's a basic one for ya:

#!/usr/bin/perl
# Nightly copy: pull rows from one database and insert them into another,
# stamping each row with the current time.

use strict;
use DBI;

my $dbh1 = DBI->connect("DBI:mysql:dbname:host.mark.com:6969", 'root',
    'password') or die "source connect failed: $DBI::errstr";
my $dbh2 = DBI->connect("DBI:mysql:dbname2:host2.mark.com:6969", 'root',
    'password') or die "target connect failed: $DBI::errstr";

my $sth1 = $dbh1->prepare("select name, sex from members");
my $time = time();
my $sth2 = $dbh2->prepare(
    "insert into members (name, sex, stamp) values (?, ?, $time)");

$sth1->execute();
while (my ($name, $sex) = $sth1->fetchrow()) {
    print "inserting $name\t$sex\n";
    $sth2->execute($name, $sex);
}
$sth1->finish();
$sth2->finish();
$dbh1->disconnect();
$dbh2->disconnect();
exit;
--cut--

I probably forgot something really silly because I haven't tested this at
all, but I think that'll get ya started. You'll need perl5 with DBI
installed. Once you install DBI, do a
perldoc DBI
and that will give you tons of info. Perl is definitely the way to go if
you're sucking data from one DB, parsing, and inserting into another.
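If Perl isn't your thing, the same nightly job can be sketched in Python.
This version uses two SQLite connections in place of the two MySQL servers
purely so it runs standalone; the members schema mirrors the Perl script
above, and everything else (data, column names) is illustrative:

```python
import sqlite3
import time

# Stand-ins for the two MySQL servers in the Perl script above.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE members (name TEXT, sex TEXT)")
src.executemany("INSERT INTO members VALUES (?, ?)",
                [("alice", "f"), ("bob", "m")])
dst.execute("CREATE TABLE members (name TEXT, sex TEXT, stamp INTEGER)")

# Copy every row, stamping each with the time the job started.
stamp = int(time.time())
rows = src.execute("SELECT name, sex FROM members").fetchall()
for name, sex in rows:
    dst.execute("INSERT INTO members (name, sex, stamp) VALUES (?, ?, ?)",
                (name, sex, stamp))
dst.commit()

copied = dst.execute("SELECT COUNT(*) FROM members").fetchone()[0]
```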

Mark.


-Original Message-
From: Chuck Barnett [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 24, 2001 6:22 PM
To: Steve Ruby
Cc: [EMAIL PROTECTED]
Subject: Re: automation question: How do I copy the data from one table
to another table with a time stamp every night?


Thanks for replying.
I know the SQL commands, I want to know how to automate the whole sequence.

I've always written php pages that do the calls.  I want to write a script
to do it on the server as a cron job or something.

thanks
Chuck
- Original Message -
From: "Steve Ruby" <[EMAIL PROTECTED]>
To: "Chuck Barnett" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, January 24, 2001 12:34 PM
Subject: Re: automation question: How do I copy the data from one table to
another table with a time stamp every night?


> Chuck Barnett wrote:
> >
> > How do I copy the data from one table to another table with a time stamp
> > every night?
> >
> > example:  table A contains x,y,z.
> >
> > at midnight, I want to copy table A's contents to table B (B has same
> > fields, just adds a date field) and reset x,y,z to zero.
> >
>
>
> what do you mean by "reset to zero"?  If you want to copy the rows
> from A to B you can just do
>
> insert into B select x,y,z,now() from A;
> delete from A;
>
> if your date field in B is a TIMESTAMP type you can avoid the now()
> part and do
>
> insert into B (x,y,z) select x,y,z from A;
> delete from A;
>
> See the manual about the insert statement for more info.
>
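The quoted INSERT ... SELECT / DELETE approach can be tried end to end with
SQLite via Python (table and column names A, B, x, y, z as in the thread;
the integer column types and datetime('now') timestamp are assumptions for
the sake of a runnable sketch):

```python
import sqlite3

# A holds the live counters; B is the nightly archive with a timestamp.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE A (x INTEGER, y INTEGER, z INTEGER)")
db.execute("CREATE TABLE B (x INTEGER, y INTEGER, z INTEGER, stamp TEXT)")
db.execute("INSERT INTO A VALUES (1, 2, 3)")

# Copy with a timestamp, then clear A (the "reset"):
db.execute("INSERT INTO B SELECT x, y, z, datetime('now') FROM A")
db.execute("DELETE FROM A")

archived = db.execute("SELECT x, y, z FROM B").fetchall()
remaining = db.execute("SELECT COUNT(*) FROM A").fetchone()[0]
```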






RE: Command line utility?

2001-01-24 Thread Mark Maunder

mysql etoys < test.txt | more
where test.txt contains something like:
desc addresses;
desc members;

Or try
 mysqldump -d dbname addresses members orders | more

That'll give you the data definition language, without any data, for each
table (addresses, members and orders in the above example).

If you're not on the db server itself, you'll have to add connection
options, something like
 mysqldump -h hostname -u root -P [port] -p[passwd] -d dbname ...and then
the rest of the above parameters.

Hope that helps.

Mark.

-Original Message-
From: Don [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 24, 2001 4:41 PM
To: msql list
Subject: Command line utility?


Is there a user friendly utility that would allow one to display the
structure of a table or view the contents of one?  Of course, this can
be done from within "mysql -p" but there is no way to pause at each
screen so most of my information scrolls off of the screen.

I was trying with "mysqladmin  | more" but no success and I
don't see any examples in the manual.  Example of what I tried was:

mysqladmin my_db my_table | more

but all I get is:
mysqladmin: connect to server at 'localhost' failed
error: 'Access denied for user: 'webmail@localhost' (Using password:
NO)'

Thanks,
Don






isamchk and myisamchk inconsistently report errors

2001-01-24 Thread Mark Maunder

I have had the following problem with both mysql 3.22 (using isamchk) and
3.23 (using myisamchk).
I use either isamchk table.ISM or isamchk -e table.ISM. The utility reports
a miscellaneous error (for example: error: Record at:   20224201  Can't find
key for index:  2).

I run the check again immediately and the error has disappeared. I assume
simply doing a check is a read-only operation, so how is this possible if
the data file is not changing? I went as far as doing a diff on the tables
before and after the check to make sure nothing changes, and it seems to be
read-only. I have also made absolutely sure the database is down, so nothing
is writing to the files.

Has anyone experienced this before?

thanks,

Mark.

