MySQL PASSWORD function

2002-01-29 Thread John Kemp

Hi all,

I can't find a description of the algorithm used in the MySQL PASSWORD 
function. I understand it's a hashing algorithm of some kind, but I 
don't know which algorithm (and I suspect it's *not* MD5.)

Can anyone tell me what algorithm PASSWORD uses? The reason I ask is 
that we're trying to implement role-based security using our existing 
MySQL table of users, accessed via Java Servlet auth functions, which 
can read the User table through JDBC. BUT they don't know anything 
about PASSWORD-encrypted passwords, so I need to write something that 
hashes the password entered in the same way MySQL hashes a password (or 
abandon the use of servlet auth :-)

Any clues?
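
One workaround I'm considering (an untested sketch - the Username and 
Password column names are just illustrative) is to let MySQL do the 
hashing itself by comparing against PASSWORD() in the WHERE clause, so 
the Java code never has to reproduce the algorithm:

SELECT COUNT(*)
FROM User
WHERE Username = 'jkemp'
  AND Password = PASSWORD('secret');

A count of 1 would mean the supplied password matches the stored hash, 
without the servlet ever knowing how PASSWORD() works.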

John Kemp,
Director, Software Development
Streetmail Inc.
http://www.streetmail.com






Re: quick questions about redhat and mysql

2002-01-25 Thread John Kemp

Hank,

1. RH 7.1 supports files > 2GB if you choose the Enterprise/SMP flavour 
(you'll be asked which install you want to pursue when you put the 
install CD in). No kernel re-compile is needed.
2. MySQL will support large files. Also, if you use InnoDB tables, many 
of the large-file concerns you have will be avoided, as the InnoDB 
tablespace is not tied to individual file sizes - you just create a 
number of data files, and tables are distributed across as many large 
or small files as you want (see the my.cnf sketch after this list). 
Check out www.innodb.com for more on this.
3. Do not use the MySQL version bundled with RH. Get it from the MySQL 
website, and use 3.23.47-max (which includes InnoDB support) or higher.
4. Stability of MySQL on RH 7.1 is excellent. We haven't had a MySQL 
crash in 6 months - basically since we upgraded to 3.23.xx and improved 
the configuration and memory usage of our database. We run RH 7.1 from 
the install CD, with the SMP configuration which supports large memory 
and files > 2GB, and so far its performance has been excellent. I have 
become very impressed with the stability of MySQL on Linux. We have a 
pretty large database (> 65 million rows in one table, 3 or 4 others 
with 10 million+ rows, total db size > 20GB to date) and use this to 
dynamically generate webpages and emails for our 700,000 users. MySQL 
has totally made me not purchase Oracle.
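
If it helps, here is roughly what the InnoDB part of a my.cnf could 
look like when the tablespace is split across several files (the paths 
and sizes here are only an example, not our production values):

innodb_data_home_dir = /usr/local/mysql/var/
innodb_data_file_path = ibdata1:1900M;ibdata2:1900M;ibdata3:1900M

Each data file stays under the 2GB limit, and InnoDB spreads the tables 
across them automatically.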

Cheers,

John

Director, Software Development
Streetmail Inc.
http://www.streetmail.com

Henry Hank wrote:

 Hello,
   I am trying to decide which OS to use for a dedicated mysql server for my
 website.  I need filesize > 2GB support.  I really wanted to use Solaris, but
 there are no drivers for my RAID adapter (long story).  I was considering
 NetBSD, but I don't know it well.  I was then back to thinking about Linux. 
 So, I can't keep all the OS/kernel/requirements in my head... 
 
 Can RedHat 7.1/7.2, out of the box (ie. no patches, kernel re-compiles, etc),
 support files > 2GB?  
 
 Will I be able to easily re-compile mysql to support large files?  
 
 Should I stick with the version of mysql on the RH distro, or download/install
 the latest stable version?
 
 Can someone comment on the stability of mysql on redhat 7.1/7.2? 
 
 Thanks!
 
 -Hank
 
 
 
 
 







Re: Memory limit issue with mysql.3.23.41

2002-01-18 Thread John Kemp

Kevin,

The MySQL documentation suggests allocating no more than 75-80% of 
physical memory to key_buffer. As Heikki and Jeremy confirmed for me 
this week, sort_buffer and record_buffer are the ones that grow per 
thread - Heikki suggested 1MB for each of those. Using swap as if it 
were RAM is, as Kyle points out, not a good idea for any process, as it 
will certainly slow your system.
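
For what it's worth, a my.cnf fragment in line with that advice might 
look like the following (the values are purely illustrative for a 2GB 
box, not a recommendation for Kevin's particular server):

set-variable = key_buffer=1024M
set-variable = sort_buffer=1M
set-variable = record_buffer=1M

key_buffer stays well under physical RAM, and the two per-thread 
buffers are kept small so that a few hundred connections can't exhaust 
memory.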

John

Kyle Hayes wrote:

 If you only have 2GB of RAM and you are allocating 5120M (=5GB) for the 
 key_buffer, you have a problem.  That should make your system swap like 
 mad and everything slow to a crawl.  Also, note that some of these config 
 options may affect per-thread allocation.  I don't know which ones off the 
 top of my head, but I think the manual makes it clear.
 
 Things to remember:  MySQL stores index data in the key_buffer.  It does 
 not store data there.  MySQL does not cache data.  If you tell it to 
 allocate more memory than the system has, you'll use up all memory for the 
 index, allocate swap for it (really slow), and leave no space for the OS 
 to cache data.  We usually allocate between 30 and 50% of the available 
 DRAM for the index and leave the remaining things much smaller.  This lets 
 the OS do some caching on its own.  If you take all the memory for the 
 indexes of MySQL, I am amazed that it was able to run with any speed at 
 all.  What it looks like is that you allocated your 2G of memory and then 
 wandered quickly into the weeds.
 
 Swap != RAM.
 
 Best,
 Kyle
 
 On Friday 18 January 2002 07:37, Franklin, Kevin wrote:
 [snip]
 
The behavior suggests that we are running out of memory / swap, but we
have over 2 gig of memory and 10 gig of swap free.

 
 If you hit swap, you hit the wall in performance and go splat.  We run PCs 
 with this much RAM.  RAM = performance with MySQL.  Even for Suns, RAM is 
 pretty cheap.
 
 
Our server settings are:
key_buffer=5120M

 
 Danger Will Robinson!  This is larger than your RAM!
 
 
max_allowed_packet=1M
table_cache=1024
sort_buffer=6M

 
 This is pretty big and allocated on a thread by thread basis I think.
 
 
record_buffer=4M

 
 This might be allocated on a thread by thread basis too, but I can't 
 remember.  Hmm, looks like it is.  Do you run absolutely huge queries?  Do 
 you really need 10MB _per thread_?
 
 
thread_cache=12
thread_concurrency=12

 
 You have a lot of processors? If so, you don't have much RAM.
 
 
myisam_sort_buffer_size=512M

 
 This is somewhat high given that you've already used all available RAM for 
 indexes more than twice over.
 
 
The server tends to crash upon reaching a total memory usage of around 4
GB

 
 I am surprised that it responds at all after it uses up RAM and starts to 
 swap.  You must have a good disk subsystem.
 
 
Here is the output from the error log.  Of particular interest to us is
the negative key_buffer_size quoted.  The same value (-4096) appears
with each crash.  Is there some sort of memory limit imposed on the
server or do you have suggestions for debugging this problem?

 
 I think that you rolled over a 32-bit integer somewhere.  Try setting 
 key_buffer to 1G.  This could be a bug in MySQL.  See below.  This might 
 mean that it is a 32-bit executable.
 
 
mysqld got signal 11;

 
 That's not good.  Is your ulimit set to 2G for MySQL?  Is MySQL actually a 
 64-bit executable?  Perhaps it is a 32-bit executable.
 
 
key_buffer_size=-4096

 
   This doesn't look good.
 
 
record_buffer=4190208
sort_buffer=6291448
max_used_connections=308
max_connections=1024
threads_connected=309
It is possible that mysqld could use up to
key_buffer_size + (record_buffer + sort_buffer)*max_connections =
2093044 K bytes of memory

 
 Read this line carefully.  I think the server is trying to tell you 
 something.  Note that it is really using -4096 as the key_buffer_size in 
 the calculation above.  Note that the math seems to be wrong too: since you 
 set record_buffer to 4M and sort_buffer to 6M, you should be allocating 
 10M per thread.  That's a lot of RAM for a system with only 2G.
 
 
Hope that's ok, if not, decrease some variables in the equation

 
 I think it has the right idea here.
 
 Best,
 Kyle
 
 







Re: Memory

2002-01-17 Thread John Kemp

Heikki & Jeremy,

Thanks again for your help. I'll try it out and see how things go. I 
will just note that most of the MySQL documentation talks about how one 
can set key_buffer or innodb_buffer_pool to up to 75% of your maximum 
memory, which in the case of our db machines is 4GB. I have been 
monitoring our memory usage under load, and even with the numbers I 
listed below, I have not yet seen a server crash from 200 simultaneous 
connections, which has been our peak so far. We did see this a while 
back on an earlier version (we're now running 3.23.45-max), but our 
server has been running like this for 45 days and counting.

So I guess this is more a case (as Jeremy seemed to be suggesting) of 
something that *might* happen if we have a lot of simultaneous, tricky 
queries (when we'd see lots of sort_buffer and record_buffer allocated 
for each connection/thread).

This also implies that if you're mostly using InnoDB tables for 
querying, you probably wouldn't see this happening, because:

1) InnoDB doesn't lock the whole table - so simultaneous queries on the 
same table will finish faster, and thus memory will be freed up more quickly.
2) Does querying or updating the InnoDB tablespace affect record_buffer 
and sort_buffer anyway?

The more I think about it, the more I realize that since we started 
doing work with Innodb tables, our database memory problems have 
basically gone away.

Any thoughts on that?
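
In case it helps anyone following along, here is a quick sketch of how 
we keep an eye on the settings and connection counts being discussed 
here (just standard SHOW commands, nothing InnoDB-specific):

SHOW VARIABLES LIKE '%buffer%';   -- the global and per-thread buffer settings
SHOW STATUS;                      -- look at Threads_connected and Max_used_connections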

John
Director, Software Development
Streetmail Inc.


Heikki Tuuri wrote:

 John,
 
 I suggest setting record_buffer to 1 MB. Disk reads in blocks of 1 MB are
 probably as fast as in blocks of 10 MB.
 
 Also set sort_buffer to 1 MB, and only increase it if there are performance
 problems.
 
 The maximum process space on Linux x86 is 2 GB, so it is better to play safe.
 
 Jeremy, I think some Intel x86 processors support segmented memory above 4
 GB. Is that supported in Linux?
 
 Jeremy, also thanks for your article in the latest Linux Magazine. I too
 learned something about tuning MySQL :).
 
 Best regards,
 
 Heikki Tuuri
 Innobase Oy
 ---
 Order technical MySQL/InnoDB support at https://order.mysql.com/
 See http://www.innodb.com for the online manual and latest news on InnoDB
 
 Jeremy Zawodny wrote in message ...
 
On Wed, Jan 16, 2002 at 09:46:48PM -0500, John Kemp wrote:

 From the innodb.com site (bugs & fixes):

This has me worried, but I haven't seen this behaviour on our site. We

 have
 
innodb_buffer_pool = 1100Mb
key_buffer = 400 Mb
record_buffer = 10Mb
sort_buffer = 20Mb
max_connections = 220

which according to this formula gives me 1100 + 400 + (220 * (20 + 10))
+ ( 220 * 1)  = 8320Mb at max capacity. At roughly half capacity (96
connections) we're using only 1390Mb, so I'm finding it hard to believe
it's going to scale that badly right now. We only have 4Gb memory on
our linux-based database machine right now - should I be upgrading? ;-)

Does anyone have any information that either supports or refutes the
statement above? I'd be interested if you did

Well, the sort_buffer and record_buffer will only be allocated on an
as-needed basis.  And they'll exist for very short periods of time,
ideally.

So your 1390 comes mainly from innodb_buffer_pool + key_buffer which
are the two global buffers (non-thread-specific) that are involved.
You'll likely see a single MySQL thread peek above that on occasion,
but you'd need things to get pretty bad before you eat up all your
memory.

See

 http://jeremy.zawodny.com/mysql/mysql2.pdf

for a bit of info on the difference between global and per-thread
memory in MySQL.

Jeremy
--
Jeremy D. Zawodny, [EMAIL PROTECTED]
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878   Fax: (408) 349-5454   Cell: (408) 685-5936

MySQL 3.23.41-max: up 14 days, processed 336,064,611 queries (270/sec. avg)

 
 
 
 
 







Re: list of tables in a database

2002-01-16 Thread John Kemp

'SHOW TABLES' will get you a list of tables. To get the schema of a 
table, type 'DESCRIBE tablename'.
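
For example (Subscriber is just an illustrative table name; SHOW TABLE 
STATUS is roughly the closest thing to Oracle's ALL_TABLES view, since 
it also reports the table type and row counts):

SHOW TABLES;           -- list all tables in the current database
DESCRIBE Subscriber;   -- column definitions for one table
SHOW TABLE STATUS;     -- one row per table: type, rows, sizes, comments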

J

Bret Ewin wrote:

 I need to know how to get a list of all the tables in a database. In Oracle
 you could select OWNER, TABLE_NAME from ALL_TABLES to get a list of all
 tables and their schemas. How does one do this in MySQL?
 
 Thanks,
 Bret
 
 
 
 







Memory

2002-01-16 Thread John Kemp

 From the innodb.com site (bugs & fixes):

 On Linux x86 you must be careful you do not set memory usage too high. 
glibc will allow the process heap to grow over thread stacks, which 
will crash your server. Make sure

 innodb_buffer_pool_size + key_buffer +
 max_connections * (sort_buffer + record_buffer) + max_connections * 1 MB

 is significantly smaller than 2 GB. Each thread will use a stack 
(often 1 MB) and in the worst case also sort_buffer + record_buffer 
 additional memory.

This has me worried, but I haven't seen this behaviour on our site. We have

innodb_buffer_pool = 1100Mb
key_buffer = 400 Mb
record_buffer = 10Mb
sort_buffer = 20Mb
max_connections = 220

which according to this formula gives me 1100 + 400 + (220 * (20 + 10)) 
+ (220 * 1) = 8320Mb at maximum capacity. At roughly half capacity (96 
connections) we're using only 1390Mb, so I'm finding it hard to believe 
it's going to scale that badly right now. We only have 4GB of memory on 
our Linux-based database machine right now - should I be upgrading? ;-)

Does anyone have any information that either supports or refutes the 
statement above? I'd be interested to hear it.

Thanks,

John

John Kemp
Director, Software Development
Streetmail Inc.






Re: Altering InnoDB tables

2002-01-15 Thread John Kemp

Heikki,

I didn't configure the log files that big, but the buffer pool is using 
1GB of memory, and yes, the UNIQUE index is definitely causing a lot of disk I/O!

I will be very happy to see your change, and thanks for (as usual) being 
so quick to make changes to an already very welcome addition to MySQL.

I'll add this - we're running several tables that have 5-10 million 
rows, and one logging table with 56 million rows, on a web-based system 
that is used by 30 people at our company editing our newsletters (which 
are stored in the database), plus almost 1 million email subscribers, 
who receive 3 database-driven emails every week and use our similarly 
data-driven website. So MySQL is dealing with a lot of roughly 
simultaneous requests. Row-level locking has improved performance 
massively, and made my internal and external customers happier, which 
makes my day better (fewer complaints!)

Thanks again,

John Kemp
Director, Software Development
Streetmail

Heikki Tuuri wrote:

 John,
 
 did you configure innodb_buffer_pool_size and InnoDB log files big? Look at
 the online manual on the recommended sizes.
 
 If the table is very big, then UNIQUE secondary keys will cause a lot of
 disk i/o, because a disk read is required to check the constraint for each
 row, and the insert buffer cannot be used.
 
 Hmm... a fix would be to disable UNIQUE checking during the conversion. I
 could add an option where you can switch the UNIQUE checking off in my.cnf.
 That option would be used during big conversion procedures, if you already
 know that the data satisfies the constraint.
 
 Ok, I will introduce the my.cnf option
 
 innodb_no_unique_checks
 
 in 3.23.4x. Thank you for bringing this up :).
 
 About Robert Ricci's question: SHOW TABLE STATUS FROM ... reports FOREIGN
 KEY constraints in the table comment section.
 
 Best regards,
 
 Heikki Tuuri
 Innobase Oy
 ---
 Order technical MySQL/InnoDB support at https://order.mysql.com/
 See http://www.innodb.com for the online manual and latest news on InnoDB
 
 
 
 John Kemp wrote in message ...
 
I am also in the process of altering tables from MyISAM to INNODB. I
have one big problem - a very large table (> 5 million rows) with 5
indices/indexes on it, including a couple that are UNIQUE but with NULL
values allowed. Creating the last of these indices is taking an absurd
amount of time on my test system, which makes it impossible for me to
recommend this change for production as our systems would be down for
many days.

Here are the steps I took (note that after each failure, I recreated the
original data structure)

1) I tried altering the table directly to be Innodb. I stopped this
after 4 days, when all it said in the process list was 'copying to tmp
table' - after 4 days!
2) Then I tried mysqldumping the table structure, and changing MyISAM ->
INNODB, dropped the table, and ran the mysqldump output back into the
database. The INSERTS ran horribly slowly - something like 50,000
inserts in 4 hours. This extrapolates to a query that would run for
several weeks on 5 million rows, so I stopped it again.
3) Then I tried dropping all the indices, altering the table to INNODB
and re-creating the indices. This worked pretty quickly until the final
index creation, which is still running after more than 4 days.

What am I doing wrong? This seems like an unacceptable length of time to
convert a valid MyISAM table to Innodb, yet I REALLY need the row-level
locking ASAP!

Any help would be appreciated.

Table structure is shown below, as is the current mysqladmin processlist.

Thanks,

John



CREATE TABLE Subscriber (
  SubscriberID int(11) NOT NULL auto_increment,
  Email varchar(255) NOT NULL default '',
  Zipcode varchar(5) default NULL,
  InitialContactDate date NOT NULL default '0000-00-00',
  FirstName varchar(20) default NULL,
  LastName varchar(20) default NULL,
  OrigDate date NOT NULL default '0000-00-00',
  AlertWhenAvailable char(1) default NULL,
  Gender char(1) default NULL,
  DOB date default NULL,
  Password varchar(25) default NULL,
  SubscriberTypeID mediumint(9) default NULL,
  Attempts tinyint(4) NOT NULL default '0',
  Username varchar(255) default NULL,
  Address varchar(255) default NULL,
  City varchar(100) default NULL,
  State char(2) default NULL,
  Married char(1) default NULL,
  Age varchar(5) default NULL,
  Income varchar(20) default NULL,
  optin int(11) default NULL,
  Country char(2) default 'US',
  SourcePartnerID int(11) default '0',
  Occupation char(2) default NULL,
  PRIMARY KEY  (SubscriberID),
  UNIQUE KEY AK1Subscriber (Email),
  UNIQUE KEY XAK2Subscriber (Username),
  KEY xie2Subscriber (optin,Gender,DOB),
  KEY xie3Subscriber (Zipcode)
) TYPE=MyISAM;

__

[The mysqladmin processlist output quoted here was truncated in the 
archive; the full listing appears in the original post later in this 
digest.]

Re: innoDB confusion

2002-01-15 Thread John Kemp

Walt,

It'll go to the database_machine_name.err file on that machine.
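
For reference, a minimal sketch of switching the monitor on and off 
(the table name is the special trigger; the column definition is 
arbitrary):

CREATE TABLE innodb_monitor (a INT) TYPE=InnoDB;   -- starts the periodic output
DROP TABLE innodb_monitor;                         -- stops it again

The output shows up periodically in that .err file rather than in your 
client session.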

John

Weaver, Walt wrote:

 Heikki,
 
 I must be missing something really simple here, but I can't get the InnoDB
 Monitor to work. I created the innodb_monitor table with create table
 innodb_monitor(a int) type=innodb;
 
 What do I do next? The manual says mysqld will print the output to stdout. I
 guess I'm confused as to how I capture this.
 
 I'm running on Linux Redhat 6.2 with the 2.4 kernel, mysql 3.23.44.
 
 Thanks,
 -- Confused In Bozeman
 
 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, January 15, 2002 11:18 AM
 To: [EMAIL PROTECTED]
 Subject: Re: innoDB confusion
 
 
 Hi!
 
 Weaver, Walt wrote in message ...
 
Do a show table status at the mysql prompt. Under type you should see
InnoDB.

 
 Yes, and in really problematic situations, where you have lost your .frm
 files, for example, you can use innodb_table_monitor as explained in the
 InnoDB online manual.
 
 Best regards,
 
 Heikki Tuuri
 Innobase Oy
 ---
 Order technical MySQL/InnoDB support at https://order.mysql.com/
 See http://www.innodb.com for the online manual and latest news on InnoDB
 
 
--Walt Weaver
 Bozeman, Montana

-Original Message-
From: Rutledge, Aaron [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 10:57 AM
To: Mysql List (E-mail)
Subject: RE: innoDB confusion


One final question, and I am done.  You have helped me
tremendously--thank you.  Wow, I really didn't expect the developer of
the software to answer my questions--this is terrific.

 Is there a way to verify that the tables I create are in the InnoDB
tablespace?  I am assuming that if I set 'default-table-type=innodb' in
my.ini that all tables created by the MySQL client will be created in
the innodb tablespace.  However, they also show up in the MySQL data
directory as .frm files.  How do I also know that it was created in the
InnoDB tablespace?  Aaron

 
 
 
 
 
 







Re: Altering InnoDB tables

2002-01-14 Thread John Kemp

I am also in the process of altering tables from MyISAM to InnoDB. I 
have one big problem - a very large table (> 5 million rows) with 5 
indices/indexes on it, including a couple that are UNIQUE but with NULL 
values allowed. Creating the last of these indices is taking an absurd 
amount of time on my test system, which makes it impossible for me to 
recommend this change for production, as our systems would be down for 
many days.

Here are the steps I took (note that after each failure, I recreated 
the original data structure):

1) I tried altering the table directly to be InnoDB. I stopped this 
after 4 days, when all it said in the process list was 'copying to tmp 
table' - after 4 days!
2) Then I tried mysqldumping the table structure, changing MyISAM -> 
InnoDB, dropping the table, and running the mysqldump output back into 
the database. The INSERTs ran horribly slowly - something like 50,000 
inserts in 4 hours. This extrapolates to a load that would run for 
several weeks on 5 million rows, so I stopped it again.
3) Then I tried dropping all the indices, altering the table to InnoDB 
and re-creating the indices (sketched just after this list). This 
worked pretty quickly until the final index creation, which is still 
running after more than 4 days.
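
To be concrete, step 3 was roughly the following sequence (the exact 
order of the re-creates isn't important; the index and column names are 
from the CREATE TABLE below):

ALTER TABLE Subscriber
  DROP INDEX AK1Subscriber,
  DROP INDEX XAK2Subscriber,
  DROP INDEX xie2Subscriber,
  DROP INDEX xie3Subscriber;

ALTER TABLE Subscriber TYPE=InnoDB;

ALTER TABLE Subscriber ADD UNIQUE INDEX AK1Subscriber (Email);
ALTER TABLE Subscriber ADD INDEX xie2Subscriber (optin, Gender, DOB);
ALTER TABLE Subscriber ADD INDEX xie3Subscriber (Zipcode);
-- this last statement is the one still running after 4+ days:
ALTER TABLE Subscriber ADD UNIQUE INDEX XAK2Subscriber (Username);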

What am I doing wrong? This seems like an unacceptable length of time to 
convert a valid MyISAM table to Innodb, yet I REALLY need the row-level 
locking ASAP!

Any help would be appreciated.

Table structure is shown below, as is the current mysqladmin processlist.

Thanks,

John



CREATE TABLE Subscriber (
   SubscriberID int(11) NOT NULL auto_increment,
   Email varchar(255) NOT NULL default '',
   Zipcode varchar(5) default NULL,
   InitialContactDate date NOT NULL default '0000-00-00',
   FirstName varchar(20) default NULL,
   LastName varchar(20) default NULL,
   OrigDate date NOT NULL default '0000-00-00',
   AlertWhenAvailable char(1) default NULL,
   Gender char(1) default NULL,
   DOB date default NULL,
   Password varchar(25) default NULL,
   SubscriberTypeID mediumint(9) default NULL,
   Attempts tinyint(4) NOT NULL default '0',
   Username varchar(255) default NULL,
   Address varchar(255) default NULL,
   City varchar(100) default NULL,
   State char(2) default NULL,
   Married char(1) default NULL,
   Age varchar(5) default NULL,
   Income varchar(20) default NULL,
   optin int(11) default NULL,
   Country char(2) default 'US',
   SourcePartnerID int(11) default '0',
   Occupation char(2) default NULL,
   PRIMARY KEY  (SubscriberID),
   UNIQUE KEY AK1Subscriber (Email),
   UNIQUE KEY XAK2Subscriber (Username),
   KEY xie2Subscriber (optin,Gender,DOB),
   KEY xie3Subscriber (Zipcode)
) TYPE=MyISAM;

__

+----+-------+-----------+------------+---------+--------+-------------------+--------------------------------------------------------------------+
| Id | User  | Host      | db         | Command | Time   | State             | Info                                                               |
+----+-------+-----------+------------+---------+--------+-------------------+--------------------------------------------------------------------+
| 87 | jkemp | localhost | streetmail | Query   | 250195 | copy to tmp table | alter table Subscriber add unique index XAK2Subscriber (Username) |
| 95 | jkemp | localhost |            | Query   | 0      |                   | show processlist                                                   |
+----+-------+-----------+------------+---------+--------+-------------------+--------------------------------------------------------------------+

__
Robert P Ricci wrote:

 I've recently converted from using MyISAM to InnoDB tables, and am
 pleased with InnoDB so far. However, the inability to use 'ALTER TABLE'
 with InnoDB tables is very problematic, as our software is still under
 heavy development and our database schema changes frequently. I'm
 working on a way to automate the drop/create/repopulate process to alter
 tables, but am running into a few problems:
 
 Is there a way to prevent clients from noticing that the table in
 question is temporarily gone? Something like locking the entire database
 for writing?
 
 Second, mysqldump doesn't give foreign key constraints in its table
 creation commands. Is there any way to retrieve these?
 
 







Re: InnoDB : Lock wait timeout exceeded; Try restarting transaction

2002-01-09 Thread John Kemp

Hi Sam,

I actually had a similar problem myself, but was unable to prove it was 
the persistent connection itself causing this. I'm wondering if this 
means that InnoDB thinks that a connection that is now 'sleeping' (i.e. 
a connection that was created and used, but is now idle yet still open) 
might be locking the whole table erroneously for some reason? Which 
version are you using? I could not figure out why InnoDB would think 
the table was locked, other than if someone had specifically said 
'LOCK TABLE' in a query, which wasn't the case.

Any thoughts?

John



Sam Lam wrote:

 I recently switched to InnoDB & persistent connections from PHP.
 
 Lately I've been getting these errors "Lock wait timeout exceeded; Try 
 restarting transaction" on an UPDATE on a table. The system is in 
 development, so there is at most one other user (a back end Perl script).
 
 When I switched PHP back to non-persistent connections I stopped getting 
 that error.
 
 How does one use persistent PHP connections & InnoDB to avoid this error?
 
 
 







Re: InnoDB : Lock wait timeout exceeded; Try restarting transaction

2002-01-09 Thread John Kemp

Heikki,

Hmm. That's interesting. So if you do a single command, say

INSERT INTO Table1 (X, Y, Z) VALUES (A, B, C);

you actually need to write (I'm not sure of the exact transactional 
syntax for MySQL) -

BEGIN;   -- begin a transaction
INSERT INTO Table1 (X, Y, Z) VALUES (A, B, C);
COMMIT;  -- end the transaction

even for a single statement with a single-row update/insert?

That's not something that is generally necessary with other RDBMSs.

Can you confirm that?
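
To make the two cases concrete (a sketch, reusing the illustrative 
Table1 from above with literal values):

-- autocommit on (the default): each statement is its own transaction,
-- so the row locks taken by the INSERT are released as soon as it finishes
SET AUTOCOMMIT=1;
INSERT INTO Table1 (X, Y, Z) VALUES (1, 2, 3);

-- autocommit off: the INSERT's row locks are held until an explicit COMMIT
SET AUTOCOMMIT=0;
INSERT INTO Table1 (X, Y, Z) VALUES (1, 2, 3);
COMMIT;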

Thanks,

John

Heikki Tuuri wrote:

 Hi!
 
 Looks like you are not committing your transactions. Every UPDATE and
 INSERT automatically sets row level locks, which are only removed when you
 do a COMMIT or ROLLBACK.
 
 Or you have set innodb_lock_wait_timeout too small in my.cnf.
 
 InnoDB does not set table level locks. Only LOCK TABLES sets table level
 locks.
 
 You can use innodb_lock_monitor to make the mysqld server program print
 (somewhat cryptic) information about who has locks and where. See the InnoDB
 manual at http://www.innodb.com/ibman.html
 
 Best regards,
 
 Heikki Tuuri
 Innobase Oy
 ---
 InnoDB - transactions, row level locking, and foreign key support for MySQL
 See http://www.innodb.com, download MySQL-Max from http://www.mysql.com
 
 ..
 
 Hi Sam,
 
 I actually had a similar problem myself, but was unable to prove it was
 the persistent connection itself causing this. I'm wondering if this
 means that INNODB thinks that a connection that is now 'sleeping' (ie.
 where a connection was created, used, but is now unused but still open)
 might be locking the whole table erroneously for some reason? Which
 version are you using? I could not figure out why Innodb would think the
 table was locked, other than if someone specifically said 'LOCK TABLE'
 in a query, which wasn't the case.
 
 Any thoughts?
 
 John
 
 
 
 Sam Lam wrote:
 
 
I recently switched to InnoDB & persistent connections from PHP.

Lately I've been getting these errors "Lock wait timeout exceeded; Try
restarting transaction" on an UPDATE on a table. The system is in
development, so there is at most one other user (a back end Perl script).

When I switched PHP back to non-persistent connections I stopped getting
that error.

How does one use persistent PHP connections & InnoDB to avoid this error?


 
 
 
 
 
 







Re: InnoDB : Lock wait timeout exceeded; Try restarting transaction

2002-01-09 Thread John Kemp

Walt / Heikki

Yes, I agree - with autocommit on, you do not need to supply COMMIT, 
and yes, that's the same as other RDBMSs. What I'm pointing out, 
though, is that I thought Heikki was suggesting that we need to supply 
the BEGIN/COMMIT for a single command even if autocommit was on, in 
order to not get the error "Lock wait timeout exceeded; Try restarting 
transaction". If that's the case, that would indicate that MySQL's 
behaviour is different from what I would expect.

With auto-commit ON, and no explicit 'LOCK TABLES' mentioned, I would 
not expect to get any error about a lock wait timeout unless the same 
ROW were being selected as were being updated, inserted or deleted. With 
a sleeping connection being the second potential row-locker, I would not 
expect to see this error ever.

What I'm saying is that this looks like a bug. Sam mentioned that he 
only had one other connection when he got this error, and that this 
connection was a sleeping, persistent connection. The behaviour I saw 
was the same: a sleeping connection was supposedly locking a row needed 
(for a delete, in my case). I don't think a sleeping connection (i.e. 
one not actually running a query) should ever lock out a running query.

I hope that clears up the confusion.

John

Weaver, Walt wrote:

 John wrote:
 
That's not something that is generally necessary with other RDBMSs.

 
 I disagree. I would imagine all transaction-oriented RDBMS's work this way.
 Oracle certainly does. You need to do an explicit commit or rollback to
 release the lock. (or close the cursor, etc.)
 
 Unless, of course, autocommit is on.
 
 --Walt Weaver
   Bozeman, Montana
 
 -Original Message-
 From: John Kemp [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, January 09, 2002 1:40 PM
 To: Heikki Tuuri
 Cc: [EMAIL PROTECTED]
 Subject: Re: InnoDB : Lock wait timeout exceeded; Try restarting
 transaction
 
 
 Heikki,
 
 Hmm. That's interesting. So if you do a single command, say
 
 INSERT INTO Table1 (X, Y, Z) VALUES ( A, B, C) ;
 
 You actually need to write (I'm not sure of the exact transactional 
 syntax for Mysql) -
 
 BEGIN ; --begin a transaction
 INSERT INTO Table1 (X, Y, Z) VALUES ( A, B, C) ;
 COMMIT ; -- end a transaction
 
 Even for a single statement with a single row update/insert?
 
 That's not something that is generally necessary with other RDBMSs.
 
 Can you confirm that?
 
 Thanks,
 
 John
 
 Heikki Tuuri wrote:
 
 
Hi!

Looks like you are not committing your transactions. Every UPDATE and
INSERT automatically sets row level locks, which are only removed when you
do a COMMIT or ROLLBACK.

Or you have set innodb_lock_wait_timeout too small in my.cnf.

InnoDB does not set table level locks. Only LOCK TABLES sets table level
locks.

You can use innodb_lock_monitor to make the mysqld server program print
(somewhat cryptic) information about who has locks and where. See the

 InnoDB
 
manual at http://www.innodb.com/ibman.html

Best regards,

Heikki Tuuri
Innobase Oy
---
InnoDB - transactions, row level locking, and foreign key support for

 MySQL
 
See http://www.innodb.com, download MySQL-Max from http://www.mysql.com

..

Hi SAm,

I actually had a similar problem myself, but was unable to prove it was
the persistent connection itself causing this. I'm wondering if this
means that INNODB thinks that a connection that is now 'sleeping' (ie.
where a connection was created, used, but is now unused but still open)
might be locking the whole table erroneously for some reason? Which
version are you using? I could not figure out why Innodb would think the
table was locked, other than if someone specifically said 'LOCK TABLE'
in a query, which wasn't the case.

Any thoughts?

John



Sam Lam wrote:



I recently switched to InnoDB & persistent connections from PHP.

Lately I've been getting these errors "Lock wait timeout exceeded; Try
restarting transaction" on an UPDATE on a table. The system is in
development, so there is at most one other user (a back end Perl script).

When I switched PHP back to non-persistent connections I stopped getting
that error.

How does one use persistent PHP connections & InnoDB to avoid this error?









 
 
 

Re: MySQL on Linux 2.4 question

2002-01-09 Thread John Kemp

Walt,

Yup, we use InnoDB with 3.23.46 on Linux 2.4.2. I can't tell you whether 
things are better than they were on 2.2 kernels, but we're updating 
tables just about that quickly, I'd say. The largest table we have is a 
log table which has 50 million rows in it so far. Thanks to InnoDB row 
locking, we've significantly improved insert versus select query 
performance.

I've heard (I think I read this in a magazine) that prior to the 2.4.4 
kernel there are some virtual memory issues with MySQL, so I'd be 
interested to know what kernel version you have... memory management 
changed a lot between 2.2 and 2.4, so that would probably account for 
the changes you're seeing.

John

Weaver, Walt wrote:

 Anyone else out there been playing with the new Linux 2.4 kernel? 
 
 I just upgraded a test server from 2.2 to 2.4 and reran some of my InnoDB
 tests. The results were dramatic; updating a 600,000 row table went from 21
 minutes and change to 6 minutes and change.
 
 Haven't tried 2.4 and MyISAM tables yet; on 2.2 the same update ran in about
 8 minutes.
 
 This is obviously a big improvement, if my tests are valid. It's SUCH a big
 improvement that I'm wondering if I messed something up.
 
 Any comments/statement of support/snide remarks will all be appreciated.
 
 Thanks,
 
 --Walt Weaver
   Bozeman, Montana
 
 
 







Re: InnoDB : Lock wait timeout exceeded; Try restarting transaction

2002-01-09 Thread John Kemp

Heikki,

Thanks very much for the explanation. That's an interesting question 
for the connection modules in Apache/PHP/DBI etc. Perhaps this is a 
problem with the way the connections are opened by those programs. I 
hadn't thought of that, so it would definitely be a good thing to test 
before we say the bug is in MySQL. I already looked for LOCK TABLES in 
all of our code, and it's not there - anywhere.
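
When we try to reproduce it, the innodb_lock_monitor Heikki mentioned 
earlier in this thread should show which connection actually holds the 
lock - a minimal sketch of switching it on and off (the table name is 
the special trigger; the column definition is arbitrary):

CREATE TABLE innodb_lock_monitor (a INT) TYPE=InnoDB;  -- start printing lock info
-- ...reproduce the lock wait timeout here...
DROP TABLE innodb_lock_monitor;                        -- stop the monitor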

Thanks for your help in clearing this up,

Cheers,

John

Heikki Tuuri wrote:

 Hi!
 
 It is a bug if the sleeping connection is in the auto-commit mode. But we
 need more information of the problem. If you encounter it, please send the
 exact sequence of SQL commands which leads to the problem.
 
 You may also test
 
 SET AUTOCOMMIT=1
 
 explicitly in your program.
 
 Note that LOCK TABLES switches auto-commit off until the table locks have
 been released.
 
 Regards,
 
 Heikki
 
 ...
 Walt / Heikki
 
 Yes, I agree - with autocommit on, you do not need to supply COMMIT, and
 yes, that's the same as other RDBMSs. What I'm pointing out though is
 that I thought Heikki was suggesting that we need to supply the
 BEGIN/COMMIT for a single command even if autocommit was on, in order to
 not get the error Lock wait timeout exceeded; Try restarting
 transaction. If that's the case, that would indicate that Mysql
 behaviour would be different to what I would expect.
 
 With auto-commit ON, and no explicit LOCK TABLES' mentioned, I would
 not expect to get any error about a lock wait timeout unless the same
 ROW were being selected as were being updated, inserted or deleted. With
 a sleeping connection being the second potential row-locker, I would not
 expect to see this error ever.
 
 What I'm saying is that this looks like a bug. Sam mentioned that he
 only had one other connection when he got this error, and that this
 connection was a sleeping, persistent connection. The behaviour I saw
 was the same. A sleeping connection was supposedly locking a row needed
 (for a delete in my case). I don't think a sleeping connection (ie. one
 not actually running a query) should ever lock out a running query.
 
 I hope that clears up the confusion.
 
 John
 
 
 
 
 
 







Re: PHP memory problem

2002-01-02 Thread John Kemp

Dan,

There are a couple of potential things you could change.

1. On the MySQL end, there are a couple of configuration values for 
timing out idle (sleeping) MySQL connections - by default the timeouts 
are set to 8 hours. You can make the timeouts shorter in my.cnf - 
however, be careful: you may have connections that NEED to be open that 
long. Think about whether you need to have persistent connections to 
the database. If not, then set the values lower for the following:

# 14400 seconds is 4 hours (the default is 8 hours, or 28800 seconds)
# if you don't need persistent db connections, you can set these much
# lower - e.g. 10 minutes, or an hour

set-variable = interactive_timeout=14400
set-variable = wait_timeout=14400

2. On the Apache/PHP end - if someone presses the 'stop' button in the 
browser, your script won't necessarily complete, which means it may 
keep holding a db connection, and Apache/PHP won't kill the process 
holding that connection. This can cause the process to hold on to its 
memory. A good way to test this is to check the number of mysqld 
threads (ps auwx | grep mysqld | wc -l will give you the number), then 
restart your web server and run the same command again to compare. You 
can also run mysqladmin processlist to check this. You should also make 
sure that your code definitely closes its database connections (even if 
an error occurs).
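
A sketch of the same check from the mysql client:

SHOW PROCESSLIST;
-- threads whose Command is 'Sleep' with a large Time value are idle
-- (often persistent) connections that are still holding their resources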

John


Dan Liu wrote:

 Hi everyone,
 when we execute queries through MySQL using PHP, a significant amount of memory is 
being used in the process.  We have noticed that the memory is not being released 
again for several hours.  Despite attempts to force it to release memory in the code, 
the problem is still occurring.  Does anyone know anything about why this behavior may 
be occurring and any solutions we could try?  Is this a PHP problem or a MySQL 
problem?  Using pconnect versus connect does not appear to be the problem.
 
 Dan
 
 
 
 







Re: mysql.sock??

2001-12-19 Thread John Kemp

You should have an error file in /usr/local/mysql/var (or wherever your 
database files are) called something like linuxdev1.err. I'd look in 
there to see what your error is. Do you have a my.cnf file with startup 
options? If not, maybe you should create one (see the documentation at 
www.mysql.com/doc for more info). Perhaps your databases and 
configuration aren't in /usr/local/mysql/var? But look for the error 
file first - that should tell you something useful.
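
If you do create one, a minimal my.cnf sketch might look like this (the 
paths are just what a source install typically uses - adjust them to 
wherever your data and socket actually live):

[mysqld]
datadir = /usr/local/mysql/var
socket  = /tmp/mysql.sock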

John

John Kemp,
Director, Software Development
Streetmail Inc.


Mike Blain wrote:

 I keep trying to start it and get this:
 
 [root@linuxdev1 mysql-3.23.46]#
 /usr/local/mysql/bin/safe_mysqld --user=mysql 
 [1] 1822
 [root@linuxdev1 mysql-3.23.46]# Starting mysqld daemon with databases from
 /usr/local/mysql/var
 011219 18:21:40  mysqld ended
 
 It starts and immediately stops. Been combing forums and install instructions
 for info on this and haven't had much luck.
 
 keep trying this to no avail as well:
 
 [root@linuxdev1 mysql-3.23.46]# /etc/init.d/mysqld start
 bash: /etc/init.d/mysqld: No such file or directory
 
 
 
 -Original Message-
 From: Quentin Bennett [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, December 19, 2001 5:44 PM
 To: '[EMAIL PROTECTED]'; [EMAIL PROTECTED]
 Subject: RE: mysql.sock??
 
 
 Hi,
 
 mysqld will create it when it runs - have you started the server?
 
 Quentin
 
 -Original Message-
 From: Mike Blain [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, 20 December 2001 2:07 p.m.
 To: [EMAIL PROTECTED]
 Subject: mysql.sock??
 
 
 
 I just recently installed MySQL from source. Trying to set the root password
 and keep getting this error:
 
 /usr/local/mysql/bin/mysqladmin: connect to server at 'localhost' failed
 error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock'
 (111)'
 Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists!
 
 How do I create mysql.sock?
 
 Thanks,
 Mike
 
 
 
 The information contained in this email is privileged and confidential
 and intended for the addressee only. If you are not the intended
 recipient, you are asked to respect that confidentiality and not
 disclose, copy or make use of its contents. If received in error
 you are asked to destroy this email and contact the sender immediately.
 Your assistance is appreciated.
 
 
 
 
 
 
 







Innodb and NULL in UNIQUE index

2001-12-13 Thread John Kemp

Hi,

We have just started using InnoDB tables to get maximum performance 
when we have a large number of users fairly simultaneously accessing 
some of our large tables. Row-locking is a good thing for us; that's 
why we like InnoDB. But... there's a problem. One of our tables 
contains a unique index, which we need. And when I try to alter that 
table to be an InnoDB table, I get:

Error 1062: Duplicate value 'NULL' in key 3.

I thought that perhaps databases shouldn't allow NULL values in a UNIQUE 
index - obviously, if there are several NULL values for a field, a 
unique index would seem to be no good. But it seems that generally 
accepted practice is to allow NULL in a UNIQUE index.

In which case, how can I get my UNIQUE index with NULLs to work with InnoDB?

Or is this a B U G?
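
For anyone who wants to reproduce it, a stripped-down sketch of the 
situation (an illustrative table; the real one is much larger):

CREATE TABLE t (
  id INT NOT NULL PRIMARY KEY,
  Username VARCHAR(255) DEFAULT NULL,
  UNIQUE KEY (Username)
) TYPE=MyISAM;

INSERT INTO t VALUES (1, NULL), (2, NULL);  -- MyISAM accepts several NULLs here
ALTER TABLE t TYPE=InnoDB;                  -- this is where the duplicate-key error above appears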

John Kemp
Director, Software Development
Streetmail, Inc.

