Re: Moving database from windoze to Linux

2001-07-27 Thread Dave Hewlett

It may be a solution for you... I simply copied the MySQL database files from
Windows to SuSE 7.1 and they worked just fine!
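
If copying the raw files across isn't an option, the export/import route should
also work once the terminators match the exported file. A minimal sketch, assuming
a tab-separated export with Windows (CRLF) line endings and illustrative file and
table names - NULLs in every column except the key usually mean the field or line
terminators given to LOAD DATA don't match the file:

    LOAD DATA LOCAL INFILE '/tmp/mytable.txt'
        INTO TABLE mytable
        FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\r\n';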

David.

Glyn Davies wrote:

 Good day,
 I have created a database using MySQL under Windoze and am trying to move the
 data to MySQL running on Linux. I have exported the data to a res file - I
 can't see how to do a mysqldump in Windoze. I then try to import the file on
 the Linux machine using LOAD DATA, but end up with either NULL in all the
 fields except the primary key field or, if I use ENCLOSED BY '', with the data
 enclosed by quotes. How do I do this properly, please?
 TIA
 Glyn
 Glyn Davies
 Cirrus-TechVue
 South Africa
 Tel: +27 11 783 1508
 www.cirrus.co.za






Re: use one database or many databases

2001-03-14 Thread Dave Hewlett

I've run into a situation where I need to split an enormous database (only a
million rows or so, but the rows are very long - possibly many KB each).
Using JDBC and prepared statements this would be easier if the driver
implemented a setSQL(int x, String text) method that could substitute text for
a '?' placeholder standing in for the table name in the SQL statement.
I can imagine other uses for this too.

Unfortunately this is not a feature in JDBC 2.0 or 3.0.
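
In the meantime, a workable alternative to substituting table names is a MERGE
table over the split tables, so reports can still see them as one. A rough
sketch, assuming MySQL 3.23.25 or later and purely illustrative table names (the
underlying tables must be identical MyISAM tables):

    CREATE TABLE client_a_log (id INT NOT NULL, msg CHAR(64), PRIMARY KEY (id)) TYPE=MyISAM;
    CREATE TABLE client_b_log (id INT NOT NULL, msg CHAR(64), PRIMARY KEY (id)) TYPE=MyISAM;

    -- one logical table spanning both pieces, for cross-client reports
    CREATE TABLE all_client_log (id INT NOT NULL, msg CHAR(64), KEY (id))
        TYPE=MERGE UNION=(client_a_log, client_b_log);

    SELECT COUNT(*) FROM all_client_log;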

Dave.

James Blackwell wrote:

 Yes, but I'm really lazy and changing one connect string is easier than going 
through 200,000 lines of code and changing table names. ;)

 That's a good spin on it.  Thanks.

 --James

 ---
 From your message of Wed, 14 Mar 2001 14:09:30 -0600:

  Or you could use one database, look up the clients' table names, and
  use a merge table for reports.
 James Blackwell wrote:
 
  I have a similar situation where I've got a huge database that maintains data for 
quite a few clients.  Queries have gotten extremely sluggish.
 
  What I'm working on right now is to have a control database with a single table 
that contains a unique identifier for each client and a database name.  When they log 
in, it figures out the name of the database to use by looking in this table.  Each 
instance of the program only accesses this one database (after finding it in the 
control database).  Since all of the programs that make up the suite call the same routine to 
establish a connection, it is a fairly painless update that I hope will provide 
substantial performance increases.
 
  The only drawback to this is if you need to run a lot of reports across clients.  
A few administrative reports wouldn't be so bad, but I wouldn't want to, say, 
construct a web page on the fly based on a query accessing 50 different data sources. 
 Since this isn't the case here, it shouldn't be a problem.
 
  I'm by no means a guru, but this just seems like a logical way to handle the 
problem.  If there is some major logic flaw here, please let me know now! ;)
 
  --James
  [EMAIL PROTECTED]
 
  ---
 From your message of Wed, 14 Mar 2001 08:26:21 -0600:
 
 If all the data will be used by the same application then I would suggest
 that you stick with a single database.
 Cal
 http://www.calevans.com
 -Original Message-
 From: abdelhamid bettache [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 14, 2001 8:11 AM
 To: [EMAIL PROTECTED]
 Subject: use one database or many databases
 Hello,
    I have to design a huge database for all the universities. Is it better
  to have a database for each university, or one database for all the universities?
    If I use one database, then I'll have one table for all students, which
  contains about 30 rows.
  Thank you
 
 







Re: insert delayed and apparent blockage of thread

2001-03-13 Thread Dave Hewlett



Dave Hewlett wrote:

 To...

 I previously placed this as a comment on a seemingly similar situation.
 However, no one noticed it. As it is a possible bug in MySQL, I have re-posted it.

 
  I had an experience the other day in a controlled test environment that appears
  to be similar. An advantage is that I know precisely what was taking place.
  Consider the following:
  1) A servlet makes a single 'insert delayed' into a relation (no auto-increment
  field - just a simple primary key char(16)).
  2) Shortly afterwards the same servlet attempts to see if the record has been
  added.
  3a) On the occasions that it finds it, all continues just fine.
  3b) When it is not found, the transaction continues normally but the delayed
  insert NEVER occurs. Furthermore, if a further attempt is made to make another
  insert delayed on the same relation, this also never occurs.
 
  In case 3b (which only happens occasionally - not most of the time) I
  presume the thread handling the inserts has become jammed in some way.
 
  Regards,
 
  Dave.
 

 PS: I know I did something unusual in my program (which I have since corrected);
 nevertheless, it should not have had this effect - an unpredictable result at
 worst.






Re: shutdown and insert delayed

2001-03-09 Thread Dave Hewlett

Steven,

I had an experience the other day in a controlled test environment that appears
to be similar. An advantage is that I know precisely what was taking place.
Consider the following:
1) A servlet makes a single 'insert delayed' into a relation (no auto-increment
field - just a simple primary key char(16)).
2) Shortly afterwards the same servlet attempts to see if the record has been
added.
3a) On the occasions that it finds it, all continues just fine.
3b) When it is not found, the transaction continues normally but the delayed
insert NEVER occurs. Furthermore, if a further attempt is made to make another
insert delayed on the same relation, this also never occurs.

In case 3b (which only happens occasionally - not most of the time) I
presume the thread handling the inserts has become jammed in some way.
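
For reference, the pattern boils down to something like the following (table and
column names here are only illustrative):

    INSERT DELAYED INTO session_log (session_id, created)
        VALUES ('ABCDEFGH12345678', NOW());

    -- shortly afterwards, from the same servlet:
    SELECT session_id FROM session_log WHERE session_id = 'ABCDEFGH12345678';

With DELAYED the row is queued by a separate handler thread, so the SELECT can
legitimately come back empty for a moment; the problem in case 3b is that the
queued row never turns up at all, and later delayed inserts on the same table
are lost as well.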

Regards,

Dave.

Steven Roussey wrote:

 Hi,

 I was trying to use debugging (creating a trace and log file) in order to
 find the crashing problem we have been experiencing. However, I keep coming
 up against another problem along the way -- mysqld not shutting
 down.

 This time I have a log file and a trace file. When I shut down, most of the
 processes quit, but a few remain (exhibit A). At this point the log stops
 (no additional info being written) (exhibit B), but the trace continues. Then
 the trace file stops (exhibit C). Everything is on hold. So I have a trace
 file and a log file that are no longer being written to, but still have
 processes running. What is going on?

 Sincerely,

 Steven Roussey
 Network54.com
 http://network54.com/?pp=e

 exhibit A:
 11519 pts/2S  0:00 /bin/sh ./bin/safe_mysqld --user=root -O back_log=20 -O table_cache=3500 --log --debug=d,info,query
 11567 pts/2S  1:48 /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --user=ro
 11569 pts/2S  0:28 /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --user=ro
 11570 pts/2S  0:01 /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --user=ro
 11597 pts/2S  2:49 /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --user=ro
  5391 pts/2S  0:00 /usr/local/mysql/bin/mysqladmin shutdown
  5405 pts/2S  0:00 /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --user=ro
  5546 pts/0R  0:00 ps ax

 Exhibit B:
 # tail switch.network54.com.err -n20
 010307 12:33:41  Aborted connection 87779 to db: 'logging' user: 'apache'
 host: `tank.f' (Got timeout reading communication packets)
 010307 12:33:41  Aborted connection 90741 to db: 'logging' user: 'apache'
 host: `morpheus.f' (Got timeout reading communication packets)
 010307 12:33:41  Aborted connection 90700 to db: 'logging' user: 'apache'
 host: `mouse.f' (Got timeout reading communication packets)
 010307 12:33:41  Aborted connection 90714 to db: 'logging' user: 'apache'
 host: `neo.f' (Got timeout reading communication packets)
 010307 12:33:41  Aborted connection 90722 to db: 'logging' user: 'apache'
 host: `morpheus.f' (Got timeout reading communication packets)
 010307 12:33:41  Aborted connection 90733 to db: 'logging' user: 'apache'
 host: `morpheus.f' (Got timeout reading communication packets)
 010307 12:33:41  Aborted connection 90736 to db: 'logging' user: 'apache'
 host: `morpheus.f' (Got timeout reading communication packets)
 010307 12:33:41  Aborted connection 90729 to db: 'logging' user: 'apache'
 host: `neo.f' (Got timeout reading communication packets)
 010307 12:33:41  Delayed insert thread couldn't get requested lock for table
 log_day_20010307
 010307 12:33:41  Aborted connection 72115 to db: 'logging' user: 'apache'
 host: `neo.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 70096 to db: 'logging' user: 'apache'
 host: `mouse.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 66934 to db: 'logging' user: 'apache'
 host: `mouse.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 56536 to db: 'logging' user: 'apache'
 host: `mouse.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 46514 to db: 'logging' user: 'apache'
 host: `tank.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 40577 to db: 'logging' user: 'apache'
 host: `tank.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 45130 to db: 'logging' user: 'apache'
 host: `tank.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 39806 to db: 'logging' user: 'apache'
 host: `mouse.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 39819 to db: 'logging' user: 'apache'
 host: `tank.f' (Got an error writing communication packets)
 010307 12:33:41  Aborted connection 39476 

Re: Performance of Mysql updation ..

2001-02-14 Thread Dave Hewlett

Thiru,

Try using the DELAYED keyword with your INSERTs - it batches them up more
efficiently in another thread.
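
For example (table and column names are illustrative), instead of a plain INSERT:

    INSERT DELAYED INTO TABLE1 (col1, col2) VALUES ('abc', 'def');

Note that DELAYED applies only to INSERTs (on ISAM/MyISAM tables), so it won't
speed up the UPDATE statement itself, but it does take the insert traffic out of
the critical path of the script.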

Dave.

Thiru wrote:

 Hello,

 I am creating a script which runs offline from the mainstream of our system
 and updates our databases.
 The script includes INSERTING, UPDATING and DELETING records.

 I am performing an update operation using Python which takes quite some time.
 Something like:
 update TABLE1 set col1=val1 where col2=something;
 col2 is indexed.
 The value of 'something' is set inside the Python script.

 This statement is executed at least 25 times.

 I believe that for the amount of work it is doing it is really fast, but
 is there any way to speed up the updates further?

 e.g., by changing or adding some config options to my.cnf, etc.

 Please help.

 Thiru

 -o0o
   "There is no finish line, you can always learn"
  "You have to keep pressure on yourself,  you have to work on your weaknesses".

 Thiru
 S/W Engineer, Service Dvlpment Group
 Infoseek,Japan  Voice - (81)-3-5453-2056
 







Choice of BLOB or store in a directory

2001-01-31 Thread Dave Hewlett

I would much appreciate hearing from experienced MySQL practitioners on
the following:

I have a large relation in mind (strictly fixed-length rows, for performance)
with potentially 1-3 million entries.
Associated with each of these rows I have a variable number of BLOBs.
I would not consider storing them in the table itself, for obvious performance
and other reasons, but I have several choices and would like your opinions
on them:

A) Would you store them all in a single relation?
B) Because of the maximum table size, would you consider one table for each
possible kind of blob?
C) Would you consider storing them as files in directories?
D) If you recommend (A) or (B), which table type would you recommend?

In particular, does anyone have experience of the choice between (B) and (C)
on performance grounds?
e.g., how does a directory search compare to an index in MySQL?
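
For concreteness, option (A) would look roughly like the sketch below; the names,
sizes and the MyISAM choice are just assumptions for illustration, not a
recommendation:

    CREATE TABLE item (
        item_id  CHAR(16) NOT NULL,
        title    CHAR(80) NOT NULL,        -- strictly fixed-length row
        PRIMARY KEY (item_id)
    ) TYPE=MyISAM;

    CREATE TABLE item_blob (
        item_id  CHAR(16) NOT NULL,
        seq      SMALLINT NOT NULL,        -- allows a variable number of blobs per item
        kind     TINYINT  NOT NULL,        -- which sort of blob this is
        body     MEDIUMBLOB,
        PRIMARY KEY (item_id, seq)
    ) TYPE=MyISAM;

Option (B) would simply be one such blob table per kind.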

Regards,

Dave.

