Clean up temp files after creating new db

2004-05-20 Thread Jonathan Villa
I'm administering a Red Hat machine which is used solely for MySQL.  The
person working with the db has more than 15 million records in various
tables...they were trying to run some queries to create one table from
these many tables.  When this was happening, they ran out of disk space. 
They had about 4GB free prior to running the queries... but are now down
to 1.3GB (after I freed up some disk space, they were at 0) and no new
table.  Are there temp files created when running queries...I have
restarted mysql several times, looked in /usr/local/mysql/data but nothing
looks odd...I even tried restarting the machine...

I also tried a find / -size +5000 but nothing interesting is returned...

any ideas?




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: GRANT semantics...

2004-05-20 Thread Michael Stassen

John Ratliff wrote:
I'm trying to use a grant statement to grant privileges to a user on a 
set of database names.

e.g. for some user k, I want them to be able to have complete access to 
any database named k_*.

I know this can be done. The test database is set up this way by default.
Here is what I tried:
grant all on k_*.* to 'user'@'somedomain' identified by 'somepassword'
It replies with a 'you have an error in your SQL syntax' error.
However, I can do this:
grant all on k_db.* to 'user'@'somedomain' identified by 'somepassword'
update db set db = 'k\_%' where user = 'user'
This is how the test database looked in the mysql.db table, so I 
mirrored it. And that works.
But, can this be done in a grant statement? I was fairly sure I did this 
before, but this is a new installation and it doesn't work.

I've tried variations of the *. I tried k_%.* k_%, k\_%.*, etc. None 
of these work.

I have looked in the mysql manual in the user creation section, but I 
haven't found anything similar to what I'm trying to do. The 
grant/revoke syntax section also says nothing about it. It just says db_name.* if you 
want to grant privileges on a database.

My mysql version is 4.0.18.
Thanks,
John Ratliff
You must have missed it.  From the GRANT and REVOKE Syntax page in the 
manual http://dev.mysql.com/doc/mysql/en/GRANT.html:

Note: the `_' and `%' wildcards are allowed when specifying database names 
in GRANT statements that grant privileges at the global or database levels. 
This means, for example, that if you want to use a `_' character as part of 
a database name, you should specify it as `\_' in the GRANT statement, to 
prevent the user from being able to access additional databases matching the 
wildcard pattern; for example, GRANT ... ON `foo\_bar`.* TO 

So, you need to backslash escape the underscore, and you need to quote the 
db name with backticks.

GRANT ALL ON `k\_%`.* TO [EMAIL PROTECTED] IDENTIFIED BY 'somepassword';
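
A quick way to confirm the escaped pattern was stored as intended is SHOW GRANTS
(the account name here is just the example from this thread):

SHOW GRANTS FOR 'user'@'somedomain';
-- the output should include a line like:
--   GRANT ALL PRIVILEGES ON `k\_%`.* TO 'user'@'somedomain'
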
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


test please ignore

2004-05-20 Thread Steve Davies
had no mysql list mail for a while
just testing please ignore
mysql

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Query help with grouping and latest date.

2004-05-20 Thread Duncan Hill
I have a table that has data that looks like:
+----+---------------------+-------+---------+
| id | recdate             | mount | perused |
+----+---------------------+-------+---------+
|  1 | 2004-05-20 10:46:12 | QUAR  |      80 |
|  2 | 2004-05-20 11:05:11 | QUAR  |      99 |
|  2 | 2004-05-20 11:33:27 | ROOT  |      99 |
|  2 | 2004-05-20 11:33:34 | QUAR  |      88 |
+----+---------------------+-------+---------+

I'm trying to work out a query that would list the latest entry per id, per 
mount, and not having much luck.  In this case, the result would be
1,QUAR,80
2,ROOT,99
2,QUAR,88

(there are a few other fields to be displayed, but I'll take a base query and 
adapt).

Can anyone give me a shove in the right direction ?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query help with grouping and latest date.

2004-05-20 Thread Egor Egorov
Duncan Hill [EMAIL PROTECTED] wrote:
 I have a table that has data that looks like:
 +----+---------------------+-------+---------+
 | id | recdate             | mount | perused |
 +----+---------------------+-------+---------+
 |  1 | 2004-05-20 10:46:12 | QUAR  |      80 |
 |  2 | 2004-05-20 11:05:11 | QUAR  |      99 |
 |  2 | 2004-05-20 11:33:27 | ROOT  |      99 |
 |  2 | 2004-05-20 11:33:34 | QUAR  |      88 |
 +----+---------------------+-------+---------+
 
 I'm trying to work out a query that would list the latest entry per id, per 
 mount, and not having much luck.  In this case, the result would be
 1,QUAR,80
 2,ROOT,99
 2,QUAR,88
 
 (there are a few other fields to be displayed, but I'll take a base query and 
 adapt).
 
 Can anyone give me a shove in the right direction ?
 

From version 4.1 a subquery helps you get the result using one query.
In earlier versions you can use a temporary table, for example:

CREATE TEMPORARY TABLE tmp SELECT id, MAX(recdate) as recdate, mount FROM tbl GROUP BY 
id, mount;
SELECT tbl.id, tbl.mount, tbl.perused FROM tbl, tmp
WHERE tbl.id=tmp.id AND tbl.recdate=tmp.recdate;

Some info you can also find in the manual at:
http://dev.mysql.com/doc/mysql/en/example-Maximum-column-group-row.html




-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query help with grouping and latest date.

2004-05-20 Thread Egor Egorov
Egor Egorov [EMAIL PROTECTED] wrote:
 Duncan Hill [EMAIL PROTECTED] wrote:
 I have a table that has data that looks like:
 +----+---------------------+-------+---------+
 | id | recdate             | mount | perused |
 +----+---------------------+-------+---------+
 |  1 | 2004-05-20 10:46:12 | QUAR  |      80 |
 |  2 | 2004-05-20 11:05:11 | QUAR  |      99 |
 |  2 | 2004-05-20 11:33:27 | ROOT  |      99 |
 |  2 | 2004-05-20 11:33:34 | QUAR  |      88 |
 +----+---------------------+-------+---------+
 
 I'm trying to work out a query that would list the latest entry per id, per 
 mount, and not having much luck.  In this case, the result would be
 1,QUAR,80
 2,ROOT,99
 2,QUAR,88
 
 (there are a few other fields to be displayed, but I'll take a base query and 
 adapt).
 
 Can anyone give me a shove in the right direction ?
 
 
 From version 4.1 subquery helps you to get result using one query.
 In earlier version you can use temporary table, for example:
 
 CREATE TEMPORARY TABLE tmp SELECT id, MAX(recdate) as recdate, mount FROM tbl GROUP 
 BY id, mount;
 SELECT tbl.id, tbl.mount, tbl.perused FROM tbl, tmp
 WHERE tbl.id=tmp.id AND tbl.recdate=tmp.recdate;
 

Oops!
Forgot one condition in the WHERE clause:

SELECT tbl.id, tbl.mount, tbl.perused FROM tbl, tmp
WHERE tbl.id=tmp.id AND tbl.mount=tmp.mount AND tbl.recdate=tmp.recdate;



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



OT: anyone had a surge of spam recently?

2004-05-20 Thread Daniel Kasak
Hi all.
Over the last 2 days, both my home and work email addresses have been 
hit with a sudden surge in spam, and the content in both cases is the 
same. Another strange thing is that my home address got basically none - 
until now.
I'm assuming my addresses have been harvested from this list, as this is 
the only list I post to from both addresses.
Anyone else noticed an increase?

Dan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


tests please ignore

2004-05-20 Thread Victor Medina
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
- --
~ |...|
~ |  _    _|Victor Medina M   |
~ |\ \ \| |  _ \ / \   |Linux - Java - MySQL  |
~ | \ \ \  _| | |_) / _ \  |Dpto. Sistemas - Ferreteria EPA   |
~ | / / / |___|  __/ ___ \ |[EMAIL PROTECTED]  |
~ |/_/_/|_|_| /_/   \_\|Tel: +58-241-8507325 - ext. 325   |
~ ||Cel: +58-412-8859934  |
~ ||geek by nature - linux by choice  |
~ |...|
- ---
.- Este mensaje está digitalmente firmado para garantizar
~   su origen
.- El intercambio de llaves públicas se realiza a petición
~   de las partes interesadas via e-mail
- ---
.- This message has been digitally signed
.- Public Key (PGP or GPG) available upon request
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org
iD8DBQFArK3Q8WJSBCrOXJ4RApvwAJ9F1KP/8wBfDorSv9I04Z2DNlvO9gCeO0kU
iTiOcrZ1bXUdPK7YCRljGkE=
=B1R4
-END PGP SIGNATURE-
-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Query help with grouping and latest date.

2004-05-20 Thread Duncan Hill
On Thursday 20 May 2004 12:49, Egor Egorov might have typed:
 Duncan Hill [EMAIL PROTECTED] wrote:
  I have a table that has data that looks like:
  +----+---------------------+-------+---------+
  | id | recdate             | mount | perused |
  +----+---------------------+-------+---------+
  |  1 | 2004-05-20 10:46:12 | QUAR  |      80 |
  |  2 | 2004-05-20 11:05:11 | QUAR  |      99 |
  |  2 | 2004-05-20 11:33:27 | ROOT  |      99 |
  |  2 | 2004-05-20 11:33:34 | QUAR  |      88 |
 

 From version 4.1 subquery helps you to get result using one query.

Thanks Egor, I'll start poking at it using subqueries, as I'm running 4.1.1.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Table Relation Design Question

2004-05-20 Thread Lewick, Taylor
Hi all, I am having a little trouble deciding how to relate some of my
tables together.

 

I was wondering about creating one big lookup table to relate 3 or 4
tables together, but wasn't sure if that was a good id, or should I have
a look up table

For each pair of tables.

 

Here is a simple example of my tables.

 

Orgs:  org_id   org_name  (org_id is primary key)

 

Contacts:  con_id, con_name, org_id (con_id is primary, org_id is
foreign key)

 

Events:   ev_id, ev_name, org_id, con_id (ev_id is primary, con_id and
org_id are foreign keys)

 

This is centered around organizations, so every contact must belong to
an org, likewise for an event.  I will create an org called None in case
they just want

To track the occasional lone contact or internal event.

 

But because an organization can have many contacts and many events, I
was thinking of using lookup tables.

 

I.e., Contacts are assigned to Organizations,

So have a table called assigned with org_id and con_id as a composite
primary key. And each is a foreign key back to the correct table...

 

And should I have a table that links orgs and events and contacts and
events, or should I have one lookup table

That relates them all together, i.e. orgs contacts, and events..?

 

To simplify, is It better to have many smaller lookup tables or one big
one?

 

Thanks,

Taylor



Very large query text?

2004-05-20 Thread Boyd E. Hemphill
I am having a tough time of it in production today ... Any help would be
appreciated.

I am executing a query of about 10k in size. 

When I do this it takes about 15 seconds.

If I remove a bunch of case, sum and if statements but get the same
explain plan, the query runs in 5 seconds (its size is less than 500
bytes).

What is the limiting factor?  Is it the size of the query or all the
computation I am doing with case, etc?  

I have tried increasing query_prealloc_block but this did not help.  Any
suggestions or advice would be appreciated.

Thanks
Boyd


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Query help with grouping and latest date.

2004-05-20 Thread emierzwa
Since you're on 4.1, give this a try...

select * 
from tbl as a
where a.recdate=(select max(b.recdate)
 from tbl as b
 where b.id=a.id and b.mount=a.mount)

Ed
-Original Message-
From: Duncan Hill [mailto:[EMAIL PROTECTED] 
Subject: Re: Query help with grouping and latest date.

On Thursday 20 May 2004 12:49, Egor Egorov might have typed:
 Duncan Hill [EMAIL PROTECTED] wrote:
  I have a table that has data that looks like:
  +----+---------------------+-------+---------+
  | id | recdate             | mount | perused |
  +----+---------------------+-------+---------+
  |  1 | 2004-05-20 10:46:12 | QUAR  |      80 |
  |  2 | 2004-05-20 11:05:11 | QUAR  |      99 |
  |  2 | 2004-05-20 11:33:27 | ROOT  |      99 |
  |  2 | 2004-05-20 11:33:34 | QUAR  |      88 |
 

 From version 4.1 subquery helps you to get result using one query.

Thanks Egor, I'll start poking at it using subqueries, as I'm running
4.1.1.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



User information in the binlog (again...)

2004-05-20 Thread SGreen
I remember reading about someone inquiring about including the user's
information in each query logged to the binlog. Neither mysqlbinlog nor
show binlog events shows which user performed which changes to the
database (at least for me:4.1.1a-alpha-nt-log / binary distro ) .  Is that
information available somewhere else? I just spent the last hour or so
trying to dig that old thread out of the archives (I have been using Google
and the search page at lists.mysql.com) but I just can't seem to find it.
Could someone please help point me back to that old thread?

May I humbly suggest a startup flag/variable that will permit integrating
the user's name _and_ their IP address as meta-information (comments) into
the binlog?  I just reread the online documentation and didn't see a task
of that nature in any of the ToDos. IMO having the userid and IP address in
the binlog would be very useful in disaster analysis. I am not as
interested in what queries they run as much as what data they change but
those two basic pieces of information would go a LONG way to track down who
screwed up the data. (Yes, I DO plan to restrict direct access to the
actual server and have an application handle the majority of the data
requests and updates. For those power users who feel they must have
direct data access, I plan on giving them read-only access anyway. The
extra information will help me figure out who did what if things still go
sour.)

Any other thoughts out there?

Thanks!
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Very large query text?

2004-05-20 Thread Victor Pendleton
Can you post the explain plan and the query?

-Original Message-
From: Boyd E. Hemphill
To: [EMAIL PROTECTED]
Sent: 5/20/04 9:32 AM
Subject: Very large query text?

I am having a tough time of it in production today ... Any help would be
appreciated.

I am executing a query of about 10k in size. 

When I do this it takes about 15 seconds.

If I remove a bunch of case, sum and if statements but get the same
explain plan, the query runs in 5 seconds (its size is less than 500
bytes).

What is the limiting factor?  Is it the size of the query or all the
computation I am doing with case, etc?  

I have tried increasing query_prealloc_block but this did not help.  Any
suggestions or advice would be appreciated.

Thanks
Boyd


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: OT: anyone had a surge of spam recently?

2004-05-20 Thread ct85711
I know my email accounts have been getting quite a lot of spam, though it's been like that 
for a while now for me.  As far as I know my email was harvested from other lists too, 
which may have been the case here as well.

Chris
 
 From: Daniel Kasak [EMAIL PROTECTED]
 Date: 2004/05/20 Thu AM 08:14:03 CDT
 To: [EMAIL PROTECTED]
 Subject: OT: anyone had a surge of spam recently?
 
 Hi all.
 
 Over the last 2 days, both my home and work email addresses have been 
 hit with a sudden surge in spam, and the content in both cases is the 
 same. Another strange thing is that my home address got basically none - 
 until now.
 I'm assuming my addresses have been harvested from this list, as this is 
 the only list I post to from both addresses.
 Anyone else noticed an increase?
 
 Dan
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Limit operations by condition

2004-05-20 Thread Marco Lazzeri
I would like to grant SELECT/UPDATE/INSERT/DELETE privileges _only_ on a particular set 
of rows.

The GRANT syntax I'm dreaming could be:

GRANT SELECT, UPDATE, INSERT, DELETE
ON table_a TO user_a WHERE table_a.foo = 'bar'


I've tried using TEMPORARY TABLEs as follow:

CREATE TEMPORARY TABLE table_a 
SELECT * FROM table_a WHERE foo = 'bar';

And it was good, but just for SELECT (I can't define rules on UPDATE/INSERT/DELETE to 
commit the modifications to the original table).

If this were PostgreSQL, I would be using VIEWs with RULEs.

Any hints?

Thanks,
Marco



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help with Joins

2004-05-20 Thread Peter Brawley
I want to compose a query that will pull out each
distinct instance of a form (in other words, no
duplicates) and then list which procedures use that
form

Something like...

SELECT DISTINCT formName
FROM forms INNER JOIN procedures USING (formName)

PB
  - Original Message -
  From: Robert Reed
  To: [EMAIL PROTECTED]
  Sent: Wednesday, May 19, 2004 11:12 PM
  Subject: Help with Joins


  Greetings.

  I have a table that contains procedures and a table
  that contains forms.  Each procedure may have 1 or
  more forms or it may have no associated forms.  Each
  form may be relevant to more than 1 procedure.  The
  procedure table has 1 entry per procedure.  The forms
  table may have more than 1 entry per form depending on
  how many procedures use that form.  Each form entry
  has a foreign key tying it to the record number of the
  procedure.

  I want to compose a query that will pull out each
  distinct instance of a form (in other words, no
  duplicates) and then list which procedures use that
  form

  So:  Form Name   Procedure(s) Name

  Is this a clear enough explanation for folks?  I'm
  willing to RTFM if somebody will point me to the
  chapter(s) that discuss different join syntax and how
  it's used in MySQL.  I'm using 3.23.54 in my
  production environment at the moment and my tables are
  MyISAM.

  Thanks in Advance

  =
  Robert Reed
  512-869-0063 home
  512-818-2460 cell




  __
  Do you Yahoo!?
  Yahoo! Domains - Claim yours for only $14.70/year
  http://smallbusiness.promotions.yahoo.com/offer

  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




Copying BLOBs

2004-05-20 Thread Ben Ricker





I have a database that stores information in Japanese characters in
BLOB-formatted fields. I am having trouble copying these fields from one
database to another.

I tried doing a mysqldump on the table and then copying the INSERT statement
it generated, but that failed; the inserted field had a number of escaped
character designations that looked like some sort of ASCII code or something to
that effect.

Any ideas?

Thanks,

Ben Ricker



-
CONFIDENTIALITY NOTICE
This e-mail message and any attachments are only for the use of the intended recipient 
and may contain information that is privileged, confidential or exempt from disclosure 
under applicable law.  If you are not the intended recipient, any disclosure, 
distribution or other use of this e-mail message or attachments is prohibited.  If you 
have received this e-mail message in error, please delete and notify the sender 
immediately. Thank you.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



max db size for individual users

2004-05-20 Thread Jay Fitzgerald
Does anyone know how or where to set the maximum size limit on a per 
user basis for their db? This is not for a table, but for their entire db.

example:
   GRANT ALL ON usera.* TO [EMAIL PROTECTED] IDENTIFIED BY password 
WITH max_db_size=5M;

Therefore, usera would be limited to 5 megs of entire db usage but can 
have as many tables as they need.

TIA,
Jay
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Principles of Data-Base Management

2004-05-20 Thread Eve Atley

Not specifically a MySQL question, but my boss has a lending library, and I
found a book here titled Principles of Data-Base Management by James
Martin, publish date of 1976. I thought I might peruse this to get a
background on DB Mangement, but the publishing date concerns me. Has anyone
read this book, and if so, is it worth reading? Is it safe to read in order
to get a general background of concepts, or has database management changed
enough to make the book obsolete? I'm concerned that the most recent
programming languages discussed here are COBOL, FORTRAN and assembly
languages. :)

Thanks,
Eve



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Limit operations by condition

2004-05-20 Thread Victor Pendleton
You can grant those permissions on columns but not on individual rows. 

-Original Message-
From: Marco Lazzeri
To: [EMAIL PROTECTED]
Sent: 5/20/04 10:04 AM
Subject: Limit operations by condition

I would like to grant SELECT/UPDATE/INSERT/DELETE privileges _only_ on
particular set of rows.

The GRANT syntax I'm dreaming could be:

GRANT SELECT, UPDATE, INSERT, DELETE
ON table_a TO user_a WHERE table_a.foo = 'bar'


I've tried using TEMPORARY TABLEs as follow:

CREATE TEMPORARY TABLE table_a 
SELECT * FROM table_a WHERE foo = 'bar';

And it was good, but just for SELECT (I can't define rules on
UPDATE/INSERT/DELETE to commit the modifies on the original table).

If it was PostgreSQL, I will be using VIEWs with RULEs.

Any hints?

Thanks,
Marco



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Limit operations by condition

2004-05-20 Thread Marco Lazzeri
Yes, I know. That's why I'm searching for workarounds.

Il gio, 2004-05-20 alle 17:58, Victor Pendleton ha scritto:
 You can grant those permissions on columns but not on individual rows. 
 
 -Original Message-
 From: Marco Lazzeri
 To: [EMAIL PROTECTED]
 Sent: 5/20/04 10:04 AM
 Subject: Limit operations by condition
 
 I would like to grant SELECT/UPDATE/INSERT/DELETE privileges _only_ on
 particular set of rows.
 
 The GRANT syntax I'm dreaming could be:
 
 GRANT SELECT, UPDATE, INSERT, DELETE
 ON table_a TO user_a WHERE table_a.foo = 'bar'
 
 
 I've tried using TEMPORARY TABLEs as follow:
 
 CREATE TEMPORARY TABLE table_a 
 SELECT * FROM table_a WHERE foo = 'bar';
 
 And it was good, but just for SELECT (I can't define rules on
 UPDATE/INSERT/DELETE to commit the modifies on the original table).
 
 If it was PostgreSQL, I will be using VIEWs with RULEs.
 
 Any hints?
 
 Thanks,
 Marco
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Principles of Data-Base Management

2004-05-20 Thread David Fleming
Eve Atley [EMAIL PROTECTED] writes:

 Not specifically a MySQL question, but my boss has a lending library, and I
 found a book here titled Principles of Data-Base Management by James
 Martin, publish date of 1976. I thought I might peruse this to get a
 background on DB Mangement, but the publishing date concerns me. Has anyone
 read this book, and if so, is it worth reading? Is it safe to read in order
 to get a general background of concepts, or has database management changed
 enough to make the book obsolete? I'm concerned that the most recent
 programming lanaguage discussed here is COBOL, FORTRAN and assembly
 languages. :)
 
 Thanks,
 Eve


Not familiar with that book, but can recommend another:

Database Design for Mere Mortals: A Hands-On Guide to Relational
Database Design, Second Edition by Michael J. Hernandez (2003)

Excellent, IMHO.

-- 
David Fleming



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Heap table limitations

2004-05-20 Thread Daniel Cummings
We want to convert a 200+ meg table to a heap table.  We've gotten pretty
good performance converting from InnoDB to a MyISAM table, but converting to a
heap table would give an even bigger performance boost.

 

Does making a heap file this large make any sense?

 

Are there size limitations to heap tables?

 

TIA

 

Dan



Re: max db size for individual users

2004-05-20 Thread Egor Egorov
Jay Fitzgerald [EMAIL PROTECTED] wrote:
 Does anyone know how or where to set the maximum size limit on a per 
 user basis for their db? This is not for a table, but for their entire db.
 
 example:
GRANT ALL ON usera.* TO [EMAIL PROTECTED] IDENTIFIED BY password 
 WITH max_db_size=5M;
 
 Therefore, usera would be limited to 5 megs of entire db usage but can 
 have as many tables as they need.
 

You can't do it with MySQL.



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Multi-threading problems in MySql

2004-05-20 Thread Sasha Pachev
Daniel Cummings wrote:
We have one query which takes approximately 2 minutes.  MySql seems to be
unresponsive to any other threads until this query has completed.  
On some platforms, the thread library has a limitation/bug that does not permit 
thread pre-emption, which means that if a thread does not want to give up CPU, 
it will not be forced to do so. What platform is this on?

In any case, my recommendation would be to do whatever it takes to keep that 
query from running for 2 minutes. Try to optimize it, or re-write it so it gets 
its data in small chunks.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: mysqld sock conflicts

2004-05-20 Thread Sasha Pachev
Greg Willits wrote:
I have two mysql apps running on the same machine (OS X 10.3.3). A mysql 
3.23.54 on port 14551, and a mysql 4.0.16 on 3306. Each has a config 
file specifying the port and a unique socket name in /tmp. They have 
coexisted just peachy for a very long time.

Now however, w/o any changes to either MySQL3, MySQL4, or the OS, every 
time I issue a terminal command to one of the MySQL3 bin apps preceded 
by the usual cd /x/y/z/bin, the commands are being sent to the 
/usr/local/mysql bin apps on 3306. If I shut mysqld 3306 down (which 
closes the sock file), then any commands to mysql 14551 gripes that 
there is no socket file even though the one it should be using is still 
available.

Removing and reinstalling both mysql's (now I have 4.0.18) does not fix 
it. Each does in fact create its own sock file in /tmp, and each mysqld 
server runs just fine. I can manually specify the --socket for the 14551 
bin apps and they'll work, but I've used mysql3 and mysql4 side by side 
since one of the later 4.0 betas and I've never had to specify the 
socket when launching any of the mysql3 utils.

There has to be some other kind of socket related config file somewhere 
that has something to do with this? I have no idea what could have 
changed all of a sudden and on its own.
Greg:
Do not worry about why it stopped working - it was not supposed to anyway, and 
if it did, it was pure luck :-)

A clean way to solve the problem would be to create small shell scripts called 
mysql-3 and mysql-4 that will connect to the right instance.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Limit operations by condition

2004-05-20 Thread Sasha Pachev
Marco Lazzeri wrote:
Yes, I know. Perhaps, I'm searching for workarounds.
Il gio, 2004-05-20 alle 17:58, Victor Pendleton ha scritto:
You can grant those permissions on columns but not on individual rows. 
Marco:
Your choices are limited to creating a wrapper for your users. If they have 
direct access, they can either read no rows, or all of them.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Table Relation Design Question

2004-05-20 Thread Sasha Pachev

 

To simplify, is It better to have many smaller lookup tables or one big
one?
Taylor:
You can create three entity tables (organization,contact,event) + the relation 
tables (org_contact, org_event, contact_event). Small lookup tables are usually 
better, and also give you points for sticking to a normalized design.
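
As a rough sketch, one of those relation tables (column names follow Taylor's
post; the other two would look the same) could be:

CREATE TABLE org_contact (
  org_id INT NOT NULL,            -- references orgs.org_id
  con_id INT NOT NULL,            -- references contacts.con_id
  PRIMARY KEY (org_id, con_id)    -- one row per organization/contact pair
);
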

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Heap table limitations

2004-05-20 Thread emierzwa
You can't do efficient ranging on a HASH indexed column without a table
scan...

Select * from heap_table where indx_col between 247 and 258

This would table scan your 200meg table even if indx_col is a PRIMARY
KEY, using the default hash key. Hash key was the only choice prior to
4.1, but now you can use BTREE for HEAP tables. Again, it depends on how
you use your tables.

If your server blinks, scheduled or otherwise, you lose all of your
table contents...forcing you to reload your data.

Make sure you place a practical MAX_ROWS= on your create table to
prevent your table from accidentally eating all of your available
memory.
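
For illustration, a 4.1-style definition covering both points (table and column
names here are made up, and the USING BTREE index syntax is 4.1-only):

CREATE TABLE lookup_heap (
  id  INT NOT NULL,
  val CHAR(10),
  INDEX USING BTREE (id)          -- BTREE instead of the default HASH index
) TYPE=HEAP MAX_ROWS=1000000;     -- cap the table so it cannot eat all available memory
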

Ed

-Original Message-
From: Daniel Cummings [mailto:[EMAIL PROTECTED] 
Subject: Heap table limitations

We want to convert a 200+ meg table to a heap table.  We've gotten
pretty
good performance converting from InnoDb to MyIsam table but converting
to a
heap table would give an even bigger performance boost.

Does making a heap file this large make any sense?

Are there size limitations to heap tables?
TIA
Dan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: User information in the binlog (again...)

2004-05-20 Thread Sasha Pachev
[EMAIL PROTECTED] wrote:
I remember reading about someone inquiring about including the user's
information in each query logged to the binlog. Neither mysqlbinlog or
show binlog events shows which user performed which changes to the
database (at least for me:4.1.1a-alpha-nt-log / binary distro ) .  Is that
information available somewhere else? I just spent the last hour or so
trying to dig that old thread out of the archives (I have been using Google
and the search page at lists.mysql.com) but I just can't seem to find it.
Could someone please help point me back to that old thread?
Shawn:
Enable both log and log-bin. Then you can match the thread_id field in 
mysqlbinlog output with the Connect command and track down the user.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Heap table limitations

2004-05-20 Thread Sasha Pachev
Daniel Cummings wrote:
We want to convert a 200+ meg table to a heap table.  We've gotten pretty
good performance converting from InnoDb to MyIsam table but converting to a
heap table would give an even bigger performance boost.
You think it would, or it already has in your testing? Heap vs. MyISAM is not 
that much of a speed up on an OS that caches well even in the ideal situation 
for HEAP.

 

Does making a heap file this large make any sense?
Actually, there is no file. It is all in memory. But if you have the RAM the 
size should not be a problem.

Are there size limitations to heap tables?
Amount of RAM + max_heap_table_size setting in the server.
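
To check the current limit, or raise it for new HEAP tables (the 256 MB figure is
only an example, and on versions where the variable cannot be set at runtime it
has to go into my.cnf instead):

SHOW VARIABLES LIKE 'max_heap_table_size';
SET GLOBAL max_heap_table_size = 268435456;   -- value is in bytes (256 MB)
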
--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: User information in the binlog (again...)

2004-05-20 Thread Hans-Peter Grimm
[EMAIL PROTECTED] wrote:
I remember reading about someone inquiring about including the user's
information in each query logged to the binlog. Neither mysqlbinlog or
show binlog events shows which user performed which changes to the
database (at least for me:4.1.1a-alpha-nt-log / binary distro ) .  Is that
information available somewhere else? 
Have you read about the general log (--log option)?
Example:
Time                 Id Command    Argument
040520 20:12:20   3 Connect [EMAIL PROTECTED] on
040520 20:12:27   3 Init DB test
  3 Query   show databases
  3 Query   show tables
  3 Field List  t1
040520 20:12:30   3 Query   select * from t1
040520 20:12:31   3 Quit
040520 20:12:47   4 Connect [EMAIL PROTECTED] on
  4 Query   SHOW SLAVE HOSTS
  4 Binlog Dump
Column 3 contains the connection id. To find out who performed a given 
query, follow the log back to the Connect command. It will also show 
you user and host.

Hans-Peter
 just spent the last hour or so
trying to dig that old thread out of the archives (I have been using Google
and the search page at lists.mysql.com) but I just can't seem to find it.
Could someone please help point me back to that old thread?
May I humbly suggest a startup flag/variable that will permit integrating
the users name _and_ their IP address as metainformation (comments) into
the binlog?  I just reread the online documentation and didn't see a task
of that nature in any of the ToDos. IMO having the userid and IP address in
the binlog would be very useful in disaster analysis. I am not as
interested in what queries they run as much as what data they change but
those two basic pieces of information would go a LONG way to track down who
screwed up the data. (Yes, I DO plan to restrict direct access to the
actual server and have an application handle the majority of the data
requests and updates. For those power users who feel they must have
direct data access, I plan on giving them read-only access anyway. The
extra information will help me figure out who did what if things still go
sour.)
Any other thoughts out there?
Thanks!
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Table Relation Design Question

2004-05-20 Thread Garth Webb
On Thu, 2004-05-20 at 07:13, Lewick, Taylor wrote:
 Hi all, I am having a little trouble deciding how to relate some of my
 tables together.
 
  
 
 I was wondering about creating one big lookup table to relate 3 or 4
 tables together, but wasn't sure if that was a good id, or should I have
 a look up table
 
 For each pair of tables.
 
  
 
 Here is a simple example of my tables.
 
  
 
 Orgs:  org_id   org_name  (org_id is primary key)
 
  
 
 Contacts:  con_id, con_name, org_id (con_id is primary, org_id is
 foreign key)
 
  
 
 Events:   ev_id, ev_name, org_id, con_id (ev_id is primary, con_id and
 org_id are foreign keys)

It's probably unnecessary to have a 'con_id' in the Event table since you
can get that information from the 'org_id'.  An exception might be if
multiple contacts for one organization are allowed, and for any given
event you want the ability to specify just one of those contacts.

Removing the con_id would also help eliminate the problem where an
org_id on an event does not agree with the org_id of the contact given
by con_id (input or programming gremlin).  Or, you have an existing
event where the contact has the same org_id as the event's org_id, but
there is a change of contacts at org_id's organization.  Someone updates
the contact information and now your event either contains an invalid
contact, or an ID to a non-existent contact.

 This is centered around organizations, so every contact must belong to
 an org, likewise for an event.  I will create an org called None in case
 they just want
 
 To track the occasional lone contact or internal event.
 
  
 
 But because an organization can have many contacts and many events, I
 was thinking of using lookup tables.
 
  
 
 I.e., Contacts are assigned to Organizations,
 
 So have a table called assigned with org_id and con_id as a composite
 primary key. And each is a foreign key back to the correct table...
 
  
 
 And should I have a table that links orgs and events and contacts and
 events, or should I have one lookup table
 
 That relates them all together, i.e. orgs contacts, and events..?
 
  
 
 To simplify, is It better to have many smaller lookup tables or one big
 one?
 
  
 
 Thanks,
 
 Taylor
-- 
. Garth Webb
. [EMAIL PROTECTED]
.
. shoes *  * schoenen *  * chaussures * zapatos
. Schuhe *  * pattini *  * sapatas * 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqld sock conflicts

2004-05-20 Thread Greg Willits
On May 20, 2004, at 11:08 AM, Sasha Pachev wrote:
Greg Willits wrote:
I have two mysql apps running on the same machine (OS X 10.3.3). A 
mysql 3.23.54 on port 14551, and a mysql 4.0.16 on 3306. Each has a 
config file specifying the port and a unique socket name in /tmp. 
They have coexisted just peachy for a very long time.
Now however, w/o any changes to either MySQL3, MySQL4, or the OS, 
every time I issue a terminal command to one of the MySQL3 bin apps 
preceded by the usual cd /x/y/z/bin, the commands are being sent to 
the /usr/local/mysql bin apps on 3306. If I shut mysqld 3306 down 
(which closes the sock file), then any commands to mysql 14551 gripes 
that there is no socket file even though the one it should be using 
is still available.
Do not worry about why it stopped working - it was not supposed to 
anyway, and if it did, it was pure luck :-)
I was lucky for over two years across several machines then. I think 
that may be my best streak of anything ever. Too bad there was no money 
involved! ;-)

A clean way to solve the problem would be to create small shell 
scripts called mysql-3 and mysql-4 that will connect to the right 
instance.
I've started that process, though I'm not much of a shell scripter yet. 
So it's time to dig in or  get used to specifying the socket I guess. 
OK. Well, at least I know.

Thank you.
-- greg willits
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


delete temp data...

2004-05-20 Thread Jonathan Villa
I'm administering a Red Hat machine which is used solely for MySQL.  The
person working with the db has more than 15 million records in various
tables...they were trying to run some queries to create one table from
these many tables.  When this was happening, they ran out of disk space.
They had about 4GB free prior to running the queries... but are now down
to 2.1GB (after I freed up some disk space, they were at 0) and no new
table.  Are there temp files created when running queries...I have
restarted mysql several times, looked in /usr/local/mysql/data but nothing
looks odd...I even tried restarting the machine...

in /usr/local/mysql/data there is one dir named the same as the db they
are working on, then there is this other file called ibdata1 which is 2.1G
in size.I take a look at the first few lines and all I get is garbled
data, as if it were a binary file or something.

any ideas?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: R: query string too long?

2004-05-20 Thread Victoria Reznichenko
Leonardo Francalanci [EMAIL PROTECTED] wrote:
 Could you provide structure of tables and some data for testing?
 
 How? I tried to post a couple of zip files (2 mails, 10k each) but it
 doesn't work.

Don't send attachments to the mailing list. You can upload file to the 
ftp://support.mysql.com/pub/mysql/secret/ and tell a file name or enter a bug report 
at http://bugs.mysql.com/.

 Some data (and schema)


I tested it with my test data for other PARTITIONED_*_* tables on v4.1.2 and 
SELECT worked well.


-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Victoria Reznichenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysql on solaris 8/9

2004-05-20 Thread Aysun Alay
Our application needs to support both Solaris 8 and Solaris 9. I was
wondering if MySQL has backwards compatibility in terms of operating
systems. Can I use the mysql-pro-4.0.20-sun-solaris2.9-sparc.tar file to
install MySQL on Solaris 8?

Thanks
Aysun


Re: delete temp data...

2004-05-20 Thread Ware Adams
Jonathan Villa wrote:


in /usr/local/mysql/data there is one dir named the same as the db
they are working on, then there is this other file called ibdata1
which is 2.1G in size.I take a look at the first few lines and all
I get is garbled data, as if it were a binary file or something.

That's an innodb data file, and given that it is 2.1G it sounds like
someone is using it to create InnoDB tables.  As you create new tables
this file can grow if it is set to autoextend.

Even when using InnoDB exclusively you will see a directory
corresponding to each database which holds the table definition.  The
data and indices are contained in the ibdata files.

There's a lot of info on this in the InnoDB section of the MySQL manual:

http://dev.mysql.com/doc/mysql/en/InnoDB_overview.html

and at the InnoDB web site:

http://www.innodb.com/ibman.php
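
Two quick checks from the mysql client can confirm where the space went (the
database and table names below are placeholders):

SHOW VARIABLES LIKE 'innodb_data_file_path';
-- shows whether ibdata1 is configured to autoextend

SHOW TABLE STATUS FROM yourdb LIKE 'yourtable';
-- for InnoDB tables the Comment column reports the free space left in the tablespace
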

Good luck,
Ware

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



set wait_timeout

2004-05-20 Thread [EMAIL PROTECTED]
hi,
i have a problem with automatically setting the wait_timeout variable.
mysql version 4.0.20 (and 4.0.18) on slackware linux.
i've tried to set it in several ways:
1) in the startup script as a parameter to mysqld_safe:
-O wait_timeout=30
2a) in configuration file /etc/my.cnf, section [mysqld]:
set-variable=wait_timeout=30
2b) in configuration file /etc/my.cnf, section [mysqld]:
wait_timeout=30
3) from the shell (i've planned to place it into the startup script after 
mysqld_safe start):
mysql -e 'set wait_timeout=30'

nothing worked, i got no error message and the value of wait_timeout 
stayed at the default: 28800.
when i run 'set wait_timeout=30;' as privileged user in interactive 
mysql session (running 'mysql'), the value changes as i wish.
what should i do to set the variable at startup?

thanks,
rudolf
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Rename database

2004-05-20 Thread Ngim Ngau - Kzresults
Hi, is there a way I can rename a database, or at least copy an old database
with its existing tables into a new one?

Thanks.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Rename database

2004-05-20 Thread Bartis, Robert M (Bob)
http://dev.mysql.com/doc/mysql/en/RENAME_TABLE.html
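
Applied to a whole database that roughly means creating the new database and
moving the tables across one by one (the names below are placeholders, and every
table has to be listed):

CREATE DATABASE newdb;
RENAME TABLE olddb.table1 TO newdb.table1,
             olddb.table2 TO newdb.table2;
-- repeat for the remaining tables, then: DROP DATABASE olddb;
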

-Original Message-
From: Ngim Ngau - Kzresults [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 20, 2004 4:50 PM
To: [EMAIL PROTECTED]
Subject: Rename database


Hi, Is there a way I can rename a database? or at least copy an old database
with existing tables into
a new one?

Thanks.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Rename database

2004-05-20 Thread Ed Reed
Is there anything wrong with just stopping the server and renaming the
database's directory in the DATA directory? I've used that method
without any problems. It also works very well for making a copy of the
database to a new differently named database. Are there any pitfalls
that I haven't encountered yet?
 
Thanks

 Bartis, Robert M (Bob) [EMAIL PROTECTED] 5/20/04 1:37:20 PM

http://dev.mysql.com/doc/mysql/en/RENAME_TABLE.html 

-Original Message-
From: Ngim Ngau - Kzresults [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 20, 2004 4:50 PM
To: [EMAIL PROTECTED] 
Subject: Rename database


Hi, Is there a way I can rename a database? or at least copy an old
database
with existing tables into
a new one?

Thanks.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]





Join Question

2004-05-20 Thread Roger Counihan
Hi -

I'm running 4.0.18, so I can't use subqueries.  I need to run a query to get the sum 
of costs from multiple tables for a given master table.  

What I have right now which is clearly not going to work is:

SELECT conveyor.serial, SUM(conveyorsupport.cost), SUM(conveyoraccessories.cost) from 
(conveyor LEFT JOIN conveyorsupport on conveyor.serial = conveyorsupport.serial) LEFT 
JOIN conveyoraccessories on conveyor.serial = conveyoraccessories.serial

This will return an inflated sum for the conveyor accessories (however many support 
rows there are times the sum of accessories).  

I believe if I were running a later version, I could use subqueries, or, if they were 
supported, correlated subqueries in the FROM clause.

Thanks,

Roger


Sample Data (other fields exist but I believe are superfluous)

Conveyor:

Serial
10
11

Support
Serial    Cost
1         100
1         150
1000      1200
1000      1250

Accessory
Serial    Cost
1         50
1         75
1000      1100
1000      1200

Desired Result
Serial    SupportCost    AccessoryCost
10        250            125
11        450            300
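
One pre-4.1 route, along the lines of the temporary-table suggestion earlier in
this digest, is to pre-aggregate each detail table and then join the aggregates
(untested sketch using the table and column names from the post):

CREATE TEMPORARY TABLE support_sum
    SELECT serial, SUM(cost) AS supportcost
    FROM conveyorsupport GROUP BY serial;

CREATE TEMPORARY TABLE accessory_sum
    SELECT serial, SUM(cost) AS accessorycost
    FROM conveyoraccessories GROUP BY serial;

SELECT c.serial, s.supportcost, a.accessorycost
FROM conveyor c
    LEFT JOIN support_sum s ON c.serial = s.serial
    LEFT JOIN accessory_sum a ON c.serial = a.serial;
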

Join Issue

2004-05-20 Thread Roger Counihan
Hi -

I'm running 4.0.18, so I can't use subqueries.  I need to run a query to
get the sum of costs from multiple tables for a given master table.  

What I have right now which is clearly not going to work is:

SELECT conveyor.serial, SUM(conveyorsupport.cost),
SUM(conveyoraccessories.cost) from (conveyor LEFT JOIN conveyorsupport
on conveyor.serial = conveyorsupport.serial) LEFT JOIN
conveyoraccessories on conveyor.serial = conveyoraccessories.serial

This will return an inflated sum for the conveyor accessories (however
many support rows there are times the sum of accessories).

I believe if I were running a later version, I could use subqueries, or,
if they were supported, correlated subqueries in the FROM clause.

Thanks,

Roger

Sample Data (other fields exist but I believe are superfluous)

Conveyor:

Serial
10
11

Support
Serial    Cost
1         100
1         150
1000      1200
1000      1250

Accessory
Serial    Cost
1         50
1         75
1000      1100
1000      1200

Desired Result
Serial    SupportCost    AccessoryCost
10        250            125
11        450            300




RE: Rename database

2004-05-20 Thread Victor Pendleton
InnoDB and BDB 

-Original Message-
From: Ed Reed
To: [EMAIL PROTECTED]
Sent: 5/20/04 3:52 PM
Subject: RE: Rename database

Is there anything wrong with just stopping the server and renaming the
database's directory in the DATA directory? I've used that method
without any problems. It also works very well for making a copy of the
database to a new differently named database. Are there any pitfalls
that I haven't encountered yet?
 
Thanks

 Bartis, Robert M (Bob) [EMAIL PROTECTED] 5/20/04 1:37:20 PM

http://dev.mysql.com/doc/mysql/en/RENAME_TABLE.html 

-Original Message-
From: Ngim Ngau - Kzresults [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 20, 2004 4:50 PM
To: [EMAIL PROTECTED] 
Subject: Rename database


Hi, Is there a way I can rename a database? or at least copy an old
database
with existing tables into
a new one?

Thanks.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Limit operations by condition

2004-05-20 Thread Marco Lazzeri
Hi Sasha,
I've tried writing a Python wrapper but it's hard work because I have
to wrap every type of query: joins, WHERE, ORDER, GROUP and every other
MySQL 4.0 clause.

Have you ever written or used a wrapper like the one I need?
Anyone knows a simple way for developing it?

This is not just a SELECT problem.

Put another way, can I retrieve OIDs (or something like OIDs) for the
records on which I'm going to apply the query?

Thanks

Scrive Sasha Pachev [EMAIL PROTECTED]:

 Marco Lazzeri wrote:
  Yes, I know. Perhaps, I'm searching for workarounds.
 
  Il gio, 2004-05-20 alle 17:58, Victor Pendleton ha scritto:
 
 You can grant those permissions on columns but not on individual
rows.

 Marco:

 Your choices are limited to creating a wrapper for your users. If they
have
 direct access, they can either read no rows, or all of them.


 --
 Sasha Pachev
 Create online surveys at http://www.surveyz.com/

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   
http://lists.mysql.com/[EMAIL PROTECTED]






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: BLOB's - General Guidance

2004-05-20 Thread Udikarni
Another perspective on the subject of BLOB vs. Links.

Links are easier to implement and may be an OK way to start. However, a file system is 
really a crude database, and I emphasize crude. It's not very good at handling high 
transaction rates, access from multiple machines, or volume.

If your application grows quickly and before you know it you have hundreds of folders 
with thousands of files in each - your file system will slow to a crawl. All the 
performance, security, and consistency features developers have worked so hard to put 
into database engines don't or barely exist in file systems.

So - if you go the link approach - you'll be fine for a while, but when you see the 
directory structure starting to buckle - it might be time to give BLOBs another look.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Rename database

2004-05-20 Thread emierzwa
MERGE tables have a table_name.MRG that contains pathed names to the
original table location. You can carefuly hand edit these or run an
ALTER command to fix them.

ALTER TABLE merge_table_name  UNION=(table_1,table_2...)

Ed

-Original Message-
From: Ed Reed [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 20, 2004 2:53 PM
To: [EMAIL PROTECTED]
Subject: RE: Rename database


Is there anything wrong with just stopping the server and renaming the
database's directory in the DATA directory? I've used that method
without any problems. It also works very well for making a copy of the
database to a new differently named database. Are there any pitfalls
that I haven't encountered yet?
 
Thanks

 Bartis, Robert M (Bob) [EMAIL PROTECTED] 5/20/04 1:37:20 PM

http://dev.mysql.com/doc/mysql/en/RENAME_TABLE.html 

-Original Message-
From: Ngim Ngau - Kzresults [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 20, 2004 4:50 PM
To: [EMAIL PROTECTED] 
Subject: Rename database


Hi, Is there a way I can rename a database? or at least copy an old
database
with existing tables into
a new one?

Thanks.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql 
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: BLOB's - General Guidance

2004-05-20 Thread Greg Willits
On May 20, 2004, at 2:45 PM, [EMAIL PROTECTED] wrote:
Another perspective on the subject of BLOB vs. Links.
Links are easier to implement and may be an OK way to start. However, 
a file system is really a crude database, and I emphasize crude. 
It's not very good at handling high transaction rates, access from 
multiple machines, or volume.

If your application grows quickly and before you know it you have 
hundreds of folders with thousands of files in each - your file system 
will slow to a crawl. All the performance, security, and consistancy 
features developers have worked so hard to put into database engines 
don't or barely exist in file systems.

So - if you go the link approach - you'll be fine for a while, but 
when you see the directory structure starting to buckle - it might be 
time to give BLOBs another look.
Interesting. Would make sense that scale would affect the perceived and 
real limits and hassles of one method vs another, and flip-flop 
strengths and weaknesses. Thx.

-- greg willits

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


thread stack issues

2004-05-20 Thread Steven Roussey
Since going from 4.0.18 to 4.0.20 (or 4.0.19) I now receive these warnings
on startup:

040520 14:55:21  mysqld started
040520 14:55:21  Warning: Asked for 196608 thread stack, but got 126976
/usr/sbin/mysqld: ready for connections.
Version: '4.0.20-standard'  socket: '/tmp/mysql.sock'  port: 3306

I noticed on another server that it had the same problem with v4.0.18. So
some servers have a problem with this version and others do not. All have
the warning with 4.0.20. They are configured differently. What configuration
options would be affecting this?

-steve--



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: BLOB's - General Guidance

2004-05-20 Thread Luis R. Rodriguez
On Thu, May 20, 2004 at 05:45:58PM -0400, [EMAIL PROTECTED] wrote:
 Another perspective on the subject of BLOB vs. Links.
 
 Links are easier to implement and may be an OK way to start. However, a file system 
 is really a crude database, and I emphasize crude. It's not very good at handling 
 high transaction rates, access from multiple machines, or volume.
 
 If your application grows quickly and before you know it you have hundreds of 
 folders with thousands of files in each - your file system will slow to a crawl. All 
 the performance, security, and consistancy features developers have worked so hard 
 to put into database engines don't or barely exist in file systems.
 
 So - if you go the link approach - you'll be fine for a while, but when you see the 
 directory structure starting to buckle - it might be time to give BLOBs another look.

I'm going to be honest, sorry. Your argument seems to make sense and
this information is really relevant for me -- thanks. I, however, just
get this fuzzy feeling inside that tells me not to trust anything
technical any AOL user may say. Sorry again, but this time for making it
seem like a troll. It's not, seriously, I'm just being honest.

Can anyone non-AOL-lite back up Udikarni's argument? 8)

Luis
-- 
GnuPG Key fingerprint = 113F B290 C6D2 0251 4D84  A34A 6ADD 4937 E20A 525E


pgpyK2Pw5sOiK.pgp
Description: PGP signature


Repeat loops in mysql, large data issue, suggestions wanted

2004-05-20 Thread Scott Haneda
Faced with some data requirements that are larger than usual for me, I
thought I would ask some others what they think about my logic.

MySql 4

In short, I will have a file upload on a web server that will digest a file,
I will be able to dictate the format of this file. There may be a few
formats, the main one will be:

data\tdata\tdata\r

So, that is some data, a tab, some more data, another tab, then a return.

My trouble is that the data file could be 100,000 lines in length, I have a
few options:

Iterate through the file, one line at a time, using insert delayed I can put
the data into a table with no trouble, this is the simplest method, but
perhaps has performance issues.  In any language, repeating 100,000 times
will take some time, of course, it will happen in the background, but it
still will take some time.  There is also the trouble with a crashing
server: I would have an incomplete set of data inserts, and no real simple
way to deal with this.

I was thinking, perhaps MySql is up to this task, what if I were to insert
the entire file into one field in mysql, then I have all the data in mysql,
this insert should happen much faster, as it is just data, it could be a few
MB's in size, but still should not take too long to get it into MySql.  It
is also only one operation, so the chance of a server crash interfering is
less.

Is it then possible to have a SQL statement that would somehow take the lump
field data, select it, chop it up, and insert 100,000 rows, in this case,
three fields per row?

I was thinking I could easily do some simple string replaces on the data and
get it to one large INSERT statement. This is not all that attractive to me,
as I am not 100% certain each line in the file would be in the correct
format; someone by accident may have put in 5 tabs on one line.  In order to
check for this I am back to repeating through each line server side and
testing the line for integrity.
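
(To make the "one large INSERT statement" idea concrete: MySQL's multi-row
VALUES syntax would look roughly like the sketch below - the table and column
names are invented - though a single statement covering 100,000 rows would
also have to fit under max_allowed_packet:)

INSERT INTO import_data (field1, field2, field3) VALUES
('aaa', 'bbb', 'ccc'),
('ddd', 'eee', 'fff'),
('ggg', 'hhh', 'iii');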

Any ideas and thoughts?

MySql 4


-- 
-
Scott Haneda                                Tel: 415.898.2602
http://www.newgeo.com                       Fax: 313.557.5052
[EMAIL PROTECTED]                           Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: thread stack issues

2004-05-20 Thread Donny Simonton
I have this same problem on 4.1.1 as well.

Donny

 -Original Message-
 From: Steven Roussey [mailto:[EMAIL PROTECTED]
 Sent: Thursday, May 20, 2004 7:33 PM
 To: [EMAIL PROTECTED]
 Subject: thread stack issues
 
 Since going from 4.0.18 to 4.0.20 (or 4.0.19) I now receive these warnings
 on startup:
 
 040520 14:55:21  mysqld started
 040520 14:55:21  Warning: Asked for 196608 thread stack, but got 126976
 /usr/sbin/mysqld: ready for connections.
 Version: '4.0.20-standard'  socket: '/tmp/mysql.sock'  port: 3306
 
 I noticed on another server that it had the same problem with v4.0.18. So
 some servers have a problem with this version and others do not. All have
 the warning with 4.0.20. They are configured differently. What
 configuration
 options would be affecting this?
 
 -steve--
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Mysql 4.1.1, InnoDB - Slow TRUNCATE operations with Multiple Tablespaces

2004-05-20 Thread Richard Dale
Hi everybody,
 
I'm finding that the TRUNCATE TABLE command with InnoDB takes an
extraordinary amount of time on large (> 400MB) tables - i.e. about 3-4
minutes.  In comparison, performing a DROP TABLE and then recreating it
takes about 2 seconds.  When the TRUNCATE operation is performed from a Perl
script with DBI, it usually results in a "lost connection to database"
error, which is rather frustrating.
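
For reference, the DROP-and-recreate workaround looks roughly like this (the
table definition below is entirely hypothetical, just to show the shape):

DROP TABLE IF EXISTS price_history;
CREATE TABLE price_history (
  id          INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  symbol      VARCHAR(12) NOT NULL,
  close_price DECIMAL(12,4) NOT NULL
) TYPE=InnoDB;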
 
I am using the new multiple tablespaces feature since during my development
I'll be truncating the tables a lot (perhaps this was a bad move?)
 
Has anyone else experienced such slow TRUNCATE operations?
 
Here's a summary of my environment:
Platform: MySQL 4.1.1-alpha, InnoDB 
Notable parameters:  innodb_file_per_table, 160MB InnoDB buffer pool
Hardware: Win XP, 3Ghz P4 (HT), 1GB RAM, multiple 7200RPM drives
Database location:  On its own database on its own hard drive.
Swapping:  WinXP reporting commit charge  800MB
 
Note:  I'll be moving to a Linux-based development server soon and will be
able to tell whether the above is specific to Windows or also happens on Linux.
 
Best regards,
Richard Dale.
Norgate Investor Services
- Premium quality Stock, Futures and Foreign Exchange Data for
  markets in Australia, Asia, Europe, UK & USA -
 http://www.premiumdata.net/ 
 
 


Re: BLOB's - General Guidance

2004-05-20 Thread David Blomstrom
 [EMAIL PROTECTED] wrote:
  Another perspective on the subject of BLOB vs.
 Links.
  
  Links are easier to implement and may be an OK way
 to start. However, a file system is really a crude
 database, and I emphasize crude. It's not very
 good at handling high transaction rates, access from
 multiple machines, or volume.
  
  If your application grows quickly and before you
 know it you have hundreds of folders with thousands
 of files in each - your file system will slow to a
 crawl. All the performance, security, and
 consistency features developers have worked so hard
 to put into database engines don't or barely exist
 in file systems.
  
  So - if you go the link approach - you'll be fine
 for a while, but when you see the directory
 structure starting to buckle - it might be time to
 give BLOBs another look.

I'm confused. It sounds like you're basically saying
that databases slow down as they grow bigger. That's
logical.

But then you suggest that, when a database begins to
get too big, BLOBs may be better than storing links.

I don't understand that. How can storing images as
BLOBs be more efficient that creating a field that
simply stores links to those images? Or am I missing
something?





__
Do you Yahoo!?
Yahoo! Domains – Claim yours for only $14.70/year
http://smallbusiness.promotions.yahoo.com/offer 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: BLOB's - General Guidance

2004-05-20 Thread jabbott

You might want to read it again.  He is saying the performance and consistency features exist 
in the database, whereas they don't in a file system.  

--ja

On Thu, 20 May 2004, David Blomstrom wrote:

  [EMAIL PROTECTED] wrote:
   Another perspective on the subject of BLOB vs.
  Links.
   
   Links are easier to implement and may be an OK way
  to start. However, a file system is really a crude
  database, and I emphasize crude. It's not very
  good at handling high transaction rates, access from
  multiple machines, or volume.
   
   If your application grows quickly and before you
  know it you have hundreds of folders with thousands
  of files in each - your file system will slow to a
  crawl. All the performance, security, and
  consistency features developers have worked so hard
  to put into database engines don't or barely exist
  in file systems.
   
   So - if you go the link approach - you'll be fine
  for a while, but when you see the directory
  structure starting to buckle - it might be time to
  give BLOBs another look.
 
 I'm confused. It sounds like you're basically saying
 that databases slow down as they grow bigger. That's
 logical.
 
 But then you suggest that, when a database begins to
 get too big, BLOBs may be better than storing links.
 
 I don't understand that. How can storing images as
 BLOBs be more efficient than creating a field that
 simply stores links to those images? Or am I missing
 something?
 
 
 
   
   
 __
 Do you Yahoo!?
 Yahoo! Domains – Claim yours for only $14.70/year
 http://smallbusiness.promotions.yahoo.com/offer 
 
 

-- 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Repeat loops in mysql, large data issue, suggestions wanted

2004-05-20 Thread David Griffiths
Assuming you insert 100,000 rows, you also have to consider that any 
indexes on the table will need to be re-analyzed to fix the statistics. 
Also, the inserts will be slower due to any indexes.

You didn't mention the table-type (storage-engine) you were planning on 
using, but if you use InnoDB, and do it all in one transaction (ie turn 
off auto-commit and commit just once at the end), then any failure 
during the mass-insert will cause a rollback, so you don't have to worry 
about the integrity of your data.

Is it then possible to have a SQL statement that would somehow take the 
lump field data, select it, chop it up, and insert 100,000 rows, in this 
case, three fields per row?

I haven't heard of a SQL function like that. In addition, you'll suffer 
the pain twice by inserting it once as one big field, selecting it out 
again, breaking it up, and then re-inserting it.

Is the code going to be running on the machine with the database? That 
could improve it.

One other thing to consider is to use LOAD DATA INFILE to do a bulk load 
rather than a tonne of insert statements. You can do this from a 
command-line on the machine where the MySQL server is installed. See the 
docs at http://www.mysql.com to get the syntax of LOAD DATA INFILE - it 
should work on the format of the file as you specified below.

David.
Scott Haneda wrote:
Faced with some data requirements that are larger than usual for me, I
thought I would ask some others what they think about my logic.
MySql 4
In short, I will have a file upload on a web server that will digest a file,
I will be able to dictate the format of this file. There may be a few
formats, the main one will be:
data\tdata\tdata\r
So, that is some data, a tab, some more data, another tab, then a return.
My trouble is that the data file could be 100,000 lines in length, I have a
few options:
Iterate through the file, one line at a time, using insert delayed I can put
the data into a table with no trouble, this is the simplest method, but
perhaps has performance issues.  In any language, repeating 100,000 times
will take some time, of course, it will happen in the background, but it
still will take some time.  There is also the trouble with a crashing
server: I would have an incomplete set of data inserts, and no real simple
way to deal with this.
I was thinking, perhaps MySql is up to this task, what if I were to insert
the entire file into one field in mysql, then I have all the data in mysql,
this insert should happen much faster, as it is just data, it could be a few
MB's in size, but still should not take too long to get it into MySql.  It
is also only one operation, so the chance of a server crash interfering is
less.
Is it then possible to have a SQL statement that would somehow take the lump
field data, select it, chop it up, and insert 100,000 rows, in this case,
three fields per row?
I was thinking I could easily do some simple string replaces on the data and
get it to one large INSERT statement. This is not all that attractive to me,
as I am not 100% certain each line in the file would be in the correct
format; someone by accident may have put in 5 tabs on one line.  In order to
check for this I am back to repeating through each line server side and
testing the line for integrity.
Any ideas and thoughts?
MySql 4
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Repeat loops in mysql, large data issue, suggestions wanted

2004-05-20 Thread Scott Haneda
on 05/20/2004 09:36 PM, David Griffiths at [EMAIL PROTECTED] wrote:

 You didn't mention the table-type (storage-engine) you were planning on
 using, but if you use InnoDB, and do it all in one transaction (ie turn
 off auto-commit and commit just once at the end), then any failure
 during the mass-insert will cause a rollback, so you don't have to worry
 about the integrity of your data.

I can use any I like; I haven't even built the DB yet, so I am open to
any suggestions.

I don't see how I can do it all in one transaction, what does that mean?

I think you may mean something like rather than doing 100,000 separate
inserts, somehow build that into just one insert string, so it is then one
connection?  Can you elaborate?

 Is it then possible to have a SQL statement that would somehow take the
 lump field data, select it, chop it up, and insert 100,000 rows, in this
 case, three fields per row?
 
 I haven't heard of a SQL function like that. In addition, you'll suffer
 the pain twice by inserting it once as one big field, selecting it out
 again, breaking it up, and then re-inserting it.

I was thinking that perhaps MySql would be more efficient at it than some
server side middleware.

 Is the code going to be running on the machine with the database? That
 could improve it.

The code that would do the insert will run on hardware that is on the same
network as MySql, but certainly not the same machine, they are 2 different
OS's so this is not possible to run them on the same machine.

 One other thing to consider is to use LOAD DATA INFILE to do a bulk load
 rather than a tonne of insert statements. You can do this from a
 command-line on the machine where the MySQL server is installed. See the
 docs at http://www.mysql.com to get the syntax of LOAD DATA INFILE - it
 should work on the format of the file as you specified below.

Can LOAD DATA be used on MySql if the data is not on the same machine as
MySql?

-- 
-
Scott Haneda                                Tel: 415.898.2602
http://www.newgeo.com                       Fax: 313.557.5052
[EMAIL PROTECTED]                           Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: delete temp data...

2004-05-20 Thread I D
Jonathan Villa wrote:
I'm administering a Red Hat machine which is used soley for MySQL.  The
person working with the db has more than 15 million records in various
tables...they were trying to run some queries to create one table from
these many tables.  When this was happening, they ran out of disk space.
They had about 4GB free prior to running the queries... but are now down
to 2.1GB (after I freed up some disk space, they were at 0) and no new
table.  Are there temp files created when running queries...I have
restarted mysql several times, looked in /usr/local/mysql/data but nothing
looks odd...I even tried restarting the machine...
in /usr/local/mysql/data there is one dir named the same as the db they
are working on, then there is this other file called ibdata1 which is 2.1G
in size.I take a look at the first few lines and all I get is garbled
data, as if it were a binary file or something.
any ideas?
for finding (and deleting) biiig temp files, check also /tmp; /var/tmp;
etc.
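
It may also be worth checking where MySQL itself writes implicit temporary
tables during big queries - that location is controlled by the tmpdir
variable (just a quick look, no assumptions about your setup):

SHOW VARIABLES LIKE 'tmpdir';

As for the 2.1G ibdata1 file mentioned above: that is InnoDB's shared
tablespace. It grows as needed during large operations but is not shrunk
automatically afterwards, which would explain disk space that never comes
back even after a failed query.
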
HTH
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Repeat loops in mysql, large data issue, suggestions wanted

2004-05-20 Thread David Griffiths

You didn't mention the table-type (storage-engine) you were planning on
using, but if you use InnoDB, and do it all in one transaction (ie turn
off auto-commit and commit just once at the end), then any failure
during the mass-insert will cause a rollback, so you don't have to worry
about the integrity of your data.
   

I can use any I like; I haven't even built the DB yet, so I am open to
any suggestions.
I don't see how I can do it all in one transaction, what does that mean?
 

What that means is you do 100,000 inserts, and then do one commit at the 
end. If the connection to the database dies, or the database itself 
crashes, then all the rows inserted will be rolled back, and no data 
will be in your database.

I think you may mean something like rather than doing 100,000 separate
inserts, somehow build that into just one insert string, so it is then one
connection?  Can you elaborate?
 

No, you still have to do all the inserts, but either they all get in, or 
none of them get in, depending on what happens as you are inserting.

Is it then possible to have a SQL statement that would somehow take the
lump field data, select it, chop it up, and insert 100,000 rows, in this
case, three fields per row?
I haven't heard of a SQL function like that. In addition, you'll suffer
the pain twice by inserting it once as one big field, selecting it out
again, breaking it up, and then re-inserting it.
   

I was thinking that perhaps MySql would be more efficient at it than some
server side middleware.
 

I haven't seen a function like that; it might exist. It might not.
Is the code going to be running on the machine with the database? That
could improve it.
   

The code that would do the insert will run on hardware that is on the same
network as MySql, but certainly not the same machine, they are 2 different
OS's so this is not possible to run them on the same machine.
 

Not sure I follow. If you wrote it in C and compiled it on the machine 
where the MySQL database was, or if you wrote it in Java, it could 
run anywhere. You can also write it in Perl and run it on the database 
machine, assuming you install Perl.

One other thing to consider is to use LOAD DATA INFILE to do a bulk load
rather than a tonne of insert statements. You can do this from a
command-line on the machine where the MySQL server is installed. See the
docs at http://www.mysql.com to get the syntax of LOAD DATA INFILE - it
should work on the format of the file as you specified below.
   

Can LOAD DATA be used on MySql if the data is not on the same machine as
MySql?
 

I don't think so. But LOAD DATA doesn't require any coding. For 
example, you just put this into a file:

LOAD DATA LOCAL INFILE 'file_with_all_the_data.txt'
INTO table the_table_where_rows_go
FIELDS TERMINATED BY '\t'
OPTIONALLY ENCLOSED BY '' ESCAPED BY ''
LINES TERMINATED BY '\r\n';
This reads a file, and breaks each line up by a \t (and the line ends 
with \r\n in this example). It just dumps all the data into the table. 
The columns in the table have to be in the same order as the fields in 
each line.

No coding required.
David
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Repeat loops in mysql, large data issue, suggestions wanted

2004-05-20 Thread Scott Haneda
on 05/20/2004 10:18 PM, David Griffiths at [EMAIL PROTECTED] wrote:

 
 I can use any I like; I haven't even built the DB yet, so I am open to
 any suggestions.
 
 I don't see how I can do it all in one transaction, what does that mean?
 
  
 
 What that means is you do 100,000 inserts, and then do one commit at the
 end. If the connection to the database dies, or the database itself
 crashes, then all the rows inserted will be rolled back, and no data
 will be in your database.

Do you have to do something before the first insert to tell it there is a
marker point to rollback to?

Perhaps someone can show me an example, a la PHP, on this one?
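
(At the SQL level, the rollback point is simply the start of the transaction;
a minimal sketch - the table and column names are invented - which a PHP
script would just send through its normal query calls:)

SET AUTOCOMMIT=0;
INSERT INTO import_data (field1, field2, field3) VALUES ('a', 'b', 'c');
# ... the remaining inserts ...
COMMIT;
# or, if anything went wrong along the way:
# ROLLBACK;
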
-- 
-
Scott Haneda                                Tel: 415.898.2602
http://www.newgeo.com                       Fax: 313.557.5052
[EMAIL PROTECTED]                           Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]