if the data is the concern, not the data structure, why not encrypt the data itself?
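For instance, if you're on 4.0.2 or later you could use AES_ENCRYPT/AES_DECRYPT on the
sensitive columns themselves (a rough sketch; table/column/key names are made up, and the
column should be a BLOB):
insert into secure_stuff (id, secret)
values (1, AES_ENCRYPT('sensitive text', 'some key phrase'));
select AES_DECRYPT(secret, 'some key phrase') as secret
from secure_stuff where id = 1;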
> -Original Message-
> From: David Crane [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 11, 2004 2:15 PM
> To: [EMAIL PROTECTED]
> Subject: Re: how do i encrypt the .frm file with a password
>
>
> Since
One of the ways around this that I've seen is to maintain an additional field for the
'full path' info.
so that you have:
table_name
---
node_id integer auto_increment
node_name varchar(50) not null
parent_id integer
full_path varchar(255)
the obvious downside is that your application has to keep full_path in sync whenever a
node is moved or renamed.
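The upside is that subtree lookups become a single LIKE (a sketch, assuming full_path
holds something like '/1/4/9/'):
select node_id, node_name
from table_name
where full_path like '/1/4/%';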
you can do something like:
select phone_number, count(1)
from your_table_name_here
group by phone_number
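(and if you just want the distinct values without the counts, a plain
select distinct phone_number from your_table_name_here
does the trick)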
> -Original Message-
> From: Jan Blomqvist [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 15, 2004 8:27 AM
> To: [EMAIL PROTECTED]
> Subject: any select statement like uniq in unix
>
I'm not 100% sure on this, but what about the .myd file timestamp?
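something like this would show it (assuming the default data directory and MyISAM tables):
ls -l /var/lib/mysql/your_db/*.MYD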
> -Original Message-
> From: gerald_clark [mailto:[EMAIL PROTECTED]
> Sent: Friday, February 06, 2004 9:09 AM
> To: Phil
> Cc: [EMAIL PROTECTED]
> Subject: Re: How to determine when a MySQL database was last modified?
>
>
> From what I've read (I think in the MySQL docs, might have been here in the list),
> technically it will take less time to add the indexes after the table creation than
> the overhead of index updating per-insert.
Either way, it's gonna take a loong time.
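One common trick for big MyISAM loads, if it helps (table name made up):
alter table big_table disable keys;
-- do your bulk inserts / load data infile here
alter table big_table enable keys;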
> -Original Message-
> F
another option is to set your pager option:
mysql> pager more
then re-run your query
> -Original Message-
> From: Mike Johnson [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 29, 2004 11:01 AM
> To: Adel Ardalan; [EMAIL PROTECTED]
> Subject: RE: Paging!
>
>
> From: Adel Ardalan [ma
Stored procedure versioning/backup/restoring has always been a pain in the butt for
all dbs that support them. What is done 9/10 times is the sql script that creates
them is stored, versioned, and used for backup...
now a 'show create stored procedure blah_blah' function would be nice... ;)
ok... you might have two options:
1- (don't know if this will work)
do a min(cast(Value * 100 as signed integer) / 100)
2- or min(Value + 0.0)
and see what happens.
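spelled out as full statements (table name is a placeholder):
select min(cast(Value * 100 as signed integer) / 100) from your_table_name_here;
select min(Value + 0.0) from your_table_name_here;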
> -Original Message-
> From: Eldon Ziegler [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, January 13, 2004 11:24 AM
> T
Could you CAST them first, then apply MIN?
> -Original Message-
> From: Eldon Ziegler [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, January 13, 2004 9:56 AM
> To: [EMAIL PROTECTED]
> Subject: MIN with negative numbers in VARCHAR
>
>
> We have a table with floating point measurement values s
Dan Greene
> -Original Message-
> From: robert_rowe [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 08, 2004 12:48 PM
> To: [EMAIL PROTECTED]
> Subject: Re: When does using multiple databases make sense?
>
>
>
> In our accounting software our users can mana
...scenarios where the normal connection closure may be skipped...
My $0.02...
Dan Greene
> -Original Message-
> From: Mike Morton [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, January 06, 2004 2:15 PM
> To: [EMAIL PROTECTED]
> Subject: Sleeping Processes
>
>
> I have a m
but as my strong suit is Java, I don't think I'd
be too much help... ;)
Thanks,
Dan Greene
I've found, in my many years of app dev work with Oracle, MSSQL, and now a little
MySQL, that stored procedures are a very mixed blessing.
Depending on your application architecture, they may be just what you want, a means of
encapsulating a complicated, data-centric function. Database stored proce
left joins are your key
select * from a
left join b on b.tbl_id = a.id and b.tbl = 'a'
where b.tbl_id is null
(not 100% sure on my syntax, but note the join, and that the limitation on the 'left'
result set is specified within the left join clause)
> -Original Message-
> From: Dean A. Hoover [m
What if you were to make a 3rd database, containing shared elements such as your user
table (I presume we're not talking about the MySQL system user table), and then grant
the necessary permissions to your other database users to read that table jointly, since
you can query across databases...
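A rough sketch of what I mean (all the names here are made up):
grant select on shared_db.users to 'app_a'@'localhost';
select o.*, u.user_name
from app_a_db.orders o, shared_db.users u
where o.user_id = u.user_id;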
So if I follow you, you allow your clients to execute direct sql on the database, both
select and update, but _they_ simply don't want to see the row id data?
If that's the case, then too bad for them: if they are capable of writing sql, then
they have to handle the result, or omit the field from their select list.
Without a unique identifier, the only way you're gonna get an update right is to use
other data in the row to identify the record...
so with a table structure of
name
address
state
your users can select anything they want, filtering w/ a where clause.
If you grant them update on the table, t
you can use the concat function:
http://www.mysql.com/doc/en/String_functions.html
SELECT concat(vend_city,', ',vend_state,' ',vend_zip)
FROM Vendors
ORDER BY vend_name;
> -Original Message-
> From: Gilbert Wilson [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, December 10, 2003 2:42 PM
>
If it's an option, I would run your datafile through a processor (sed on unix,
ultraedit on windows) to search and replace the string
""
with
\"
and try it with
fields terminated by ',' optionally enclosed by '"'
as mentioned in Mike Johnson's posting
(escaped by '\' is default)
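i.e. something along these lines (file and table names are placeholders):
sed 's/""/\\"/g' datafile.csv > datafile_fixed.csv
load data infile '/tmp/datafile_fixed.csv' into table your_table
fields terminated by ',' optionally enclosed by '"';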
> -Orig
...pping over Windows' internal multi-display support.
My $0.02,
Dan Greene
> -Original Message-
> From: Ray Ragan [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, December 09, 2003 3:10 PM
> To: [EMAIL PROTECTED]
> Subject: Bug in WinMySQLadmin 1.4
>
>
> My SQL Team,
>
how about recreating the table w/o the autoincrement, then reload the data, then alter
table to reimplement the autoincrement?
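the last step being something like (assuming id is already the primary key):
alter table your_table modify id int not null auto_increment;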
> -Original Message-
> From: Matthew [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 04, 2003 7:12 PM
> To: Alex E.Wintermann; [EMAIL PROTECTED]
> Subject: R
from what I've read in the docs, if you use 'grant' you don't have to flush, but if
you insert into user tables directly, you do
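i.e. (user/db names made up):
insert into mysql.user (Host, User, Password) values ('localhost', 'bob', password('secret'));
flush privileges;
versus the grant form, which takes effect immediately:
grant select on mydb.* to 'bob'@'localhost' identified by 'secret';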
> -Original Message-
> From: Skippy [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, December 02, 2003 11:56 AM
> To: [EMAIL PROTECTED]
> Subject: Re: mysql remot
The time zone matters... your results are exactly 8 hours off... PST is GMT-8. So it
looks like the from_unixtime function is converting the value to what the time was
locally at that moment, rather than to GMT. Not what I would have expected either.
What do you get when you run-
select unix_timestamp(urtime) f
The most guaranteed way to have a feature in a product pointed out is to make a public
statement that it's not available :)
keyboard in mouth,
Dan Greene
> -Original Message-
> From: John Griffin [mailto:[EMAIL PROTECTED]
> Sent: Monday, December 01, 2003 10:49 AM
>
I think that to answer Todd's question: no, there is no inherent way in MySQL to export
results as XML the way that SQL Server and Oracle do.
Another solution that comes to mind (if you're using java) is to write a simple class
that converts a result set to an array of hashmaps (row in array is ro
yo yo yo,
another phat option is to create another page on your system that returns the %
number, and using flash, call that url and parse the results, and update your flash
chart accordingly. We've done similar things to have flash talk to our systems (set
up web pages that return xml rather
don't forget to change the port number that the server is listening on if you plan on
running them simultaneously
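e.g. in the second server's my.cnf (the port and socket here are just examples):
[mysqld]
port = 3307
socket = /tmp/mysql2.sock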
> -Original Message-
> From: Peter Sap [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 24, 2003 4:59 PM
> To: [EMAIL PROTECTED]
> Subject: Re: My.cnf
>
>
> You could in
well... I'm not 100% sure of your table structure, but if the children are in same
tables, then join to each:
select *
from action_items a, child_table b, child_table c
where a.owner_id = b.person_id
and a.creator_id = c.person_id
now if the values in the action_items table could be null (no
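If those ids can be null, left joins would be the way to go, roughly:
select *
from action_items a
left join child_table b on a.owner_id = b.person_id
left join child_table c on a.creator_id = c.person_id;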
one more idea:
try:
mysql --skip-column-names --raw < test1 | tar xf -
> -Original Message-
> From: Denis Mercier [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 19, 2003 2:41 PM
> To: [EMAIL PROTECTED]
> Subject: Re: piping blob into shell command (tar)
>
>
> On Wed, 2003-11-19
ok... try this:
mysql --skip-column-names < test1 > mytestoutput.tar
tar xvf mytestoutput.tar
and if it works, try
cat mytestoutput.tar | tar xf -
to see if it works
> -Original Message-
> From: Denis Mercier [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 19, 2003 2:41 PM
This may be simplistic, but is mysql putting any text before / after blob content,
such as column name, '1 row processed OK', that may be 'corrupting' the tar data?
Try using a very small tar file and run your command, piping to more instead of tar, to
see if there is any extra text that mysql is adding.
I can't believe I'm saying this, but MySQL may not be the db of choice for your
particular app...
I figure you have a few options:
1- keep db structure same, keep mysql, suffer performance issues
2- keep db structure, switch db, suffer migration costs
3- change db structure, keep mysql, suffer app rework costs
There recently was a thread discussing this with a very nice summary by the person who
had the issue... do a search on the archives for 'maintaining the size of a db' to
find the thread
> -Original Message-
> From: M.D. DeWar [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, November 18, 2003 1
you need to tell mysql what field to equate to the value in your where clause:
update items set ItemDescription = '<new description>'
where ItemDescription = '<old description>'
replacing the < > items with the text you have below
> -Original Message-
> From: Andrew [mailto:[EMAIL PROTECTED]
> Sent: Monday, November
The MySQL documentation is, IMHO (in my humble opinion - another oldie acronym),
incredibly good for a free product. Searching it for answers, particularly about
syntax, should be everyone's first step (mine included) in solving an issue that we're
having.
[getting down off of soapbox]
two last gotchas I thought of...
1- the routine will erase all the old records, but the day's logs will increment
between executions, so you may want to give it a day's worth of 'padding' if the 20GB
is a hard limit (disk size). No worries if it's flexible
2- until you have filled to your size
your best bet would be to do a port-forwarding methodology, at the os level, not in
mysql itself
I've used some for SOAP tracing, to intercept and view http traffic, but I'm sure
there are some out there that just forward traffic w/o a gui
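An ssh tunnel is one quick no-gui way to do it, e.g. (host name made up):
ssh -L 3307:localhost:3306 someuser@db.example.com
then point your client at port 3307 on the local box.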
> -Original Message-
> From: Misaochankun [mail
2003 3:17 PM
> To: Michael McTernan; Dan Greene
> Cc: MySQL Mailing List
> Subject: RE: maintaining size of a db
>
>
> Well, it sort of helps. But that section is
> about future enhancements intended for mysql. I
> need to set something up now, with the current
> stable ver
> Sent: Wednesday, November 12, 2003 11:19 AM
> To: Dan Greene; MySQL Mailing List
> Subject: RE: maintaining size of a db
>
>
> Yes sir, exactly. It's just that's what I'm
> looking for, and can't figure out. I can set up
> a cron job, but what exactly w
cronjob a sql script that runs a delete statement for old jobs daily
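e.g. a purge.sql along these lines (table name and retention period made up):
delete from your_log_table where created_date < date_sub(curdate(), interval 30 day);
and a crontab entry something like:
0 2 * * * mysql -u purge_user -pYOURPASS your_db < /path/to/purge.sql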
> -Original Message-
> From: Scott H [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 12, 2003 10:40 AM
> To: MySQL Mailing List
> Subject: Re: maintaining size of a db
>
>
> --- Egor Egorov wrote:
> > Scott H wr
you could put a shell script as the actual cron job, and make the file readable only by
root, using an environment variable for the password (defined in the shell script file),
so that way even if someone 'sniffs' the process via 'ps -ef' they don't see the actual
password (if they happen to
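a sketch of that idea (names/paths invented; MYSQL_PWD is an environment variable the
mysql client reads):
#!/bin/sh
# readable/executable by root only (chmod 700)
MYSQL_PWD=the_actual_password
export MYSQL_PWD
mysql -u purge_user your_db < /root/purge.sql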
I think that I must be missing something, as I agree with all the suggestions to change
the separate date element columns to a single date field, but Meli's original post had a
date falling into multiple quarters. Now to my knowledge, a date can only be in one
quarter, from a calendar point of view.
I seem to remember someone saying that if a query would end up returning more than x%
of a table (I think it was either 30% or 50%), then mysql just ends up doing a full
scan regardless...
> -Original Message-
> From: Eric Anderson [mailto:[EMAIL PROTECTED]
> Sent: Thursday, November 06,
you could do
select article_num from $table
order by article_num desc LIMIT 75
> -Original Message-
> From: Mark [mailto:[EMAIL PROTECTED]
> Sent: Thursday, November 06, 2003 8:40 AM
> To: [EMAIL PROTECTED]
> Subject: Last 75 entries from a table
>
>
> Hello,
>
> Using MySQL 3.23.58, w
you are going to want a 'buster' table... also known as a many-to-many table
so you have:
contacts
--------
Contact_Key
Industries
----------
Industry_Key
Contact_Industry_assoc
----------------------
Contact_Key
Industry_Key
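then a lookup is just a double join, e.g. (industry_name is a made-up column):
select c.*
from contacts c, Contact_Industry_assoc a, Industries i
where c.Contact_Key = a.Contact_Key
and a.Industry_Key = i.Industry_Key
and i.industry_name = 'Banking';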
> -Original Message-
> From: DeBerry, Casey [mailto:[EMAI
well... when I do db design, I tend to start with the objects of my system. The one
that comes to mind in your case is people.
so you'll need a people table.
well what are the details of a person?
first_name
last_name
middle_name1
middle_name2
maiden_name
[any other basic bio data]
so you'l
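as a starting sketch (column sizes are guesses):
create table people (
  person_id int not null auto_increment primary key,
  first_name varchar(50),
  last_name varchar(50),
  middle_name1 varchar(50),
  middle_name2 varchar(50),
  maiden_name varchar(50)
);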
Is there a way to do this on a live running (i.e. production) server?
> -Original Message-
> From: Dathan Vance Pattishall [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 28, 2003 1:49 PM
> To: 'Iago Sineiro'; 'MySql Mail List'
> Subject: RE: Howto reduce size of MYISAM files after dele
I just recently helped someone else with a similar issue
is your PHP code (and therefore your insert) getting executed when the form is being
displayed to the user initially, and then again when the form is submitted?
> -Original Message-
> From: Erich C. Beyrent [mailto:[EMAIL PRO
lleges are expensive'
I'll send you that memo....
:D
Dan Greene
> -Original Message-
> From: Randy Chrismon [mailto:[EMAIL PROTECTED]
> Sent: Monday, October 27, 2003 2:51 PM
> To: [EMAIL PROTECTED]
> Subject: My Company DB Wars
>
>
>
> My old Mic
As you are selecting all records (no where clause), it will scan the whole table every
time, I believe...
does anyone know whether, if he added the other columns to his index, or had 4 separate
indexes (one per column), they would be used in this operation?
>
> Ok, I've found the optimization chapter i
You probably want to restructure your tables (if that's an option)
if it's a hierarchical menu system (one parent only), then you'll need to put a
parent_id column in, and then map each child to the parent.
then once you have your parent menu (158), you select * from menus where parent_id =
158
If
Indexes are your friend. Perhaps your best friend (as far as databases go)
---
To shed light on your question... you can index any field... if the entries are
distinct, it's called a 'unique index', which is the best kind to use. Otherwise you
have a 'non-unique ind
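for example (table/column names invented):
create unique index idx_users_email on users (email);
create index idx_orders_customer on orders (customer_id);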
I may be missing something, but wouldn't a substring(bigdata, 0,instring("--header
end--")) suffice? Of course, my syntax is all kinds of wrong... but you get the
idea...
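in MySQL the pieces would be substring() and locate(); roughly (table name is a placeholder):
select substring(bigdata, 1, locate('--header end--', bigdata) - 1)
from your_table;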
> -Original Message-
> From: Jan Magnusson [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 23, 2003 10:56 AM
> T
For now, I think that you're stuck doing it in your application. Just encapsulate how
that data can be updated, and have all code that updates it call your
function/method/etc...
My $0.02...
Dan Greene
> -Original Message-
> From: Mumba Chucks [mailto:[EMAIL PROTECTED]
>
from one dan to another.
select man.description as man_description, cat.description as cat_description
from main_table as main, manufacturer_table as man, categories_table as cat
where main.manufacturer_id = man.manufacturer_id
and main.category_id = cat.category_id
and title_id = 3
Dan
To bring this back to being a MySQL question:
What if you used the encode or md5 functions when you retrieve the list of results,
and then do your lookup on the item details by matching it to decoding the same id?
ex. get list-
select fid as open_fid, MD5(fid) as crypt_id
from your_table
wher
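and the detail lookup would then match on the same hash, roughly:
select *
from your_table
where MD5(fid) = 'the_crypt_id_passed_back';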
Mahesh,
The best advice, from what I've heard around the list is to base your decision on this
based on your filesystem. Some filesystems handle a large # of files well (1 db, many
tables), some don't. Some handle many directories well (many db's, 1 tbl each), some
don't (a la your example).
P
ctober 09, 2003 3:39 PM
> To: Dan Greene
> Subject: Re: FW: FW: When inserting data it insert 2 records
> one correct
> and one blank!
>
>
> Im lost, im new on php and mysql so i may be omitting some
> statements or
> logics i send you the entire script, its an lbi t
sending on to list while I think... ;)
-Original Message-
From: Emilio Ruben Estevez [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 09, 2003 10:34 AM
To: Dan Greene
Subject: RE: When inserting data it insert 2 records one correct and one
blank!
Ok, here it is the full statement
Emilio,
Can you supply the full insert statement? Also, is this happening for all users, or
one in particular? I ask, as I've seen issues with web app users double-clicking the
submit button causing similar issues
> -Original Message-
> From: Emilio Ruben Estevez [mailto:[EMAIL P
I know it's not the answer you're looking for... :( but dealing with overnights has
caused me so much aggravation in past apps I've written, I've tended to make the
client create two (or more) 'bookings' for the covered time... don't know if it's an
option for you, but it's my $0.02.
> -
an option for A-
no idea if this will work, but what if you moved your actual data file to new drive,
and soft linked it from the other drive?
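i.e. with the server stopped, something like (paths made up):
mv /var/lib/mysql/big_db /newdisk/mysql/big_db
ln -s /newdisk/mysql/big_db /var/lib/mysql/big_db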
> -Original Message-
> From: Brendan J Sherar [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 07, 2003 7:27 AM
> To: [EMAIL PROTECTED]
> Subje
try
select commentid, commentref
from comments
ORDER by
commentref, commentid DESC;
> -Original Message-
> From: Cummings, Shawn (GNAPs) [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 02, 2003 2:37 PM
> To: [EMAIL PROTECTED]
> Subject: sorting/grouping
>
>
>
>
> I have a v
d the right one...
Sorry that I'm being incredibly vague; I set it up months ago.
Dan Greene
> -Original Message-
> From: Steven Nakhla [mailto:[EMAIL PROTECTED]
> Sent: Friday, October 03, 2003 9:42 AM
> To: MySQL
> Subject: Tomcat, Connection Pooling, and MySQL
>
you want to do
insert into mytable (column1, column2, column3)
(select thiscolumn, '1', now() from anotherTable);
> Mysql 3..
>
> I can't figure this one out
>
>
>
> I need to move data from one mysql table to another
>
> The hurdle for me is adding additional column values.
>
> Here is w
seems to store only the number of hours worked
>
> Thank you,
> Syed Ali
> (609) 951-2989
>
>
>
> -Original Message-
> From: Dan Greene [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 29, 2003 11:12 AM
> To: Syed Ali; [EMAIL PROTECTED]
> Subject:
In the last timetracking system I built, I used a 'timeentry' table which was basically
timeentry_id (pk)
employee_id
date
num_hours_worked
task_id
I put a few other indexes in there based on the searching that I had to do, but to
each their own...
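roughly (types are guesses, and I've called the date column work_date here):
create table timeentry (
  timeentry_id int not null auto_increment primary key,
  employee_id int not null,
  work_date date not null,
  num_hours_worked decimal(4,2) not null,
  task_id int not null
);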
> -Original Message-
> From: Syed
_total;
on delete
update count_table set total_count = total_count - 1, je_total = je_total -
:old.je_total;
hope this helps,
Dan Greene
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Friday, September 26, 2003 7:34 AM
> To: [EMAIL PROTE
...without a central 'app server' as such, you can't really take advantage of connection
pooling. The closest you can do is write into your app that if the connection is idle
for x amount of time, then it closes the connection, and reestablishes it upon next
data request.
it depends on how you want it
if you want them consecutively, you probably want to UNION two queries together, but
you need to make sure that the column types are identical...
i.e.
select meeting_name as event_name, meet_date as event_date
from meetings
where meet_date >= curdate()
union
se
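with the second half being something like (second table/columns are hypothetical):
select party_name, party_date
from parties
where party_date >= curdate()
order by event_date;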
SELECT state_ID FROM financial_master WHERE category_ID = '1'
becomes
SELECT DISTINCT state_ID FROM financial_master WHERE category_ID = '1'
(SELECT unique works in some other databases, but DISTINCT is the keyword MySQL understands)
2 points !
> -Original Message-
> From: Comcast [mailto:[EMAIL PROTECTED]
I believe it, as there's an open source project c-jdbc (clustered jdbc) that can do it
for x number of machines running any jdbc compatible db's (and they don't even need to
be the same type...)
> -Original Message-
> From: Neil Aggarwal [mailto:[EMAIL PROTECTED]
> Sent: Friday, Septembe
It's actually mentioned in the user comments in the online manual (I was there after the
other reply looking for MINUS support): in a union, for any column that is a literal, the
topmost query defines the datatype for that column (non-literals obviously use the
column type).
I agree it's n
ok...
I think this'll work...
select distinct t1.*
from table1 as t1
left outer join table2 as t2 on t2.name = t1.name
where t2.name is null
> -Original Message-
> From: xander xxx [mailto:[EMAIL PROTECTED]
> Sent: Thursday, September 11, 2003 2:56 AM
> To: Dan Greene
not to respond with, 'read the manual', but
There is an entire section of the manual devoted to user management; I would start at
www.mysql.org, click on the documentation link on top, and go from there...
(user account management is :
http://www.mysql.com/doc/en/User_Account_Management.htm
What most data structures do is use foreign keys to the unique numerical id column, in
your case, company_id. That way, if the company code changes, the id does not, and
therefore, no issues on update...
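a sketch of what I mean (names invented):
create table company (
  company_id int not null auto_increment primary key,
  company_code varchar(10) not null,
  company_name varchar(100)
);
create table invoice (
  invoice_id int not null auto_increment primary key,
  company_id int not null
);
invoice rows point at company_id, so company_code can change without touching them.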
> -Original Message-
> From: Daevid Vincent [mailto:[EMAIL PROTECTED]
> Sent: Wed
If you're accessing your db through JDBC, an idea that I've been following is the
c-jdbc project...
http://c-jdbc.objectweb.org/
it's software raid clustering for databases... it's still in beta, but it looks very
promising for easy clustering. Combined w/ MySQL's master/slave setup, it could
And although it blows your single-query theory further out of the water,
you could query your listing for a count of each starting letter of the last name,
getting something like this:
[syntax will be wrong (closer to Oracle syntax), but I don't have a mysql installation
at work to get it r
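in MySQL syntax it would be roughly (table name invented):
select upper(left(last_name, 1)) as letter, count(*) as how_many
from your_listing
group by upper(left(last_name, 1))
order by letter;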
I have this strange feeling of deja-vu
This was just posted to the list recently (last week?), and currently, you cannot do
cross-table full-text indexes, excepting boolean text searches, which would be slow.
I have this strange feeling of deja-vu
Dan Greene
> -Original Mess
Having developed such an app myself (albeit in Oracle where the wonder of 'connect by'
exists, which according to 'to-do' lists is coming for MySQL, yippie!), this is what
I recommend...
1- if your forum is 'threaded', i.e. a message is in response to another, you can use
that linking to get
I would use MySQL CC (Control Center)... I seem to remember someone mentioning that
mysql gui is discontinued...
CC is available from the mysql.org site, and is very easy to install on windows xp
(it's on my laptop...)
> -Original Message-
> From: Liwen Han [mailto:[EMAIL
what I've done is
select ceil(value)
(not sure if ceil is the function on MySQL, but there is a ceiling function, I'm
sure...)
> -Original Message-
> From: Fabio Bernardo [mailto:[EMAIL PROTECTED]
> Sent: Friday, September 05, 2003 2:59 PM
> To: Mysql (E-mail)
> Subject: Round Question
well, that'll teach me not to update my java api bookmarks to 1.4
the getGeneratedKeys() calls works like a charm!
Thanks for the help
> -Original Message-
> From: Mark Matthews [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, September 02, 2003 4:09 PM
> To: Dan Gr
Ahh... the proverbial 'how many licks does it take to get to the core of the database'
There is no straightforward answer. That's the bad news.
However, the basics are identical for all databases:
the more memory you have, the more concurrent connections you can handle.
The more CPU
se is somewhat lacking in PHP/MySQL,
Dan Greene
> -Original Message-
> From: Parker Morse [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, September 03, 2003 11:43 AM
> To: [EMAIL PROTECTED]
> Subject: MySQL 4.0.14 stops responding to PHP 4.3.2
>
>
> I've be
I may be missing something, but why not just do:
SELECT CustomerName, ApplicationName, Status, COUNT(1) AS Count
FROM LogMessage
GROUP BY Status, CustomerName, ApplicationName with rollup;
which should return all the data you need in 1 query, which has got to run faster than
4 separate queries.
I don't know the answer to your question, but as a side note, I've always found
cat'ing /dev/null into a file to be safer if the file may be in use
cat /dev/null > foo.log
> -Original Message-
> From: Jesse Sheidlower [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, September 02, 2003 10:27
(newbie to MySQL)
I've been banging my head against the wall on this one for a bit now, and I understand
that last_insert_id() is per-connection based, but most webapps are connection pooled
(simple) or clustered (harder). What are my options to get the id of the inserted row
in a webapp? As a