Did you try doing a find for the lib? Perhaps it needs to be in the ld.conf or
ldlibpath?
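A minimal sketch of that check, assuming the library lives under /usr/local/mysql/lib (paths are guesses, adjust for your system):

```shell
# Locate the missing client library first:
find /usr /usr/local -name 'libmysqlclient*' -print 2>/dev/null

# Per-session fix: put its directory on the loader search path.
export LD_LIBRARY_PATH=/usr/local/mysql/lib:$LD_LIBRARY_PATH

# System-wide fix (needs root): add the directory to /etc/ld.so.conf,
# then refresh the loader cache:
#   echo /usr/local/mysql/lib >> /etc/ld.so.conf
#   ldconfig
```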
-
Sent from my NYPL BlackBerry Handheld.
- Original Message -
From: Gleb Paharenko [EMAIL PROTECTED]
Sent: 12/06/2005 05:51 PM
To: mysql@lists.mysql.com
Subject: Re: question regar
I have never seen this. Mysql would have to do a wget of the file then dump it.
Last I knew it wasn't a web browser. There may be a way to do the wget inline
though, or at least write something in shell or perl to do it. Is this cron'd
or something, or a one time thing?
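The wget-then-load idea can be sketched in shell like this; the URL, database, and table names are hypothetical, and running the last line needs a live server:

```shell
# Fetch the file, then bulk-load it into MySQL.
url='http://example.com/data.csv'
file=/tmp/data.csv
sql="LOAD DATA LOCAL INFILE '$file' INTO TABLE mytab FIELDS TERMINATED BY ','"
echo "$sql"
# With a live server this becomes a one-liner, easy to cron:
#   wget -q -O "$file" "$url" && mysql mydb -e "$sql"
```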
Don't know if that's the word I'd use, but credibility is lost for that
nonsense. No professionalism whatsoever. If I were admin I'd ban for less.
- Original Message -
From: Raz [EMAIL PROTECTED]
Sent: 11/17/2005 05:08 AM
To: ky
I think you should go through the change log just in case, especially if you are
using one of the APIs in your apps (php, etc...). Don't be rash about
upgrading if it is production.
- Original Message -
From: Nathan Gross [EMAIL
I would venture to guess that support was the issue. That would make a fair
comparison.
- Original Message -
From: Joerg Bruehe [EMAIL PROTECTED]
Sent: 11/04/2005 06:28 AM
To: Jigal van Hemert <[EMAIL PROTECTED]>
Cc: mysql@list
Logs?
- Original Message -
From: tom wible [EMAIL PROTECTED]
Sent: 10/24/2005 06:58 PM
To: mysql@lists.mysql.com
Cc: [EMAIL PROTECTED]
Subject: demon quits immediately...
>Description:
[EMAIL PROTECTED] mysql-standard-5.0.15-linu
This is particularly troublesome when it comes to ISP accounts, e.g. all AOL
users showing Reston as their base.
I tried something like this using whois databases and had the same issues. I
canned the idea for now.
P
- Original Mes
Are there indexes on the table? Could be that.
--Original Message--
From: Sujay Koduri
To: mysql
Sent: Sep 13, 2005 5:24 AM
Subject: Major Difference in response times when using Load Infile utility
hi ,
I am using the Load Infile utility to load data from file to MySQL DB.
When trying t
Aren't the openssl dev libraries in a separate rpm?
--Original Message--
From: RedRed!com IT Department
To: mysql
Sent: Aug 19, 2005 4:52 PM
Subject: mysql-4.1.13 and openssl-0.9.8 configuration issue
I'm running Red Hat Enterprise v4 and I have installed OpenSSL-0.9.8 and
am trying to ins
I had a similar setup, involving log parsing. It was impractical to put all
of the data in one table and expect to get timely results. In order to do
it, I scripted the generation of the temp table (I think I used a MERGE
table, since it is not actually moving data and is fast). So... I scripted the
date r
I would go raid 10. This made a huge difference on one of my systems. You could
probably get fancier than that, but r10 works well. A different fs would likely
help. You said nothing about the layout of the db, but I'm sure you have that
under control.
P
My first guess is the indexes. Maybe create them after the import. It will
nonetheless take a bit of time!
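For MyISAM, that advice is roughly this (table and file names are hypothetical; on InnoDB you would drop and re-add the secondary indexes instead):

```sql
ALTER TABLE mytab DISABLE KEYS;
LOAD DATA INFILE '/tmp/big.csv' INTO TABLE mytab;
ALTER TABLE mytab ENABLE KEYS;
```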
--Original Message--
From: Jarle Aase
To: MySQL list
Sent: Mar 17, 2005 11:53 PM
Subject: Problem: Slow "LOAD FILE" performance with innodb
Hi list,
I'm trying to import some data in
Does the app display all 1000 rows at once? Does your app require all
fields? Only retrieve what you need for the page. If the app displays all
1000 rows, it may remain slow depending on how you get them (order, group,
function) and indexing. Also, the link and disk may matter depending on
the si
There is likely an rpm package for the php mysql module. Install that, or
compile php with mysql support.
P
--Original Message--
From: Kamal Ahmed
To: mysql
Cc: Kamal Ahmed
Sent: Mar 9, 2005 3:58 PM
Subject: PHP/MyphpAdmin/MySql
Dear MySql List experts,
I have the following installed on
A single transaction logs into the db several times?
I assume it's a browser-based transaction, no?
Are you limiting connections (in my.cnf)? Have you tuned the config? If
yes, how so?
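A minimal my.cnf fragment for the connection-limit question; the values are placeholders and depend entirely on your workload:

```ini
[mysqld]
max_connections   = 200
wait_timeout      = 60
thread_cache_size = 16
```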
--Original Message--
From: Larry Lowry
To: mysql
Sent: Jan 21, 2005 12:49 PM
Subject: Connection perfo
If it were all in one row, you may be able to compare datetime fields.
I do not know if you can do this with 2 rows, and the query will probably
be rough.
Did you design the table? Can you create it so that your row has start and
stop times, instead of creating another row?
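A sketch of both shapes, on a hypothetical `events` table (TIMESTAMPDIFF assumes MySQL 4.1.1 or later):

```sql
-- If start/stop live in one row, the comparison is trivial:
SELECT id, TIMESTAMPDIFF(SECOND, start_time, stop_time) AS duration_sec
FROM events;

-- With two rows per event, a self-join is rougher:
SELECT a.id, TIMESTAMPDIFF(SECOND, a.ts, b.ts) AS duration_sec
FROM events a JOIN events b
  ON a.id = b.id AND a.kind = 'start' AND b.kind = 'stop';
```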
> -Original M
without problem.
Thanks-
P
---------
Peter J. Milanese, System Administrator, The New York Public Library
mailto:[EMAIL PROTECTED]
Tel.: +1 (212) 621-0203 / Fax: +1 (212) 247-5848
I currently run LVS (pre-distribution) on my farm, which gets about 100M
hits/month.
Good points about LVS are that it is completely rock solid, and runs on
minimal hardware.
I have never run MySQL behind it, as I think that would be a bit flaky for
a live site. Probably
worth checking out though.
It's not really complaining about /etc/hosts.
It's complaining that "hostname" command does not return anything that is
in /etc/hosts.
type 'hostname' at the command prompt. Add what that returns to the
/etc/hosts entry for the machine.
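The check itself is two commands:

```shell
# What the server resolves: 'hostname' must match an /etc/hosts entry.
h=$(hostname)
echo "hostname says: $h"
if ! grep -qw "$h" /etc/hosts; then
    echo "missing -- add an entry like: 127.0.0.1  $h"
fi
```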
Let me know how that goes-
P
Darryl Rodden <[EMAIL P
This complicates the database. This discussion has come up hundreds of
times on this list.
NULL/NOT-NULL is a basic check, and by saying NOT NULL, you're telling
MySQL to fail it.
Same thing with syntax checks; they're just that.
The different INT fields are provided to make the database more e
Set the fieldtype to 'bigint'
It's the limit on int
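The fix is a one-line ALTER; column and table names here are hypothetical stand-ins:

```sql
-- Signed INT tops out at 2147483647; BIGINT goes to 9223372036854775807.
ALTER TABLE mytab MODIFY big_col BIGINT NOT NULL;
```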
"Jeff McKeon" <[EMAIL PROTECTED]>
05/25/2004 02:29 PM
To: <[EMAIL PROTECTED]>
cc:
Subject:Very Strange data corruption
Query:
insert into
MIS.simcard(ID,ShipID,Service_Provider,SN,v1,v2,f1,d1,puk1,
I would think that seek time may depend on both disk speed and
filesystem type... I can see why it would matter, sort of...
Jeremy Zawodny <[EMAIL PROTECTED]>
05/13/2004 02:45 PM
Please respond to mysql
To: Peter J Milanese <[EMAIL PROTECTED]>
cc:
Does the filesystem matter as much as disk throughput? I'd imagine that
is where the bottleneck would be, at least as I've seen...
Tim Cutts <[EMAIL PROTECTED]>
05/13/2004 11:13 AM
To: Jacob Friis Larsen <[EMAIL PROTECTED]>
cc: [EMAIL PROTECTED]
Subject:
Silly question:
Are you sure it is running as expected?
P
"Stephen Camilleri" <[EMAIL PROTECTED]>
05/11/2004 11:10 AM
To: "Mysql List" <[EMAIL PROTECTED]>
cc:
Subject:RE: mysql on redhat9
Thanks Egor..
Yes I did. I've been trying to create grant t
Did I miss something, or is this not concerning a 1 second spike to 100%?
I do not see why this is problematic, and I'm a bit curious as to how it
is...
I mean, the DB does use 100% CPU if it's available. And for a 1 second
period,
it could very well just be the sample time for the tool monitor
You may be able to limit resource usage for mysql. I am not sure how to do
it in windows though.
100% for a second while it executes the query is not abnormal in my
opinion.
"Nick A. Sugiero" <[EMAIL PROTECTED]>
05/05/2004 01:12 PM
To: <[EMAIL PROTECTED]>
cc:
Javascript is a client side language. I seriously doubt it alone would do
anything for you.
My Sql <[EMAIL PROTECTED]>
05/05/2004 12:18 PM
To: [EMAIL PROTECTED]
cc:
Subject:Html and mysql..
Hi all,
I have got one serious doubt.
Can we access mysql dat
Gerald-
In my experience, I have inserted and retrieved from a decent sized db
(a few million records per day), and have gotten
them out in the same order. There were no other operations on the db
except for chronological ones, i.e. delete the
first hundred rows, insert a hundred rows. The resu
Does the database not return it in the order that the entries are
submitted?
I've done some log parsing/caching in databases, and the order had always
been the same whether
I use an order by date or not. One thing logs to the db, the other grabs.
Had no problem without an
order.
P
"Boyd
try getting 'mytop'.
Do a google on it... It's like the 'top' utility, but displays information
regarding mysql procs.
P
"Ronan Lucio" <[EMAIL PROTECTED]>
04/20/2004 06:58 PM
To: <[EMAIL PROTECTED]>
cc:
Subject:Process Monitoring
Hi,
We have a MySQL
Just something I noticed missing here
The lack of error checking on the server side means better performance in
my opinion. When you're throwing
a couple thousand hits per second at it, this is visible. I would have to
agree that error checking does belong
on the client side (at least from
Awesome. Hope it works out.
P
Dan Johnson <[EMAIL PROTECTED]>
04/12/2004 02:16 PM
To: [EMAIL PROTECTED]
cc:
Subject:Re: Queries per second average
Victor Pendleton wrote:
>I agree with Peter, 50 queries per second is not a MySQL limit. Have you
>che
I've done hundreds if not thousands of queries per second...
I do not see how the server can be an issue unless its configuration is
bare.. And I don't know how much
that should affect it if it's a decent server :-/ If there are
configuration constraints, it could be disk that's messing
it up.
You are correct Jim..
This is certainly not Cartesian.
"Jim Page - EMF Systems Ltd" <[EMAIL PROTECTED]>
04/07/2004 10:09 AM
Please respond to "Jim Page - EMF Systems Ltd"
To: "gerald_clark" <[EMAIL PROTECTED]>
cc: <[EMAIL PROTECTED]>
Subject:Re: stuc
I ran into the same issues on RH8, with a default implementation. It can be
overcome, but MySQL failed to
write to the table after 2gb or so. It turned out to be a filesystem
limitation issue, which was fixable. I am
not sure, but given the size of files nowadays, RH9 defaults probably take
-
Peter J. Milanese
"Volnei Galbino" <[EMAIL PROTECTED]>
04/06/2004 03:09 PM
Please respond to asc28671
To: " [EMAIL PROTECTED]" <[EMAIL PROTECTED]>
cc:
Subject:measuring the time used by the query
Hello,
I am makin
ve
no performance issues generating the
graphs on the fly with any reasonable timeframe.
Anyhow.. Hope that helps a bit.
Peter J. Milanese
Jack Coxen <[EMAIL PROTECTED]>
04/06/2004 02:01 PM
To: "MySQL List (E-mail)" <[EMAIL PROTECTED]>
cc:
A million rows is not a lot for mysql.
The pro of doing it in one table is that the coding is much easier.
Reporting is much easier.
The pro of doing it as individual databases would be easier locking,
quicker responses on
queries (not so much), and the ability to secure it much more (dynamically,
with LVM)
Back to the point, the 1gig limit stated in the initial email can be
overcome. Things you have to keep in mind
are which OS to choose, which architecture, and the underlying filesystem.
P
"Donny Simonton" <[EMAIL PROTECTED]>
03/09/2004 09:09 AM
Yes. There's a limit.
Start mysql with --big-tables. I think there's a finer way of doing it,
just don't remember what it was ;)
P
"Donny Simonton" <[EMAIL PROTECTED]>
03/09/2004 08:00 AM
To: "'Jigal van Hemert'" <[EMAIL PROTECTED]>,
<[EMAIL PROTECTED]>
cc:
S
You need to install the client side of mysql on the linux box (libmysql.*)
then compile php with the mysql modules (I think they exist in RPMs)
P
-"Eric W. Holzapfel" <[EMAIL PROTECTED]> wrote: -
To: [EMAIL PROTECTED]
From: "Eric W. Holzapfel" <[EMAIL PROTECTED]>
Date: 02/10/2004 02:55
Create a unique index across the two fields, or a composite primary key;
PRIMARY KEY can in fact span multiple columns. That should take care of it.
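Both options, with hypothetical table and column names (use one or the other, not both):

```sql
-- A composite unique index:
ALTER TABLE mytab ADD UNIQUE KEY uniq_pair (col_a, col_b);

-- ...or a multi-column primary key, which MySQL also accepts:
ALTER TABLE mytab ADD PRIMARY KEY (col_a, col_b);
```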
P
-Seena Blace <[EMAIL PROTECTED]> wrote: -
To: [EMAIL PROTECTED]
From: Seena Blace <[EMAIL PROTECTED]>
Date: 02/10/2004 01:25PM
Subject: COMPOSITE PRIMARY KEY?
Hi,
I want to create a
When you create the table I think you just set it,
i.e. create table blah AUTO_INCREMENT=
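Spelled out, with a hypothetical starting value:

```sql
CREATE TABLE blah (
    id INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (id)
) AUTO_INCREMENT = 1000;
```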
P
-"Scott Purcell" <[EMAIL PROTECTED]> wrote: -
To: <[EMAIL PROTECTED]>
From: "Scott Purcell" <[EMAIL PROTECTED]>
Date: 02/09/2004 12:21PM
Subject: auto_increment pseudo sequence?
Hello
And then there's the whole opensource thing :)
Your platform focus depends a lot on it too. If you're an MS shop, I'd
imagine SQL Server would be the way to go. All the fancy MS integration
stuff is there.
Connecting to it from other OS's is generally done with ODBC. ODBC is
pretty stinky as it's an abb
Sorry. Obviously didn't see this...
-Phil <[EMAIL PROTECTED]> wrote: -
To: Dan Greene <[EMAIL PROTECTED]>
From: Phil <[EMAIL PROTECTED]>
Date: 02/06/2004 09:36AM
cc: gerald_clark <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
Subject: RE: How to determine when a MySQL database was last modified
Don't know if it can be done in the database without lots of legwork.
You can just use the filesystem to do it though.
ls -la within the database directories.
It'd probably be a lot easier to use perl or php file functions, then you'd
be able to do all your calculations in epoch.
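A small sketch of the filesystem approach in shell rather than perl/php; the datadir path is hypothetical, and the `-printf` flag assumes GNU find:

```shell
# Point this at your database directory:
DATADIR=${DATADIR:-/var/lib/mysql/mydb}

# Epoch mtime of the most recently modified file under a directory,
# i.e. roughly "when was this database last written":
newest_mtime() {
    find "$1" -type f -printf '%T@\n' 2>/dev/null | sort -n | tail -1
}
newest_mtime "$DATADIR"
```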
P
-Phil
( I think in the MySQL docs, might have been here in
the list), technically it will take less time to add the indexes after the
table creation, than the overhead of index updating per-insert.
Either way, it's gonna take a loong time.
> -Original Message-
>
I'd start with the indexes in place. 5+mil records will take quite some
time to index after the fact.
P
-Krasimir_Slaveykov <[EMAIL PROTECTED]> wrote: -
To: [EMAIL PROTECTED]
From: Krasimir_Slaveykov <[EMAIL PROTECTED]>
Date: 01/30/2004 09:14AM
Subject: SQL and productivity
Hello ,
I
Doesn't Excel have ODBC functionality? If so, there is MyODBC. I don't know
if it will provide everything you need, but it's a simple start.
P
-Brian Harris <[EMAIL PROTECTED]> wrote: -
To: Annie Law <[EMAIL PROTECTED]>
From: Brian Harris <[EMAIL PROTECTED]>
Date: 01/28/2004 04:39PM
cc:
t around a lot of that based on
your table layout. I was using it to test live web stats in a web farm.
Developed the application, then ran into caching issues and haven't looked
at it since. MySQL did the right thing though.
P
-"STE-MARIE, ERIC" <[EMAIL PROTECTED]> wrot
-Forwarded by Peter J Milanese/MHT/Nypl on 01/20/2004 02:37PM -
To: <[EMAIL PROTECTED]>
From: Peter J Milanese/MHT/Nypl
Date: 01/20/2004 02:34PM
cc: <[EMAIL PROTECTED]>
Subject: RE: Slow query times
You may also want to try :
count(1)
instead of
count(*)
count(*) pu
-Forwarded by Peter J Milanese/MHT/Nypl on 01/20/2004 02:36PM -
To: "STE-MARIE, ERIC" <[EMAIL PROTECTED]>
From: Peter J Milanese/MHT/Nypl
Date: 01/20/2004 02:31PM
cc: [EMAIL PROTECTED]
Subject: Re: Advice needed for high volume of inserts
It'll work.
I do slig
Subselects were slated for release with 5.0, not 4.1.
P
Jeremy Zawodny <[EMAIL PROTECTED]>
06/10/2003 11:21 AM
Please respond to mysql
To: "Peter J. Milanese" <[EMAIL PROTECTED]>
cc: Kaarel <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
MySQL has the backing of many large corporations (for which MySQL was
initially written). MySQL
support/userbase will not go away that easily. They have a plan for
enhancements. They
accomplish these enhancements ahead of schedule.
This is how I translate 'lack of commercial support company m
Hey Kaarel-
I've been sticking with MySQL mostly for its support. Large
community, lots of documentation, and they have a future plan (which they
tend to actually complete ahead of schedule). While the featureset is
'supposedly' not as advanced as pgsql, mysql does in fact work. I hit
- Forwarded by Peter J. Milanese/MHT/Nypl on 06/05/2003 07:35 AM -
Peter J. Milanese
06/04/2003 09:55 PM
To: Peter J. Milanese/MHT/Nypl
cc: "Mike Hillyer" <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
Subject:RE: Suggestions on joins/me
lues?
P
"Mike Hillyer" <[EMAIL PROTECTED]>
06/02/2003 03:04 PM
To: "Peter J. Milanese" <[EMAIL PROTECTED]>
cc: <[EMAIL PROTECTED]>
Subject:RE: Suggestions on joins/merges
Ok...
select ref.zipcodes.
The basic layout of concerned tables:
BIG1 |
BIG2 |--- ref.zipcodes
BIG3 |
Whereas, BIG# should be treated as one table
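One way to get that "treated as one table" behavior in MySQL of this era is a MyISAM MERGE table; this is a sketch only, with the column list guessed from the query above (the underlying tables must be identical MyISAM tables):

```sql
CREATE TABLE BIG_ALL (
    zipcode CHAR(5),
    tsb     DATETIME
    -- ...remaining columns, identical to BIG1/BIG2/BIG3...
) TYPE = MERGE UNION = (BIG1, BIG2, BIG3);

SELECT ref.zipcodes.state AS state, COUNT(1) AS count
FROM BIG_ALL, ref.zipcodes
WHERE tsb BETWEEN '$Start' AND '$End'
  AND BIG_ALL.zipcode = ref.zipcodes.zipcode
GROUP BY state;
```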
"Mike Hillyer" <[EMAIL PROTECTED]>
06/02/2003 01:38 PM
To: "Peter J. Milanese" <[EMAIL PROTECTED]>
Apologies.. I left that out...
The ambiguous column is 'zipcode'. It is common between all tables.
P
"Mike Hillyer" <[EMAIL PROTECTED]>
06/02/2003 01:38 PM
To: "Peter J. Milanese" <[EMAIL PROTECTED]>, <[EMAIL PROTECTED]>
Greetings:
I have a series of large tables. 5+gb each.
They have identical structures.
Sample of the query I want to run:
select ref.zipcodes.state as state, count(1) as count from
BIGTABLE,BIGTABLE2,BIGTABLE3,ref.zipcodes where tsb between $Start and $End
and zipcode=ref.zipcodes.zipcode group