Check your border router access list.
Patrick Sherrill
patr...@coconet.com
Coconet Corporation
SW Florida's First ISP
(239) 540-2626 Office
(239) 770-6661 Cell
We always store as strings to avoid rounding issues and then convert for calcs
to whatever precision we need.
Pat...
Sent from my iPhone
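A minimal sketch of that store-as-strings approach, using Python's decimal module (the value and precision here are invented for illustration):

```python
from decimal import ROUND_HALF_UP, Decimal

# The value sits in the database as a string (e.g. a CHAR/VARCHAR
# column); it is only turned into a number at calculation time.
stored = "19.99"               # invented sample value
price = Decimal(stored)        # exact: no binary-float round trip

# Convert at whatever precision the calculation calls for.
total = (price * 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(total)  # 59.97
```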
> On Dec 17, 2014, at 6:24 AM, Lucio Chiappetti wrote:
>
>> On Tue, 16 Dec 2014, Hartmut Holzgraefe wrote:
>>> On 16.12.2014 15:16, xiangdongzou wrote:
>>>
ts current incarnation - so I consider it to be
accurate, at least in that respect.
Patrick
myList - everything you could possibly want (to buy)
-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Tuesday, October 26, 2010 3:52 PM
To: Patrick Thompson; mysql@lists.mysql.c
Sorry, that should be 200MB not 20MB (still doesn't seem like much to me)
Patrick
myList - everything you could possibly want (to buy)
-Original Message-
From: Patrick Thompson
Sent: Monday, October 25, 2010 5:24 PM
To: 'Gavin Towey'; mysql@lists.mysql.com
Subject: RE: m
FF'
'innodb_log_buffer_size', '1048576'
'innodb_log_file_size', '25165824'
'innodb_log_files_in_group', '2'
'innodb_log_group_home_dir', '.\'
'innodb_max_dirty_pages_pct', '90'
'innodb_max_purge_lag',
Thanks Martin, though I'm somewhat confused by your message - there are no
joins in the query (unless the longtext is thought of that way), and the
EXPLAIN seems to indicate the query is using the ItemsById primary index
(which is what I would expect).
Patrick
myList<http://www.my
LT CHARSET=latin1;
This is just the retrieve side - which seems to be around 1.5 times slower than
the equivalent Sql Server numbers.
The update is much slower - 3 to 5 times slower depending on the record size.
It makes sense to me to focus on the retrieve, maybe the update is just a
reflecti
That's true for the deletes - but not for save and get. The ddl is available
here
http://cipl.codeplex.com/SourceControl/changeset/view/2460#57689
The code that accesses it is here
http://cipl.codeplex.com/SourceControl/changeset/view/2460#57729
Patrick
myList<http://www.my
Logical Processor(s)
Installed Physical Memory (RAM) 4.00 GB
Total Virtual Memory 6.75 GB
Page File Space 3.37 GB
Disk 120GB SSD with 22GB available
If this isn't the right place to ask this question, can someone point me to
somewhere that is?
Thanks
Patrick
Are you using...
m
>Email: john.dais...@butterflysystems.co.uk
> --
> Best Regards,
>
> Prabhat Kumar
> MySQL DBA
> Datavail-India Mumbai
> Mobile : 91-9987681929
> www.datavail.com
>
> My Blog: http://adminlin
I seem to recall the issue with the debug library, but don't recall the fix.
Do you get the same permissions (access) error with the release library?
Pat...
- Original Message -
From: "Miguel Cardenas"
To:
Sent: Saturday, January 10, 2009 10:22 AM
Subject: VC++ 2008 / MySQL debug / U
Hello,
I successfully changed ft_word_min_len to '1' and rebuilt my fulltext
index (dropped and re-added it). But for some reason mysqld still
does not return anything unless the word length is >= 3.
Any thoughts about this?
regards,
Patrick
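For reference, a sketch of the server-side setting as it would appear in my.cnf (note that the server variable is actually spelled ft_min_word_len; after changing it the server must be restarted and the fulltext index rebuilt, e.g. with REPAIR TABLE ... QUICK, before short words start matching):

```ini
# my.cnf (assumed location) - the fulltext minimum word length is the
# server variable ft_min_word_len; a restart plus an index rebuild is
# required before searches on short words return anything.
[mysqld]
ft_min_word_len=1
```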
> You're doing a LEFT JOIN, which can increase the number of rows returned.
> This is then GROUP BYed and run through a HAVING. Is:
> posts.poster_id=users.id
> a one to one relationship? If it is not, then count(*) would be a
> larger number and pass the HAVING. This may not be your problem, but
hey all,
I have my query that counts posts per user:
SELECT count(*) as counted, c.user_id FROM posts c group by c.user_id
having counted>1 order by counted DESC LIMIT 20
I wanted to add user login for each count so I did:
SELECT count(*) as counted, u.login FROM posts c left join users u on
p
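The truncated query above can be finished along these lines; sqlite3 stands in for MySQL here, and the users and posts are invented. Because users.id is a primary key, the join stays one-to-one and the per-user counts are not inflated:

```python
import sqlite3

# sqlite3 stands in for MySQL; sample users and posts are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, login TEXT)")
cur.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER)")
cur.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
cur.executemany("INSERT INTO posts (user_id) VALUES (?)",
                [(1,), (1,), (1,), (2,), (2,)])

# users.id is the primary key, so the join is one-to-one and the
# counts are not inflated by the join.
rows = cur.execute("""
    SELECT COUNT(*) AS counted, u.login
    FROM posts p
    LEFT JOIN users u ON p.user_id = u.id
    GROUP BY u.login
    HAVING counted > 1
    ORDER BY counted DESC
    LIMIT 20
""").fetchall()
print(rows)  # [(3, 'alice'), (2, 'bob')]
```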
> I would have thought your not-equals (!=), though, is matching a lot more
> rows every time..
The field is UNIQUE PRIMARY KEY in both tables, so there
should be 0 or 1 matches.
> I would look into using where not exists as a subselect
My MySQL book (O'Reilly second edition) does not mention
subqueries or
I have two MyISAM tables; each uses 'phone' as a primary key. Finding
rows where the primary keys match is efficient:
mysql> explain select bar.phone from foo,bar where foo.phone=bar.phone;
(EXPLAIN output table truncated in the archive)
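A sketch of the NOT EXISTS suggestion from the thread (sqlite3 standing in for MySQL, with invented phone values):

```python
import sqlite3

# sqlite3 stands in for MySQL; phone values are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE foo (phone TEXT PRIMARY KEY)")
cur.execute("CREATE TABLE bar (phone TEXT PRIMARY KEY)")
cur.executemany("INSERT INTO foo VALUES (?)", [("111",), ("222",), ("333",)])
cur.execute("INSERT INTO bar VALUES ('222')")

# Rows of foo with no matching primary key in bar, via a correlated
# NOT EXISTS subquery.
rows = cur.execute("""
    SELECT f.phone FROM foo f
    WHERE NOT EXISTS (SELECT 1 FROM bar b WHERE b.phone = f.phone)
    ORDER BY f.phone
""").fetchall()
print(rows)  # [('111',), ('333',)]
```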
Hi all,
I'm doing a "select * from comments where c.content REGEXP
'http://[^i].*'" and I would like to sort the urls found by repetition
of the same urls.
As an example if I get 3 records with http://google.com url in the
content and two with http://mysql.com I would get the first the 3
comments
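MySQL's REGEXP only filters rows, it does not extract the match, so one approach (assumed here, not taken from the thread) is to pull the matching comments and count the URLs client-side. Sample comment texts are invented:

```python
import re
from collections import Counter

# Invented sample rows, as the content column might look.
comments = [
    "see http://google.com for it",
    "try http://google.com now",
    "http://google.com again",
    "docs at http://mysql.com",
    "also http://mysql.com",
]

# Extract the first URL per comment, then rank URLs by repetition.
urls = [m.group(0) for c in comments
        for m in [re.search(r"http://\S+", c)] if m]
counts = Counter(urls).most_common()
print(counts)  # [('http://google.com', 3), ('http://mysql.com', 2)]
```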
Hey all,
I have comments(id,content) and votes(comment_id,vote). vote is a tinyint.
I would like to select total votes for each comment, I tried:
"select content, sum(v.votes) from comments c left join votes v on
c.id=v.comment_id"
but it only returns first result obviously, any idea how I coul
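The missing piece is a GROUP BY; a sketch with sqlite3 standing in for MySQL (note the column is vote, per the table definition above, not votes; sample data invented):

```python
import sqlite3

# sqlite3 stands in for MySQL; comments and votes are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, content TEXT)")
cur.execute("CREATE TABLE votes (comment_id INTEGER, vote INTEGER)")
cur.executemany("INSERT INTO comments VALUES (?, ?)",
                [(1, "first"), (2, "second")])
cur.executemany("INSERT INTO votes VALUES (?, ?)", [(1, 1), (1, 1), (2, 1)])

# Without GROUP BY, SUM() collapses everything into a single row;
# grouping by the comment yields one total per comment.
rows = cur.execute("""
    SELECT c.content, SUM(v.vote) AS total
    FROM comments c
    LEFT JOIN votes v ON c.id = v.comment_id
    GROUP BY c.id, c.content
    ORDER BY c.id
""").fetchall()
print(rows)  # [('first', 2), ('second', 1)]
```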
Hey all,
I have 2 tables:
Profiles(id).
Relationships(id,friend_id,befriender_id).
friend_id and befriender_id represent profiles ids.
I want to find all the profiles that are neither friends nor
befrienders of a given profile.
this is the query I use with profile id=1:
select * from pro
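One way to finish the truncated query, using NOT IN subqueries in both directions (sqlite3 standing in for MySQL, sample data invented):

```python
import sqlite3

# sqlite3 stands in for MySQL; profiles and relationships are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE profiles (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE relationships (id INTEGER PRIMARY KEY,"
            " friend_id INTEGER, befriender_id INTEGER)")
cur.executemany("INSERT INTO profiles VALUES (?)", [(1,), (2,), (3,), (4,)])
# Profile 1 befriended 2, and was befriended by 3.
cur.executemany(
    "INSERT INTO relationships (friend_id, befriender_id) VALUES (?, ?)",
    [(2, 1), (1, 3)])

# Profiles with no relationship to profile 1 in either direction.
rows = cur.execute("""
    SELECT p.id FROM profiles p
    WHERE p.id <> 1
      AND p.id NOT IN (SELECT friend_id FROM relationships
                       WHERE befriender_id = 1)
      AND p.id NOT IN (SELECT befriender_id FROM relationships
                       WHERE friend_id = 1)
    ORDER BY p.id
""").fetchall()
print(rows)  # [(4,)]
```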
Do you have to do something special with InnoDB tables to accept
various character sets like accented, European characters? Using the
default, these accented characters come out as garbage.
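Assuming MySQL 4.1 or later, the character set is declared on the table and announced on the connection; the InnoDB engine itself is not the problem. A sketch (table and column names invented):

```sql
-- Declare the character set explicitly; latin1 is the historical default.
CREATE TABLE people (name VARCHAR(100))
    ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- The client connection must announce the encoding it actually sends.
SET NAMES utf8;
```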
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Hey all,
I have a table 'clients' like this:
id int(5),
name varchar(55),
address varchar(55)
I would like to select all the records that have '%x%' and '%y%' but
'%x%' can be in name and '%y%' can be in address. Also in my query
there are generally more words to match (x,y,z,t etc) and I can't u
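One way to build such a query is to AND together one (name LIKE ... OR address LIKE ...) group per word, generating the WHERE clause in a loop so the word list stays flexible (sqlite3 standing in for MySQL, sample rows invented):

```python
import sqlite3

# sqlite3 stands in for MySQL; client rows are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE clients (id INTEGER, name TEXT, address TEXT)")
cur.executemany("INSERT INTO clients VALUES (?, ?, ?)", [
    (1, "Xavier", "York Road"),
    (2, "Bob", "Xenon Street"),
    (3, "Mary", "Main Street"),
])

# Every word must appear somewhere in the record, whichever column it
# lands in, so each term gets its own OR group, all ANDed together.
words = ["x", "y"]
clause = " AND ".join("(name LIKE ? OR address LIKE ?)" for _ in words)
params = [p for w in words for p in (f"%{w}%", f"%{w}%")]
rows = cur.execute(f"SELECT id FROM clients WHERE {clause}", params).fetchall()
print(rows)  # [(1,)]
```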
hey all,
I have two tables like that:
artists(id,name)
albums(id,artist_id,album_name)
and I need to transfer the data of this database to three tables that
look like this:
artists(id,name)
albums(id,name)
artists_albums(album_id,artist_id)
any idea what's the fastest query to do this?
thanx i
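A sketch of the migration with two INSERT ... SELECT statements (sqlite3 standing in for MySQL; the old albums table is renamed albums_old here, an assumption made because the old and new schemas reuse the table name):

```python
import sqlite3

# sqlite3 stands in for MySQL; ids and names are invented, and the old
# albums table is assumed renamed to albums_old to avoid the name clash.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE albums_old (id INTEGER PRIMARY KEY,"
            " artist_id INTEGER, album_name TEXT)")
cur.executemany("INSERT INTO albums_old VALUES (?, ?, ?)",
                [(1, 10, "A"), (2, 10, "B"), (3, 11, "C")])
cur.execute("CREATE TABLE albums (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE artists_albums (album_id INTEGER, artist_id INTEGER)")

# Two bulk INSERT ... SELECT statements: one fills the new albums
# table, one fills the join table, reusing the old album ids.
cur.execute("INSERT INTO albums (id, name)"
            " SELECT id, album_name FROM albums_old")
cur.execute("INSERT INTO artists_albums (album_id, artist_id)"
            " SELECT id, artist_id FROM albums_old")
rows = cur.execute("SELECT album_id, artist_id FROM artists_albums"
                   " ORDER BY album_id").fetchall()
print(rows)  # [(1, 10), (2, 10), (3, 11)]
```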
Hey all,
I host my app on a friend server who make backup every night, well
yesterday he installed another distro so I asked him for my db backup
and it turns out the only backup he did was the whole hard drive. So
he just sent me a tarball of my database directory containing:
ads_categories.MYD,
On 10/7/06, Patrick Aljord <[EMAIL PROTECTED]> wrote:
thanx it works the trigger is created successfully but it has no
effect. here it is:
delimiter //
create trigger testref before insert on bookmarks
for each row
begin
declare dummy char(2);
if new.title like '%xxx%'
then
set new.id='xxx';
end if;
end;
//create t
I meant the error is:
mysql> CREATE TRIGGER testref BEFORE INSERT ON bookmarks
-> FOR EACH ROW
-> BEGIN
-> IF NEW.title LIKE '%xxx%' THEN
-> SET NEW.id ='xxx';
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version f
I would like to prohibit the value 'xxx' on my column title, and if it
does contain the value I would like to create an exception by
assigning 'xxx' to the primary key id which is int(5).
This is what I do but I get an error on its creation so I guess it's
not the right way:
CREATE TRIGGER testre
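For reference, a version of that trigger that parses: the ERROR 1064 above comes from the client splitting the body at the first ';' and from the missing END IF / END, and the DELIMITER change addresses both. Whether assigning 'xxx' to an int column actually aborts the insert depends on the server's SQL mode:

```sql
DELIMITER //
CREATE TRIGGER testref BEFORE INSERT ON bookmarks
FOR EACH ROW
BEGIN
    IF NEW.title LIKE '%xxx%' THEN
        SET NEW.id = 'xxx';  -- id is int(5); aborting depends on SQL mode
    END IF;
END//
DELIMITER ;
```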
Niels Larsen wrote:
Niels,
Do you mean in the Makefile for zlib?
Thanks!
Patrick
Connie,
I had the same error with another program recently, but probably the
fix for yours is the same: try compile zlib with -fPIC by adding
" -fPIC" to CFLAGS in the Makefile.
Niels Larsen
Logg
ts e.g. a socket
per database or would I have to run several instances of a MySQL server giving
each instance a different (socket) configuration?
I tried to figure that out myself and read the manual etc. but I couldn't come
up with an answer. Did I miss some documentation?
Thanks,
Patrick
John Hicks wrote:
> -Patrick wrote:
>> Folks, I could really use your assistance.
>> Take a look here: http://pastebin.com/687889
>>
>> How can I manipulate totalRows_numberComments so that I get the number
>> of blg_comment_com.idart_com PER blg_article_art.id_art
not cool.
Thanks
-Patrick
;
It is supposed to print the month only if there are entries matching
the date. So if there were 3 entries made for one month, then all
entries for that month should be printed. Right now, this prints every
row in existence.
-Patrick
iated.
Thanks
-Patrick
On 4/26/06, Shawn Green <[EMAIL PROTECTED]> wrote:
> --- Patrick Aljord <[EMAIL PROTECTED]> wrote:
>
> > On 4/26/06, Patrick Aljord <[EMAIL PROTECTED]> wrote:
> > > I have a table confs like this:
> > > id int 5 auto_increment pr
On 4/26/06, Patrick Aljord <[EMAIL PROTECTED]> wrote:
> I have a table confs like this:
> id int 5 auto_increment primary key;
> conf text;
>
> and another table conf_ip like this:
> id int 5 auto_increment primary key;
> conf_id int 5; ==> foreign key of confs
> ip
I have a table confs like this:
id int 5 auto_increment primary key;
conf text;
and another table conf_ip like this:
id int 5 auto_increment primary key;
conf_id int 5; ==> foreign key of confs
ip varchar 150;
I would like to
select id, conf from confs where ip!='some val';
how can I do this?
th
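Since ip lives in conf_ip rather than confs, the filter has to go through the foreign key; a sketch with sqlite3 standing in for MySQL (sample data invented):

```python
import sqlite3

# sqlite3 stands in for MySQL; conf and ip values are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE confs (id INTEGER PRIMARY KEY, conf TEXT)")
cur.execute("CREATE TABLE conf_ip (id INTEGER PRIMARY KEY,"
            " conf_id INTEGER, ip TEXT)")
cur.executemany("INSERT INTO confs VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])
cur.executemany("INSERT INTO conf_ip (conf_id, ip) VALUES (?, ?)",
                [(1, "10.0.0.1"), (2, "10.0.0.2")])

# Keep the confs that have no conf_ip row with the given ip.
rows = cur.execute("""
    SELECT c.id, c.conf FROM confs c
    WHERE c.id NOT IN (SELECT conf_id FROM conf_ip WHERE ip = ?)
    ORDER BY c.id
""", ("10.0.0.1",)).fetchall()
print(rows)  # [(2, 'b'), (3, 'c')]
```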
Philippe Poelvoorde wrote:
> 2006/4/25, -Patrick <[EMAIL PROTECTED]>:
>
>> $query_rsComments = sprintf("SELECT id_com WHERE idart_com=%s ORDER BY
>> date_com ASC", $KTColParam1_rsComments);
>>
>>
>> can anyone see what Im trying to do he
y given to output a number.. using
mysql_num_rows(). But Im getting syntax and check line errors..
Any thoughts?
-Patrick
Daniel da Veiga wrote:
> On 4/25/06, -Patrick <[EMAIL PROTECTED]> wrote:
>
>> Hi Folks,
>> Here is the table for the articles:
>>
>> CREATE TABLE `blg_article_art` (
>> `id_art` int(11) NOT NULL auto_increment,
>> `idtop_art` int(11) NOT NUL
Sorry about that..
$totalrows_rsComments gives a value of 0. But no matter what I do I
can't seem to alter it. It stays at zero.
-Patrick
the correct communication for
the tables, or where am I going wrong with the two above attempts?
Thank you,
-Patrick
hey all,
I have a table "mytable" that looks like this:
id tinyint primary key auto_increment
row1 varchar 150
row2 varchar 150
I would like to remove all duplicates, which means that if n records
have the same row1 and row2, keep only one record and remove the
duplicates. Any idea how to do this?
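One approach is to keep the lowest id of each (row1, row2) group and delete the rest; sqlite3 stands in for MySQL here. Note that MySQL of that era rejects a subquery on the table being deleted from, where the usual workaround was ALTER IGNORE TABLE mytable ADD UNIQUE (row1, row2):

```python
import sqlite3

# sqlite3 stands in for MySQL; sample rows are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY,"
            " row1 TEXT, row2 TEXT)")
cur.executemany("INSERT INTO mytable (row1, row2) VALUES (?, ?)",
                [("a", "b"), ("a", "b"), ("a", "b"), ("c", "d")])

# Keep the lowest id per (row1, row2) group, delete the duplicates.
cur.execute("""
    DELETE FROM mytable
    WHERE id NOT IN (SELECT MIN(id) FROM mytable GROUP BY row1, row2)
""")
rows = cur.execute("SELECT row1, row2 FROM mytable ORDER BY id").fetchall()
print(rows)  # [('a', 'b'), ('c', 'd')]
```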
Hello,
I wanted only to report that I removed and re-added the Index as Martijn
suggested and now it's OK.
Thanks again for your help
Regards,
Patrick
stop the service during the
week) and tell you the results.
Shall also perform a REPAIR TABLE?
Regards,
Patrick
> -Original Message-
> From: Martijn Tonies [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, 04 April 2006 10:34
> To: Patrick Herber; mysql@lists.mysql.com
> Subject: R
+---+
| 3 |
+---+
Can you please tell me what the problem could be and what can I do to solve
it?
Thanks a lot!
Regards,
Patrick
I've confirmed that this does affect ALL incoming connections.
On 3/30/06, patrick <[EMAIL PROTECTED]> wrote:
> Is there any way to make this the default behaviour? I did a Google
> search, and it was suggested I put the following line in /etc/my.cnf:
>
> [mysqld]
> init
t they are from the
command-line client. Is there a way to set this just for the client,
like some option that would go in the [mysql] section?
Patrick
On 3/28/06, Wolfram Kraus <[EMAIL PROTECTED]> wrote:
> patrick wrote:
> > I'm wondering if there's any way to force updates
I'm wondering if there's any way to force updates on InnoDB tables to
require an explicit COMMIT when running queries from the mysql
command-line client (similar to Oracle's command line client)?
Why, when I create a table as follows:
mysql> create table requestid ( request_id int not null default
1, constraint requestid_innodb_pk_cons primary key(request_id) )
ENGINE=InnoDB;
Query OK, 0 rows affected (0.02 sec)
Do I get the following?
mysql> select request_id from requestid
At 12:54 PM 2/10/2006, Mark Matthews wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Patrick Duda wrote:
> I guess I don't understand this locking stuff. I have an InnoDB table
> that has one thing in it, a counter. All I want to do is have multiple
> instances of the
At 10:52 AM 2/10/2006, [EMAIL PROTECTED] wrote:
Patrick Duda <[EMAIL PROTECTED]> wrote on 10/02/2006 16:28:56:
> I guess I don't understand this locking stuff. I have an InnoDB table
that
> has one thing in it, a counter. All I want to do is have multiple
> instances o
executeUpdate(updateQuery);
...
c.commit();
c.setAutoCommit(true);
If I have multiple instances of this code running I end up with duplicate
keys. I thought this was supposed to lock the table so that would not happen.
What am I not doing right? What am I not understanding about locking?
Thanks
Patrick
Hi,
I am running MySQL 4.0.1 with j/connector 3.1 and I am having problems
trying to figure out why I am not getting the results I am expecting.
I have a table that is used for generating primary keys. It only has one
item, an int that is incremented each time a key is needed. This is not m
Hi,
We have moved from Mysql4 to MySQL5 and are currently planning our new database
schema. In this new approach we would like to move to InnoDB's storage engine
for transaction support and still want to use MySQL's FULLTEXT search
capabilities. And to make things easy we also want to replica
Hello,
I've been using a CMS, installed last year on a MySQL 4.0.16-nt
database. The CMS created its own tables and references them using
UTF-8 encoding.
Since then, we migrated to MySQL 4.1.16-nt, and the default encoding
chosen was latin_swedish
I would suggest a union
SELECT name, count(*)
FROM (SELECT name1 AS name FROM mytable UNION SELECT name2 AS name FROM
mytable UNION SELECT name3 AS name FROM mytable) AS t
GROUP BY name
but perhaps there's a better way...
Regards,
Patrick
> -Original Message-
> From: Critt
Do you mean something like that?
UPDATE tablename SET date2=DATE_ADD(date1, INTERVAL -3 MONTH)
Regards,
Patrick
> -Original Message-
> From: Shaun [mailto:[EMAIL PROTECTED]
> Sent: Monday, 16 January 2006 15:27
> To: mysql@lists.mysql.com
> Subject: UPDATE Date Column
Hi,
Do you mean you have such a structure
Table A
ID_a
ID_b
ID_c
...
Table B
ID_b
Value_b
...
Table C
ID_c
Value_c
...
?
In that case you can
SELECT Value_b, Value_c
FROM A
LEFT JOIN B on A.ID_b=B.ID_b
LEFT JOIN C on A.ID_c=C.ID_c
WHERE ID_a=xxx
Regards,
Patrick
> -Origi
o use
innodb_data_file_path?
Thanks a lot and regards,
Patrick
> -Original Message-
> From: Jocelyn Fournier [mailto:[EMAIL PROTECTED]
> Sent: Sunday, 15 January 2006 15:09
> To: Patrick Herber
> Cc: mysql@lists.mysql.com
> Subject: Re: ERROR 1114 (HY000): The table is f
ibdata7:500M;ibdata8:500M;ibdata9:500M;ibdata10:500M
:autoextend
Also in this case I got the same error message.
What should I do in order to convert this table?
Should I set innodb_data_file_path to, for example, 50 files of 4GB
each?
Thanks a lot for your help.
Best regards,
Patrick
PS:
What type of client are you using?
With the C API you would test for the return value (0 or 1) and process
accordingly.
You could use 'INSERT IGNORE' syntax, but then you would not know what
records failed (you could test for how many were inserted with mysql_info()
using the C API).
See Chap
ad, that he wasn't looking for
help in db design, just a solution to the punctuation issue.
Pat...
- Original Message -
From: [EMAIL PROTECTED]
To: Patrick
Cc: Chance Ellis ; mysql@lists.mysql.com
Sent: Monday, October 03, 2005 4:30 PM
Subject: Re: Table names with periods
Repl
825 SE 47th Terrace
Cape Coral, FL 33904
- Original Message -
From: Chance Ellis
To: Patrick
Sent: Monday, October 03, 2005 2:22 PM
Subject: Re: Table names with periods
Patrick,
I have been trying to figure out how I can convert an IP address to a 32bit
integer within a S
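MySQL has a built-in for exactly this, INET_ATON() (with INET_NTOA() for the reverse). The same arithmetic sketched in Python:

```python
import socket
import struct

def ip_to_int(ip: str) -> int:
    # Same arithmetic as MySQL's INET_ATON(): the four octets are
    # treated as base-256 digits, most significant first.
    return struct.unpack("!I", socket.inet_aton(ip))[0]

print(ip_to_int("192.168.0.1"))  # 3232235521
```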
Historically any form of punctuation, parameter delimiter, or filepath
delimiter in either a database name, field or column name, file or table
name would not be recommended; even if the RDBMS or File Handler allows it.
If you are able to stick to alphanumeric characters using underscores
char
t referenced in slave.o) (data)
collect2: ld returned 1 exit status
make[4]: *** [mysqld] Error 1
I'm using the GCC binary from the HP-UX Software Porting Archive site:
Output of GCC -v:
Using built-in specs.
Target: hppa2.0w-hp-hpux11.11
Configured with: ../gcc/configure
Thread model: s
Folks,
Go with what you know best. If you are a good Windows admin etc go with
windows. If you are a good Linux/Unix admin go with Linux. What little
performance gain from one or the other will be lost if you do not run a
tight ship all around. Performance and stability go way beyond which
OS
ny thoughts?
--
Patrick Campbell
OurVacationStore.com
Website Administrator
Tel. 602.896.4729
it's dependent on the OS
having the Jet engine. I'd be very interested to know if anyone has
done an equivalent to that in Linux.
--
___ Patrick Connolly
{~._.~}
_( Y )_ Good judgment comes from experience
(:_~*~_:) Experience comes from bad jud
The return you are getting is correct for the format you are using: a 90
second difference is in fact 1 minute 30 seconds, which displays as 130.
To get the time difference in seconds convert the datetime or timestamp to a
julian date or unixtime and then process.
SELECT start_time, end_time, UNIX_TIMESTAMP(end_
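The same idea outside SQL, with invented timestamps: work in seconds rather than in the HHMMSS display format, so 90 seconds stays 90 instead of reading as 130:

```python
from datetime import datetime

# Invented timestamps 90 seconds apart.
start = datetime(2005, 1, 1, 10, 0, 0)
end = datetime(2005, 1, 1, 10, 1, 30)

# Equivalent to UNIX_TIMESTAMP(end_time) - UNIX_TIMESTAMP(start_time):
# the difference is computed in seconds, so it stays 90.
diff_seconds = int((end - start).total_seconds())
print(diff_seconds)  # 90
```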
Hi,
I can't seem to figure out how to import an XML file into MySQL 4.x.
I'm working on Linux; any help would be fine.
Patrick
--
"You're dead, Jim."
-- McCoy, "Amok Time," stardate 3372.7..
Fingerprint = 2792 057F C445 9486 F932 3AEA D3A3 1B0C 1059 273B
ICQ# 31693270
7;#'
--fields-optionally-enclosed-by='"' --ignore-lines='1' --replace --verbose
recepten /tmp/recepten.txt
I see in some text fields:
4 stuks bizonmedaillon Covee
some strange signs; anyone an idea how I get rid of them?
Patrick
--
Sex is like hacking. You get in
Database changed
With phpMyAdmin these tables are marked "in use".
What can I do to get this back to work? I had no time to create a backup
script, which I shall create as fast as possible now.
Patrick
way to do this; the manipulation of the data is done with PHP 5 on a
website.
TIA
Patrick
ting slow
> At 09:49 AM 12/9/2004, Patrick Marquetecken wrote:
> >Hi,
> >
> >I have 3 snort sensors logging to a central mySQL database after two
weeks
> >the size of the database is about 3.3GB and the machine is getting slow,
> >as i'm not used to be work
punctuation marks are frequently used as delimiters in other programs, OSes and
applications, so when you use them in elements other than strings you often
limit the portability (i.e. import and export) of your structures.
I hope you find this information valuable.
Pat...
Patrick Sherrill
Co
ur running out of diskspace then that's a problem in itself.
I got a lot of disk space left.
>
> -Original Message-
> From: Patrick Marquetecken [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 09, 2004 7:49 AM
> To: [EMAIL PROTECTED]
> Subject: MYSQL is getting slow
>
>
On Thu, 9 Dec 2004 13:13:10 -0600
Jeff Smelser <[EMAIL PROTECTED]> wrote:
> On Thursday 09 December 2004 01:06 pm, Patrick Marquetecken wrote:
>
> > and for ansewring Jeff Smelser i have installed mysql 4.x on linux and then
> > dit from the commandline create database
On Thu, 09 Dec 2004 16:17:17 +
Darryl Waterhouse <[EMAIL PROTECTED]> wrote:
> On Thu, 2004-12-09 at 10:08 -0600, gerald_clark wrote:
>
> >
> > Patrick Marquetecken wrote:
> >
> > >Hi,
> > >
> > >I have 3 snort sensors logging to a
;s First ISP
there are just two issues that I would look at if his solution
- Original Message -
From: "Ian Sales" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; "Patrick Sherrill" <[EMAIL PROTECTED]>
Sent: Thursday, December 09, 200
12 mb, 99% used and no swap, HD of 40GB.
TIA
Patrick
David,
Please provide the complete LOAD DATA INFILE command you used.
Pat...
[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
- Original Message -
From: "David Ziggy Lubowa" <[EMAIL PROTECTED]>
To: "Eric Bergen" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, Decem
ich are the 1st, 5th,28th
|> and 71st fields.
|> Is there a statement to do that.
|>
I think it would be simpler to pre-process the file using cut with the
appropriate delimiter if it's not tab-delimited already. Then import
the reduced file.
HTH
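The cut step might look like this (the file name and fields are invented for the demo; the thread's real case would be cut -f1,5,28,71 on the tab-delimited dump):

```shell
# Demo with a small comma-delimited file; the same idea applies to a
# tab-delimited dump with cut -f1,5,28,71.
printf 'a,b,c\n1,2,3\n' > /tmp/orders.csv
cut -d, -f1,3 /tmp/orders.csv
```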
--
___ Patrick Con
GH wrote:
Greetings:
This is just a friendly reminder that if you are registered in the
United States to VOTE on November 2, 2004 (TOMORROW)
Need to know where you vote?
Please see the attached file (it is an image) that contains some information
Do we care? Really? Unlikely. Maybe you should s
18 Oct 2004 18:08:24 +0800, Patrick Hsieh wrote:
> >
> > I am planing to transfer data from postgresql to mysql. Is there any
> > useful tools, scripts or utilities to achieve this?
>
> pg_dump
>
> First dump the schema, edit that until you have something MySQL
Hello list,
I am planing to transfer data from postgresql to mysql. Is there any
useful tools, scripts or utilities to achieve this? Any information is
highly appreciated!
---
Patrick Hsieh <[EMAIL PROTECTED]>
MSN: [EMAIL PROTECTED] | ICQ: 97133580
Skype: pahud_at_pahud.net
Given the many 'standards' for formatting phone numbers, I would recommend
using a char or varchar. Regex is intended for string types.
Do yourself a favor run an alter table and change the column to a char or
varchar.
I hope this helps...
Pat...
[EMAIL PROTECTED]
CocoNet Corporation
SW Florida
between what works and what doesn't.
I'd prefer not to do the correspondence through this list which
already has lots of traffic.
Ideas are most welcome.
Thanx
--
___ Patrick Connolly
{~._.~}
_( Y )_ Good judgment comes from experience
(:_~*~_
Somewhere about Mon, 13-Sep-2004 at 07:23PM +0300 (give or take), Egor Egorov wrote:
|> Patrick Connolly <[EMAIL PROTECTED]> wrote:
|>
|> > I've been trying to contact MySQL AB using the "contact us" link. I
|> got an auto-response to the effect that I
experienced similar?
--
___ Patrick Connolly
{~._.~}
_( Y )_ Good judgment comes from experience
(:_~*~_:) Experience comes from bad judgment
(_)-(_)
is Red Hat Linux release 9
(Shrike) 2.4.20.
Any suggestions?
Patrick Campbell
OurVacationStore.com
Website Administrator
[EMAIL PROTECTED]
called S (dialect R) which
handles matrices in a multitude of ways. Removing duplicates is
straightforward in that language, but from what I know about SQL so
far, it is rather complicated in MySQL.
What do other people do with duplicates?
TIA
--
___ Patrick Connolly
{~._.~}
_
Somewhere about Tue, 10-Aug-2004 at 02:19PM +0200 (give or take), Carsten Pedersen
wrote:
|> Hi Patrick,
|>
|> On Tue, 2004-08-10 at 12:16, Patrick Connolly wrote:
|> > Is this the most appropriate list to mention misprints? There doesn't
|> > seem to be an i
Is this the most appropriate list to mention misprints? There doesn't
seem to be an indication where additional suggestions are to be sent.
I found something that, though not exactly incorrect, works for
reasons other than what a reader might think, so it's misleading.
--
___
k.CSV;
mysqlimport --fields-terminated-by=',' --ignore-lines=1 db_name Bank.CSV;
done
Something tells me that greater minds have a better way.
--
___ Patrick Connolly
{~._.~}
_( Y )_ Good judgment comes from experience
(:_~*~_:) Experience comes fr
Somewhere about Sun, 01-Aug-2004 at 11:31AM -0400 (give or take), Michael Stassen
wrote:
|>
|> Patrick Connolly wrote:
[...]
|> > Looks to me the mysql user should have no trouble with it:
|> >
|> > -rw-rw-r--1 pat pat 332 Jun 28 20:42 Orders.txt
|
unnecessary which was
another surprise to me. Is there something I'm missing here?
|>
|> Michael
Thanks Michael.
--
___ Patrick Connolly
{~._.~}
_( Y )_ Good judgment comes from experience
(:_~*~_:) Experience comes from bad judgment
(_)-(_
re. At one stage I thought it might be an obscure hardware
difficulty with this aged machine (over 5 years) because of another
obscure problem I had using fetchmail from a POP server. However, I
noticed that once I switched off the ISP's virus checking, that
problem vanis
I previously had a server runnning RH 7.3, cPanel 9.41 and MySQL
4.0.20. I'm moving to a different server running Fedora 1, DirectAdmin
and MySQL 4.0.17.
I have a large database (200mb) and I'm trying to move it over.
I made a dump using "mysqldump -u USER -pPASSWORD DATABASE >
filename.sql", tra
Hum,
Well, I'm back with another one... When adding a
join to the previous query, it slows down once again
even though it retrieves less data. Here's the info:
mysql> explain SELECT ti.posi, ti.docid, d.filename,
ti.id, c.name
FROM corpus_documents cd, corpus c, documents d,
tokens_ins ti,
Hello Victor,
> What version of MySQL are you using? Have you
> checked the cardinality on
> these tables?
Problem solved! Optimizing the table brought the query
time down to 17 secs. Wow!
Thanks for the input Victor and merci to Arnaud for
the quick fix.