Re: Remote Access to MySQL

2015-03-09 Thread patrick

Check your border router access list.

Patrick Sherrill
patr...@coconet.com

Coconet Corporation
SW Florida's First ISP

(239) 540-2626 Office
(239) 770-6661 Cell

Confidentiality Notice.  This email message, including any attachments, 
is for the sole use of the intended recipient(s) and may contain 
confidential and privileged information.  Any unauthorized review, use, 
disclosure or distribution is prohibited.  If you are not the intended 
recipient, please contact the sender by reply email and destroy all 
copies of the original message.


On 3/9/2015 8:54 AM, Johan De Meersman wrote:

- Original Message -

From: Rafael Ribeiro rafaelribeiro...@gmail.com
Subject: Remote Access to MySQL

After moving this Virtual Machine to a new one (it got a new IP - 2.2.2.2), we lost
the ability to connect to MySQL remotely from external IPs.


"It doesn't work" is not a helpful comment :-) What error message are you 
getting? Can you connect to other services on the host? Can you connect from a local 
shell?
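
For anyone hitting the same symptom after an IP change, a short checklist usually narrows it down before blaming the network (the user, database and address below are placeholders, not values from this thread):

-- Is the server listening on a reachable address, and is TCP networking enabled?
SHOW GLOBAL VARIABLES LIKE 'bind_address';
SHOW GLOBAL VARIABLES LIKE 'skip_networking';

-- Does any account match the remote client's user and source address?
SELECT user, host FROM mysql.user WHERE user = 'appuser';

-- If the old grant was tied to the old IP, recreate it for the new network:
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'203.0.113.%' IDENTIFIED BY '...';
FLUSH PRIVILEGES;

If all of that looks right, the blocker is usually a firewall or, as Patrick suggests, an access list on the border router.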




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql float data type

2014-12-17 Thread Patrick Sherrill
We always store as strings to avoid rounding issues and then convert for calcs 
to whatever precision we need. 
Pat...

Sent from my iPhone

 On Dec 17, 2014, at 6:24 AM, Lucio Chiappetti lu...@lambrate.inaf.it wrote:
 
 On Tue, 16 Dec 2014, Hartmut Holzgraefe wrote:
 On 16.12.2014 15:16, xiangdongzou wrote:
 
 Can anyone tell me why 531808.11 has been changed to 531808.12 ?
 
 typical decimal-binary-decimal conversion/rounding error.
 
 I've never used DECIMAL nor intend to, but the issue is typical of the precision 
 issues of float (32-bit) and double (64-bit) in ANY programming language. 
 Google for "IEEE floating point".
 
 Some (most) users are unaware that a 32-bit real (REAL*4 for old-timer Fortran 
 users like myself) has about 7 digits of precision, while 64-bit (doubles, 
 double precision, REAL*8) gets to about 16.
 
 So if a quantity needs high precision (typically this occurs for angular 
 quantities where arcseconds are important), use double.
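 
 A quick way to see the same effect inside MySQL itself (the table and column
 names are made up for the example):
 
 CREATE TABLE price_test (f FLOAT(10,2), d DOUBLE, x DECIMAL(12,2));
 INSERT INTO price_test VALUES (531808.11, 531808.11, 531808.11);
 SELECT f, d, x FROM price_test;
 -- the FLOAT column typically comes back as 531808.12 (the nearest 32-bit value
 -- is 531808.125), while DOUBLE and DECIMAL still display 531808.11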
 
 -- 
 
 Lucio Chiappetti - INAF/IASF - via Bassini 15 - I-20133 Milano (Italy)
 For more info : http://www.iasf-milano.inaf.it/~lucio/personal.html
 
 Do not like Firefox >= 29 ?  Get Pale Moon !  http://www.palemoon.org
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: mySql versus Sql Server performance

2010-10-26 Thread Patrick Thompson
Thanks a lot - I'll try playing around with some of that stuff and at least get 
the memory utilization between mySql and Sql Server comparable. At some point I 
would like to see the abstraction layer running under mono - but won't get to 
that for a while - when I do I can post linux based numbers. If you think the 
comparisons 
(http://cipl.codeplex.com/wikipage?title=Data%20Provider%20Comparison) are 
misleading, I would welcome any comments or be happy to edit the page to 
reflect any concerns - the point of the numbers is how CIPl performs against 
the underlying stores in its current incarnation - so I consider it to be 
accurate, at least in that respect.

Patrick
myList - everything you could possibly want (to buy)


-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com] 
Sent: Tuesday, October 26, 2010 3:52 PM
To: Patrick Thompson; mysql@lists.mysql.com
Subject: [SPAM] RE: mySql versus Sql Server performance
Importance: Low

It's not much, but the dataset is definitely larger than your buffer pool. You 
could try this query to show how much data+index is in InnoDB:

SELECT SUM(data_length+index_length) AS data_size
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE='InnoDB';

Then set innodb_buffer_pool_size to roughly that number.
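
A side note on applying that value: in MySQL 5.1 innodb_buffer_pool_size is not a
dynamic variable, so it cannot be changed with SET GLOBAL; it has to go into the
option file and the server be restarted. A minimal sketch (256M is only an example,
not a recommendation):

# my.ini / my.cnf
[mysqld]
innodb_buffer_pool_size = 256M

-- after the restart, confirm what the server is actually using:
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';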

I don't have much experience running MySQL on Windows; I think much more time 
is spent optimizing server performance on Linux-based systems than on 
Windows.


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 2:24 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

Here's the innodb stuff - although the largest data set I've used in the stats 
run is around 20MB, which doesn't seem like much to me.

'innodb_adaptive_hash_index', 'ON'
'innodb_additional_mem_pool_size', '2097152'
'innodb_autoextend_increment', '8'
'innodb_autoinc_lock_mode', '1'
'innodb_buffer_pool_size', '49283072'
'innodb_checksums', 'ON'
'innodb_commit_concurrency', '0'
'innodb_concurrency_tickets', '500'
'innodb_data_file_path', 'ibdata1:10M:autoextend'
'innodb_data_home_dir', 'C:\MySQL Datafiles\'
'innodb_doublewrite', 'ON'
'innodb_fast_shutdown', '1'
'innodb_file_io_threads', '4'
'innodb_file_per_table', 'OFF'
'innodb_flush_log_at_trx_commit', '1'
'innodb_flush_method', ''
'innodb_force_recovery', '0'
'innodb_lock_wait_timeout', '50'
'innodb_locks_unsafe_for_binlog', 'OFF'
'innodb_log_buffer_size', '1048576'
'innodb_log_file_size', '25165824'
'innodb_log_files_in_group', '2'
'innodb_log_group_home_dir', '.\'
'innodb_max_dirty_pages_pct', '90'
'innodb_max_purge_lag', '0'
'innodb_mirrored_log_groups', '1'
'innodb_open_files', '300'
'innodb_rollback_on_timeout', 'OFF'
'innodb_stats_on_metadata', 'ON'
'innodb_support_xa', 'ON'
'innodb_sync_spin_loops', '20'
'innodb_table_locks', 'ON'
'innodb_thread_concurrency', '8'
'innodb_thread_sleep_delay', '1'
'innodb_use_legacy_cardinality_algorithm', 'ON'

Patrick
myList - everything you could possibly want (to buy)


-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Monday, October 25, 2010 4:50 PM
To: Patrick Thompson; mysql@lists.mysql.com
Subject: [SPAM] RE: mySql versus Sql Server performance
Importance: Low

So it's a primary key lookup.  That's a rather large primary key though, it's 
going to bloat the table size since innodb in mysql uses clustered indexes.

So the explain plan and table structure look pretty straightforward.  It is 
using the index to satisfy the query.  The next question is what does the 
server memory configuration look like?

SHOW GLOBAL VARIABLES LIKE 'innodb%';

In particular innodb_buffer_pool defines the global set of memory where data 
and indexes from your table are cached.  Mysql could be showing slower 
performance if it is getting cache misses from the buffer pool and is being 
forced to read from disk excessively.

On dedicated mysql servers, the buffer pool should be about 80% of available 
RAM.  The default value is 8M which is pretty much unusable except for trivial 
cases.
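
A rough way to check whether a benchmark really is missing the buffer pool is to
watch these two counters (cumulative since server start) while the test runs:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

Innodb_buffer_pool_reads counts requests that had to go to disk; if it climbs
steadily during the run, the ~47MB pool shown earlier in this thread is too small.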


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 12:31 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

Query:

SELECT *
FROM Item
WHERE CollectionID = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AND 
ExternalID = 'fred1'

Explain Extended:

select '17304' AS `ID`, 'fred1' AS `ExternalID`,
'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AS `CollectionID`, NULL AS `ItemTypeVersion`,
'<Item xmlns="http://cipl.codeplex.com/CIPlItem1.xsd"><Valid>1</Valid><ItemStatus>100</ItemStatus><ExternalID>fred1</ExternalID><ModifiedDate>2010-10-25T15:06:55.7188551-04:00</ModifiedDate><PersonType xmlns="http://cipl.codeplex.com/CIPlOther1.xsd"><Address><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String>

mySql versus Sql Server performance

2010-10-25 Thread Patrick Thompson
I am running an open source project that provides an abstraction layer over a 
number of different stores. I am puzzled by performance numbers I am seeing 
between mysql and sql server - a brief discussion is available here

http://cipl.codeplex.com/wikipage?title=Data%20Provider%20Comparison

The statistics were generated using mySql 5.1 and Sql Server 2008 on a machine 
with the following specs:

OS Name Microsoft Windows 7 Professional
System Model   HP Compaq nc8430 (RB554UT#ABA)
Processor    Intel(R) Core(TM)2 CPU T7200 @ 2.00GHz, 2000 MHz, 2 Core(s), 2 Logical Processor(s)
Installed Physical Memory (RAM) 4.00 GB
Total Virtual Memory 6.75 GB
Page File Space 3.37 GB
Disk 120GB SSD with 22GB available

If this isn't the right place to ask this question, can someone point me to 
somewhere that is.

Thanks


Patrick
Are you using...
myListhttp://www.mylist.com/ - everything you could possibly want (to buy)
Let me know if you can't find something



The information contained in this email message is considered confidential and 
proprietary to the sender and is intended solely for review and use by the 
named recipient. Any unauthorized review, use or distribution is strictly 
prohibited. If you have received this message in error, please advise the 
sender by reply email and delete the message.


RE: mySql versus Sql Server performance

2010-10-25 Thread Patrick Thompson
That's true for the deletes - but not for save and get. The ddl is available 
here

http://cipl.codeplex.com/SourceControl/changeset/view/2460#57689

The code that accesses it is here

http://cipl.codeplex.com/SourceControl/changeset/view/2460#57729

Patrick
myListhttp://www.mylist.com/ - everything you could possibly want (to buy)

From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Monday, October 25, 2010 9:55 AM
To: Patrick Thompson
Cc: mysql@lists.mysql.com
Subject: Re: mySql versus Sql Server performance

I merely skimmed it, but your comment that you pay the query compilation cost 
on every request suggests to me that you're not using prepared statements. If 
you can, you should :-)

Also, MySQL *does* support SPs, from 5.0 onwards or something. You could split 
into separate modules for pre- and post-5.
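
For reference, the server-side prepared-statement syntax looks like the sketch
below; connectors normally expose the same thing through their parameterized-query
API, so this is only an illustration using the values from this thread:

PREPARE get_item FROM
  'SELECT * FROM Item WHERE CollectionID = ? AND ExternalID = ?';
SET @cid = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775', @eid = 'fred1';
EXECUTE get_item USING @cid, @eid;
DEALLOCATE PREPARE get_item;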

On Mon, Oct 25, 2010 at 3:38 PM, Patrick Thompson 
patrick.thomp...@channelintelligence.commailto:patrick.thomp...@channelintelligence.com
 wrote:
I am running an open source project that provides an abstraction layer over a 
number of different stores. I am puzzled by performance numbers I am seeing 
between mysql and sql server - a brief discussion is available here

http://cipl.codeplex.com/wikipage?title=Data%20Provider%20Comparison

The statistics were generated using mySql 5.1 and Sql Server 2008 on a machine 
with the following specs:

OS Name Microsoft Windows 7 Professional
System Model   HP Compaq nc8430 (RB554UT#ABA)
Processor    Intel(R) Core(TM)2 CPU T7200 @ 2.00GHz, 2000 MHz, 2 Core(s), 2 Logical Processor(s)
Installed Physical Memory (RAM) 4.00 GB
Total Virtual Memory 6.75 GB
Page File Space 3.37 GB
Disk 120GB SSD with 22GB available

If this isn't the right place to ask this question, can someone point me to 
somewhere that is.

Thanks


Patrick
Are you using...
myListhttp://www.mylist.com/ - everything you could possibly want (to buy)
Let me know if you can't find something






--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: mySql versus Sql Server performance

2010-10-25 Thread Patrick Thompson
Query:

SELECT *
FROM Item
WHERE CollectionID = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AND 
ExternalID = 'fred1'

Explain Extended:

select '17304' AS `ID`, 'fred1' AS `ExternalID`,
'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AS `CollectionID`, NULL AS `ItemTypeVersion`,
'<Item xmlns="http://cipl.codeplex.com/CIPlItem1.xsd"><Valid>1</Valid><ItemStatus>100</ItemStatus><ExternalID>fred1</ExternalID><ModifiedDate>2010-10-25T15:06:55.7188551-04:00</ModifiedDate><PersonType xmlns="http://cipl.codeplex.com/CIPlOther1.xsd"><Address><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String></Street><Zip><Int32>34748</Int32></Zip></USAddressType></Address><AlternateAddresses Count="2"><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String></Street><Zip><Int32>34748</Int32></Zip></USAddressType><USAddressType><City><String>Seattle 1</String></City><Country><String>USA</String></Country><PhoneNumbers Count="2"><PhoneNumberType><AreaCode><Int32>206</Int32></AreaCode><Number><Int32>7819281</Int32></Number><Tags Count="1"><String>never answered 1</String></Tags></PhoneNumberType><PhoneNumberType><AreaCode><Int32>206</Int32></AreaCode><Number><Int32>9991971</Int32></Number><Tags Count="1"><String>cell 1</String></Tags></PhoneNumberType></PhoneNumbers><State><String>WA</String></State><Street><String>12070 Lakeside pl 1</String></Street><Zip><Int32>98126</Int32></Zip></USAddressType></AlternateAddresses><CreateDate><DateTime>2010-10-25T15:06:55.7168549-04:00</DateTime></CreateDate><Name><String>fred1</String></Name><Tags Count="4"><String>first</String><String>second</String><String>third</String><String>1</String></Tags></PersonType></Item>' AS `ObjectText`,
'2010-10-25 15:06:55' AS `EnteredDate`, '2010-10-25 15:06:55' AS `LastModDate`
from `ciplitemwell0404`.`item`
where (('a0d3937b-f5a8-0640-dec8-bdd60f7f4775' = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775') and ('fred1' = 'fred1'))

Explain:

1, 'SIMPLE', 'Item', 'const', 'PRIMARY,ItemsByID', 'PRIMARY', '889', 
'const,const', 1, ''


Table definition:

CREATE TABLE  `ciplitemwell0404`.`item` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `ExternalID` varchar(255) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CollectionID` varchar(40) CHARACTER SET utf8 NOT NULL,
  `ItemTypeVersion` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `ObjectText` longtext NOT NULL,
  `EnteredDate` datetime NOT NULL,
  `LastModDate` datetime NOT NULL,
  PRIMARY KEY (`CollectionID`,`ExternalID`),
  UNIQUE KEY `ID` (`ID`),
  KEY `ItemsByID` (`CollectionID`,`ID`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=29687 DEFAULT CHARSET=latin1;
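
As Gavin notes elsewhere in this thread, InnoDB clusters rows on the primary key,
so the wide (CollectionID, ExternalID) pair also gets copied into every secondary
index entry. A hypothetical alternative layout (a sketch, not something benchmarked
here) keeps the short auto-increment ID as the clustered key and moves the wide
pair to a secondary UNIQUE key used for the lookups:

CREATE TABLE `item_alt` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `ExternalID` varchar(255) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CollectionID` varchar(40) CHARACTER SET utf8 NOT NULL,
  `ItemTypeVersion` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `ObjectText` longtext NOT NULL,
  `EnteredDate` datetime NOT NULL,
  `LastModDate` datetime NOT NULL,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `ByCollectionExternal` (`CollectionID`,`ExternalID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Point lookups then read the secondary key and follow it to the clustered row, at
the cost of one extra key hop per row.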


This is just the retrieve side - which seems to be around 1.5 times slower than 
the equivalent Sql Server numbers. 

The update is much slower - 3 to 5 times slower depending on the record size. 
It makes sense to me to focus on the retrieve, maybe the update is just a 
reflection of the same problems.


Patrick
myList - everything you could possibly want (to buy)


-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com] 
Sent: Monday, October 25, 2010 2:00 PM
To: Patrick Thompson; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

MySQL and most other databases require adjustment of server settings, and 
especially of table structures and indexes to achieve the best performance 
possible.

If you haven't examined index usage for the queries you're running, or adjusted 
server memory settings from defaults, then it's no surprise you would get poor 
performance.

I don't have the inclination to dig through your code; however, if you extract 
the actual queries you are running, then run EXPLAIN <query>; that will show 
how it's using indexes.  You can put that information here, along with the SHOW 
CREATE TABLE <table> \G output for all tables involved, and someone here should 
be able to help diagnose why the queries might be slow.

Regards,
Gavin Towey


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 6:38 AM
To: mysql@lists.mysql.com
Subject: mySql versus Sql Server performance

I am running an open source project that provides an abstraction layer over a 
number of different stores. I am puzzled by performance numbers I am seeing 
between mysql and sql server - a brief discussion is available here

http://cipl.codeplex.com/wikipage?title=Data%20Provider%20Comparison

The statistics were generated using mySql 5.1 and Sql Server 2008 on a machine 
with the following specs:

OS Name Microsoft Windows 7 Professional
System Model   HP Compaq nc8430 (RB554UT#ABA)
Processor    Intel(R) Core(TM)2 CPU T7200 @ 2.00GHz, 2000 MHz, 2 Core(s), 2 Logical Processor(s)
Installed Physical Memory (RAM) 4.00 GB
Total Virtual Memory 6.75 GB
Page File Space 3.37 GB
Disk 120GB SSD with 22GB available

If this isn't the right place to ask this question, can someone point me to 
somewhere that is.

Thanks


Patrick

RE: [SPAM] RE: mySql versus Sql Server performance

2010-10-25 Thread Patrick Thompson
Thanks Martin, though I'm somewhat confused by your message - there are no 
joins in the query (unless the longtext is thought of that way) and the Explain 
seems to indicate the query is using the ItemsByID primary index (which is what 
I would expect).

Patrick
myListhttp://www.mylist.com/ - everything you could possibly want (to buy)

From: Martin Gainty [mailto:mgai...@hotmail.com]
Sent: Monday, October 25, 2010 3:53 PM
To: Patrick Thompson; gto...@ffn.com; mysql@lists.mysql.com
Subject: [SPAM] RE: mySql versus Sql Server performance
Importance: Low

Patrick-

you'll want to determine whether your predicates are using indexes or full table 
scans (FTS) with the EXPLAIN statement
http://dev.mysql.com/doc/refman/5.1/en/using-explain.html

you'll need a plan to determine whether the join type of the resultsets returned from 
the queries is nested-loop, hashed or sort-merge
http://en.wikipedia.org/wiki/Join_%28SQL%29
Each one has pros/cons depending on the cardinality of the inner resultset vs 
the outer resultset

hth
Martin Gainty





 From: patrick.thomp...@channelintelligence.com
 To: gto...@ffn.com; mysql@lists.mysql.com
 Date: Mon, 25 Oct 2010 15:31:26 -0400
 Subject: RE: mySql versus Sql Server performance

 Query:

 SELECT *
 FROM Item
 WHERE CollectionID = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AND ExternalID = 
 'fred1'

 Explain Extended:

 select '17304' AS `ID`, 'fred1' AS `ExternalID`,
 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AS `CollectionID`, NULL AS `ItemTypeVersion`,
 '<Item xmlns="http://cipl.codeplex.com/CIPlItem1.xsd"><Valid>1</Valid><ItemStatus>100</ItemStatus><ExternalID>fred1</ExternalID><ModifiedDate>2010-10-25T15:06:55.7188551-04:00</ModifiedDate><PersonType xmlns="http://cipl.codeplex.com/CIPlOther1.xsd"><Address><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String></Street><Zip><Int32>34748</Int32></Zip></USAddressType></Address><AlternateAddresses Count="2"><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String></Street><Zip><Int32>34748</Int32></Zip></USAddressType><USAddressType><City><String>Seattle 1</String></City><Country><String>USA</String></Country><PhoneNumbers Count="2"><PhoneNumberType><AreaCode><Int32>206</Int32></AreaCode><Number><Int32>7819281</Int32></Number><Tags Count="1"><String>never answered 1</String></Tags></PhoneNumberType><PhoneNumberType><AreaCode><Int32>206</Int32></AreaCode><Number><Int32>9991971</Int32></Number><Tags Count="1"><String>cell 1</String></Tags></PhoneNumberType></PhoneNumbers><State><String>WA</String></State><Street><String>12070 Lakeside pl 1</String></Street><Zip><Int32>98126</Int32></Zip></USAddressType></AlternateAddresses><CreateDate><DateTime>2010-10-25T15:06:55.7168549-04:00</DateTime></CreateDate><Name><String>fred1</String></Name><Tags Count="4"><String>first</String><String>second</String><String>third</String><String>1</String></Tags></PersonType></Item>' AS `ObjectText`,
 '2010-10-25 15:06:55' AS `EnteredDate`, '2010-10-25 15:06:55' AS `LastModDate`
 from `ciplitemwell0404`.`item`
 where (('a0d3937b-f5a8-0640-dec8-bdd60f7f4775' = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775') and ('fred1' = 'fred1'))

 Explain:

 1, 'SIMPLE', 'Item', 'const', 'PRIMARY,ItemsByID', 'PRIMARY', '889', 
 'const,const', 1, ''


 Table definition:

 CREATE TABLE `ciplitemwell0404`.`item` (
 `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
 `ExternalID` varchar(255) CHARACTER SET utf8 NOT NULL DEFAULT '',
 `CollectionID` varchar(40) CHARACTER SET utf8 NOT NULL,
 `ItemTypeVersion` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
 `ObjectText` longtext NOT NULL,
 `EnteredDate` datetime NOT NULL,
 `LastModDate` datetime NOT NULL,
 PRIMARY KEY (`CollectionID`,`ExternalID`),
 UNIQUE KEY `ID` (`ID`),
 KEY `ItemsByID` (`CollectionID`,`ID`) USING BTREE
 ) ENGINE=InnoDB AUTO_INCREMENT=29687 DEFAULT CHARSET=latin1;


 This is just the retrieve side - which seems to be around 1.5 times slower 
 than the equivalent Sql Server numbers.

 The update is much

RE: mySql versus Sql Server performance

2010-10-25 Thread Patrick Thompson
Here's the innodb stuff - although the largest data set I've used in the stats 
run is around 20MB, which doesn't seem like much to me. 

'innodb_adaptive_hash_index', 'ON'
'innodb_additional_mem_pool_size', '2097152'
'innodb_autoextend_increment', '8'
'innodb_autoinc_lock_mode', '1'
'innodb_buffer_pool_size', '49283072'
'innodb_checksums', 'ON'
'innodb_commit_concurrency', '0'
'innodb_concurrency_tickets', '500'
'innodb_data_file_path', 'ibdata1:10M:autoextend'
'innodb_data_home_dir', 'C:\MySQL Datafiles\'
'innodb_doublewrite', 'ON'
'innodb_fast_shutdown', '1'
'innodb_file_io_threads', '4'
'innodb_file_per_table', 'OFF'
'innodb_flush_log_at_trx_commit', '1'
'innodb_flush_method', ''
'innodb_force_recovery', '0'
'innodb_lock_wait_timeout', '50'
'innodb_locks_unsafe_for_binlog', 'OFF'
'innodb_log_buffer_size', '1048576'
'innodb_log_file_size', '25165824'
'innodb_log_files_in_group', '2'
'innodb_log_group_home_dir', '.\'
'innodb_max_dirty_pages_pct', '90'
'innodb_max_purge_lag', '0'
'innodb_mirrored_log_groups', '1'
'innodb_open_files', '300'
'innodb_rollback_on_timeout', 'OFF'
'innodb_stats_on_metadata', 'ON'
'innodb_support_xa', 'ON'
'innodb_sync_spin_loops', '20'
'innodb_table_locks', 'ON'
'innodb_thread_concurrency', '8'
'innodb_thread_sleep_delay', '1'
'innodb_use_legacy_cardinality_algorithm', 'ON'

Patrick
myList - everything you could possibly want (to buy)


-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com] 
Sent: Monday, October 25, 2010 4:50 PM
To: Patrick Thompson; mysql@lists.mysql.com
Subject: [SPAM] RE: mySql versus Sql Server performance
Importance: Low

So it's a primary key lookup.  That's a rather large primary key though, it's 
going to bloat the table size since innodb in mysql uses clustered indexes.

So the explain plan and table structure look pretty straightforward.  It is 
using the index to satisfy the query.  The next question is what does the 
server memory configuration look like?

SHOW GLOBAL VARIABLES LIKE 'innodb%';

In particular innodb_buffer_pool defines the global set of memory where data 
and indexes from your table are cached.  Mysql could be showing slower 
performance if it is getting cache misses from the buffer pool and is being 
forced to read from disk excessively.

On dedicated mysql servers, the buffer pool should be about 80% of available 
RAM.  The default value is 8M which is pretty much unusable except for trivial 
cases.


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 12:31 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

Query:

SELECT *
FROM Item
WHERE CollectionID = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AND 
ExternalID = 'fred1'

Explain Extended:

select '17304' AS `ID`, 'fred1' AS `ExternalID`,
'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AS `CollectionID`, NULL AS `ItemTypeVersion`,
'<Item xmlns="http://cipl.codeplex.com/CIPlItem1.xsd"><Valid>1</Valid><ItemStatus>100</ItemStatus><ExternalID>fred1</ExternalID><ModifiedDate>2010-10-25T15:06:55.7188551-04:00</ModifiedDate><PersonType xmlns="http://cipl.codeplex.com/CIPlOther1.xsd"><Address><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String></Street><Zip><Int32>34748</Int32></Zip></USAddressType></Address><AlternateAddresses Count="2"><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String></Street><Zip><Int32>34748</Int32></Zip></USAddressType><USAddressType><City><String>Seattle 1</String></City><Country><String>USA</String></Country><PhoneNumbers Count="2"><PhoneNumberType><AreaCode><Int32>206</Int32></AreaCode><Number><Int32>7819281</Int32></Number><Tags Count="1"><String>never answered 1</String></Tags></PhoneNumberType><PhoneNumberType><AreaCode><Int32>206</Int32></AreaCode><Number><Int32>9991971</Int32></Number><Tags Count="1"><String>cell 1</String></Tags></PhoneNumberType></PhoneNumbers><State><String>WA</String></State><Street><String>12070 Lakeside pl 1</String></Street><Zip><Int32>98126</Int32></Zip></USAddressType></AlternateAddresses><CreateDate><DateTime>2010-10-25T15:06:55.7168549-04:00</DateTime></CreateDate><Name><String>fred1</String></Name><Tags Count="4"><String>first</String><String>second</String><String>third</String><String>1</String></Tags></PersonType></Item>' AS `ObjectText`,
'2010-10-25 15:06:55' AS `EnteredDate`, '2010-10-25 15:06:55' AS `LastModDate`
from `ciplitemwell0404`.`item`
where (('a0d3937b-f5a8-0640-dec8-bdd60f7f4775' = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775') and ('fred1' = 'fred1'))

Explain:

1, 'SIMPLE', 'Item', 'const', 'PRIMARY,ItemsByID', 'PRIMARY', '889', 
'const,const', 1, ''


Table definition:

CREATE TABLE  `ciplitemwell0404`.`item` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `ExternalID` varchar(255) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CollectionID` varchar(40) CHARACTER SET utf8 NOT NULL,
  `ItemTypeVersion` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `ObjectText

RE: mySql versus Sql Server performance

2010-10-25 Thread Patrick Thompson
Sorry, that should be 200MB not 20MB (still doesn't seem like much to me)

Patrick
myList - everything you could possibly want (to buy)


-Original Message-
From: Patrick Thompson 
Sent: Monday, October 25, 2010 5:24 PM
To: 'Gavin Towey'; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

Here's the innodb stuff - although the largest data set I've used in the stats 
run is around 20MB, which doesn't seem like much to me. 

'innodb_adaptive_hash_index', 'ON'
'innodb_additional_mem_pool_size', '2097152'
'innodb_autoextend_increment', '8'
'innodb_autoinc_lock_mode', '1'
'innodb_buffer_pool_size', '49283072'
'innodb_checksums', 'ON'
'innodb_commit_concurrency', '0'
'innodb_concurrency_tickets', '500'
'innodb_data_file_path', 'ibdata1:10M:autoextend'
'innodb_data_home_dir', 'C:\MySQL Datafiles\'
'innodb_doublewrite', 'ON'
'innodb_fast_shutdown', '1'
'innodb_file_io_threads', '4'
'innodb_file_per_table', 'OFF'
'innodb_flush_log_at_trx_commit', '1'
'innodb_flush_method', ''
'innodb_force_recovery', '0'
'innodb_lock_wait_timeout', '50'
'innodb_locks_unsafe_for_binlog', 'OFF'
'innodb_log_buffer_size', '1048576'
'innodb_log_file_size', '25165824'
'innodb_log_files_in_group', '2'
'innodb_log_group_home_dir', '.\'
'innodb_max_dirty_pages_pct', '90'
'innodb_max_purge_lag', '0'
'innodb_mirrored_log_groups', '1'
'innodb_open_files', '300'
'innodb_rollback_on_timeout', 'OFF'
'innodb_stats_on_metadata', 'ON'
'innodb_support_xa', 'ON'
'innodb_sync_spin_loops', '20'
'innodb_table_locks', 'ON'
'innodb_thread_concurrency', '8'
'innodb_thread_sleep_delay', '1'
'innodb_use_legacy_cardinality_algorithm', 'ON'

Patrick
myList - everything you could possibly want (to buy)


-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com] 
Sent: Monday, October 25, 2010 4:50 PM
To: Patrick Thompson; mysql@lists.mysql.com
Subject: [SPAM] RE: mySql versus Sql Server performance
Importance: Low

So it's a primary key lookup.  That's a rather large primary key though, it's 
going to bloat the table size since innodb in mysql uses clustered indexes.

So the explain plan and table structure look pretty straightforward.  It is 
using the index to satisfy the query.  The next question is what does the 
server memory configuration look like?

SHOW GLOBAL VARIABLES LIKE 'innodb%';

In particular innodb_buffer_pool defines the global set of memory where data 
and indexes from your table are cached.  Mysql could be showing slower 
performance if it is getting cache misses from the buffer pool and is being 
forced to read from disk excessively.

On dedicated mysql servers, the buffer pool should be about 80% of available 
RAM.  The default value is 8M which is pretty much unusable except for trivial 
cases.


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 12:31 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

Query:

SELECT *
FROM Item
WHERE CollectionID = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AND 
ExternalID = 'fred1'

Explain Extended:

select '17304' AS `ID`, 'fred1' AS `ExternalID`,
'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AS `CollectionID`, NULL AS `ItemTypeVersion`,
'<Item xmlns="http://cipl.codeplex.com/CIPlItem1.xsd"><Valid>1</Valid><ItemStatus>100</ItemStatus><ExternalID>fred1</ExternalID><ModifiedDate>2010-10-25T15:06:55.7188551-04:00</ModifiedDate><PersonType xmlns="http://cipl.codeplex.com/CIPlOther1.xsd"><Address><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String></Street><Zip><Int32>34748</Int32></Zip></USAddressType></Address><AlternateAddresses Count="2"><USAddressType><City><String>Celebration 1</String></City><Country><String>USA</String></Country><State><String>FL</String></State><Street><String>1170 Celebration blvd 1</String></Street><Zip><Int32>34748</Int32></Zip></USAddressType><USAddressType><City><String>Seattle 1</String></City><Country><String>USA</String></Country><PhoneNumbers Count="2"><PhoneNumberType><AreaCode><Int32>206</Int32></AreaCode><Number><Int32>7819281</Int32></Number><Tags Count="1"><String>never answered 1</String></Tags></PhoneNumberType><PhoneNumberType><AreaCode><Int32>206</Int32></AreaCode><Number><Int32>9991971</Int32></Number><Tags Count="1"><String>cell 1</String></Tags></PhoneNumberType></PhoneNumbers><State><String>WA</String></State><Street><String>12070 Lakeside pl 1</String></Street><Zip><Int32>98126</Int32></Zip></USAddressType></AlternateAddresses><CreateDate><DateTime>2010-10-25T15:06:55.7168549-04:00</DateTime></CreateDate><Name><String>fred1</String></Name><Tags Count="4"><String>first</String><String>second</String><String>third</String><String>1</String></Tags></PersonType></Item>' AS `ObjectText`,
'2010-10-25 15:06:55' AS `EnteredDate`, '2010-10-25 15:06:55' AS `LastModDate`
from `ciplitemwell0404`.`item`
where (('a0d3937b-f5a8-0640-dec8-bdd60f7f4775' = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775') and ('fred1' = 'fred1'))

Explain:

1, 'SIMPLE', 'Item', 'const', 'PRIMARY,ItemsByID', 'PRIMARY', '889', 
'const,const', 1, ''


Table

Re: Master - master replication

2010-05-24 Thread Patrick Sherrill
I beleive the issue is more storage related than anything else.
Multiple servers exponentially increased risk of compromise.

Carl wrote:
 This is both interesting and puzzling.  
 
 The only way credit card information can be acquired is through SSL 
 communication with the user (user enters credit card information which is 
 used to authorize the transactions, whatever.)  Yet, that same process is not 
 sufficient to comply with PCI DSS requirements to move the card information 
 from one server to another.  Seems illogical since both transmissions are 
 exposed in the same way.
 
 Thanks,
 
 Carl
   - Original Message - 
   From: John Daisley 
   To: Prabhat Kumar 
   Cc: Carl ; Walter Heck ; mysql@lists.mysql.com 
   Sent: Monday, May 24, 2010 7:39 AM
   Subject: Re: Master - master replication
 
 
   ssl is not enough for pci dss compliance. If you store credit card 
 information and are not pci compliant you can be heavily fined and have your 
 ability to process/accept credit card payments permanently removed. 
 
   The storage and transmission of credit card details demands end-to-end 
 encryption and tokenization. MySQL replication with ssl is not going to meet 
 the requirements. Probably be easier to write the data to both servers 
 directly rather than writing to one and then trying to secure replication to 
 a level demanded by the pci regs.
 
   regards
   John
 
 
 
 
   On 24 May 2010 13:23, Prabhat Kumar aim.prab...@gmail.com wrote:
 
 I think setting up a few more configuration variables in replication will 
 keep the data from being transmitted in plain text.
 
 #--master-ssl
 #--master-ssl-ca
 #--master-ssl-capath
 #--master-ssl-cert
 #--master-ssl-cipher
 #--master-ssl-key 
 http://dev.mysql.com/doc/refman/5.0/en/replication-options-slave.html
 
 http://dev.mysql.com/doc/refman/5.0/en/secure-create-certs.html
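 
 For completeness, a sketch of pointing a slave at its master over SSL (the host,
 user and certificate paths below are hypothetical):
 
 CHANGE MASTER TO
   MASTER_HOST = 'master.example.com',
   MASTER_USER = 'repl',
   MASTER_PASSWORD = '...',
   MASTER_SSL = 1,
   MASTER_SSL_CA   = '/etc/mysql/certs/ca.pem',
   MASTER_SSL_CERT = '/etc/mysql/certs/slave-cert.pem',
   MASTER_SSL_KEY  = '/etc/mysql/certs/slave-key.pem';
 START SLAVE;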
 
 Thanks,
 
 
 
 
 On Mon, May 24, 2010 at 6:45 PM, Carl c...@etrak-plus.com wrote:
 
   Interesting.  What is the best way to protect the information while 
 using master - master replication on remote sites?  (The data contains the 
 information of children, credit cards and bank accounts.)
 
   Thanks,
 
   Carl
 
- Original Message -
From: John Daisley
To: Carl
Cc: Walter Heck ; mysql@lists.mysql.com
Sent: Monday, May 24, 2010 6:47 AM
Subject: Re: Master - master replication
 
 
also consider that it is much more likely that remote slaves will 
 start falling behind particularly if you throw encryption into the equation.
 
Regards
 
John
 
 
On 24 May 2010 13:24, Carl c...@etrak-plus.com wrote:
 
  Walter,
 
  Don't know how I missed that, but it is exactly what I needed.
 
  Thanks,
 
  Carl
  - Original Message - From: Walter Heck 
 wal...@openquery.com
  To: Carl c...@etrak-plus.com
  Cc: mysql@lists.mysql.com
  Sent: Monday, May 24, 2010 5:49 AM
  Subject: Re: Master - master replication
 
 
 
  Hi Carl,
 
  On Mon, May 24, 2010 at 13:42, Carl c...@etrak-plus.com wrote:
 
1. Is the data visible during transmission?
 
  Not sure what you mean there?
 
 
2. Is there a way to encrypt the data during transmission?
 
  MySQL supports SSL encryption of replication. Here's a good starting
  point: 
 http://dev.mysql.com/doc/refman/5.1/en/replication-solutions-ssl.html
 
  cheers,
 
  Walter Heck
  Engineer @ Open Query (http://openquery.com)
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/mysql?unsub=john.dais...@butterflysystems.co.uk
 
 
 
 
 
--
John Daisley
 
Certified MySQL 5 Database Administrator
Certified MySQL 5 Developer
Cognos BI Developer
 
Telephone: +44 (0)7918 621621
Email: john.dais...@butterflysystems.co.uk
 
 
 
 
 
 -- 
 Best Regards,
 
 Prabhat Kumar
 MySQL DBA
 Datavail-India Mumbai
 Mobile : 91-9987681929
 www.datavail.com
 
 My Blog: http://adminlinux.blogspot.com
 My LinkedIn: http://www.linkedin.com/in/profileprabhat
 
 
 
 
   -- 
   John Daisley
 
   Certified MySQL 5 Database Administrator
   Certified MySQL 5 Developer
   Cognos BI Developer
 
   Telephone: +44 (0)7918 621621
   Email: john.dais...@butterflysystems.co.uk
 

-- 
Patrick Sherrill
patr...@michael-clarke.com

Michael-Clarke Company, Inc.
Since 1982
825 SE 47th Terrace
Cape Coral, FL 33904

(239) 945-0821 Office
(239) 770-6661 Cell

Confidentiality Notice.  This email message, including any attachments,
is for the sole use of the intended recipient(s) and may contain
confidential and privileged information.  Any unauthorized review, use,
disclosure or distribution is prohibited

Re: VC++ 2008 / MySQL debug / Unhandled exception

2009-01-11 Thread Patrick Sherrill
I seem to recall the issue with the debug library, but don't recall the fix. 
Do you get the same permissions (access) error with the release library?

Pat...

- Original Message - 
From: Miguel Cardenas renit...@gmail.com

To: mysql@lists.mysql.com
Sent: Saturday, January 10, 2009 10:22 AM
Subject: VC++ 2008 / MySQL debug / Unhandled exception



Hello list

I have a problem debugging a program that uses MySQL. The program itself
does not have problems, it runs perfectly when run in the console (command
prompt), but if it is executed inside the Visual C++ 2008 debugger it 
causes

an error:

Unhandled exception at 0x004b1560 in MyProgram.exe: 0xC0000005: Access
violation reading location 0x

when it reaches mysql_real_connect() I'm using mysql-6.0.8-alpha-win32
binary with setup.exe installer.

Note that I don't try to step into mysql functions; even if no
breakpoints are set up and I let the program run freely, it aborts there
with the unhandled exception.

My VC++ 2008 configuration is this:


Includes:
C:\Program Files\MySQL\MySQL Server 6.0\include

Libraries:
C:\Program Files\MySQL\MySQL Server 6.0\lib\debug

Code generation:
Multi-threaded /MT

Precompiled headers: NO

Aditional dependencies:
wsock32.lib mysqlclient.lib libmysql.lib mysys.lib

Ignore specific library: (as found in a forum post)
LIBCMTD.lib

Debugging: YES /DEBUG


I based my configuration on this post for VC++ 6.0
http://forums.mysql.com/read.php?45,49606,49606#msg-49606

I guess it may be due to the ignored LIBCMTD.lib (debug version of MT?), 
but
if don't ignore it the compilation fails at link time with lots of 
redefined

symbols.

My concrete question: is there something wrong with the configuration 
shown

in that post that I adapted to VC++ 2008? are VC++ programs with MySQL
unable to run inside the VC++ debugger? I don't want to debug my MySQL code - it
is working already for *NIX - but there are other parts of the program that
may require debugging, and it aborts when calling a mysql function.

Thanks for any comment or help




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Adapting ft_word_min_len has no effect

2008-06-19 Thread Patrick Ringl

Hello,

I successfully changed ft_word_min_len to '1' and rebuilt my fulltext 
index (dropped and re-added it). But - for some reason mysqld still 
does not return anything unless the word length is >= 3.


Any thoughts about this?
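
One thing worth double-checking: the server variable is spelled ft_min_word_len,
it can only be set at startup (in the option file), and FULLTEXT indexes have to be
rebuilt after the restart. A sketch, with a hypothetical table name:

-- my.cnf:  [mysqld]  ft_min_word_len = 1   (then restart mysqld)
SHOW VARIABLES LIKE 'ft_min_word_len';   -- confirm the running value
REPAIR TABLE my_table QUICK;             -- rebuilds the MyISAM FULLTEXT index

Very short words can still be swallowed by the stopword list or by the 50%
natural-language threshold, so a search IN BOOLEAN MODE is a useful cross-check.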



regards,
Patrick

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: why does left join gives more results?

2008-05-04 Thread Patrick Aljord
  You're doing a left join, which can increase the number of rows returned.
  This is then GROUP BYed and run through a HAVING. Is:
  posts.poster_id=users.id
  a one-to-one relationship? If it is not, then count(*) would be a
  larger number and pass the HAVING. This may not be your problem, but I
  suspect you have more than you realize.

This is not a one-to-one, as a user can make many comments but a
comment belongs to one user only, so I guess the left join returns
more than necessary. But I'm kind of stuck on that one :/

  I suggest reading
  http://www.xaprb.com/blog/2006/04/26/sql-coding-standards/

I read it, thanks, but I still can't find a way to do it right. I mean, I
am following this pattern, right?:

  select column ...
from table ...
   where criterion ...
group by groupingclause ...
  having havingclause ...
order by orderingclause ...

Thanks in advance

Pat

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



why does left join gives more results?

2008-05-03 Thread Patrick Aljord
hey all,

I have my query that counts posts per user:

SELECT count(*) as counted, c.user_id FROM posts c group by c.user_id
having counted > 1 order by counted DESC LIMIT 20

I wanted to add user login for each count so I did:

SELECT count(*) as counted, u.login FROM posts c left join users u on
posts.poster_id=u.id group by c.user_id having counted > 1 order by
counted DESC LIMIT 20

but now I get more results.

Any idea what I'm doing wrong?
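
A sketch of what probably was intended (assuming posts.user_id identifies the
author, as the GROUP BY in the first query suggests): use the alias c in the join
condition and join on the same column you group by, so each post contributes one
row per group.

SELECT COUNT(*) AS counted, u.login
FROM posts c
LEFT JOIN users u ON c.user_id = u.id
GROUP BY c.user_id
HAVING counted > 1
ORDER BY counted DESC
LIMIT 20;

If one post can match several users rows, the join multiplies rows and inflates
counted; COUNT(DISTINCT c.id) guards against that.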

Thanks in advance

Pat

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



select does too much work to find rows where primary key does not match

2008-04-15 Thread Patrick J. McEvoy
I have two MyISAM tables; each uses 'phone' as a primary key. Finding  
rows where the primary keys match is efficient:


mysql> explain select bar.phone from foo,bar where foo.phone=bar.phone;
+----+-------------+-------+--------+---------------+---------+---------+---------------+-------+-------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref           | rows  | Extra       |
+----+-------------+-------+--------+---------------+---------+---------+---------------+-------+-------------+
|  1 | SIMPLE      | bar   | index  | PRIMARY       | PRIMARY | 10      | NULL          | 77446 | Using index |
|  1 | SIMPLE      | foo   | eq_ref | PRIMARY       | PRIMARY | 10      | ssa.bar.phone |     1 | Using index |
+----+-------------+-------+--------+---------------+---------+---------+---------------+-------+-------------+

2 rows in set (0.00 sec)


Finding rows in one table that do not match a row in the other table  
is wildly inefficient:


mysql> explain select bar.phone from foo,bar where foo.phone!=bar.phone;
+----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref  | rows    | Extra                    |
+----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+
|  1 | SIMPLE      | bar   | index | NULL          | PRIMARY | 10      | NULL |   77446 | Using index              |
|  1 | SIMPLE      | foo   | index | NULL          | PRIMARY | 10      | NULL | 3855468 | Using where; Using index |
+----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+

2 rows in set (0.00 sec)

(This is the same for 'NOT', '!=', or '<>'.)

The amount of work should be identical in both cases: grab a row, look  
up by primary key in the other table, proceed.


My real goal is to delete rows in the smaller table if there is no  
match in the larger table:


delete from bar using foo,bar where not bar.phone=foo.phone;

but it runs for hours. I suppose I could SELECT INTO a new table and  
rename the tables, but that seems dorky.


Is there any way to force SELECT/DELETE to look up the primary key  
rather than scan the entire index?


Thanks.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: select does too much work to find rows where primary key does not match

2008-04-15 Thread Patrick J. McEvoy
 I would have thought your not-equals, though, is matching a lot more rows every 
 time...

The field is UNIQUE PRIMARY KEY in both tables, so there
should be 0 or 1 matches.

 I would look into using where not exists as a subselect

My MySQL book (O'Reilly second edition) does not mention
subqueries or EXISTS. I am reading about it online now.

 delete from bar where not exists (select 'y' from foo where foo.phone = 
 bar.phone);

 something like that.

Thanks, I'll give it a try.
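
For reference, an equivalent anti-join form (a sketch) that MySQL can execute as
one pass over bar with a primary-key probe into foo per row:

DELETE bar
FROM bar
LEFT JOIN foo ON foo.phone = bar.phone
WHERE foo.phone IS NULL;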

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ordering my regex

2008-03-10 Thread Patrick Aljord
Hi all,

I'm doing a select * from comments c where c.content REGEXP
'http://[^i].*' and I would like to sort the URLs found by repetition
of the same URLs.
As an example, if I get 3 records with the http://google.com url in the
content and two with http://mysql.com, I would get first the 3
comments with google.com and then the 2 with mysql.com.

Any idea how to do that?
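
One possible approach (a sketch, assuming each comment contains at most one URL
and that the URL runs from 'http://' up to the next space): extract the URL, count
how often each one occurs, then order the matching comments by that count.

SELECT c.*
FROM comments c
JOIN (
    SELECT SUBSTRING_INDEX(SUBSTRING(content, LOCATE('http://', content)), ' ', 1) AS url,
           COUNT(*) AS cnt
    FROM comments
    WHERE content REGEXP 'http://[^i].*'
    GROUP BY url
) t ON t.url = SUBSTRING_INDEX(SUBSTRING(c.content, LOCATE('http://', c.content)), ' ', 1)
WHERE c.content REGEXP 'http://[^i].*'
ORDER BY t.cnt DESC, t.url;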

Thanks in advance.

Pat

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



how to select total votes for each comment?

2008-03-04 Thread Patrick Aljord
Hey all,
I have comments(id,content) and votes(comment_id,vote). vote is a tinyint.

I would like to select total votes for each comment, I tried:

 select content, sum(v.votes) from comments c left join votes v on
c.id=v.comment_id

but it only returns the first result, obviously. Any idea how I could do this?
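
A sketch of the same query with a GROUP BY, so every comment gets its own total
(the column is vote, and COALESCE covers comments that have no votes yet):

SELECT c.id, c.content, COALESCE(SUM(v.vote), 0) AS total_votes
FROM comments c
LEFT JOIN votes v ON c.id = v.comment_id
GROUP BY c.id, c.content;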

Thanks in advance,

Pat

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



can I optimize this query?

2007-06-25 Thread Patrick Aljord

Hey all,
I have 2 tables:
Profiles(id).
Relationships(id,friend_id,befriender_id).

friend_id and befriender_id represent profiles ids.

I want to find all the profiles that are neither friends nor
befrienders of a given profile.

this is the query I use with profile id=1:

select * from profiles
 where profiles.id not in
 (SELECT profiles.id FROM profiles INNER JOIN relationships ON
profiles.id = relationships.befriender_id WHERE
(relationships.friend_id = 1 ))
and profiles.id not in
(SELECT profiles.id FROM profiles INNER JOIN relationships ON
profiles.id = relationships.friend_id WHERE
(relationships.befriender_id = 1 ));

is there a better, faster way to do so?
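
One alternative to compare (a sketch): a single LEFT JOIN that looks for any
relationship touching profile 1 in either direction and keeps only the profiles
for which none is found. Whether it is faster depends on having indexes on
friend_id and befriender_id.

SELECT p.*
FROM profiles p
LEFT JOIN relationships r
       ON (r.befriender_id = p.id AND r.friend_id = 1)
       OR (r.friend_id = p.id AND r.befriender_id = 1)
WHERE r.id IS NULL
  AND p.id <> 1;   -- drop this line if the profile itself should be included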

thanx in advance

Pat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Accented characters in InnoDB tables?

2007-06-23 Thread patrick

Do you have to do something special with InnoDB tables to accept
various character sets like accented, European characters? Using the
default, these accented characters come out as garbage.
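
This usually isn't InnoDB-specific: what matters is the column character set and
the character set of the connection. A minimal sketch (table and data are made up):

CREATE TABLE people (name VARCHAR(100)) ENGINE=InnoDB DEFAULT CHARSET=utf8;

SET NAMES utf8;   -- declare what charset the client sends and expects back
INSERT INTO people VALUES ('Éléonore');
SELECT name FROM people;

If the table, the connection and the application all agree on the character set,
accented characters survive the round trip.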

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



how to match all words

2007-03-15 Thread Patrick Aljord

Hey all,
I have a table 'clients' like this:
id int(5),
name varchar(55),
address varchar(55)

I would like to select all the records that match '%x%' and '%y%', but
'%x%' can be in name and '%y%' can be in address. Also, in my query
there are generally more words to match (x, y, z, t, etc.) and I can't use
full-text search. Any idea what's the best way to do this?
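
One common approach (a sketch, with the obvious caveat that LIKE '%...%' cannot
use an index): apply every word to the concatenation of the searchable columns, so
'x' may land in name and 'y' in address.

SELECT *
FROM clients
WHERE CONCAT_WS(' ', name, address) LIKE '%x%'
  AND CONCAT_WS(' ', name, address) LIKE '%y%';
-- each additional word just adds another AND ... LIKE '%word%' condition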

Thanx in advance

Pat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



database schema migration

2006-12-06 Thread Patrick Aljord

hey all,
I have two tables like that:
artists(id,name)
albums(id,artist_id,album_name)

and I need  to transfer the data of this database to three tables that
look like this:
artists(id,name)
albums(id,name)
artists_albums(album_id,artist_id)

any idea what's the fastest query to do this?
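
A sketch, assuming the existing ids are kept as-is in the new schema:

CREATE TABLE artists_albums (
    album_id  INT NOT NULL,
    artist_id INT NOT NULL,
    PRIMARY KEY (album_id, artist_id)
);

INSERT INTO artists_albums (album_id, artist_id)
SELECT id, artist_id FROM albums;

-- afterwards the albums table only needs its artist_id column dropped:
-- ALTER TABLE albums DROP COLUMN artist_id;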

thanx in advance

Pat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



can I recover a database from db files?

2006-11-14 Thread Patrick Aljord

Hey all,

I host my app on a friend's server which is backed up every night. Well,
yesterday he installed another distro, so I asked him for my db backup,
and it turns out the only backup he did was of the whole hard drive. So
he just sent me a tarball of my database directory containing:
ads_categories.MYD,ads_categories.MYI,ads.frm,ads.MYD,ads.MYI,categories.frm,categories.MYD,categories.MYI,db.opt,regions.frm,regions.MYD,regions.MYI.

I tried to create a database called mydb on my computer (debian etch)
and just copied all the files to the place /var/lib/mysql/mydb/

then when I try to do a select * from categories, I get:

ERROR 1017 (HY000): Can't find file: './mydb/categories.frm' (errno: 13)

is there any way to get my db back?
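
For what it's worth, errno 13 is the operating system's "Permission denied", so
the copied files most likely just aren't readable by the server (they typically
need to be owned by the mysql user). Once the permissions are fixed, MyISAM tables
copied this way can be sanity-checked directly, e.g.:

CHECK TABLE categories;
REPAIR TABLE categories;   -- only if CHECK reports problems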

thanx in advance

Pat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



need help on before insert trigger

2006-10-07 Thread Patrick Aljord

I would like to prohibit the value 'xxx' in my column title, and if it
does contain the value I would like to create an exception by
assigning 'xxx' to the primary key id, which is int(5).
This is what I do, but I get an error on its creation, so I guess it's
not the right way:
CREATE TRIGGER testref BEFORE INSERT ON bookmarks
 FOR EACH ROW
BEGIN
if NEW.title like '%xxx%'
 set NEW.id='xxx';
 END;

the error:
server version for the right syntax to use near ':
  set NEW.id='xxx' at line 4

any idea how to do that?

thanx in advance

pat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: need help on before insert trigger

2006-10-07 Thread Patrick Aljord

I meant the error is:
mysql> CREATE TRIGGER testref BEFORE INSERT ON bookmarks
    ->   FOR EACH ROW
    -> BEGIN
    -> IF NEW.title LIKE '%xxx%' THEN
    ->   SET NEW.id ='xxx';
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near 'SET NEW.id ='xxx'' at line 5
mysql> END IF;
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near 'END IF' at line 1
mysql> END;
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near 'END' at line 1

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: need help on before insert trigger

2006-10-07 Thread Patrick Aljord

Thanks, it works: the trigger is created successfully, but it has no
effect. Here it is:
delimiter //
create trigger testref before insert on bookmarks
for each row
begin
declare dummy char(2);
if new.title like '%xxx%'
then
set new.id='xxx';
end if;
end;
//
create trigger testref before insert on bookmarks
    -> for each row
    -> begin
    ->   declare dummy char(2);
    ->   if new.title like '%xxx%'
    ->   then
    -> set dummy = 'xxx';
    ->   end if;
    -> end;
    -> //
then:
insert into bookmarks values (1, x);
Query OK, 1 row affected, 1 warning (0.00 sec)

the row is created with a warning. I would like to prevent it from
being created. I would like the insert to be canceled if the value is
equal to xxx.

any idea how to cancel the insert?

thanx in advance
Pat
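
For later readers: MySQL 5.5 and up can abort the INSERT cleanly from the trigger
with SIGNAL; a sketch follows (on the 5.0/5.1 servers current when this was written,
SIGNAL does not exist, and the usual workaround was to deliberately provoke an error
inside the trigger, for example by referencing a table that does not exist):

DELIMITER //
CREATE TRIGGER testref BEFORE INSERT ON bookmarks
FOR EACH ROW
BEGIN
  IF NEW.title LIKE '%xxx%' THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'title contains a forbidden word';
  END IF;
END//
DELIMITER ;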

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: need help on before insert trigger

2006-10-07 Thread Patrick Aljord

On 10/7/06, Patrick Aljord [EMAIL PROTECTED] wrote:

thanx it works the trigger is created successfully but it has no
effect. here it is:
delimiter //
create trigger testref before insert on bookmarks
 for each row
 begin
 if new.title like '%xxx%'
 then
 set new.id='xxx';
 end if;
 end;
 //
 create trigger testref before insert on bookmarks
     -> for each row
     -> begin
     ->   declare dummy char(2);
     ->   if new.title like '%xxx%'
     ->   then
     -> set dummy = 'xxx';
     ->   end if;
     -> end;
     -> //

(those are the two different triggers I tried)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem installing PERL DBD::mysql

2006-08-14 Thread Patrick Galbraith

Niels Larsen wrote:

Niels,

Do you mean in the Makefile for zlib?

Thanks!

Patrick


Connie,

I had the same error with another program recently, but probably the
fix for yours is the same: try compiling zlib with -fPIC by adding
 -fPIC to CFLAGS in the Makefile.

Niels Larsen



Logg, Connie A. wrote:


Machine info:

[EMAIL PROTECTED] utils]$ uname -a
Linux snv1 2.6.9-34.0.2.ELsmp #1 SMP Fri Jun 30 10:32:04 EDT 2006 
x86_64 x86_64 x86_64 GNU/Linux

[EMAIL PROTECTED] utils]$ perl -v
This is perl, v5.8.8 built for x86_64-linux
[EMAIL PROTECTED] utils]$ more /etc/redhat-release
Red Hat Enterprise Linux WS release 4 (Nahant Update 4)


--
I am having a hard time installing DBD::mysql on a 64-bit machine, 
snv1.ultralight.org

/usr/bin/perl -MCPAN -e shell
CPAN> install DBD::mysql

Dies with errors... so I decided to do it from the source, and it also 
complains. Here is the output from the source install:


[EMAIL PROTECTED] DBD-mysql-3.0006]# /usr/bin/perl Makefile.PL
I will use the following settings for compiling and testing:


  cflags(mysql_config) = -I/usr/local/include/mysql
  embedded  (mysql_config) =
  libs  (mysql_config) = -L/usr/local/lib/mysql -lmysqlclient 
-lz -lcrypt -lnsl -lm

  mysql_config  (guessed ) = mysql_config
  nocatchstderr (default ) = 0
  nofoundrows   (default ) = 0
  ssl   (guessed ) = 0
  testdb(default ) = test
  testhost  (default ) =
  testpassword  (default ) =
  testsocket(default ) =
  testuser  (default ) =

To change these settings, see 'perl Makefile.PL --help' and 'perldoc 
INSTALL'.


Using DBI 1.52 (for perl 5.008008 on x86_64-linux) installed in 
/usr/lib/perl5/site_perl/5.8.8/x86_64-linux/auto/DBI/

Writing Makefile for DBD::mysql
[EMAIL PROTECTED] DBD-mysql-3.0006]#
[EMAIL PROTECTED] DBD-mysql-3.0006]# make
cc -c  -I/usr/lib/perl5/site_perl/5.8.8/x86_64-linux/auto/DBI 
-I/usr/local/include/mysql -DDBD_MYSQL_INSERT_ID_IS_GOO D -g  
-fno-strict-aliasing -pipe -Wdeclaration-after-statement 
-I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSE
T_BITS=64 -I/usr/include/gdbm -O2   -DVERSION=\3.0006\ 
-DXS_VERSION=\3.0006\ -fpic -I/usr/lib/perl5/5.8.8/x86_64

-linux/CORE   dbdimp.c
cc -c  -I/usr/lib/perl5/site_perl/5.8.8/x86_64-linux/auto/DBI 
-I/usr/local/include/mysql -DDBD_MYSQL_INSERT_ID_IS_GOO D -g  
-fno-strict-aliasing -pipe -Wdeclaration-after-statement 
-I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSE
T_BITS=64 -I/usr/include/gdbm -O2   -DVERSION=\3.0006\ 
-DXS_VERSION=\3.0006\ -fpic -I/usr/lib/perl5/5.8.8/x86_64

-linux/CORE   mysql.c
mysql.xs: In function `XS_DBD__mysql__GetInfo_dbd_mysql_get_info':
mysql.xs:647: warning: ignoring return value of `Perl_newSViv', 
declared with attribute warn_unused_result
mysql.xs:650: warning: ignoring return value of `Perl_newSViv', 
declared with attribute warn_unused_result
mysql.xs:653: warning: ignoring return value of `Perl_newSVpv', 
declared with attribute warn_unused_result Running Mkbootstrap for 
DBD::mysql () chmod 644 mysql.bs rm -f 
blib/arch/auto/DBD/mysql/mysql.so LD_RUN_PATH=/usr/local/lib/mysql 
/usr/bin/perl myld cc  -shared -L/usr/local/lib dbdimp.o mysql.o  -o 
blib/arch/aut

o/DBD/mysql/mysql.so\
   -L/usr/local/lib/mysql -lmysqlclient -lz -lcrypt -lnsl -lm   \

/usr/bin/ld: /usr/local/lib/libz.a(compress.o): relocation 
R_X86_64_32 against `a local symbol' can not be used when making a 
shared object; recompile with -fPIC

/usr/local/lib/libz.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
make: *** [blib/arch/auto/DBD/mysql/mysql.so] Error 1

I have tried using -fPIC, but I think I do not know how to tell it to 
use that correctly. I assume it is passed with --cflags on the 'perl 
Makefile.PL' line, but I can't get that to work.


Can someone tell me the syntax on the 'perl Makefile.PL' line, or, 
what the problem is and how can I get DBD::mysql to load?


By the way, perl Makefile.PL --help gives me..
[EMAIL PROTECTED] DBD-mysql-3.0006]# perl -Makefile.PL --help

Usage: perl [switches] [--] [programfile] [arguments]
  -0[octal]   specify record separator (\0, if no argument)
  -a  autosplit mode with -n or -p (splits $_ into @F)
  -C[number/list] enables the listed Unicode features
  -c  check syntax only (runs BEGIN and CHECK blocks)
  -d[:debugger]   run program under debugger
  -D[number/list] set debugging flags (argument is a bit mask or 
alphabets)
  -e program  one line of program (several -e's allowed, omit 
programfile)

  -f  don't do $sitelib/sitecustomize.pl at startup
  -F/pattern/ split() pattern for -a switch (//'s are optional)
  -i[extension]   edit  files in place (makes backup if extension 
supplied)

  -Idirectory specify @INC/#include directory (several -I's allowed)
  -l[octal]   enable line ending processing, specifies line 
terminator

  -[mM][-]module

Can I have multiple sockets?

2006-06-14 Thread Patrick Ben Koetter
I have an application (Postfix) I want to run chrooted. When I do run it
chrooted, it will not be able to see the MySQL UNIX domain socket at its
default location. (I could use a TCP socket, but I don't want to...)

Is there a way I can tell MySQL to establish multiple sockets, e.g. a socket
per database, or would I have to run several instances of a MySQL server, giving
each instance a different (socket) configuration?

I tried to figure that out myself and read the manual etc. but I couldn't come
up with an answer. Did I miss some documentation?

Thanks,

Patrick
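
(A hedged sketch of one possible direction, not from the original thread: a
single mysqld instance listens on exactly one UNIX socket, so one socket per
database would mean one server instance per socket, e.g. managed with
mysqld_multi. All paths, ports and group names below are illustrative only.)

# /etc/my.cnf -- one [mysqldN] group per instance, each with its own socket
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
socket  = /var/lib/mysql/mysql.sock
port    = 3306
datadir = /var/lib/mysql1

[mysqld2]
# socket placed inside the Postfix chroot
socket  = /var/spool/postfix/var/run/mysqld/mysqld.sock
port    = 3307
datadir = /var/lib/mysql2

Another common route is to point the single server's socket path at a
directory inside the chroot (or bind-mount the socket directory into it)
instead of running a second instance.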


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Matching Different Tables' Columns and Rows

2006-04-28 Thread -Patrick
Hi,
I am having the toughest time accomplishing this. I am asking for any
examples or links to info pertaining to this as I want to make an effort
before posting any code.

After matching the specific rows by an identifying column between two
tables.. I can manipulate some data albeit on a minor scale.. instead of
finding only specific rows that match, I'm receiving ALL rows between the
tables, so it may be more logic than querying the db. However, my
initial query to the db never seems to be correct and this may also be
the problem.

Any tips, suggestions or code examples are appreciated.
Thanks
-Patrick
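
(A hedged sketch, since the actual schema isn't posted: getting ALL rows back
usually means the join condition is missing, so the query degenerates into a
cross product. Table and column names below are placeholders only.)

SELECT t1.*, t2.*
FROM table1 AS t1
INNER JOIN table2 AS t2
        ON t2.table1_id = t1.id   -- the identifying column linking the rows
WHERE t1.id = 42;                 -- restrict to the specific row(s) wanted

Without the ON (or an equivalent WHERE t1.id = t2.table1_id), every row of one
table is paired with every row of the other, which matches the symptom above.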

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Matching Different Tables' Columns and Rows

2006-04-28 Thread -Patrick
Basically, I want to match specific rows with ONLY a specific date.

Here's the code:
mysql_select_db($database_connBlog, $connBlog);
$query_FindArticleDates = "SELECT * FROM blg_article_art, blg_topic_top
WHERE blg_article_art.idtop_art = blg_topic_top.id_top";
$findArticleDates = mysql_query($query_FindArticleDates, $connBlog) or
die(mysql_error());
$findRow_ArticleDates = mysql_fetch_assoc($findArticleDates);
$field_ArticleDates = mysql_fetch_array($findArticleDates);
$totalRows_FindArticleDates = mysql_num_rows($findArticleDates);

<?php for ( $k = $totalRows_FindArticleDates; $k > 0; $k-- ) {
  if (mysql_num_rows($findArticleDates) > 0){
  mysql_data_seek($findArticleDates,0);
  while($findRow_ArticleDates =
mysql_fetch_assoc($findArticleDates)){
$dates[] = $findRow_ArticleDates['FindArticleDate'];
  }
  } ?>
   <a href = "weblog/month.php?<?php echo "m=$j&y=$year" ?>">
<? echo $getMonth[$i++]." ".$year; $j++; }?></a>

It is supposed to print the month only IF there are entries matching
the date. So if there were 3 entries made for one month, then all
entries for that month should be printed. Right now, this prints every
row in existence.
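
(A hedged alternative: let MySQL do the month filtering instead of PHP. This
assumes the article date lives in a DATETIME column such as the date_art column
shown in the blg_article_art definition later in these posts.)

SELECT YEAR(date_art)  AS yr,
       MONTH(date_art) AS mo,
       COUNT(*)        AS articles
FROM blg_article_art
GROUP BY yr, mo
ORDER BY yr DESC, mo DESC;

Only months that actually contain articles come back, so the PHP loop can
simply print one link per returned row instead of checking dates itself.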

-Patrick


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Matching fields from two different tables

2006-04-28 Thread -Patrick
Folks, I could really use your assistance.
Take a look here: http://pastebin.com/687889

How can I manipulate totalRows_numberComments so that I get the number
of blg_comment_com.idart_com PER blg_article_art.id_art? Right now, it
produces every blg_comment_com.idart_com in existence, definitely not cool.

Thanks
-Patrick

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Matching fields from two different tables

2006-04-28 Thread -Patrick
John Hicks wrote:
 -Patrick wrote:
 Folks, I could really use your assistance.
 Take a look here: http://pastebin.com/687889

 How can I manipulate totalRows_numberComments so that I get the number
 of blg_comment_com.idart_com PER blg_article_art.id_art? Right now, it
 produces every blg_comment_com.idart_com in existence, definately not
 cool.

 Thanks
 -Patrick


 You had:
 SELECT COUNT(*) FROM blg_comment_com, blg_article_art WHERE
 blg_article_art.id_art=blg_comment_com.idart_com

 Try this:

 SELECT blg_article_art.id_art, COUNT(*) as NumberOfComments
 FROM blg_comment_com, blg_article_art
 WHERE blg_article_art.id_art = blg_comment_com.idart_com
 group by blg_article_art.id_art

 Try it in the mysql console before you try to plug it into php.
 (You can only learn so much at one time :)

 --John


That is fantastic John. Outputs the id along with respective count.

However, it leads back to my original question... how do I coordinate
the output from this query with php? I know php fairly well, but when
mixes with mysql Im still new and I only want to call those particular
values and have them respond appropriately. What do you suggest? Take
this output and shove it into an array, and match up id_art with a ?
echo arrayNum['id_art']; ? ? Say, if NumberOfComments  0, echo
NumberOfComments? See, I can output the NumberOfComments, but they're
either a total of the whole db and/or constant for every id_art.. even
though the query output is correct? (also derived a similar output but
used a different query... same problem resides -coordination).
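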

Any suggestions are appreciated,
Thank you
-Patrick

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: need help for my jointure

2006-04-27 Thread Patrick Aljord
On 4/26/06, Shawn Green [EMAIL PROTECTED] wrote:



 --- Patrick Aljord [EMAIL PROTECTED] wrote:

  On 4/26/06, Patrick Aljord [EMAIL PROTECTED] wrote:
   I have a table confs like this:
   id int 5 auto_increment primary key;
   conf text;
  
   and another table conf_ip like this:
   id int 5 auto_increment primary key;
   conf_id int 5; ==foreing key of confs
   ip varchar 150;
  
  ok, sorry all for not being specific enough. I figured a query which
  is half working like that:
  select c.id, c.conf from confs as c inner join conf_ip as i on c.id
  =
  i.conf_id where i.ip!='someip
  I would like the select to return all the c.id that are in conf and
  all that don't have a c.ip='someip'. The problem is that if there is
  a
  record that have the c.id=i.conf_id but an ip different than
  localhost, my query will still return the corresponding c.id and I
  don't want that.
  I tried something like that but couldn't get it to work:
  select c.id, c.conf from confs as c inner join conf_ip as i on c.id =
  i.conf_id where (select i.ip from conf_ip where ip='$ip')=0
 
  hope yo usee what I mean
 
  Thanx in advance
 
  Pat
 

 Yes, I think I do. What you have is a table of 'conf' (whatever they
 are) that can have multiple IP addresses (see table `conf_ip`). What I
 think you want to know is which confs do not have a particular IP
 address.

 There are several ways to write this query. One of the more
 straightforward ways to write this is to use a temporary table
 (depending on your version, this should work well written as a
 subquery, too).

 CREATE TEMPORARY TABLE tmpMatches (key(conf_id) SELECT DISTINCT conf_id
 FROM conf_ip
 WHERE ip = 'some_ip_value';

 SELECT c.id, c.conf
 FROM confs c
 LEFT JOIN tmpMatches m
ON m.conf_id = c.id
 WHERE m.conf_id IS NULL;

 DROP TEMPORARY TABLE tmpMatches;


 What we do is generate a list of all of the conf's that do have the
 target IP and save that list into a temporary table. Then we run a
 query that finds every conf EXCEPT those we just located in the first
 step. Last we clean up after ourselves by getting rid of the temp
 table.

 Make sense?


Yes it makes sense, thanx a lot Shawn. As I'm using it my php code, I think
I'd rather do it with a sub query.
Here is how I do it:
SELECT c.id, c.conf
FROM confs c
LEFT JOIN conf_ip i
   ON i.conf_id = c.id
WHERE c.id NOT IN (SELECT DISTINCT conf_id
FROM conf_ip
WHERE ip = 'some_ip_value');

is that correct? I can't test it right now
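
(For what it's worth, hedged and untested here: the LEFT JOIN in that version
is not needed -- the NOT IN subquery already expresses the condition, and the
join can multiply rows when a conf has several IPs.)

SELECT c.id, c.conf
FROM confs AS c
WHERE c.id NOT IN (SELECT conf_id
                   FROM conf_ip
                   WHERE ip = 'some_ip_value');

Subqueries need MySQL 4.1 or later; on older servers Shawn's temporary-table
version is the way to go.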

thanx in advance
Pat


Re: need help for my jointure

2006-04-26 Thread Patrick Aljord
On 4/26/06, Patrick Aljord [EMAIL PROTECTED] wrote:
 I have a table confs like this:
 id int 5 auto_increment primary key;
 conf text;

 and another table conf_ip like this:
 id int 5 auto_increment primary key;
 conf_id int 5; ==foreing key of confs
 ip varchar 150;

ok, sorry all for not being specific enough. I figured out a query which
is half working, like this:
select c.id, c.conf from confs as c inner join conf_ip as i on c.id =
i.conf_id where i.ip!='someip'
I would like the select to return all the c.id that are in confs and
all that don't have a c.ip='someip'. The problem is that if there is a
record that has the c.id=i.conf_id but an ip different than
localhost, my query will still return the corresponding c.id and I
don't want that.
I tried something like this but couldn't get it to work:
select c.id, c.conf from confs as c inner join conf_ip as i on c.id =
i.conf_id where (select i.ip from conf_ip where ip='$ip')=0

hope you see what I mean

Thanx in advance

Pat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Accumulating Blog Comments

2006-04-25 Thread -Patrick
Hi Folks,
Here is the table for the articles:

CREATE TABLE `blg_article_art` (
  `id_art` int(11) NOT NULL auto_increment,
  `idtop_art` int(11) NOT NULL default '0',
  `title_art` varchar(100) NOT NULL default '',
  `description_art` blob NOT NULL,
  `text_art` longtext NOT NULL,
  `date_art` datetime default NULL,
  PRIMARY KEY  (`id_art`)
) TYPE=MyISAM AUTO_INCREMENT=117 ;

Here is the table for the comments:
CREATE TABLE `blg_comment_com` (
  `id_com` int(11) NOT NULL auto_increment,
  `idart_com` int(11) NOT NULL default '0',
  `text_com` text NOT NULL,
  `idusr_com` int(11) NOT NULL default '0',
  `date_com` datetime default NULL,
  `time_com` time default NULL,
  `valid_com` tinyint(4) NOT NULL default '0',
  PRIMARY KEY  (`id_com`),
  FULLTEXT KEY `text_com` (`text_com`),
  FULLTEXT KEY `text_com_2` (`text_com`)
) TYPE=MyISAM AUTO_INCREMENT=128 ;

And here is two attempts below that Ive tried thus far to the best of my
ability:
attept 1:
mysql_select_db($database_connBlog, $connBlog);
$query_rsComments = sprintf("SELECT blg_comment_com.idart_com,
blg_user_usr.username_usr, blg_comment_com.text_com,
blg_comment_com.date_com, blg_comment_com.valid_com FROM
(blg_comment_com INNER JOIN blg_user_usr ON
blg_user_usr.id_usr=blg_comment_com.idusr_com) WHERE
blg_comment_com.idart_com=%s AND blg_comment_com.valid_com=1 ORDER BY
date_com ASC", $KTColParam1_rsComments);
$rsComments = mysql_query($query_rsComments, $connBlog) or
die(mysql_error());
$row_rsComments = mysql_fetch_assoc($rsComments);
$totalRows_rsComments = mysql_num_rows($rsComments);

attempt 2:
mysql_select_db($database_connBlog, $connBlog);
$query_rsComments = "SELECT idart_com, COUNT(id_com) FROM
blg_comment_com GROUP BY idart_com";
$rsComments = mysql_query($query_rsComments, $connBlog) or
die(mysql_error());
$row_rsComments = mysql_fetch_assoc($rsComments);
$totalRows_rsComments = mysql_num_rows($rsComments);

I am trying to have a field in my main page that says 'comments (some
number)'. Right now it says only 'comments'. I know one thing about the
two attempts above: my problem is in the $query_rsComments line and
the statements sent to the database. Totaling up the values for
$totalRows_rsComments and everything else I can do, but I'm not getting
the correct response from the database. What is the correct query against
these tables, or where am I going wrong with the two attempts above?
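
(A hedged sketch of a single query that returns every article with its comment
count, zero included, built only from the two table definitions above; the
valid_com = 1 condition mirrors attempt 1.)

SELECT a.id_art,
       a.title_art,
       COUNT(c.id_com) AS comment_count
FROM blg_article_art AS a
LEFT JOIN blg_comment_com AS c
       ON c.idart_com = a.id_art
      AND c.valid_com = 1
GROUP BY a.id_art, a.title_art
ORDER BY a.id_art DESC;

One row per article comes back, so the PHP side only needs to echo
comment_count next to the word 'comments'.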

Thank you,
-Patrick


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Accumulating Blog Comments

2006-04-25 Thread -Patrick
Sorry about that..

$totalrows_rsComments gives a value of 0. But no matter what I do I
can't seem to alter it. It stays at zero.

-Patrick

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Accumulating Blog Comments

2006-04-25 Thread -Patrick
Daniel da Veiga wrote:
 On 4/25/06, -Patrick [EMAIL PROTECTED] wrote:
   
 Hi Folks,
 Here is the table for the articles:

 CREATE TABLE `blg_article_art` (
   `id_art` int(11) NOT NULL auto_increment,
   `idtop_art` int(11) NOT NULL default '0',
   `title_art` varchar(100) NOT NULL default '',
   `description_art` blob NOT NULL,
   `text_art` longtext NOT NULL,
   `date_art` datetime default NULL,
   PRIMARY KEY  (`id_art`)
 ) TYPE=MyISAM AUTO_INCREMENT=117 ;

 Here is the table for the comments:
 CREATE TABLE `blg_comment_com` (
   `id_com` int(11) NOT NULL auto_increment,
   `idart_com` int(11) NOT NULL default '0',
   `text_com` text NOT NULL,
   `idusr_com` int(11) NOT NULL default '0',
   `date_com` datetime default NULL,
   `time_com` time default NULL,
   `valid_com` tinyint(4) NOT NULL default '0',
   PRIMARY KEY  (`id_com`),
   FULLTEXT KEY `text_com` (`text_com`),
   FULLTEXT KEY `text_com_2` (`text_com`)
 ) TYPE=MyISAM AUTO_INCREMENT=128 ;

 And here is two attempts below that Ive tried thus far to the best of my
 ability:
 attept 1:
 mysql_select_db($database_connBlog, $connBlog);
 $query_rsComments = sprintf(SELECT blg_comment_com.idart_com,
 blg_user_usr.username_usr, blg_comment_com.text_com,
 blg_comment_com.date_com, blg_comment_com.valid_com FROM
 (blg_comment_com INNER JOIN blg_user_usr ON
 blg_user_usr.id_usr=blg_comment_com.idusr_com) WHERE
 blg_comment_com.idart_com=%s AND blg_comment_com.valid_com=1 ORDER BY
 date_com ASC, $KTColParam1_rsComments);
 $rsComments = mysql_query($query_rsComments, $connBlog) or
 die(mysql_error());
 $row_rsComments = mysql_fetch_assoc($rsComments);
 $totalRows_rsComments = mysql_num_rows($rsComments);

 attempt 2:
 mysql_select_db($database_connBlog, $connBlog);
 $query_rsComments = SELECT idart_com, COUNT(id_com) FROM
 blg_comment_com GROUP BY idart_com;
 $rsComments = mysql_query($query_rsComments, $connBlog) or
 die(mysql_error());
 $row_rsComments = mysql_fetch_assoc($rsComments);
 $totalRows_rsComments = mysql_num_rows($rsComments);

 I am trying to have a field in my main page that says 'comments(some
 number)'. Right now it says only 'comments'. I know one thing about the
 two attempts above, my problem is on the line of $query_rsComments and
 the commands placed to the database. Totaling up the values for
 $totalRows_rsComments and everything else I can do, but I'm not getting
 correct response from the base. What is the correct communication for
 the tables, or where am I going wrong with the two above attempts?

 Thank you,
 -Patrick


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


 

 This is not a MySQL related problem, more a PHP one. Check this:
 http://www.php.net/manual/en/function.mysql-num-rows.php

 --
 Daniel da Veiga
 Computer Operator - RS - Brazil
 -BEGIN GEEK CODE BLOCK-
 Version: 3.1
 GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
 PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
 --END GEEK CODE BLOCK--

   
Thanks Dan.

I already have that call in place.. - mysql_num_rows().
I wrote to this list because I believe the fault lies within the
communication to the database.. as far as my statement for
$query_rsComments goes.. so you think there's nothing wrong with that
statement? Because I am already familiar with mysql_num_rows() and have it
set up just like in that link you provided.

Thanks again
-Patrick

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Accumulating Blog Comments

2006-04-25 Thread -Patrick
$query_rsComments = sprintf("SELECT id_com WHERE idart_com=%s ORDER BY
date_com ASC", $KTColParam1_rsComments);


can anyone see what I'm trying to do here? basically, I want to match the
id_com with an idart_com to produce a value and report back through the
other chunk of code already given to output a number.. using
mysql_num_rows(). But I'm getting syntax and check line errors..
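
(The immediate syntax problem is that the statement has no FROM clause. A
hedged version that returns just the count for one article, with %s still
filled in by the sprintf() above:)

SELECT COUNT(*) AS comment_count
FROM blg_comment_com
WHERE idart_com = %s;

With this, the number arrives as a column value in the first row of the result,
so mysql_num_rows() is no longer the thing to read.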

Any thoughts?
-Patrick

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Accumulating Blog Comments

2006-04-25 Thread -Patrick
Philippe Poelvoorde wrote:
 2006/4/25, -Patrick [EMAIL PROTECTED]:
   
 $query_rsComments = sprintf(SELECT id_com WHERE idart_com=%s ORDER BY
 date_com ASC, $KTColParam1_rsComments);


 can anyone see what Im trying to do here? basically, I want to match the
 id_com with an idart_com to produce a value and report back through the
 other chunk of code already given to output a number.. using
 mysql_num_rows(). But Im getting syntax and check line errors..

 Any thoughts?
 -Patrick
 

 You should use count(*) to get the rows number, if it's the only thing you 
 want.
 Why is your inner join enclosed in parenthesis ??

   
Or, can I use count to make statements inside the query? I almost have
it... but it's off because now it's gathering every id_com and putting
them all in one idart_com.. not into its respective idart_com.. very
frustrating..

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



need help for my jointure

2006-04-25 Thread Patrick Aljord
I have a table confs like this:
id int 5 auto_increment primary key;
conf text;

and another table conf_ip like this:
id int 5 auto_increment primary key;
conf_id int 5; ==foreign key of confs
ip varchar 150;

I would like to
select id, conf from confs where ip!='some val';

how can I do this?

thanx in advance

Pat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



need help to delete duplicates

2006-04-17 Thread Patrick Aljord
hey all,
I have a table mytable that looks like this:
id tinyint primary key auto_increment
row1 varchar 150
row2 varchar 150

I would like to remove all duplicates, which means that if n records
have the same row1 and row2, keep only one record and remove the
duplicates. Any idea how to do this?
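
(One hedged way to do this on the MySQL versions of that era is to add a unique
index with ALTER IGNORE, which silently drops the extra rows as the index is
built:)

ALTER IGNORE TABLE mytable ADD UNIQUE INDEX uniq_row1_row2 (row1, row2);

One row per (row1, row2) pair survives; the unique index also prevents new
duplicates from being inserted afterwards. An alternative is to copy
SELECT ... GROUP BY row1, row2 into a new table and rename it over the old one.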

thanks in advance

Pat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: MySQL 5.0.18-standard - Wrong record (sorry, I cannot find a better subject)

2006-04-08 Thread Patrick Herber
Hello,

I wanted only to report that I removed and re-added the Index as Martijn
suggested and now it's OK.

Thanks again for your help

Regards,
Patrick


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL 5.0.18-standard - Wrong record (sorry, I cannot find a better subject)

2006-04-04 Thread Patrick Herber
Hello!
 
I'm using MySQL 5.0.18-standard Server on a Linux SuSE 10 Server and I've
got the following problem:

I have a table with the following structure
 
+--------------+----------------------+------+-----+---------+----------------+
| Field        | Type                 | Null | Key | Default | Extra          |
+--------------+----------------------+------+-----+---------+----------------+
| STEP_ID      | int(10) unsigned     | NO   | PRI | NULL    | auto_increment |
| INVOICE_ID   | int(10) unsigned     | NO   | MUL | 0       |                |
| STEP_TYPE_ID | smallint(5) unsigned | NO   | MUL | 0       |                |
  ... (some other fields) ...
+--------------+----------------------+------+-----+---------+----------------+

When I execute the following statement 

SELECT STEP_ID, INVOICE_ID, STEP_TYPE_ID FROM step WHERE
INVOICE_ID=17081598;

I get this result

+--++--+
| STEP_ID  | INVOICE_ID | STEP_TYPE_ID |
+--++--+
| 47870211 |   17081598 |1 |
| 47870212 |   17081598 |4 |
| 47870214 |   17081599 |1 |  !!
+--++--+

As you can see there is a record with INVOICE_ID=17081599.

Please note that if I ask for 


mysql SELECT STEP_ID, INVOICE_ID, STEP_TYPE_ID FROM step WHERE
INVOICE_ID=17081599;

I also receive that record:
 
+--++--+
| STEP_ID  | INVOICE_ID | STEP_TYPE_ID |
+--++--+
| 47870214 |   17081599 |1 | 
| 47870215 |   17081599 |4 |
| 47870216 |   17081599 |3 |
+--++--+

It is also interesting that no record with STEP_ID=47870213 is visible.
I wrote visible and not present, because if I try to insert a new record
with this PK I get this error:

mysql INSERT INTO step (STEP_ID, INVOICE_ID, STEP_TYPE_ID) VALUES
(47870213, 17081598, 3);
ERROR 1062 (23000): Duplicate entry '47870213' for key 1

However:

mysql SELECT * FROM step where STEP_ID=47870213;
Empty set (0.00 sec)

The problem for me is that the statistics I collect from this table come out
wrong, because when I ask for 

mysql SELECT COUNT(*) FROM step WHERE INVOICE_ID IN (17081598,17081599) AND
STEP_TYPE_ID=1;

I get, instead of 2:

+--+
| COUNT(*) |
+--+
|3 |
+--+

Can you please tell me what the problem could be and what I can do to solve
it?

Thanks a lot!

Regards,
Patrick


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: MySQL 5.0.18-standard - Wrong record (sorry, I cannot find a better subject)

2006-04-04 Thread Patrick Herber
Thanks Martijn for you prompt answer!

 Do you have indices on INVOICE_ID? 

Yes (KEY `IDX_step1` (`INVOICE_ID`))

 If so, try dropping it and recreating it?

OK, I will do it (I will have to wait until this week-end to do it, since the
table is a bit big (47 million records) and I cannot stop the service during the
week) and tell you the results.

Shall I also perform a REPAIR TABLE?
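
(For reference, a hedged sketch of the drop-and-recreate Martijn suggested,
using the index name quoted above; a single ALTER does both steps in one
rebuild:)

ALTER TABLE step DROP INDEX IDX_step1,
                 ADD INDEX IDX_step1 (INVOICE_ID);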


Regards,
Patrick

 -Original Message-
 From: Martijn Tonies [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, 04 April 2006 10:34
 To: Patrick Herber; mysql@lists.mysql.com
 Subject: Re: MySQL 5.0.18-standard - Wrong record (sorry, I 
 cannot find a better subject)
 
 Patrick,
 
  I'm using MySQL 5.0.18-standard Server on a Linux SuSE 10 
 Server and 
  I've got following problem:
 
  I have a table with the followign structure
 
 
 +-+--+--+-+---
 +-
 +-+--+--+-+---
 +
  ---+
  | Field   | Type | Null | Key | 
 Default   | Extra
  |
 
 +-+--+--+-+---
 +-
 +-+--+--+-+---
 +
  ---+
  | STEP_ID | int(10) unsigned | NO   | PRI | NULL  |
  auto_increment |
  | INVOICE_ID  | int(10) unsigned | NO   | MUL | 0 |
  |
  | STEP_TYPE_ID| smallint(5) unsigned | NO   | MUL | 0 |
  |
(some other field) ...
 
 +-+--+--+-+---
 +-
 +-+--+--+-+---
 +
  ---+
 
  When I execute following statement
 
  SELECT STEP_ID, INVOICE_ID, STEP_TYPE_ID FROM step WHERE 
  INVOICE_ID=17081598;
 
  I get this result
 
  +--++--+
  | STEP_ID  | INVOICE_ID | STEP_TYPE_ID |
  +--++--+
  | 47870211 |   17081598 |1 |
  | 47870212 |   17081598 |4 |
  | 47870214 |   17081599 |1 |  !!
  +--++--+
 
  As you can see there is a record with INVOICE_ID=17081599.
 
  Please note that if I ask for
 
 
  mysql SELECT STEP_ID, INVOICE_ID, STEP_TYPE_ID FROM step WHERE
  INVOICE_ID=17081599;
 
  I also receive that record:
 
  +--++--+
  | STEP_ID  | INVOICE_ID | STEP_TYPE_ID |
  +--++--+
  | 47870214 |   17081599 |1 | 
  | 47870215 |   17081599 |4 |
  | 47870216 |   17081599 |3 |
  +--++--+
 
  Interesting is also that no record with STEP_ID=47870213 is visible.
  I wrote visible and not present, because if I try to 
 insert a new
 Record
  with this PK I get this error
 
  mysql INSERT INTO step (STEP_ID, INVOICE_ID, STEP_TYPE_ID) VALUES
  (47870213, 17081598, 3);
  ERROR 1062 (23000): Duplicate entry '47870213' for key 1
 
  However:
 
  mysql SELECT * FROM step where STEP_ID=47870213;
  Empty set (0.00 sec)
 
  The problem for me is that I also collect a statistic from 
 this come 
  out wrong, because when I ask for
 
  mysql SELECT COUNT(*) FROM step WHERE INVOICE_ID IN 
  mysql (17081598,17081599)
 AND
  STEP_TYPE_ID=1;
 
  I get, instead of 2:
 
  +--+
  | COUNT(*) |
  +--+
  |3 |
  +--+
 
  Can you please tell me what the problem could be and what 
 can I do to
 solve
  it?
 
 Corrupt index?
 
 Do you have indices on INVOICE_ID? If so, try dropping it and 
 recreating it?
 
 Martijn Tonies
 Database Workbench - development tool for MySQL, and more!
 Upscene Productions
 http://www.upscene.com
 My thoughts:
 http://blog.upscene.com/martijn/
 Database development questions? Check the forum!
 http://www.databasedevelopmentforum.com
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Force a COMMIT on InnoDB tables? (set autocommit=0)

2006-03-30 Thread patrick
Is there any way to make this the default behaviour? I did a Google
search, and it was suggested I put the following line in /etc/my.cnf:

[mysqld]
init_connect='set autocommit=0'

This works fine, but I worry that this will affect all incoming
connections regardless of whether or not they are from the
command-line client. Is there a way to set this just for the client,
like some option that would go in the [mysql] section?
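
(A hedged alternative that touches only the command-line client: the mysql
program has an init-command option, which can live in the client's own option
group instead of init_connect on the server. Worth double-checking the option
name against your client version.)

[mysql]
init-command="SET autocommit=0"

The same thing works one-off as: mysql --init-command="SET autocommit=0"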

Patrick

On 3/28/06, Wolfram Kraus [EMAIL PROTECTED] wrote:
 patrick wrote:
  I'm wondering if there's any way to force updates on InnoDB tables to
  require an explicit COMMIT when running queries from the mysql
  command-line client (similar to Oracle's command line client)?
 

 set autocommit = 0
 See
 http://dev.mysql.com/doc/refman/5.0/en/innodb-and-autocommit.html

 HTH,
 Wolfram


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Force a COMMIT on InnoDB tables? (set autocommit=0)

2006-03-30 Thread patrick
I've confirmed that this does affect ALL incoming connections.

On 3/30/06, patrick [EMAIL PROTECTED] wrote:
 Is there any way to make this the default behaviour? I did a Google
 search, and it was suggested I put the following line in /etc/my.cnf:

 [mysqld]
 init_connect='set autocommit=0'

 This works fine, but I worry that this will affect all incoming
 connections regardless of whether or not they are from the
 command-line client. Is there a way to set this just for the client,
 like some option that would go in the [mysql] section?

 Patrick

 On 3/28/06, Wolfram Kraus [EMAIL PROTECTED] wrote:
  patrick wrote:
   I'm wondering if there's any way to force updates on InnoDB tables to
   require an explicit COMMIT when running queries from the mysql
   command-line client (similar to Oracle's command line client)?
  
 
  set autocommit = 0
  See
  http://dev.mysql.com/doc/refman/5.0/en/innodb-and-autocommit.html
 
  HTH,
  Wolfram
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Force a COMMIT on InnoDB tables?

2006-03-28 Thread patrick
I'm wondering if there's any way to force updates on InnoDB tables to
require an explicit COMMIT when running queries from the mysql
command-line client (similar to Oracle's command line client)?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



confused...

2006-02-21 Thread Patrick Duda

Why, when I create a table as follows:

mysql create table requestid ( request_id int not null default 
1, constraint requestid_innodb_pk_cons primary key(request_id) ) 
ENGINE=InnoDB;

Query OK, 0 rows affected (0.02 sec)


Do I get the following?

mysql select request_id from requestid;
Empty set (0.01 sec)

When I do a show create table I see:

mysql show create table requestid;
+---+--+
| Table | Create 
Table 
|

+---+--+
| requestid | CREATE TABLE `requestid` (
  `request_id` int(11) NOT NULL default '1',
  PRIMARY KEY  (`request_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+---+--+
1 row in set (0.00 sec)

Shouldn't I be getting back a '1' when I do my select???  Why am I getting 
an empty set?  What am I not understanding?  How do I create a table with a 
starting value of '1' or '0' for an int???
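
(CREATE TABLE only defines the table; a DEFAULT value is used when a row is
inserted without that column, it never inserts a row by itself. A hedged sketch
of seeding the counter:)

-- the table starts out empty, so put the starting value in explicitly
INSERT INTO requestid (request_id) VALUES (1);

-- or let the column default supply it
INSERT INTO requestid () VALUES ();

SELECT request_id FROM requestid;   -- now returns 1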


Thanks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



InnoDB and locking

2006-02-10 Thread Patrick Duda
I guess I don't understand this locking stuff.  I have a InnoDB table that 
has one thing in it, a counter.  All I want to do is have multiple 
instances of the code read this counter and increment it.  I want to make 
sure that each one is unique.


Here is what I am doing in java:

c.setAutoCommit(false);
...
rs = statement.executeQuery("select request_id from requestid_innodb for 
update");

...
String updateQuery = "update requestid_innodb set request_id=";
 updateQuery = updateQuery + nextRequestId;
tempStatement = c.createStatement();
tempStatement.executeUpdate(updateQuery);
...
c.commit();
c.setAutoCommit(true);

If I have multiple instances of this code running I end up with duplicate 
keys.  I thought this was supposed to lock the table so that would not happen.


What am I not doing right?  What am I not understanding about locking?
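
(Hedged side note: SELECT ... FOR UPDATE only serializes writers if autocommit
really is off on that same connection and every reader of the counter uses FOR
UPDATE too. MySQL also has an atomic single-statement idiom for counters that
sidesteps the race entirely:)

UPDATE requestid_innodb SET request_id = LAST_INSERT_ID(request_id + 1);
SELECT LAST_INSERT_ID();   -- the value just assigned, private to this connection

Because the increment happens inside one UPDATE, two clients can never read the
same old value, whatever their transaction settings.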

Thanks

Patrick


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB and locking

2006-02-10 Thread Patrick Duda

At 10:52 AM 2/10/2006, [EMAIL PROTECTED] wrote:

Patrick Duda [EMAIL PROTECTED] wrote on 10/02/2006 16:28:56:

 I guess I don't understand this locking stuff.  I have a InnoDB table
that
 has one thing in it, a counter.  All I want to do is have multiple
 instances of the code read this counter and increment it.  I want to
make
 sure that each one is unique.

 Here is what I am doing in java:

 c.setAutoCommit(false);
 ...
 rs = statement.executeQuery(select request_id from requestid_innodb for

 update);
 ...
 String updateQuery = update requestid_innodb set request_id=;
   updateQuery = updateQuery + nextRequestId;
 tempStatement = c.createStatement();
 tempStatement.executeUpdate(updateQuery);
 ...
 c.commit();
 c.setAutoCommit(true);

 If I have multiple instances of this code running I end up with
duplicate
 keys.  I thought this was suppose to lock the table so that would not
happen.

 What am I not doing right?  What am I not understanding about locking?

I think this problem is explained in detail at
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html

Alec




Yes, I know that, I have read that and that is why I am asking what it is 
that I am not doing right.  It talks about a counter specifically:
2) read the counter first with a lock mode FOR UPDATE, and increment after 
that. The latter approach can be implemented as follows:


SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;

A SELECT … FOR UPDATE reads the latest available data, setting exclusive 
locks on each row it reads. Thus, it sets the same locks a searched SQL 
UPDATE would set on the rows. 

Isn't that what my Java code is doing?

I start a transaction by turning off autocommit, I then do a select for 
update.  Then I do the update and I commit.  From the way I read this, no 
one else should be able to read the table until I commit.


Yet, that is not what I am seeing.  When I start several instances of the 
program running I get lots and lots of:


Error inserting records into database [Caused by: Duplicate entry '152' for 
key 1]


That is what has me confused.  I thought I was doing things they way the 
manual said to.


Thanks

Patrick



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB and locking

2006-02-10 Thread Patrick Duda

At 12:54 PM 2/10/2006, Mark Matthews wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Patrick Duda wrote:
 I guess I don't understand this locking stuff.  I have a InnoDB table
 that has one thing in it, a counter.  All I want to do is have multiple
 instances of the code read this counter and increment it.  I want to
 make sure that each one is unique.

 Here is what I am doing in java:

 c.setAutoCommit(false);
 ...
 rs = statement.executeQuery(select request_id from requestid_innodb for
 update);
 ...
 String updateQuery = update requestid_innodb set request_id=;
  updateQuery = updateQuery + nextRequestId;
 tempStatement = c.createStatement();
 tempStatement.executeUpdate(updateQuery);
 ...
 c.commit();
 c.setAutoCommit(true);

 If I have multiple instances of this code running I end up with
 duplicate keys.  I thought this was suppose to lock the table so that
 would not happen.

 What am I not doing right?  What am I not understanding about locking?

 Thanks

 Patrick



Patrick,

Are you sure the table is using the InnoDB storage engine? What does the
output of SHOW CREATE TABLE  for the table in question say?

-Mark



mysql> show create table requestid_innodb;
+--+-+
| Table| Create 
Table 
|

+--+-+
| requestid_innodb | CREATE TABLE `requestid_innodb` (
  `request_id` int(11) NOT NULL default '0',
  PRIMARY KEY  (`request_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+--+-+
1 row in set (0.00 sec)



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Best practice: FULLTEXT search InnoDB transactions replication

2006-02-08 Thread Patrick Savelberg
Hi, 

We have moved from Mysql4 to MySQL5 and are currently planning our new database 
schema. In this new approach we would like to move to InnoDB's storage engine 
for transaction support and still want to use MySQL's FULLTEXT search 
capabilities. And to make things easy we also want to replicate all our data 
to a second database. 

Now I have two different possible approaches: 

1. All tables are of type InnoDB, except one table which is of type MyIsam = 
the FULLTEXT searchable table. This searchable table would have a column with 
searchable text and a few meta data columns to identify the originating table, 
column and row. I could use the triggers to index the desired columns on 
Inserts, updates and deletes and insert the indexed data into the MyIsam 
search-table. 
Replication would be straightforward 1-to-1 replication in this approach. 

2. Still all tables would be of type InnoDB, but instead of creating a single 
searchable MyIsam table I could also alter the storage engine type for the 
searchable tables on the replication slave to MyIsam and delegate all searches 
to the slave. This may even improve performance, because the master won't be 
doing full text searches anymore. 
Replication would be a bit more tricky because of having the InnoDB tables in 
the master and their corresponding MyIsam tables in the slave. 

I'm wondering which, if any, of the above approaches is advisable or if there 
are other approaches which are even better.
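
(For approach 1, a hedged sketch of what one of the MySQL 5.0 triggers could
look like; table and column names here are invented, and matching AFTER UPDATE
and AFTER DELETE triggers would be needed as well:)

-- `articles` stands for an InnoDB source table,
-- `search_index` for the MyISAM table carrying the FULLTEXT index.
CREATE TRIGGER articles_ai AFTER INSERT ON articles
FOR EACH ROW
  INSERT INTO search_index (src_table, src_id, body)
  VALUES ('articles', NEW.id, NEW.body);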


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



question about locking

2006-02-08 Thread Patrick Duda

Hi,

I am running MySQL 4.0.1 with j/connector 3.1 and I am having problems 
trying to figure out why I am not getting the results I am expecting.


I have a table that is used for generating primary keys.  It only has one 
item, an int that is incremented each time a key is needed.  This is not my 
code or my design so using something like auto_increment is not an option.


The code runs under a container and our desire is to have several different 
containers running at the same time, all accessing the same database.  Each 
container is independent so the controls need to be on the database side.


The solution also needs to be portable to other databases so I am trying to 
stay with standard JDBC or SQL options.


The code for generating a new key is this:

try {
c = DatabaseSetup.getDBConnection();

c.setAutoCommit(false);

statement = c.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
ResultSet.CONCUR_UPDATABLE);

rs = statement.executeQuery("select transfer_id from transferid for update");

if (!rs.next())  {
nextTransferId = nextTransferId + 1;
StringBuffer query = new StringBuffer();
query.append("insert into transferid(transfer_id) values (");
query.append(nextTransferId);
query.append(")");
tempStatement = c.createStatement();
// Now Update the old value with new value
tempStatement.executeUpdate(query.toString());
} else {
rs.previous();
while( rs != null && rs.next() ) {
nextTransferId = rs.getInt(1);
// Get the transfer Id and increment it instead of using
// Db Specific sequence
nextTransferId = nextTransferId + 1;
// Now Update the old value with new value
tempStatement = c.createStatement();
tempStatement.executeUpdate("update transferid set " +
"transfer_id=" + nextTransferId);
}
}
} catch (SQLException e) {

if( c != null )
{
try
{
c.rollback();
c.setAutoCommit(true);
}
catch( SQLException ex )
{
}
}
throw new DBException(i18n.getMessage(dbInsertErr),
e);
} finally {
try {
c.commit();
c.setAutoCommit(true);
if (statement != null) {
statement.close();
}
if (tempStatement != null) {
tempStatement.close();
}
if (rs != null) {
rs.close();
}
if (c != null) {
DatabaseSetup.returnDBConnection(c);
}
} catch (SQLException sql) {
logger.warn(i18n.getMessage(dbStatementErr), sql);
}
}
return nextTransferId;
}

I thought that if I turned off autocommit I would enter a 
transaction.  Then, by using the select...for update, I would take 
and hold a lock on the table, so that no other transaction would be able to 
read the table until I released the lock.  However, this is not what I am 
seeing when I run some tests.  I start up a number of containers and then 
fire off a bunch of jobs to each.  Each of these jobs will hit the above 
code.  The problem is that every so often I see the following error message.


 Error inserting records into database [Caused by: Duplicate entry '131' 
for key 1]


What am I doing wrong?

How am I suppose to be doing this via JDBC?  I know it should work...

Thanks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



About the MySQL upgrade and collation...

2006-01-27 Thread Patrick Gelin
Hello,

I am using a CMS, installed last year on a MySQL 4.0.16-nt database. The 
CMS created its tables itself and accesses them using UTF-8 encoding.

Since then, we have migrated to MySQL 4.1.16-nt and the default encoding 
chosen was latin_swedish_ci.

Naturally my CMS now displays characters incorrectly, so I have to 
correct this.

Questions:
1. Are my character strings, which were originally UTF-8, now genuinely 
latin_swedish_ci strings, or is the latin_swedish_ci setting only an attribute 
while the content has stayed the same, i.e. UTF-8? This is hard to judge from 
the phpMyAdmin graphical interface, which applies its own collation.

2. How can I quickly change all the latin_swedish_ci tables and columns 
to UTF-8? For the record, phpMyAdmin only offers a rudimentary interface for 
changing the collation table by table, in a very laborious way; applying the 
change at the database level only affects InnoDB... I also ran an 
export/import test while changing the default_char_set, but that does not 
affect the settings of the existing columns.

3. My CMS specifies UTF-8, but which UTF-8 does that really correspond to? 
utf8_unicode_ci, utf8_general_ci, utf8_bin ???

Thanks for your help. 
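
(On question 2, hedged and with placeholder names: MySQL 4.1 can convert a
whole table in one statement, which avoids the table-by-table clicking in
phpMyAdmin. Which of the two forms applies depends on the answer to question 1.)

-- converts both the stored data and the column character sets
ALTER TABLE some_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

-- if the bytes are really UTF-8 but merely labelled latin1, change only the
-- labelling by going through BLOB (the column definition must be restated):
ALTER TABLE some_table MODIFY some_column BLOB;
ALTER TABLE some_table MODIFY some_column TEXT CHARACTER SET utf8;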




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Group By over many columns

2006-01-19 Thread Patrick Herber
I would suggest a union

SELECT name, count(*) 
FROM (SELECT name1 as name from mytable union select name2 as name from
mytable union select name3 as name from table) 
GROUP BY name

but perhaps there's a better way...
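
(One hedged caveat on that suggestion: plain UNION removes duplicate rows, so a
name occurring in several rows would only be counted once, and MySQL also wants
an alias on the derived table. UNION ALL keeps every occurrence:)

SELECT name, COUNT(*) AS cnt
FROM (SELECT name1 AS name FROM mytable
      UNION ALL SELECT name2 FROM mytable
      UNION ALL SELECT name3 FROM mytable) AS all_names
GROUP BY name
ORDER BY cnt DESC;

Derived tables need MySQL 4.1 or later.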

Regards,
Patrick


 -Original Message-
 From: Critters [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, 19 January 2006 16:17
 To: mysql@lists.mysql.com
 Subject: Group By over many colums
 
 Hi
 I have a table setup like this:
 
 id, name1, name2, name3
 
 Which has data like this:
 
 1, Dave, Bob, Simon
 2, Joe, Tim, Dave
 3, Dave, Bob, Tom
 
 I can run 
 
 SELECT name, count(id) FROM tablename GROUP BY name1 ORDER 
 BY count(id) DESC
 
 Which would give me:
 
 Dave, 2
 Joe, 1
 
 But how would I go about getting the following result:
 
 Dave, 3
 Bob, 2
 Tom, 2
 Joe, 1
 Simon, 1
 
 Where it groups by name1, name2 and name3? Is it possible?
 -
 David Scott


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: doing multiple queries in one go.

2006-01-16 Thread Patrick Herber
Hi,

Do you mean you have such a structure

Table A
ID_a
ID_b
ID_c
...

Table B
ID_b
Value_b
...

Table C
ID_c
Value_c
...

?

In that case you can 

SELECT Value_b, Value_c
FROM A
LEFT JOIN B on A.ID_b=B.ID_b
LEFT JOIN C on A.ID_c=C.ID_c
WHERE ID_a=xxx


Regards,
Patrick

 

 -Original Message-
 From: Gregory Machin [mailto:[EMAIL PROTECTED] 
 Sent: Monday, 16 January 2006 09:14
 To: mysql@lists.mysql.com
 Subject: doing mulitple quires in one go.
 
 Hi.
 I'm in need of advice. I have a table that in 2 columns 
 holds id values pointing to data in two other tables. At the moment I 
 do three queries: one to get the primary data, then the two 
 id values are used to query the other two tables to get the 
 required values.. Is there a way I can get mysql to return 
 all the data in one query?
 
 Many thanks
 
 --
 Gregory Machin
 [EMAIL PROTECTED]
 [EMAIL PROTECTED]
 www.linuxpro.co.za
 www.exponent.co.za
 Web Hosting Solutions
 Scalable Linux Solutions
 www.iberry.info (support and admin)
 
 +27 72 524 8096
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: UPDATE Date Column

2006-01-16 Thread Patrick Herber
Do you mean something like that?

UPDATE tablename SET date2=DATE_ADD(date1, INTERVAL -3 MONTH) 

Regards,
Patrick

 -Original Message-
 From: Shaun [mailto:[EMAIL PROTECTED] 
 Sent: Monday, 16 January 2006 15:27
 To: mysql@lists.mysql.com
 Subject: UPDATE Date Column
 
 Hi,
 
  I have 2 Date Columns in my table, how can I update the 2nd 
  date column so that the dates are 3 months previous to the 
  original date column for every row.
 
 Thanks for your advice 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ERROR 1114 (HY000): The table is full converting a big table from MyISAM to InnoDB on 5.0.18

2006-01-15 Thread Patrick Herber
Hello!
I have a database with a big table (Data File 45 GB, Index File 30 GB). 
Since I have some performance troubles with table-locking in a multi-user
environment (when one user performs a complex query all the others have to
wait up to 1 minute, which is not very nice...), I would like to convert
this (and other tables) to the InnoDB engine.
 
I first tried using the innodb_file_per_table option but when running the
statement
 
ALTER TABLE invoice ENGINE=INNODB;
 
ERROR 1114 (HY000): The table '#sql...' is full
 
(this happened about one hour after the start of the command, when the size of the
file was bigger than ca. 70GB (I don't know the exact size))
 
I tried then without the innodb_file_per_table option, setting my
innodb_data_file_path as follows:
 
innodb_data_file_path=ibdata1:500M;ibdata2:500M;ibdata3;500M;ibdata4:500M;ib
data5:500M;ibdata6:500M;ibdata7:500M;ibdata8:500M;ibdata9:500M;ibdata10:500M
:autoextend

Also in this case I got the same error message.
 
What should I do in order to convert this table?
 
Should I set, for example, 50 files in the innodb_data_file_path, each 4GB
big?
 
Thanks a lot for your help.
 
Best regards,
Patrick
 
PS: I'm running MySQL 5.0.18 on a Linux 2.6.13-15.7-smp server.


RE: ERROR 1114 (HY000): The table is full converting a big table from MyISAM to InnoDB on 5.0.18

2006-01-15 Thread Patrick Herber
Thanks a lot for your answer!
However, when I used the option innodb_file_per_table I saw that the temp
file (#sql...) was created in my DB directory and on this partition I still
have plenty of space (more than 200GB).
Do you think I CAN'T use this option for such a big table and I have to use
innodb_data_file_path?
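
(A hedged way to see which location is actually filling up before the next
attempt -- these are standard statements, only the table name comes from the
thread:)

SHOW VARIABLES LIKE 'tmpdir';
SHOW VARIABLES LIKE 'innodb_data_file_path';
SHOW TABLE STATUS LIKE 'invoice';   -- rough size of the source table

If tmpdir points at a small /tmp partition, relocating it in my.cnf
(tmpdir = /some/big/partition) as Jocelyn suggests below may already be enough.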

Thanks a lot and regards,
Patrick

 -Original Message-
 From: Jocelyn Fournier [mailto:[EMAIL PROTECTED] 
 Sent: Sunday, 15 January 2006 15:09
 To: Patrick Herber
 Cc: mysql@lists.mysql.com
 Subject: Re: ERROR 1114 (HY000): The table is full converting 
 a big table from MyISAM to InnoDB on 5.0.18
 
 Hi,
 
 I think you should change the tmpdir variable value to a 
 directory which
   have enough room to create your temp big table (by default, 
 it points to /tmp dir).
 
 Regards,
Jocelyn
 
 Patrick Herber a écrit :
  Hello!
  I have a database with a big table (Data File 45 GB, Index 
 File 30 GB). 
  Since I have some performance troubles with table-locking in a 
  multi-user environment (when one of them performs a complex 
 query all 
  the other have to wait up to 1 minute, which is not very 
 nice...), I 
  would like to convert this (and other tables) into InnoDB engine.
   
  I first tried using the innodb_file_per_table option but 
 when running 
  the statement
   
  ALTER TABLE invoice ENGINE=INNODB;
   
  ERROR 1114 (HY000): The table '#sql...' is full
   
  (this about one our after the start of the command, when 
 the size of 
  the file was bigger than ca. 70GB (I don't know exactly the size))
   
  I tried then without the innodb_file_per_table option, setting my 
  innodb_data_file_path as follows:
   
  
 innodb_data_file_path=ibdata1:500M;ibdata2:500M;ibdata3;500M;ibdata4:5
  00M;ib 
  
 data5:500M;ibdata6:500M;ibdata7:500M;ibdata8:500M;ibdata9:500M;ibdata1
  0:500M
  :autoextend
  
  Also in this case I got the same error message.
   
  What should I do in order to convert this table?
   
  Should I set in the innodb_data_file_path for example 50 
 Files, each 
  big 4GB ?
   
  Thanks a lot for your help.
   
  Best regards,
  Patrick
   
  PS: I'm running MySQL 5.0.18 on a Linux 2.6.13-15.7-smp server.
  
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Duplicate Insert Crashes Client

2005-11-14 Thread Patrick

What type client are you using?

With the C API you would test for the return value (0 or 1) and process 
accordingly.


You could use 'INSERT IGNORE' syntax, but then you would not know what 
records failed (you could test for how many were inserted with mysql_info() 
using the C API).
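
(If the intent is simply for re-sent records to be skipped, or to refresh the
existing row, two hedged SQL-side options -- table and column names invented:)

-- skip rows that collide with an existing unique key
INSERT IGNORE INTO mytable (id, payload) VALUES (42, 'abc');

-- or overwrite chosen columns when the key already exists (MySQL 4.1 and up)
INSERT INTO mytable (id, payload) VALUES (42, 'abc')
    ON DUPLICATE KEY UPDATE payload = VALUES(payload);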


See Chap 6 around page 421 of the MySQL Reference Manual.

I hope this helps...

Pat...

[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
825 SE 47th Terrace
Cape Coral, FL 33904
(239) 540-2626 Voice


- Original Message - 
From: Bruce Martin [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Sunday, November 13, 2005 3:58 PM
Subject: Duplicate Insert Crashes Client



Hello again,

Ok new problem. If for some reason, my client tries to INSERT something to 
the database that is identical to a record already there, my client 
crashes. Is there an error I can trap for this? The DB does insert the new 
record.


Bruce Martin
The Martin Solution
PO Box 644
Delaware Water Gap, PA
(570) 421-0670
[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Table names with periods

2005-10-03 Thread Patrick
There are many ways to approach this.  How are you receiving the IP data?  Are 
you reading a file or other stream or are you trying to process the table 
creation by reading a column from a previously populated table through a select 
statement?

The functions, inet_ntoa() and inet_addr(), are part of most networking 
libraries. These are the common functions to convert dotted quad notation. If 
you wanted to write you own function, an IPv4 address is broken down as follows:

AAA.BBB.CCC.DDD
  \   \   \   \
\   \   \   DDD
  \   \   CCC x CCC
\   BBB x BBB x BBB
  AAA x AAA x AAA x AAA

Add the results and you have your unique 32bit number. 

eg. 10.10.10.1 = 10,000 + 1,000 + 100 + 1
   = 11,101


If you are not able to pre-process (scrub) the incoming data programmatically, 
you would need to create a UDF in MySQL to perform the conversion, or, 
alternatively, if you want to use MySQL SELECT statement as-is could replace 
the 'period' with an 'underscore' using MySQL's built-in string functions like 
so:

Assumptions: Reading IP address from an existing table named IP_Addresses with 
a column named ip_address and a column named status.

SELECT "CREATE TABLE ", REPLACE(ip_address,'.','_'), " [insert create 
options here]" FROM IP_Addresses WHERE status = "ACTIVE"

You would obviously add your CREATE TABLE options and INTO OUTFILE options 
as needed.
This would be an alternative to converting IPv4 to 32bit Integer.

I hope this helps...

If at all possible, it is probably best to continue in the MySQL list, there 
are some pretty clever people out there

Pat...

[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
825 SE 47th Terrace
Cape Coral, FL 33904

  - Original Message - 
  From: Chance Ellis 
  To: Patrick 
  Sent: Monday, October 03, 2005 2:22 PM
  Subject: Re: Table names with periods


  Patrick,

  I have been trying to figure out how I can convert an IP address to a 32bit 
integer within a SQL create statement.

  Is this possible or am I thinking about this all wrong? The input I am given 
is a straight IP address. I have no way of modifying it other than some option 
in the SQL create statement is possible. I want to create a new table for each 
IP address. Without getting too much into the details, these are my 
requirements and I have been wasting alot of time trying to figure out how to 
change this string in the create statement. 

  Any help you can provide is greatly appreciated.

  Thanks!

  Chance


   
  On 9/28/05, Patrick [EMAIL PROTECTED] wrote: 
Historically any form of punctuation, parameter delimiter, or filepath
delimiter in either a database name, field or column name, file or table 
name would not be recommended; even if the RDBMS or File Handler allows it.

If you are able to stick to alphanumeric characters using underscores
characters if needed for clarity, you go a long way for portability across 
various operating systems. Also, IPv4 addresses are readily converted to
single 32bit integers that minimize the need for dotted quartets.

Early versions of MySQL allowed periods.  This caused OS incompatibility 
issues.  To my knowledge this was fixed prior to version 3 and you are no
longer allowed periods in database or table names.  This, in my thinking, is
a good thing by assuring greater portability and easier migration. 

I hope this helps.

Pat...

[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
825 SE 47th Terrace
Cape Coral, FL 33904




Re: Table names with periods

2005-10-03 Thread Patrick

Shawn,
You're correct about my algorithm for IPv4, I was not paying attention (too 
many things at once). It should have read:


AAA.BBB.CCC.DDD
  \   \   \   \
   \   \   \   DDD
    \   \   CCC x 256
     \   BBB x 256 x 256
      AAA x 256 x 256 x 256

However, the 'SELECT' statement was conceptual, not literal and did in fact 
state that it would need the appropriate 'CREATE TABLE' options as well as 
an 'INTO OUTFILE' clause.  This was left as an exercise for the user to 
construct. It is not intended as Dynamic SQL.  It was intended to create a 
file of SQL statements that could be executed 'ad hoc'.  It is also just one 
of many ways to accomplish the goal. The literal and now complete 'SELECT' 
with the same stated assumptions would look like this:


SELECT "CREATE TABLE ", REPLACE(ip_address,'.','_'), " ( ip CHAR (16), 
last_access TIMESTAMP )" INTO OUTFILE '/tmp/createtable.sql' FROM 
IP_Addresses WHERE status = "ACTIVE"


This creates a file of SQL statements that can then be executed on the 
command line or in a cron with the appropriate redirect.


I did, however, miss the INET_ATON() and INET_NTOA() functions added in 
v3.23.30, eliminating any need for a UDF.
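
(A hedged illustration of those built-ins, which give a reversible 32-bit form
with no string surgery at all; the access_log table in the last statement is
invented for the example:)

SELECT INET_ATON('10.10.10.1');   -- 168430081
SELECT INET_NTOA(168430081);      -- '10.10.10.1'

SELECT INET_NTOA(ip) AS ip, last_access
FROM access_log
WHERE ip = INET_ATON('10.10.10.1');

Stored as INT UNSIGNED, every address fits in one indexed column of a single
table, instead of one table per address.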


Now that I have embarrassed myself with a bad algorithm, defended my 'SELECT' 
construction, and missed a very important pre-built function, I must say, I 
do agree that superficially Chance's concept of creating a table for each IP 
is not one I would personally embrace, but then again, I do not know what he 
is trying to accomplish and he elected not to make the list privy to his 
design.  He did state early on in the thread, that he wasn't looking for 
help in db design, just a solution to the punctuation issue.


Pat...

- Original Message - 
From: [EMAIL PROTECTED]

To: Patrick
Cc: Chance Ellis ; mysql@lists.mysql.com
Sent: Monday, October 03, 2005 4:30 PM
Subject: Re: Table names with periods

Replies embedded:

Patrick [EMAIL PROTECTED] wrote on 10/03/2005 03:43:20 PM:


There are many ways to approach this.  How are you receiving the IP
data?  Are you reading a file or other stream or are you trying to
process the table creation by reading a column from a previously
populated table through a select statement?

The functions, inet_ntoa() and inet_addr(), are part of most
networking libraries. These are the common functions to convert
dotted quad notation. If you wanted to write you own function, an
IPv4 address is broken down as follows:

AAA.BBB.CCC.DDD
  \   \   \   \
\   \   \   DDD
  \   \   CCC x CCC
\   BBB x BBB x BBB
  AAA x AAA x AAA x AAA


I am not sure of your algorithm. Perhaps I am just not understanding your 
notation.





If you are not able to pre-process (scrub) the incoming data
programmatically, you would need to create a UDF in MySQL to perform
the conversion, or, alternatively, if you want to use MySQL SELECT
statement as-is could replace the 'period' with an 'underscore'
using MySQL's built-in string functions like so:

Assumptions: Reading IP address from an existing table named
IP_Addresses with a column named ip_address and a column named status.

SELECT  CREATE TABLE ,REPLACE(ip_address,'.','_'),  [insert
create options here] FROM IP_Addresses WHERE status =ACTIVE


Dynamic SQL? Not with that statement. He is going to need to create his SQL 
statement client-side and send it pre-formatted to the server. MySQL 5.0 has 
the beginnings of dynamic SQL and I am not 100% sure it would accept what 
you typed.




You would obviously add your CREATE TABLE options and INTO
OUTFILE options as needed.
This would be an alternative to converting IPv4 to 32bit Integer.

I hope this helps...

If at all possible, it is probably best to continue in the MySQL
list, there are some pretty clever people out there

Pat...

[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
825 SE 47th Terrace
Cape Coral, FL 33904

  - Original Message - 
  From: Chance Ellis

  To: Patrick
  Sent: Monday, October 03, 2005 2:22 PM
  Subject: Re: Table names with periods


  Patrick,

  I have been trying to figure out how I can convert an IP address
to a 32bit integer within a SQL create statement.


You are mixing purposes. MySQL has a CREATE TABLE statement but it requires 
a string literal. You cannot build a CREATE TABLE statement on the 
fly -inside- MySQL. You have to build your statement client-side and send it 
(as a complete statemnt) to MySQL to process.


MySQL has a function that converts IP addresses into numbers (see above) but 
you cannot combine that with a CREATE TABLE statement.




  Is this possible or am I thinking about this all wrong? The input


Yes, I think you are all wrong. You are being too literal in your design 
choices. Generally if your data storage design requires you to add tables 
whenever you add a new whatever, that is a bad design. The better thing to 
do is to create one table that can hold the entire class

Re: Table names with periods

2005-09-28 Thread Patrick
Historically any form of punctuation, parameter delimiter, or filepath 
delimiter in either a database name, field or column name, file or table 
name would not be recommended; even if the RDBMS or File Handler allows it.


If you are able to stick to alphanumeric characters using underscores 
characters if needed for clarity, you go a long way for portability across 
various operating systems. Also, IPv4 addresses are readily converted to 
single 32bit integers that minimize the need for dotted quartets.


Early versions of MySQL allowed periods.  This caused OS incompatibility 
issues.  To my knowledge this was fixed prior to version 3 and you are no 
longer allowed periods in database or table names.  This, in my thinking, is 
a good thing by assuring greater portability and easier migration.


I hope this helps.

Pat...

[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
825 SE 47th Terrace
Cape Coral, FL 33904


- Original Message - 
From: Chance Ellis [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Wednesday, September 28, 2005 12:57 PM
Subject: Re: Table names with periods


Instead of criticizing people's designs without understanding the
requirements why don't you just answer the question, or don't you know
either?


On 9/28/05, Martijn Tonies [EMAIL PROTECTED] wrote:



I am sure this has been asked before but I can't seem to find it.
 Can you create table names with periods in them? I need to create tables
based on the IP address of machines (x.x.x.x)

Ehm. Why?

Instead of going the silly table names route, why not try a different
design?

With regards,

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS SQL
Server
Upscene Productions
http://www.upscene.com
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL-4.1.14 Compile Fails HP-UX 11i GCC-4.0.1

2005-09-20 Thread Briggs, Patrick
I'm unable to use the MySQL official HP-UX binaries, because they don't
contain shared libraries in their DEPOT distribution that I need for things
like PHP and DBD driver for PERL.  I also have problems because the MySQL
binaries are compiled using HP's own C++ compiler, which causes all kinds of
problems when other stuff is compiled using GCC, and you want them to work
together.  I'm attempting to compile it myself from the sources.  I've
generally always had good luck compiling from source with GCC, except with
this recent version.
 
This is what I'm using for options in ./configure
 
./configure --prefix=/opt/mysql --enable-thread-safe-client
--enable-local-infile 
 
Compiling MySQL-4.1.14 on HP-UX using GCC-4.0.1 fails with the linker error:
 
/usr/ccs/bin/ld: Unsatisfied symbols:
   Rotate_log_event::is_valid() (first referenced in slave.o) (data)
collect2: ld returned 1 exit status
 
The compile step fails in the directory mysql-4.1.14/sql
 
g++ -O3 -DDBUG_OFF -O3 -march=2.0 -fno-implicit-templates -fno-exceptions
-fno-rtti -DHPUX11 -DSNPRINTF_RETURN_TRUNC -DHAVE_BROKEN_PREAD
-DDONT_USE_FINITE -D_INCLUDE_LONGLONG -DNO_FCNTL_NONBLOCK
-DDO_NOT_REMOVE_THREAD_WRAPPERS -DHAVE_BROKEN_PTHREAD_COND_TIMEDWAIT -o
mysqld sql_lex.o sql_handler.o item.o item_sum.o item_buff.o item_func.o
item_cmpfunc.o item_strfunc.o item_timefunc.o thr_malloc.o item_create.o
item_subselect.o item_row.o item_geofunc.o field.o strfunc.o key.o
sql_class.o sql_list.o net_serv.o protocol.o sql_state.o lock.o my_lock.o
sql_string.o sql_manager.o sql_map.o mysqld.o password.o hash_filo.o
hostname.o set_var.o sql_parse.o sql_yacc.o sql_base.o table.o sql_select.o
sql_insert.o sql_prepare.o sql_error.o sql_update.o sql_delete.o uniques.o
sql_do.o procedure.o item_uniq.o sql_test.o log.o log_event.o init.o
derror.o sql_acl.o unireg.o des_key_file.o discover.o time.o opt_range.o
opt_sum.o records.o filesort.o handler.o ha_heap.o ha_myisam.o
ha_myisammrg.o ha_berkeley.o ha_innodb.o ha_isam.o ha_isammrg.o
ha_ndbcluster.o sql_db.o sql_table.o sql_rename.o sql_crypt.o sql_load.o
mf_iocache.o field_conv.o sql_show.o sql_udf.o sql_analyse.o sql_cache.o
slave.o sql_repl.o sql_union.o sql_derived.o client.o sql_client.o
mini_client_errors.o pack.o stacktrace.o repl_failsafe.o gstream.o spatial.o
sql_help.o protocol_cursor.o tztime.o my_time.o ha_example.o ha_archive.o
ha_tina.o ha_blackhole.o  ../innobase/usr/libusr.a ../innobase/srv/libsrv.a
 ../innobase/dict/libdict.a ../innobase/que/libque.a
../innobase/srv/libsrv.a ../innobase/ibuf/libibuf.a ../innobase/row/librow.a

../innobase/pars/libpars.a ../innobase/btr/libbtr.a ../innobase/trx/libtrx.a
../innobase/read/libread.a ../innobase/usr/libusr.a
 ../innobase/buf/libbuf.a ../innobase/ibuf/libibuf.a
../innobase/eval/libeval.a ../innobase/log/liblog.a ../innobase/fsp/libfsp.a
 ../innobase/fut/libfut.a ../innobase/fil/libfil.a
../innobase/lock/liblock.a ../innobase/mtr/libmtr.a
../innobase/page/libpage.a 
../innobase/rem/librem.a ../innobase/thr/libthr.a ../innobase/sync/libsync.a
../innobase/data/libdata.a ../innobase/mach/libmach.a
 ../innobase/ha/libha.a ../innobase/dyn/libdyn.a ../innobase/mem/libmem.a
../innobase/sync/libsync.a ../innobase/ut/libut.a ../innobase/os/libos.a
 ../innobase/ut/libut.a ../myisam/libmyisam.a ../myisammrg/libmyisammrg.a
../heap/libheap.a ../vio/libvio.a ../mysys/libmysys.a
 ../dbug/libdbug.a ../regex/libregex.a ../strings/libmystrings.a -lz
-lpthread -lcrypt -lnsl -lm -lpthread

/usr/ccs/bin/ld: Unsatisfied symbols:
   Rotate_log_event::is_valid() (first referenced in slave.o) (data)
collect2: ld returned 1 exit status
make[4]: *** [mysqld] Error 1
 
I'm using the GCC binary from the HP-UX Software Porting Archive site:
 
Output of GCC -v:
 
Using built-in specs.
Target: hppa2.0w-hp-hpux11.11
Configured with: ../gcc/configure 
Thread model: single
gcc version 4.0.1
 
- Patrick Briggs
 


RE: Linux vs. Windows?

2005-08-12 Thread Patrick Fowler
Folks,

Go with what you know best.  If you are a good Windows admin etc go with
windows.  If you are a good Linux/Unix admin go with Linux.  What little
performance gain from one or the other will be lost if you do not run a
tight ship all around.  Performance and stability goes way beyond what
OS is better.  Don't let anyone push you in one direction if you are
more comfortable with the other OS.  If you have little exposure with
both I would say Linux.  Not because it may or may not outperform
windows just because once you learn it it's under your belt.


 Same machine, any performance difference?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Help restoring

2005-04-18 Thread Patrick Campbell
I have backed a bugzilla database using:

mysqldump -u root -p bugs > bugzilla.20050418 and am trying to import it on
a new mysql server using

mysql -u root -p bugs < bugzilla.20050418

I'm getting
ERROR at line 84:

Line 84 is all garbled text...

Working with 3.23.58 ... Any thoughts?


-- 
Patrick Campbell
OurVacationStore.com
Website Administrator
Tel. 602.896.4729


Re: Use MySQL with Microsoft Office

2005-03-09 Thread Patrick Connolly
Somewhere about Wed, 09-Mar-2005 at 03:02PM +0100 (give or take), Jigal van 
Hemert wrote:

| From: Curtis Maurand
|  Using ODBC, however, you can link Access tables to MySQL tables and use
|  Access as the front end to MySQL.  It works very nicely.
| 
| Using ODBC you can access MySQL from OpenOffice.org (which has an
| Access-like frontend built in), using ODBC and MS Jet engine (standard with
| recent Windows versions) you can access and edit Access databases (*.mdb)
| with OpenOffice.org.

but only with Windoze, is it not?  I assume it's dependent on the OS
having the Jet engine.  I'd be very interested to know if anyone has
done an equivalent to that in Linux.




-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: # Seconds between two datetime fields

2005-03-03 Thread Patrick
The return you are getting is correct for the format you are using. A 90
second difference is in fact 1 minute, 30 seconds, which the naive
subtraction displays as 130.
To get the time difference in seconds, convert the datetime or timestamp to a
Julian date or Unix time and then process.

  SELECT start_time, end_time, UNIX_TIMESTAMP(end_time) -
UNIX_TIMESTAMP(start_time) FROM  mailings_sendstats order by start_time
desc;
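
A quick spot check with one of the start/end pairs from the original
message, just to show the arithmetic comes out right:

  SELECT UNIX_TIMESTAMP('2005-03-02 18:59:39')
       - UNIX_TIMESTAMP('2005-03-02 18:57:59') AS seconds;
  -- returns 100 (1 minute 40 seconds), where the plain
  -- end_time - start_time subtraction showed 180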

I hope this helps...

Pat...

CocoNet Corporation
SW Florida's 1st ISP

- Original Message - 
From: Brian Erickson [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, March 03, 2005 7:23 PM
Subject: # Seconds between two datetime fields


 We have a table that has two datetime fields (start_time and
 end_time). We'd like to be able to calculate the number of seconds
 between these two fields in a query. However, a simple end_time -
 start_time does not yield a correct result.

 SELECT start_time, end_time, end_time - start_time FROM
 mailings_sendstats order by start_time desc;
 +-+-+---+
 | start_time  | end_time| end_time - start_time |
 +-+-+---+
 | 2005-03-02 19:08:00 | 2005-03-02 19:08:53 |53 |
 | 2005-03-02 19:06:00 | 2005-03-02 19:07:31 |   131 |
 | 2005-03-02 19:04:00 | 2005-03-02 19:04:55 |55 |
 | 2005-03-02 19:02:00 | 2005-03-02 19:02:53 |53 |
 | 2005-03-02 19:00:00 | 2005-03-02 19:01:10 |   110 |
 | 2005-03-02 18:57:59 | 2005-03-02 18:59:39 |   180 |
 | 2005-03-02 18:56:00 | 2005-03-02 18:57:10 |   110 |
 | 2005-03-02 18:54:00 | 2005-03-02 18:55:20 |   120 |
 | 2005-03-02 18:52:00 | 2005-03-02 18:53:22 |   122 |
 | 2005-03-02 18:50:00 | 2005-03-02 18:51:26 |   126 |
 | 2005-03-02 18:48:00 | 2005-03-02 18:48:58 |58 |
 | 2005-03-02 18:46:00 | 2005-03-02 18:47:28 |   128 |
 etc

 As you can see, if the time difference is less than 1 minute, a
 correct result is returned. If the difference is 1 minute and 30
 seconds, '130' is returned instead of 90.

 I tried playing with the end_time - start_time conversion using
 different calculations. What I came up with is below.

 mysql SELECT start_time, end_time,
 -  end_time - start_time as cur,
 -  FLOOR((end_time - start_time) / 100) as num,
 -  (end_time - start_time) % 100 as mod,
 -  (FLOOR((end_time - start_time) / 100) * 60) + (end_time -
 start_time) % 100 AS seconds
 - FROM mailings_sendstats
 - ORDER BY id desc;

+-+-+-+--+--+-+
 | start_time  | end_time| cur | num  | mod  | seconds
|

+-+-+-+--+--+-+
 | 2005-03-02 19:10:00 | 2005-03-02 19:11:07 | 107 |1 |7 |  67
|
 | 2005-03-02 19:08:00 | 2005-03-02 19:08:53 |  53 |0 |   53 |  53
|
 | 2005-03-02 19:06:00 | 2005-03-02 19:07:31 | 131 |1 |   31 |  91
|
 | 2005-03-02 19:04:00 | 2005-03-02 19:04:55 |  55 |0 |   55 |  55
|
 | 2005-03-02 19:02:00 | 2005-03-02 19:02:53 |  53 |0 |   53 |  53
|
 | 2005-03-02 19:00:00 | 2005-03-02 19:01:10 | 110 |1 |   10 |  70
|
 | 2005-03-02 18:57:59 | 2005-03-02 18:59:39 | 180 |1 |   80 | 140
|
 | 2005-03-02 18:56:00 | 2005-03-02 18:57:10 | 110 |1 |   10 |  70
|
 | 2005-03-02 18:54:00 | 2005-03-02 18:55:20 | 120 |1 |   20 |  80
|
 | 2005-03-02 18:52:00 | 2005-03-02 18:53:22 | 122 |1 |   22 |  82
|
 | 2005-03-02 18:50:00 | 2005-03-02 18:51:26 | 126 |1 |   26 |  86
|
 | 2005-03-02 18:48:00 | 2005-03-02 18:48:58 |  58 |0 |   58 |  58
|
 | 2005-03-02 18:46:00 | 2005-03-02 18:47:28 | 128 |1 |   28 |  88
|
 etc

 First, I calculated the number of minutes (num), then seconds (mod),
 then total seconds (seconds). If you scroll through the results,
 you'll see most of them are accurate. However, when the beginning time
 is just before a new minute, the entire calculation is thrown off.
 Examples:

 | 2005-03-02 18:57:59 | 2005-03-02 18:59:39 | 180 |1 |   80 | 140
|
 | 2005-03-02 18:37:59 | 2005-03-02 18:38:53 |  94 |0 |   94 |  94
|

 So my question is, how can I fix this? I know there are several
 date/time functions that I could probably use, but they were not
 introduced until version 4.1 and I am stuck with version 3.23. We also
 modified the table to use timestamp(14) fields instead of datetime
 fields, but the same result occurs.

 Ultimately:

 a) Can someone think of a way to modify the query above so that it
produces
 *correct* results every time (maybe I'm missing something simple after
 working on this for so long)

 b) We're using this table to track execution time of PHP Cron scripts.
 We may be approaching this entirely the wrong way. If someone 

msAccess to Mysql on Linux

2005-02-26 Thread Patrick Marquetecken
Hi,

I'm trying to move databases from msaccess to Mysql and do this on a Linux
machine.
i have used following tools
mdbtools:
 mdb-export -d# ./recepten.mdb tblRecepten > recepten.txt
dos2unix:
 dos2unix recepten.txt
mysqlimport:
mysqlimport -uroot -p --fields-terminated-by='#'
--fields-optionally-enclosed-by='' --ignore-lines='1' --replace --verbose
recepten /tmp/recepten.txt

I see in some text fields:
4 stuks bizonmedaillon Covee

some strange signs like  and  anyone an idea how i get rid of it?

Patrick


-- 
Sex is like hacking. You get in, you get out, and you hope you didnt leave
something behind that can be traced back to you.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



xml import

2005-02-26 Thread Patrick Marquetecken
Hi,

I can't seem to figure out how to import an xml file into mysql 4.x.
I'm working on Linux; help would be fine.

Patrick
-- 
You're dead, Jim.
 -- McCoy, Amok Time, stardate 3372.7..

Fingerprint = 2792 057F C445 9486 F932 3AEA D3A3 1B0C 1059 273B
ICQ# 316932703
Registered Linux User #44550
http://counter.li.org





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Tables damaged

2005-02-01 Thread Patrick Marquetecken
Hi,

Snort had problems inserting data into the table, so I opened a
connection to the server and did: use snort;
i got this error:
Didn't find any fields in table 'data'
Didn't find any fields in table 'event'
Didn't find any fields in table 'tcphdr'
Database changed

With phpmyadmin these tables are marked in use

What can i do to get this back to work? I had no time to create a backup
script, which I shall create as fast as possible now.

Patrick


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Action after new/update record

2005-01-26 Thread Patrick Marquetecken
Hi,

I'm quite new with mySQL, i have bought the book Mastering MySQL4 but i don't 
find a solution for my problem there.
There are no triggers in version 4 if I'm correct, but i want to export a table 
(small one) every time there is a new record or a record is modified.

What's the way to do this? The manipulation of the data goes with php5 on a 
website.

TIA
Patrick


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MYSQL is getting slow

2004-12-10 Thread Patrick Marquetecken
On Thu, 9 Dec 2004 13:13:10 -0600
Jeff Smelser [EMAIL PROTECTED] wrote:

 On Thursday 09 December 2004 01:06 pm, Patrick Marquetecken wrote:
 
  and to answer Jeff Smelser: i have installed mysql 4.x on linux and then,
  from the command line, did create database and then ran the create table
  commands from snort.
 
 No idea.. you can do a show create table db.table to find out the engine 
 type..
 
 I ask because those should be myisam.. If they are. I cant see how memory is 
 getting so high. If they are innodb.. I would highly recommend looking into 
 converting them into myisam.. myisam was created for this type of work, would 
 be a easier on the mem.

i checked it, and its indeed myisam.
Patrick

 
 Just some avenues to look at.
 
 Jeff
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MYSQL is getting slow

2004-12-10 Thread Patrick Marquetecken
On Thu, 9 Dec 2004 12:34:36 -0800
Dathan Pattishall [EMAIL PROTECTED] wrote:

 This is a very broad question, I sometimes do this myself. But to answer
I know, but where must i begin ...

 the question to the best of my ability I have to ask a few.
 
 
 Are you using RAID? If so what RAID level?
no raid
 
 What are you're my.cnf settings?
standard except for the bind address
 
 What type of Indexes are you using?
On the largest table, which has only 2 fields, there are primary indexes on both 
fields. And each table has its own primary index, and some other normal 
indexes.

 
 What does vmstat and iostat say?
NIDS-console distfiles # free
 total   used   free sharedbuffers cached
Mem:507508 497872   9636  0   2232 399024
-/+ buffers/cache:  96616 410892
Swap:   506036  62384 443652

procs ---memory-- ---swap-- -io --system-- cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
 0  2  62384   5044   2380 40163611   251187   522 15  6 79  0
 1  0  62384   4412   2392 40229600  451113 1272  1859 17  5 79  0
 0  2  62384   4504   1840 40274400  455672 1325  1977 16  8 76  0
 2  0  62412   4960   1808 40140800  3959 0 1170  1866 35 13 52  0
 
 What Filesystem are you using?
ext3 
 
 What are some typical queires for a given schema?
I have no idea, that's the whole problem; it's a proprietary product.
 
 
 Typically for most orginizations mysql dedicated on the box below is
 wicked fast, even when not optimized because of system cache etc. But if
 your running out of diskspace then that's a problem in itself.
I got a lot of disk space left.


 
 -Original Message-
 From: Patrick Marquetecken [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, December 09, 2004 7:49 AM
 To: [EMAIL PROTECTED]
 Subject: MYSQL is getting slow
 
 Hi,
 
 I have 3 snort sensors logging to a central mySQL database after two
 weeks the size of the database is about 3.3GB and the machine is getting
 slow, as i'm not used to be working with mySQL is my question how far
 can i go before the machine stop responding ?
 
 HP Pentium4 - 2.2mhz - 512 mb, 99% used and no swap, HD of 40GB.
 
 TIA
 Patrick 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MYSQL is getting slow

2004-12-10 Thread Patrick
You are probably a little short of RAM for that size db.  Lack of swap may
bite you as well. How big are your keys?
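
One quick way to get a feel for that, assuming the database is called snort
as in the earlier messages:

  SHOW TABLE STATUS FROM snort;
  -- Index_length shows how much space each table's keys take;
  -- that is what ends up competing for your 512 MB of RAM
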
Pat...

- Original Message - 
From: mos [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, December 10, 2004 10:48 AM
Subject: Re: MYSQL is getting slow


 At 09:49 AM 12/9/2004, Patrick Marquetecken wrote:
 Hi,
 
 I have 3 snort sensors logging to a central mySQL database after two
weeks
 the size of the database is about 3.3GB and the machine is getting slow,
 as i'm not used to be working with mySQL is my question how far can i go
 before the machine stop responding ?
 
 HP Pentium4 - 2.2mhz - 512 mb, 99% used and no swap, HD of 40GB.
 
 TIA
 Patrick



 Patrick,
  Try running Analyze Table to get MySql to rebalance the key
 distribution. See http://dev.mysql.com/doc/mysql/en/ANALYZE_TABLE.html
Make
 sure you have plenty of disk space and time before running it.

 Mike


 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: spaces in table/column name

2004-12-10 Thread Patrick
We have always found using anything other than alphanumeric characters and
maybe an occasional underscore for clarity is a bad idea.  It will always
come back to bite you at some point.  Try not using spaces or punctuation in
table or column names and your life will be easier.  White space and
punctuation are frequently used as delimiters in other programs, os's and
applications, so when you use them in elements other than strings you often
limit the portability (i.e. import and export) of your structures.
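
That said, if you are stuck with an existing name that contains spaces,
MySQL will accept it as long as the identifier is quoted with backticks;
single quotes turn it into a string literal, which is why the INSERT in the
original post fails. A quick sketch with a made-up table:

  CREATE TABLE `tbl name with spaces` (col1 INT, `col name with spaces` INT);
  INSERT INTO `tbl name with spaces` (col1, `col name with spaces`) VALUES (15,16);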

I hope you find this information valuable.

Pat...

Patrick Sherrill
CocoNet Corporation
SW Florida's 1st ISP



- Original Message - 
From: sharif islam [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, December 09, 2004 4:07 PM
Subject: spaces in table/column name


 How does mysql deal with spaces in a table / column name? I am also using
 mysqlcc. Creating a table name with spaces from mysqlcc didn't give any
 error, but the following INSERT does:

 INSERT INTO 'tbl name with spaces' (col1, 'col name with spaces')
VALUES(15,16);

 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


MYSQL is getting slow

2004-12-09 Thread Patrick Marquetecken
Hi,

I have 3 snort sensors logging to a central mySQL database; after two weeks the 
size of the database is about 3.3GB and the machine is getting slow. As i'm not 
used to working with mySQL, my question is how far can i go before the 
machine stops responding?

HP Pentium4 - 2.2GHz - 512 MB RAM, 99% used and no swap, HD of 40GB.

TIA
Patrick 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Excel 2 mysql

2004-12-09 Thread Patrick Sherrill
David,
Ian's response looks fine, I would just add 'OPTIONALLY' to the 'ENCLOSED ' 
parameter:

mysql -e "LOAD DATA INFILE '/home/dlubowa/clients.csv' INTO TABLE bush 
FIELDS TERMINATED BY ','  OPTIONALLY ENCLOSED BY '\"'" .

This may not work if your entire record is wrapped in quotes as indicated in 
your email.  In this case you are going to need to cure your import file by 
removing the first and last quote that wraps your record.
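
For reference, the whole thing as one statement typed at the mysql prompt;
the path and table name are the ones from this thread, while LINES
TERMINATED BY and IGNORE 1 LINES are assumptions about the file (drop the
IGNORE if there is no header row):

  LOAD DATA INFILE '/home/dlubowa/clients.csv'
  INTO TABLE bush
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  IGNORE 1 LINES;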

I hope this helps.
Pat...
[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
there are just two issues that I would look at if his solution
- Original Message - 
From: Ian Sales [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Patrick Sherrill [EMAIL PROTECTED]
Sent: Thursday, December 09, 2004 9:53 AM
Subject: Re: Excel 2 mysql


David Ziggy Lubowa wrote:
On Thursday 09 December 2004 14:33, David Ziggy Lubowa wrote:
On Wednesday 08 December 2004 05:07, Patrick Sherrill wrote:
David,
Please provide the complete LOAD DATA INFILE command you used.
mysql -e "LOAD DATA INFILE '/home/dlubowa/clients.csv' INTO TABLE bush" 
 -p test

this is test output that is in my test table, this is how my info is
mysql select * from bush;
++--+-+-+-++
| clients| location | service 
| PhoneNo | contact | emails |
++--+-+-+-++
|| NULL | NULL 
| NULL| NULL| NULL   |
| CRS, Nsambya, HSDN 350 , 041-267733 / 077-7 | NULL | NULL 
| NULL| NULL| NULL   |
++--+-+-+-++

sorry for the distorted info, but basically the   CRS, Nsambya, HSDN 
350 , 041-267733 / 077-7  are all under clients field,  Yet the info is 
supposed to be distributed to all the columns of the table.  What could i 
be doing wrong here.

- add FIELDS TERMINATED BY ','  ENCLOSED BY '"' to your LOAD DATA INFILE
- ian
--
+---+
| Ian Sales  Database Administrator |
|   |
| eBuyer  http://www.ebuyer.com |
+---+
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MYSQL is getting slow

2004-12-09 Thread Patrick Marquetecken
On Thu, 09 Dec 2004 16:17:17 +
Darryl Waterhouse [EMAIL PROTECTED] wrote:

 On Thu, 2004-12-09 at 10:08 -0600, gerald_clark wrote:
 
  
  Patrick Marquetecken wrote:
  
  Hi,
  
  I have 3 snort sensors logging to a central mySQL database after two weeks 
  the size of the database is about 3.3GB and the machine is getting slow, 
  as i'm not used to be working with mySQL is my question how far can i go 
  before the machine stop responding ?
  
  HP Pentium4 - 2.2mhz - 512 mb, 99% used and no swap, HD of 40GB.
  

  
  I f  that is 99% disk used, you are already dead.
  That is why it is slow.
  You have no room to do anything.
  
 
 
 To be fair Gerald, I think he means he's out of memory (correct me if
 I'm wrong here Pat), but you simply *must* allow the machine work space
 to work things out.
 Adding some swap will actually help.
 
 How active is the server?
 
Indeed it's the memory; the computer has only mysql and apache installed on it, 
not even a graphical interface.
There is a lot of data from the snort sensors that is getting inserted, and 
from a console machine there is a lot of querying to the db.

and to answer Jeff Smelser: i have installed mysql 4.x on linux and then, from 
the command line, did create database and then ran the create table commands 
from snort.

Patrick

 Darryl Waterhouse
 ---
 Software Development Manager
 Orbis Monitoring Services
 
 t: +44 151 357 7800
 m: +44 7876 390212
 e: [EMAIL PROTECTED]
 
 Suite 3,
 Sycamore House,
 Lloyd Drive,
 Ellesmere Port,
 Cheshire,
 CH65 9HQ
 


-- 
Uhura, signal our surrender
 -- Kirk

Captain!!??
 -- Uhura

We surrender!!
 -- Kirk (Star Trek VI) 

Fingerprint = 2792 057F C445 9486 F932 3AEA D3A3 1B0C 1059 273B
ICQ# 316932703 
Registered Linux User #44550
http://counter.li.org



pgpnyBHzWdvri.pgp
Description: PGP signature


Re: Excel 2 mysql

2004-12-08 Thread Patrick Sherrill
David,
Please provide the complete LOAD DATA INFILE command you used.
Pat...
[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
- Original Message - 
From: David Ziggy Lubowa [EMAIL PROTECTED]
To: Eric Bergen [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, December 08, 2004 6:54 PM
Subject: Re: Excel 2 mysql


On Tuesday 07 December 2004 20:02, Eric Bergen wrote:
 The easiest way to turn excel into MySQL is to have excel save the
 file as a .csv (comma separated values) file and use the mysqlimport
 utility or a load data infile query to insert it into a table in
 MySQL.
 -Eric

cheers guys, i have managed to change my excel file into a .csv and done a
LOAD DATA INFILE and dumped the data in the mysql db. The only problem though
is that all the data is going into one field in the table yet it is comma
separated with the exact columns.

How can i solve this?
All help is highly appreciated.
-Z
On Tue, 7 Dec 2004 16:02:23 -0800, David Ziggy Lubowa
[EMAIL PROTECTED] wrote:
 Hey guys ,

 i know this has been discussed but looking through the archives i have
 more less hit a stand still , i have one big excel file which has data 
 i
 would like to extract., Now i am no expert in perl neither am i in php 
 ,
 i have looked at some tools like navicat but i dont see where you
 actually start the app , if that is what happens.  Any help is highly
 appreciated.

 cheers

 -Z

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: LOAD DATA INFILE question...

2004-11-21 Thread Patrick Connolly
Somewhere about Sat, 20-Nov-2004 at 06:27PM +0100 (give or take), Ferhat BINGOL 
wrote:

| Hi,
| 

| I have a 72 fields data txt file and I was inserting all data
| previously but now I need only some of them to dump into the table.

| I would like to select only 4 fields which are the 1st, 5th,28th
| and 71st fields.

| Is there a statement to do that.
| 

I think it would be simpler to pre-process the file using cut with the
appropriate delimiter if it's not tab-delimited already.  Then import
the reduced file.

HTH

-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: [users@httpd] November 2, 2004

2004-11-01 Thread Patrick Donker
GH wrote:
Greetings:
  This is just a friendly reminder that if you are registered in the
United States to VOTE on November 2, 2004 (TOMORROW)
  Need to know where you vote?
 Please see the attached file (it is an image) that contains some information
Do we care? Really? Unlikely. Maybe you should send your 'useful' info on 
 a national mailing list only.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Transfering data from postgresql to MySQL

2004-10-21 Thread Patrick Hsieh()
now I managed to dump table schema with pg_dump. However, 
is there any schema converting tool available? I don't want to edit
each table schema to make it mysql-compliant.

Ideas?

Pahud





On Mon, 18 Oct 2004 12:23:41 +0200, Jochem van Dieten [EMAIL PROTECTED] wrote:
 On Mon, 18 Oct 2004 18:08:24 +0800,  Patrick Hsieh wrote:
 
  I am planing to transfer data from postgresql to mysql. Is there any
  useful tools, scripts or  utilities to achieve this?
 
 pg_dump
 
 First dump the schema, edit that until you have something MySQL
 understands. Then dump the data using the -d option so you have full
 inserts instead of the usual COPY syntax and feed the file to the
 MySQL command line client.
 
 Jochem
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
Patrick Hsieh() [EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]  | ICQ: 97133580 
Skype: pahud_at_pahud.net | YIM: pahudnet
pub  1024D/203F7DF1 2001/12/25 Patrick Hsieh (Pahud) [EMAIL PROTECTED]
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0x203F7DF1

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Transfering data from postgresql to MySQL

2004-10-18 Thread Patrick Hsieh()
Hello list,

I am planing to transfer data from postgresql to mysql. Is there any
useful tools, scripts or  utilities to achieve this? Any infomation is
highly appreciated!

---
Patrick Hsieh() [EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]  | ICQ: 97133580 
Skype: pahud_at_pahud.net | YIM: pahudnet
pub  1024D/203F7DF1 2001/12/25 Patrick Hsieh (Pahud) [EMAIL PROTECTED]
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0x203F7DF1

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Telephone number column not working

2004-10-02 Thread Patrick Sherrill
Given the many 'standards' for formatting phone numbers, I would recommend 
using a char or varchar.  Regex is intended for string types.
Do yourself a favor: run an ALTER TABLE and change the column to a char or 
varchar.
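
Something along these lines, with the table and column names obviously made
up here:

  ALTER TABLE contacts MODIFY telephone VARCHAR(20);
  -- 20 characters leaves room for punctuation, country codes and the like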

I hope this helps...
Pat...
[EMAIL PROTECTED]
CocoNet Corporation
SW Florida's First ISP
- Original Message - 
From: GH [EMAIL PROTECTED]
To: Stuart Felenstein [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Saturday, October 02, 2004 8:09 AM
Subject: Re: Telephone number column not working


One issue could be that an unsigned int column can only hold up to
4294967295, a ten digit number. Plus, if you put it in the context of a
phone number... only area codes 428 or lower will have ALL THE
EXCHANGES and ALL THE UNIQUE NUMBERS in the range, with part of area
code 429.
A bigint will hold the complete range you are looking for. However,
I would suggest that since you most likely are not going to be doing
mathematical operations on a phone number, you use a varchar or
char field.
Maybe someone could correct me, but aren't regexes for strings only?
Gary

On Sat, 2 Oct 2004 04:59:45 -0700 (PDT), Stuart Felenstein
[EMAIL PROTECTED] wrote:
I have a field telephone.
Set to type :int:
Length: 11
It's  not working correctly, and not sure if it's my
application or something I have wrongly set up for the
database.
We are talking about U.S. Telephone numbers here, so 7
digits (area code, exchange, unique number)
Now it seems everything works up to the storing of 6
numbers.  Once I add the 7th number, everything goes
haywire.  The number gets transformed to some totally
different number and / or 0 (zero).
Now I had set up a validation , which I think would be
correct for a U.S. number:
[0-9\+\-\/ \(\)\.]+
Yet, even if I remove that regexp and let it validate
solely on integers: -{0,1}\d+
Nothing.
I thought perhaps enforcing the field to unsigned
might help, but no change.
One last note, I've now added some javascript to
enforce format.  This hasn't changed anything , better
or worse.  Same behaviour.  This is solely for making
sure client enters 111-111- format.  Just wanted
to include this in my information.
Well, if anyone has a clue, I'd appreciate the help.
Stuart
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Still can't get a response from MySQL AB

2004-09-23 Thread Patrick Connolly
Last week, I asked this list had others had problems contacting MySQL
AB. I promptly received a short note from 

[EMAIL PROTECTED] 

It suggested I could respond to that address which I tried to do, but
I get this:

   - The following addresses had transient non-fatal errors -
[EMAIL PROTECTED]

That later became permanent.  For some unexplained reason, the name
server is timing out.  Something similar happened when I tried
replying to email from this address:

 [EMAIL PROTECTED]

.. except the message was that the address doesn't exist at all --
despite the fact that I'd received mail from it.  This isn't a man in
the middle attack of some kind is it -- or am I simply dead and don't
know it?

What could possibly explain why the mysql.com domain would have such a
problem when others don't?  I can't for the life of me see what's
different between what works and what doesn't.

I'd prefer not to do the correspondence through this list which
already has lots of traffic.

Ideas are most welcome.

Thanx

 --
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Anyone else having trouble contacting MySQL AB?

2004-09-16 Thread Patrick Connolly
Somewhere about Mon, 13-Sep-2004 at 07:23PM +0300 (give or take), Egor Egorov wrote:

| Patrick Connolly [EMAIL PROTECTED] wrote:
| 
|  I've been trying to contact MySQL AB using the contact us link.  I
|  got a auto-response to the effect that I'd get a real reply in 2 days,
|  but that was a week ago.  My question was about a login that seems to
|  also go into a black hole.
|  
|  Has anyone else experienced similar?
| 
| Uncommon situation. Please try again.
| Which address you are writting to? 

Well, I've been at this for weeks now, and tried just about everything
I can think of short of turning up at the office.

I tried emailing the address I received my Order Number from, namely
[EMAIL PROTECTED]   That one bounced.

Then I tried logging on with the login and password I was given.  Not
recognised, so then I tried sending the email address the Order number
came to (not this one) on the off-chance I made a pasting error
getting the password into the browser.  That wasn't recognised, so I
still got nowhere.

Next I tried the Contact Us link
(http://www.mysql.com/company/contact/) and that's the one that sent
me the autoresponse that promised to have a real reply which didn't
eventuate.  I even tried the login and password again, but ... (guess).

I'm beginning to feel like the character in Flan O'Brien's The Third
Policeman about someone who's been dead for 20 years and doesn't know
it.

I'm not really dead, am I?  Bummer that.  I was thinking of doing the
Certification exam before that happened.

It does seem ominous that others don't have the same experience.

-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Anyone else having trouble contacting MySQL AB?

2004-09-13 Thread Patrick Connolly
I've been trying to contact MySQL AB using the contact us link.  I
got a auto-response to the effect that I'd get a real reply in 2 days,
but that was a week ago.  My question was about a login that seems to
also go into a black hole.

Has anyone else experienced similar?


-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



/usr/libexec/mysqld: unrecognized option `--key_buffer=64m'

2004-09-08 Thread Patrick Campbell
I'm trying to get my key_buffer_size to 64m (default is 8m).  I tried
putting it into my.cnf under [mysqld] with both key_buffer=64m and
key_buffer_size=64m separately of course, and when that didn't work tried
putting it in front of the: /usr/bin/safe_mysqld  --defaults-file=/etc/my.cnf
> /dev/null 2>&1   line in /etc/rc.d/init.d/mysqld.  All yield this error
shown in /var/log/mysqld:

/usr/libexec/mysqld: unrecognized option `--key_buffer=64m'
(or)
/usr/libexec/mysqld: unrecognized option `--key_buffer_size=64m'

MySQL is version 3.23.58 and RedHat version is Red Hat Linux release 9
(Shrike) 2.4.20.

Any suggestions?
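
One guess, from memory of the 3.23-era option handling (so please verify
against the 3.23 manual): on that version the buffer sizes generally had to
be passed through set-variable rather than as bare options, i.e. in
/etc/my.cnf:

  [mysqld]
  set-variable = key_buffer_size=64M

or, on the command line, -O key_buffer_size=64M.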

Patrick Campbell 
OurVacationStore.com
Website Administrator
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Getting rid of duplicates

2004-08-27 Thread Patrick Connolly
I read CSV files into a table using 

mysqlimport  --fields-terminated-by=',' --replace .

That would work fine and, by using the appropriate unique index, it
would deal with the fact that some of the data appears in multiple CSV
files.  However, though the same data is in several files, the date
format is not the same in all of them.  The result is that there are
duplicate records in the resulting table without violating unique
keys.

I devised a way of adding an additional column and converting the
dates into a proper date format thereby making the duplicates
detectable.  There are several ways I can get rid of them:

a) Write out the table using SELECT DISTINCT * INTO OUTFILE with the
appropriate GROUP BY.  Then truncate the table and read back from the
OUTFILE using LOAD DATA INFILE.

b) Create a new table with the same structure but with a slightly
different multiple-column unique key.  Use REPLACE INTO the new table
and the new key will remove the duplicates.  Truncate the original
table, and copy back to the records from the interim table.


There's a major hassle with method a).  The OUTFILE has to be removed
before the same file name can be used again and that requires access
privileges to the database directory.  I'm not entirely happy with
method b) either since it is not particularly scalable nor generic.
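
In concrete terms, method b) comes down to a handful of statements. A sketch
only; the column list and the widened unique key below are invented, and the
real table also carries the extra fixed-date column described above:

  CREATE TABLE Bank_tmp (
    acct    VARCHAR(20)   NOT NULL,
    tx_date DATE          NOT NULL,
    amount  DECIMAL(10,2) NOT NULL,
    UNIQUE KEY dedup (acct, tx_date, amount)
  );
  REPLACE INTO Bank_tmp SELECT acct, tx_date, amount FROM Bank;
  DELETE FROM Bank;
  INSERT INTO Bank SELECT * FROM Bank_tmp;
  DROP TABLE Bank_tmp;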

I'd have thought this would be an FAQ, but it appears not so from my
searching.  I normally work with a language called S (dialect R) which
handles matrices in a multitude of ways.  Removing duplicates is
straightforward in that language, but from what I know about SQL so
far, it is rather complicated in MySQL.


What do other people do with duplicates?

TIA

-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Errata in the Study Guide

2004-08-18 Thread Patrick Connolly
Somewhere about Tue, 10-Aug-2004 at 02:19PM +0200 (give or take), Carsten Pedersen 
wrote:

| Hi Patrick,
| 
| On Tue, 2004-08-10 at 12:16, Patrick Connolly wrote:
|  Is this the most appropriate list to mention misprints?  There doesn't
|  seem to be an indication where additional suggestions are to be sent.
| 
| This could be a good starting place if you want to discuss something in
| general - if you have specifics that are not already mentioned in the
| Certification Study Guide Errata -
| http://www.mysql.com/training/certification/studyguides/ - then please
| send them to [EMAIL PROTECTED]
| 
|  I found something that, though not exactly incorrect, works for
|  reasons other than what a reader might think, so it's misleading.
| 
| Posting your thinking to the list might be a good way to raise people's
| awareness of the issues involved. That is of course said with no
| knowledge of what kind of issues you have in mind :-)

I've now done that.

For anyone else who is interested to know, the errata list mentions
the answer to Question 8.6 (p291) wherein a WHERE statement seems to
be working with an alias when we know that wouldn't work.  In fact, it
works because the alias is only different from what it's aliasing by
virtue of its case.  Since column name matching is not case sensitive,
it's really the column name itself which is being referenced, and so
the query doesn't fall over.  A real alias would not match.
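
A tiny illustration of the distinction, using an invented table t with a
column price:

  SELECT price AS PRICE FROM t WHERE PRICE > 10;  -- works: PRICE matches the column name (case-insensitively)
  SELECT price AS cost  FROM t WHERE cost > 10;   -- fails: cost is a true alias, and aliases are not visible in WHERE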

The answer to Question 5.12c (p178) is similarly misleading.


Best

-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Errata in the Study Guide

2004-08-10 Thread Patrick Connolly
Is this the most appropriate list to mention misprints?  There doesn't
seem to be an indication where additional suggestions are to be sent.

I found something that, though not exactly incorrect, works for
reasons other than what a reader might think, so it's misleading.



-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Specifying table with mysqlimport

2004-08-09 Thread Patrick Connolly
I have a bunch of CSV files that I wish to import into a table.  I can
do that with LOAD DATA INFILE INTO TABLE ..

but there's a bunch of them with different names (naturally) and I
wish to do them in one go.  From my understanding, there is no way to
specify the table since it is inferred from the name of the file.

Is there a smarter way to deal with this than making a shell script
that copies the CSV files in turn to a name that matches the table I
wish to add data to, and then use that name in a loop?

If my table is Bank, this will work:

for i in *CSV; do cp $i Bank.CSV; 
mysqlimport --fields-terminated-by=',' --ignore-lines=1 db_name Bank.CSV; 
done

Something tells me that greater minds have a better way.


-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE insists on being LOCAL

2004-08-02 Thread Patrick Connolly
Somewhere about Sun, 01-Aug-2004 at 11:31AM -0400 (give or take), Michael Stassen 
wrote:

| 
| Patrick Connolly wrote:

[...]

|  Looks to me the mysql user should have no trouble with it:
|  
|  -rw-rw-r--1 pat  pat   332 Jun 28 20:42 Orders.txt
| 
| Every piece of the path to this file must be executable by mysql, as well.

I think that's the main problem I have.  Since it's in a directory
beginning with /home/ and that directory is rwx--, one would have
to change that far back.  Since this machine is not used by anyone
else, perhaps it would not be a problem changing that.  Is that what
people normally do?  Anything else I can think of seems incredibly
complicated.


| 
|  | : Also, to use LOAD DATA INFILE on server files, you must have
|  | : the FILE privilege. See section 5.5.3 Privileges Provided by MySQL.
|  
|  Think we can count that one out as the problem since LOCAL which would
|  have the same requirement does work.
| 
| The FILE privilege is not required with LOCAL.  It is required without 
| LOCAL, and for SELECT INTO OUTFILE.  My bet would be that you don't have the 
| FILE privilege.  You can check with

Good guess.  That was part of the problem (though I'd not have guessed
from the error message).  I forgot that GRANT ALL does not include
FILE.
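
For anyone else who hits the same wall: FILE is a global privilege, so it can
only be granted at the *.* level; something like the following (the account
name is just an example):

  GRANT FILE ON *.* TO 'pat'@'localhost';
  SHOW GRANTS FOR 'pat'@'localhost';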

[...]

| How LOAD DATA LOCAL works was changed in 3.23.49 and 4.0.2.  This
| is documented here
| http://dev.mysql.com/doc/mysql/en/LOAD_DATA_LOCAL.html.

I had read that, but I'd not made the connexion with the error message.

Thanks again.

best

-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA INFILE insists on being LOCAL

2004-08-01 Thread Patrick Connolly
Somewhere about Sat, 31-Jul-2004 at 11:17AM -0400 (give or take), Michael Stassen 
wrote:

| With LOCAL, the *client* reads the file on the client's machine.
| Without LOCAL, the *server* reads the file on the server's
| machine.  Even though the client and server machines are the same
| in your case, those are still different operations.  There are
| restrictions on having the server do the work, for good reason.
| This is documented in the manual
| http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html:

I'm pretty sure I understand the reasons.

| : For security reasons, when reading text files located on the server,
| : the files must either reside in the database directory or be readable
| : by all.  

Looks to me the mysql user should have no trouble with it:

-rw-rw-r--1 pat  pat   332 Jun 28 20:42 Orders.txt


| : Also, to use LOAD DATA INFILE on server files, you must have
| : the FILE privilege. See section 5.5.3 Privileges Provided by MySQL.

Think we can count that one out as the problem since LOCAL which would
have the same requirement does work.

I can't be absolutely sure but I seem to remember I did not have this
problem when I used 3.23.47 before I 'rpm -U'ed to 4.0.18.  With the
Redhat distro version, I could *not* use LOAD DATA LOCAL unless I
started the client with --local-infile[=1] which seems to fit my
understanding of the docs.  With 4.0.18, it's unnecessary which was
another surprise to me.  Is there something I'm missing here?


| 
| Michael

Thanks Michael.


-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



LOAD DATA INFILE insists on being LOCAL

2004-07-31 Thread Patrick Connolly
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5 to server version: 4.0.18-standard

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql 
mysql LOAD DATA INFILE "Orders.txt" INTO TABLE Orders3 fields terminated by '\t';
ERROR 1045: Access denied for user: '[EMAIL PROTECTED]' (Using password: NO)

Even though the server is on the same machine as the client, I always
get this error message if I try it without LOCAL.

If I add the word LOCAL, it is quite happy even though I know it
shouldn't be necessary.

That hasn't bothered me very much, but now that I'm working on Chapter 9 of
the Certification Study Guide, which covers this topic and the related
SELECT INTO OUTFILE, I don't have a workaround.

I'm using Redhat 7.3 with the mysql RPMs from the MySQL site, not the
ones that came with the distro.  So far, everything else seems to work
properly, but I'm mystified why this should happen.

Somehow, I doubt that anyone will be able to replicate this problem,
so that makes it unlikely anyone will have an answer, but one can't be
sure.  At one stage I thought it might be an obscure hardware
difficulty with this aged machine (over 5 years) because of another
obscure problem I had using fetchmail from a POP server.  However, I
noticed that once I switched off the ISP's virus checking, that
problem vanished, so I'm less inclined to believe it's hardware.

Any wild guesses welcome.

TIA

-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



problems transferring database

2004-07-23 Thread james patrick
I previously had a server running RH 7.3, cPanel 9.41 and MySQL
4.0.20. I'm moving to a different server running Fedora 1, DirectAdmin
and MySQL 4.0.17.

I have a large database (200mb) and I'm trying to move it over.

I made a dump using mysqldump -u USER -pPASSWORD DATABASE >
filename.sql, transferred it between servers via SCP, and tried
importing it using mysql -u USER -pPASSWORD DATABASE < filename.sql.

After the last command runs for about an hour and a half, I get:

ERROR 1067 at line 161570: Invalid default value for 'membersince'

OK, so I look in the dump, and line 161570 has no reference to
'membersince'. The first occurance of that field is in line 326189.

Any clue?

Is this a bug with MySQL 4.0.17 and should I update?

I've tried creating new dumps, transferring it directly to the server,
transferring it from server -> my pc -> server, ftp, scp, etc. It
makes this error every time.

Any help is appreciated.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Need help optimizing query

2004-07-15 Thread Patrick Drouin
 Hello everyone,

I'm having a hard time with the following query. It
retrieves about 3K rows from a few tables. One of them
contains over 40M rows. When run on a 3GHz server with
1G of RAM it returns the rows in more than 1 minute. I
don't think that's normal.

Here's the output of EXPLAIN:

mysql explain SELECT ti.posi, ti.docid, d.filename,
ti.id, c.name FROM corpus_documents cd, corpus c,
documents d, tokens_ins ti, tokens t WHERE
c.corpusid=4 AND cd.corpusid=c.corpusid AND
cd.docid=d.docid AND t.docid=d.docid AND ti.id=t.id
AND t.word='police' AND t.docid=ti.docid\g;

+-------+--------+---------------------------------------------------------------+---------------+---------+---------------+------+--------------------------+
| table | type   | possible_keys                                                 | key           | key_len | ref           | rows | Extra                    |
+-------+--------+---------------------------------------------------------------+---------------+---------+---------------+------+--------------------------+
| c     | ref    | PRIMARY,corpus_corpusid                                       | PRIMARY       | 3       | const         |    1 | Using where; Using index |
| t     | ref    | PRIMARY,tokens_id,tokens_docid,tokens_word,tokens_word_docid | PRIMARY       | 30      | const         |   24 | Using where              |
| ti    | ref    | PRIMARY,tokens_ins_id,tokens_ins_docid                       | tokens_ins_id | 4       | t.id          |   96 | Using where              |
| d     | ref    | PRIMARY,documents_docid                                      | PRIMARY       | 3       | t.docid       |    3 | Using index              |
| cd    | eq_ref | PRIMARY                                                       | PRIMARY       | 6       | const,d.docid |    1 | Using where; Using index |
+-------+--------+---------------------------------------------------------------+---------------+---------+---------------+------+--------------------------+
5 rows in set (0.00 sec)

It seems to be using indexes as expected and it does
not seem to look at that many rows. Here's tthe query
chewed up and nicely displayed:

SELECT ti.posi, ti.docid, d.filename, ti.id, c.name
FROM corpus_documents cd, corpus c, documents d,
tokens_ins ti, tokens t
WHERE c.corpusid=4
AND cd.corpusid=c.corpusid
AND cd.docid=d.docid
AND t.docid=d.docid
AND ti.id=t.id
AND t.word='police'
AND t.docid=ti.docid;
...
3791 rows in set (1 min 29.78 sec)

Here are descriptions of the tables at play :
mysql desc tokens_ins;
+--+---+--+-+-+---+
| Field | Type | Null | Key | Default | Extra |
+--+---+--+-+-+---+
| id | int(11) | | MUL | 0 | |
| posi | int(11) | | PRI | 0 | |
| docid | mediumint(20) | | PRI | 0 | |
| originalspelling | varchar(30) | | | | |
+--+---+--+-+-+---+
4 rows in set (0.02 sec)


mysql desc tokens;
+-++--+-+-+---+
| Field | Type | Null | Key | Default | Extra |
+-++--+-+-+---+
| id | int(11) | | MUL | 0 | |
| docid | mediumint(20) | | PRI | 0 | |
| word | varchar(30) binary | | PRI | | |
| pos | varchar(10) | | PRI | 0 | |
| absfreq | mediumint(20) | | MUL | 0 | |
+-++--+-+-+---+
5 rows in set (0.00 sec)

mysql desc corpus;
+----------+---------------+------+-----+---------+----------------+
| Field    | Type          | Null | Key | Default | Extra          |
+----------+---------------+------+-----+---------+----------------+
| corpusid | mediumint(20) |      | PRI | NULL    | auto_increment |
| name     | varchar(30)   |      | PRI |         |                |
| language | char(3)       |      | MUL |         |                |
+----------+---------------+------+-----+---------+----------------+
3 rows in set (0.00 sec)

mysql desc documents;
+-------------+---------------------+------+-----+---------+-------+
| Field       | Type                | Null | Key | Default | Extra |
+-------------+---------------------+------+-----+---------+-------+
| docid       | mediumint(20)       |      | PRI | 0       |       |
| filename    | varchar(30)         |      | PRI |         |       |
| language    | char(3)             |      |     |         |       |
| description | varchar(255) binary | YES  |     | NULL    |       |
+-------------+---------------------+------+-----+---------+-------+
4 rows in set (0.00 sec)

mysql desc corpus_documents;
+----------+---------------+------+-----+---------+-------+
| Field    | Type          | Null | Key | Default | Extra |
+----------+---------------+------+-----+---------+-------+
| corpusid | mediumint(20) |      | PRI | 0       |       |
| docid    | mediumint(20) |      | PRI | 0       |       |
+----------+---------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

Can anybody give me a hand speeding up this thing? I'm
running out of ideas.

Thanks,
P








__
Do you Yahoo!?
Yahoo! Mail Address AutoComplete - You start. We finish.
http://promotions.yahoo.com/new_mail 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Need help optimizing query

2004-07-15 Thread Patrick Drouin
Bonjour Arnaud,

--- Arnaud [EMAIL PROTECTED] wrote:
 On 15 Jul 2004 at 6:27, Patrick Drouin
 Your indexes look good, but I see that you have some
 varchar fields. 
 Maybe
 you could run an optimize table on these tables?

I'm running it at the moment, I will follow-up on the
list when it's done. It could take a while I guess. 

 Also, you are 
 talking
 about a 40M rows table. If it is a read only MyISAM
 table, I might 
 try
 compression.

 It's mainly read-only, I sometimes batch load some
data but users don't update at all. I'll look into
compression and see what it is about.

Thanks,
Patrick

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


