FW: MySQL Indexes

2011-10-07 Thread Jerry Schwartz
>-Original Message-
>From: Reindl Harald [mailto:h.rei...@thelounge.net]
>Sent: Friday, October 07, 2011 12:21 PM
>To: mysql@lists.mysql.com
>Subject: Re: MySQL Indexes
>
>but could this not be called a bug?
>
[JS] No.

Think of two telephone books: one is sorted by first name, last name and the 
other is sorted by last name, first name. (Those are like your two keys, f1/f2 
and f2/f1.)

If you want to find someone by their first name, you use the first book. If 
you want to find somebody by their last name, you use the second book.

If you want to find someone by their last name, the first book (key f1/f2) is 
useless. If you want to find someone by their first name, the second book 
(f2/f1) is useless.
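
In SQL terms, a minimal sketch (the table, column, and index names here are 
made up for illustration):

    CREATE TABLE t (
        f1 VARCHAR(50),
        f2 VARCHAR(50),
        KEY idx_f1_f2 (f1, f2),   -- the book sorted by first name, last name
        KEY idx_f2_f1 (f2, f1)    -- the book sorted by last name, first name
    );

    SELECT * FROM t WHERE f1 = 'Jerry';     -- can use idx_f1_f2, not idx_f2_f1
    SELECT * FROM t WHERE f2 = 'Schwartz';  -- can use idx_f2_f1, not idx_f1_f2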

Does that help explain it?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com




>Am 07.10.2011 18:08, schrieb Michael Dykman:
>> When a query selects on field_a and field_b, that index can be used.  If
>> querying on field_a alone, the index again is useful.  Query on field_b
>> alone however, that first index is of no use to you.
>>
>> On Fri, Oct 7, 2011 at 10:49 AM, Brandon Phelps  wrote:
>>
>>> This thread has sparked my interest. What is the difference between an
>>> index on (field_a, field_b) and an index on (field_b, field_a)?
>>>
>>>
>>> On 10/06/2011 07:43 PM, Nuno Tavares wrote:
>>>
>>>> Neil, whenever you see multiple fields you'd like to index, you should
>>>> consider, at least:
>>>>
>>>> * The frequency of each query;
>>>> * The occurrences of the same field in multiple queries;
>>>> * The cardinality of each field;
>>>>
>>>> There is a tool "Index Analyzer" that may give you some hints, and I
>>>> think it's maatkit that has a tool to run a "query log" to find good
>>>> candidates - I've seen it somewhere, I believe
>>>>
>>>> Just remember that idx_a(field_a,field_b) is not the same, and is not
>>>> considered for use, the same way as idx_b(field_b,field_a).
>>>>
>>>> -NT
>>>>
>>>>
>>>> Em 07-10-2011 00:22, Michael Dykman escreveu:
>>>>
>>>>> Only one index at a time can be used per query, so neither strategy is
>>>>> optimal.  You need at look at the queries you intend to run against the
>>>>> system and construct indexes which support them.
>>>>>
>>>>>  - md
>>>>>
>>>>> On Thu, Oct 6, 2011 at 2:35 PM, Neil Tompkins
>>>>> **wrote:
>>>>>
>>>>>  Maybe that was a bad example.  If the query was name = 'Red' what index
>>>>>> should I create ?
>>>>>>
>>>>>> Should I create a index of all columns used in each query or have a
>>>>>> index
>>>>>> on individual column ?
>>>>>>
>>>>>>
>>>>>> On 6 Oct 2011, at 17:28, Michael Dykman  wrote:
>>>>>>
>>>>>> For the first query, the obvious index on score will give you optimal
>>>>>> results.
>>>>>>
>>>>>> The second query is founded on this phrase: "Like '%Red%' " and no 
>>>>>> index
>>>>>> will help you there.  This is an anti-pattern, I am afraid.  The only
>>>>>> way
>>>>>> your database can satisfy that expression is to test each and every
>>>>>> record
>>>>>> in the that database (the test itself being expensive as infix finding
>>>>>> is
>>>>>> iterative).  Perhaps you should consider this approach instead:
>>>>>> http://dev.mysql.com/doc/refman/5.5/en/fulltext-natural-language.html
>>>>>>
>>>>>> On Thu, Oct 6, 2011 at 10:59 AM, Tompkins Neil <neil.tompk...@googlemail.com> wrote:
>>>>>>
>>>>>>  Hi,
>>>>>>>
>>>>>>> Can anyone he

RE: How MyISAM handle auto_increment

2011-10-07 Thread Jerry Schwartz
>-Original Message-
>From: Lucio Chiappetti [mailto:lu...@lambrate.inaf.it]
>Sent: Thursday, October 06, 2011 3:18 AM
>To: Jerry Schwartz
>Cc: Mysql List
>Subject: RE: How MyISAM handle auto_increment
>
>On Wed, 5 Oct 2011, Jerry Schwartz wrote:
>
>> Can't you use
>> CREATE TABLE  LIKE 
>> and then reset the auto-increment value?
>
>Thanks. Since when does "create table like" exist? I was unaware of it,
>but I see it exists in mysql 5.1. The tricks I described worked since 3.x
>or thereabouts.
>
[JS] I don't know when it was introduced. I never used anything before 4.0, 
and I don't remember when I first used that command.
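
For what it's worth, the combination looks something like this (a sketch; the 
table names are hypothetical):

    CREATE TABLE new_table LIKE old_table;      -- copies the column and index definitions
    ALTER TABLE new_table AUTO_INCREMENT = 1;   -- reset or adjust the counter on the empty copy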

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com





>--
>
>Lucio Chiappetti - INAF/IASF - via Bassini 15 - I-20133 Milano (Italy)
>
>Citizens entrusted of public functions have the duty to accomplish them
>with discipline and honour
>   [Art. 54 Constitution of the Italian Republic]
>
>For more info : http://www.iasf-milano.inaf.it/~lucio/personal.html
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Community Support better than Official Support? (was: Can I Develop using Community Edition and Deploy onto Enterprise Edition??)

2011-09-22 Thread Jerry Schwartz
That's a great attitude. I always appreciate it when a support organization 
listens to users. I've been on both sides of the fence, and I always hated it 
when there //was// a fence.

One thing I always favored, again as both a user and as a tech support 
professional, was a public list of known bugs (excluding security-related 
ones, of course). It saves a lot of head-banging, the kind of frustration that 
can lead to a very high level of anger.

Just last week I spent a day trying to get a particular feature to work (in a 
completely unrelated product), not knowing that it was flat out broken. The 
company in question has three options:

- Paid support: they gave me a free trial, and I quickly discovered that it 
was useless. Their only response was "Take two reboots and call me in the 
morning."

- E-mail support: bitbuc...@blackhole.com

- The user forum: it has many experienced users, some beta testers, and 
(because the product is used world-wide) a response time measured in hours. 
What it doesn't have is any presence from the company.

As you can imagine, if not for the other users this program would be just a 
bad memory.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com


>-Original Message-
>From: Shawn Green (MySQL) [mailto:shawn.l.gr...@oracle.com]
>Sent: Thursday, September 22, 2011 9:53 AM
>To: john.dais...@butterflysystems.co.uk
>Cc: John Daisley; Claudio Nanni; Johan De Meersman; Alastair Armstrong;
>mysql@lists.mysql.com
>Subject: Community Support better than Official Support? (was: Can I Develop
>using Community Edition and Deploy onto Enterprise Edition??)
>
>This comment has me intrigued:
>
>On 9/21/2011 17:50, John Daisley wrote:
>> Partitioning is available in the community edition and has been for a
>> while now. Support is the only real difference and since Oracle took
>> over the support available in the community is usually faster and better
>> than you get from Oracle.
>>
>
>I work in MySQL Support and other than the tools that we were given to
>work with, very little should have changed in our attitude, our
>knowledge, or our level of professionalism (that I am aware of). Perhaps
>there are things that the other support providers are doing better?
>
>Please use this thread as a forum to which you can vent all of your
>complaints or concerns about MySQL support or to describe ways in which
>the other support systems are better. If it's policy changes, tell us.
>If it's response times, tell us. If it's our level of services, tell us.
>If you don't like the font on the web site, tell us. This is your chance
>to completely rip us a new one and to brag about your favorite service
>offerings at the same time.
>
>All opinions about any support providers are welcome.
>
>Thank you kindly,
>--
>Shawn Green
>MySQL Principal Technical Support Engineer
>Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
>Office: Blountville, TN
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: asking question

2011-09-21 Thread Jerry Schwartz
I should have mentioned that there is an example for mysql in the 
documentation: http://www.php.net/manual/en/mysql.examples-basic.php. It is 
the simplest interface, but it is not object-oriented. Mysqli can be used 
either procedurally or in an object-oriented style.

The differences between mysql, mysqli, and PDO are on page 
http://www.php.net/manual/en/mysqli.overview.php.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com


>-Original Message-
>From: valery-christian [mailto:valechris2...@yahoo.fr]
>Sent: Wednesday, September 21, 2011 8:17 AM
>To: mysql@lists.mysql.com
>Subject: asking question
>
>the meaning of Warning: mysql_fetch_array(): supplied argument is not a valid
>MySQL result resource in C:\Program Files\EasyPHP 2.0b1\www\memo3\index.php 
>on
>line 64




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: asking question

2011-09-21 Thread Jerry Schwartz
It is important to call all of the MySQL functions in the right order. It is 
very easy to skip one if you are just learning. I'm not going to give you the 
exact function calls, because there are several sets. Pick a set (PDO works 
with different databases, so it is most portable) and then keep these steps in 
mind:

1. Connect to the database server; this returns a resource (handle) for the 
database server
2. Select the database, if the first step didn't already do that
3. Execute a query, using the resource from step 1; this will return a value 
that you will use in the next step
4a. If the query returned a set of records (a valid SELECT, for example), you 
can now use the functions (such as mysql_fetch_array) that fetch the data for 
a row
4b. If the query did NOT return a set of records, the query would have 
returned a Boolean value to tell you if it worked or not
5. Free up the result set

That's the general sequence. The details will be different for mysql, mysqli, 
and PDO.

Your error message means that either you skipped step 1, 2, or 3; or that step 
3 didn't return a record set. It could have been something like an INSERT, or 
it could have been a SELECT that wasn't valid.

Prepared statements use a more complicated sequence.

I hope that helps.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com


>-Original Message-
>From: valery-christian [mailto:valechris2...@yahoo.fr]
>Sent: Wednesday, September 21, 2011 8:17 AM
>To: mysql@lists.mysql.com
>Subject: asking question
>
>the meaning of Warning: mysql_fetch_array(): supplied argument is not a valid
>MySQL result resource in C:\Program Files\EasyPHP 2.0b1\www\memo3\index.php 
>on
>line 64




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: table design question

2011-09-19 Thread Jerry Schwartz
>-Original Message-
>From: Richard Reina [mailto:gatorre...@gmail.com]
>Sent: Monday, September 19, 2011 9:55 AM
>To: mysql@lists.mysql.com
>Subject: table design question
>
>I want to create a US geography database. So far I have categories such as
>state nick names (some states have more than one), state mottos (text 25 to
>150 characters), state name origins (100-300 characters), state "trivial
>facts", entry into union.  My question is; would it be better to keep at
>least some of this information in separate tables like:
>
>state_basic
>ID | name | Incorporation | Entry in Union| Name_origin | Motto
>
>state_nicknames
>ID | name | nick_name|
>
>state_trivia
>ID | name | fact
>
>or would it be better for queries to try to put all this information in one
>table?
>
[JS] Use separate tables. Unless you have a //very// good reason, you should 
always try to normalize your data.

In other words, use separate tables unless you are positive that you will 
//always// have 1:1 relationships between the various fields. For example, 
even such a simple thing as the date of incorporation might have more than one 
value in the case of the original colonies, the independent republics (Texas, 
California), and (I'm not sure about these) the Dakotas and West Virginia.
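
In rough outline, the normalized version of your sketch might look like this 
(the column types are guesses on my part):

    CREATE TABLE state (
        id               INT AUTO_INCREMENT PRIMARY KEY,
        name             VARCHAR(50) NOT NULL,
        entry_into_union DATE,
        name_origin      VARCHAR(300),
        motto            VARCHAR(150)
    );

    CREATE TABLE state_nickname (
        state_id  INT NOT NULL,           -- refers to state.id
        nick_name VARCHAR(100) NOT NULL
    );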

Did you know that Maine was once part of Massachusetts? You could put that 
kind of thing into a trivia record, but that might make it harder to use in 
the future. My personal philosophy is that it is easier to scramble an egg 
than to unscramble it. You might someday need to keep track of which states 
were originally part of other states.

And remember, those things that will never happen will happen the day before 
your vacation. The last thing you want to hear is "Richard, before you leave I 
need you to..." (I have 45 years of experience with that.)


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: locked non-existent row

2011-09-01 Thread Jerry Schwartz
>> The ability to lock non-existent records is critical.
>
>I concur, although this is just a transaction consisting of 1 statement :).
>--
[JS] As the Three Musketeers used to say, "One for all and all for one!"

Peter, I couldn't even //understand// that paragraph about key gaps. The 
document writers might have assumed that this aspect of locking was universal 
and well known, and didn't think it needed explanation.

As for MS SQL Server, I would be dumbfounded if it worked differently; but 
I've been dumbfounded before by software design.

My standard answer for "Why does it work that way?" is "I am not a mental 
health professional, nor do I play one on TV."

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: locked non-existent row

2011-09-01 Thread Jerry Schwartz
>-Original Message-
>From: Peter Brawley [mailto:peter.braw...@earthlink.net]
>Sent: Wednesday, August 31, 2011 10:40 AM
>To: r...@grib.nl; mysql@lists.mysql.com
>Subject: Re: locked non-existent row
>
>On 8/31/2011 4:50 AM, Rik Wasmus wrote:
>>> While a transaction in one thread tries to update a non-existent InnoDB
>>> row with a given key value, an attempt to insert that value in another
>>> thread is locked out. Does anyone know where this behaviour is documented?

[JS] Forgive my ignorance, but I thought that was standard behavior for a row- 
or row-range lock (not just MySQL) in any DBMS that supported row locking. 
(Back when these things were first being invented, one term was "predicate 
locking.") The general idea was that you are locking rows that meet certain 
criteria, whether any or all of them exist or not. You're locking not only the 
existence, but the potential existence, of those rows.

I would expect it to apply not only to keys, but to any set. For example,

SELECT * FROM `t` WHERE `t`.`x` < 3 FOR UPDATE;

should lock all rows where `t`.`x` < 3 for update, insertion, or deletion --  
regardless of whether or not `x` is a key. Otherwise you have no way of 
knowing who wins.

The ability to lock non-existent records is critical.

Try it, you'll see.
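
A minimal sketch of trying it in two sessions (this assumes an InnoDB table `t` 
with a column `x`, and the default REPEATABLE READ isolation level):

    -- Session 1:
    START TRANSACTION;
    SELECT * FROM `t` WHERE `t`.`x` < 3 FOR UPDATE;

    -- Session 2: this blocks until session 1 commits or rolls back,
    -- even if no row with x = 2 exists yet.
    INSERT INTO `t` (`x`) VALUES (2);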


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Hungarian Notation [Was Re: Too many aliases]

2011-08-08 Thread Jerry Schwartz
I was a reluctant convert, and still don't use Hungarian notation 
consistently; but in something like MS Access, where you might want to 
associate a label with a field, things like "lblCompany" and "txtCompany" make 
a lot of sense.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com

>-Original Message-
>From: Martin Gainty [mailto:mgai...@hotmail.com]
>Sent: Saturday, August 06, 2011 10:02 PM
>To: j...@bytesmiths.com; mysql@lists.mysql.com
>Subject: RE: Hungarian Notation [Was Re: Too many aliases]
>
>
>Jan-
>the upside is you dont have to look up a variable to know what type it is:
>zVariable is Null termed string
>bVariable is boolean
>nVariable is an Integer
>fVariable is a float
>dVariable is a double..
>cVariable is a char
>
>Martin
>__
>easy peasy..Shawshank Redemption
>
>
>> Subject: Hungarian Notation [Was Re: Too many aliases]
>> From: j...@bytesmiths.com
>> Date: Sat, 6 Aug 2011 09:58:43 -0700
>> To: mysql@lists.mysql.com
>>
>> > From: Johnny Withers 
>> >
>> > http://en.wikipedia.org/wiki/Hungarian_notation
>>
>> "The original Hungarian notation... was invented by Charles Simonyi... who
>later became Chief Architect at Microsoft."
>>
>> Ugh. That explains a lot!
>>
>> The only time I let types intrude on names is with booleans, which I try to
>name with a state-of-being verb, such as "has_paid", "is_member",
>"has_children", etc.
>>
>> > On Thu, Aug 4, 2011 at 9:41 AM, Mike Diehl  wrote:
>> >
>> >> Well, while we're on the subject of SQL style, can anyone tell me why 
>> >> I'm
>> >> always seeing people prefixing the name of a table with something like
>> >> "tbl?"
>>
>> 
>> You can't do anything about the length of your life, but you can do 
>> something
>about its width and depth. -- H. L. Mencken
>>  Jan Steinman, EcoReality Co-op 
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=mgai...@hotmail.com
>>
>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Too many aliases

2011-08-04 Thread Jerry Schwartz
>-Original Message-
>From: David Lerer [mailto:dle...@us.univision.com]
>Sent: Wednesday, August 03, 2011 10:25 AM
>To: mysql@lists.mysql.com
>Subject: RE: Too many aliases
>
>I rarely use aliases (unless rarely required in self-join queries).
>Yes, the column names may be longer this way, but easy to refer to and
>easy to communicate (by specifying a table number). I wonder what others
>think about it.
>
[JS] Back when I was trying to fit as much code on a punch card as possible, 
the languages only supported short names, the terminals ran at 10 cps, and 
every keystroke was precious.

Now I routinely pay the price of extra keystrokes for readability: not just 
with things such as column names, but with parentheses and the like as well. 
Like you, I only use aliases when necessary.
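
For example, the same join written without and then with aliases (the tables 
here are hypothetical):

    SELECT customer.name, orders.total
    FROM customer
    JOIN orders ON orders.customer_id = customer.customer_id;

    SELECT c.name, o.total
    FROM customer AS c
    JOIN orders AS o ON o.customer_id = c.customer_id;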

If I want brain-teasers, I'll do a crossword puzzle.

But that's just me.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: How to Shuffle data

2011-07-14 Thread Jerry Schwartz
There are a couple of problems with using any database for doing this.

- Rows in a table are inherently in no particular order. That means they are 
neither sorted nor random.
- Depending upon the keys you are using (an auto-increment field for example), 
you might be able to select a random row; but that wouldn't guarantee that you 
wouldn't get the same row twice.

There is a way around this that sounds tempting, but it would be hideously 
impractical:

1. Create a copy of your original table, `t2`, with an auto-increment index 
`aui` (either from the original table or created at this point).
2. Create yet another table, `t3`, with the same structure but no data in it.
3. Select a random record from `t2` using
   SELECT ROUND(RAND()*MAX(`t2`.`aui`)) FROM `t2` INTO @RANDREC;
4. Copy that randomly-selected record into `t3` using
   INSERT INTO `t3` SELECT * FROM `t2` WHERE `t2`.`aui` = @RANDREC;
5. Remove the selected record from `t2` using
   DELETE FROM `t2` WHERE `t2`.`aui` = @RANDREC;
6. Loop back to step 3 until `t2` is empty.

Here's why, tempting as it is, it wouldn't work: as the number of records in 
`t2` dwindles, it will become less and less likely that @RANDREC will actually 
match an existing value of `t2`.`aui`. By the time you got down to a handful 
of records, it might take years to find one.

Now if the LIMIT clause accepted variables instead of constants, you could 
replace steps 3 and 4 with something like this:

   SELECT ROUND(RAND()*COUNT(`t2`.`aui`)) FROM `t2` INTO @RANDSTART;
   SELECT * FROM `t2` LIMIT @RANDSTART, 1;

And that would actually work (give or take some corner cases I haven't thought 
about very hard).
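
One possible workaround, not part of the suggestion above: build the statement 
text dynamically and run it as a prepared statement, using the same variables.

    SET @sql = CONCAT('SELECT * FROM `t2` LIMIT ', @RANDSTART, ', 1');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;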

In point of fact, that's basically what you need to do whether you do it in 
MySQL or in an external program. The difference is that in an external program 
you can effectively renumber your rows as you remove them, so that it won't 
get harder and harder to find an actual row. Even then, a random shuffle is 
likely to be time-consuming.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: Adarsh Sharma [mailto:adarsh.sha...@orkash.com]
>Sent: Wednesday, July 13, 2011 9:59 AM
>To: wal...@waltertross.com
>Cc: mysql@lists.mysql.com
>Subject: Re: How to Shuffle data
>
>Do i need to move output to a CSV file & then write a Java or C code for it.
>
>I think a procedure can do it easily , If I know the proper function for
>that.
>
>Thanks
>
>wal...@waltertross.com wrote:
>> What you want to do is not shuffle, but reverse the domain order, starting
>> from the TLD (top level domain). You may pick the string functions you
>> need from the string functions page of the manual, but the problem is that
>> there may be any number of subdomains, so that in principle you would need
>> a loop construct. If I were in your position, I wouldn't use MySQL to do
>> this.
>> ciao
>> Walter
>>
>>
>>> Dear all,
>>>
>>> I have million of sites stored in url column of a mysql table.
>>>
>>> Some few examples are :-
>>>
>>> www.facebook.com/home
>>> adelaide.yourguide.com/news/local/news/entertainment/cd-review-day-and-age-
>the-killers/1401702.aspx
>>> abclive.in/abclive_business/2393.html
>>> abclive.in/abclive_business/assocham_manufacturing_companies.html
>>> abclive.in/abclive_business/b-ramalinga-raju-satyam-financial-
>irregularities.html
>>> aktualne.centrum.cz/report/krimi/clanek.phtml?id=635342
>>> aktualne.centrum.cz/report/krimi/clanek.phtml?id=635306
>>>
>>> I want to take the output in a tsv file the sites url in the below forms :
>>>
>>> com.faebook.com/home
>>> com.yourguide.adelaide/news/local/news/entertainment/cd-review-day-and-age-
>the-killers/1401702.aspx
>>> in.abclive/abclive_business/2393.html
>>> in.abclive/abclive_business/assocham_manufacturing_companies.html
>>> in.abclive/abclive_business/b-ramalinga-raju-satyam-financial-
>irregularities.html
>>> cz.centrum.aktualne/report/krimi/clanek.phtml?id=635306
>>> cz.centrum.aktualne/report/krimi/clanek.phtml?id=635342
>>>
>>> I need to shuffle the . words . Is there any in built function in mysql
>>> to achieve this.
>>>
>>> Thanks
>>>
>>>
>>> --
>>> MySQL General Mailing List
>>> For list archives: http://lists.mysql.com/mysql
>>> To unsubscribe:
>>> http://lists.mysql.com/mysql?unsub=wal...@waltertross.com
>>>
>>>
>>
>>
>>
>>





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Facebook Trapped In MySQL a 'Fate Worse Than Death'

2011-07-12 Thread Jerry Schwartz
Let this be a lesson to all of those designers who say "That will never 
happen."

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: SELECT records less than 15 minutes old

2011-06-21 Thread Jerry Schwartz


>-Original Message-
>From: sono...@fannullone.us [mailto:sono...@fannullone.us]
>Sent: Monday, June 20, 2011 6:01 PM
>To: mysql@lists.mysql.com
>Cc: wha...@bfs.de; Jerry Schwartz
>Subject: Re: SELECT records less than 15 minutes old
>
>On Jun 20, 2011, at 10:11 AM, Jerry Schwartz wrote:
>
>>> You should use UTC time zone or you will run into trouble with DST.
>>>
>> [JS] If you do that, you can't use an automatic timestamp field. You have 
>> to
>> set the field yourself.
>
>   Thanks Walter and Jerry.
>
>   Is there a way to get NOW() to use UTC instead of the server timezone?
>(The server is not mine, so I can't change the my.cnf.)  Here's my statement:
>
>SELECT * FROM `log` WHERE `id` = $_id AND ( `time_stamp` >= DATE_SUB(NOW(),
>INTERVAL 30 MINUTE) )
>

>   Earlier in my PHP script I've used date_default_timezone_set, but that
>doesn't affect the MySQL statement.
>
>--
>
>Possible Solution
>
>I tried: SET time_zone = 'UTC';
>but MySQL complained with: #1298 - Unknown or incorrect time zone: 'UTC'
>
>I then tried:
>SET time_zone = '-0:00';
>and that seems to have worked.  Is this the correct way to do it?
>

[JS] I believe that is the only way to do it.

The reason you can't use time zone names is that the time zone tables in MySQL 
have not been loaded.
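
Putting the two pieces together, a sketch based on your statement (the '+00:00' 
offset form works even when the named time zone tables are not loaded):

    SET time_zone = '+00:00';
    SELECT * FROM `log`
    WHERE `time_stamp` >= DATE_SUB(NOW(), INTERVAL 30 MINUTE);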

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: SELECT records less than 15 minutes old

2011-06-20 Thread Jerry Schwartz
>-Original Message-
>From: walter harms [mailto:wha...@bfs.de]
>Sent: Monday, June 20, 2011 7:07 AM
>To: sono...@fannullone.us
>Cc: mysql@lists.mysql.com
>Subject: Re: SELECT records less than 15 minutes old
>
>
>
>Am 19.06.2011 21:06, schrieb sono...@fannullone.us:
>> On Jun 19, 2011, at 11:11 AM, Claudio Nanni wrote:
>>
>>> just a quick debug:
>>
>>  Thanks, Claudio.  It turned out to be that NOW() was using the server's
>time and my timestamp was based on my timezone.  After fixing that, the 
>SELECT
>statement works properly.
>>
>> Marc
>
>You should use UTC time zone or you will run into trouble with DST.
>
[JS] If you do that, you can't use an automatic timestamp field. You have to 
set the field yourself.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: optimization strategies based on file-level storage

2011-06-17 Thread Jerry Schwartz
>-Original Message-


>What I was really trying to figure out was why it takes me 4 hours to
>add a new column to my 22-million-row table, and whether a different
>table design can avoid that problem.  That reply in the forum says,
>"ALTER TABLE ... ADD COLUMN will always copy the entire table over,
>and rebuild all the indexes. (And, effectively, do an OPTIMIZE.) Do
>_not_ leave space for extra columns, it won't help."  I'm about to
>reply and point out the trick that you suggested to me: create dummy
>columns early and then just rename them later :)
>
> -Bennett
>
[JS] They will be too small, or the wrong type, or there won't be enough of 
them. Based upon 30+ years of database design, I'd bet money on it. ;-)
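
For reference, the trick reads something like this (a sketch; the table and 
column names are hypothetical, and whether the later rename avoids a full table 
copy depends on the server version and storage engine):

    ALTER TABLE big_table ADD COLUMN spare1 INT NULL;   -- pay the rebuild cost once, up front
    -- later, when the column is actually needed:
    ALTER TABLE big_table CHANGE COLUMN spare1 real_name INT NULL;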

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Encoding "Table Name" and "Filed Name"

2011-06-17 Thread Jerry Schwartz
>-Original Message-
>From: Vikram A [mailto:vikkiatb...@yahoo.in]
>Sent: Friday, June 17, 2011 2:54 AM
>To: Johan De Meersman
>Cc: MY SQL Mailing list
>Subject: Re: Encoding "Table Name" and "Filed Name"
>
>Sir,
>
>I agree, it's impossible to do manual look-ups. But our aim is to avoid use of
>the DB without our code. Also we have ensured, 'secret data is encrypted using
>some functions with key'.
>
[JS] A lot depends upon your ultimate goal. Do you need to keep anyone from 
seeing the data, or do you need to prevent sabotage?

Even if your DB manager can't tell what the "real" table names are, he could 
still sabotage your system simply by deleting the entire database.

>I shall follow both encode i.e,  1) filed and table name, 2) data level? Or
>only data level is enough by having accounts as you suggested?
>
[JS] Encrypting at the data level will prevent anyone from seeing the real 
data values, but even that requires more than just encryption. Ideally, you 
would have two people work on the encryption algorithms separately and you 
must make sure that they cannot communicate with each other -- ever!

You will need those two people (or companies) forever if you ever want to 
change the application.

There are books on this.

You always have to trust somebody, and that somebody is always the weak link. 
Somebody must be using the application, and that someone is a bigger risk than 
your DB manager.
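
For what it's worth, MySQL does have built-in functions for data-level 
encryption. A minimal sketch (the table and column are placeholders, the column 
needs a binary type such as VARBINARY, and key management, which is the hard 
part, is not shown):

    INSERT INTO `t` (`secret`) VALUES (AES_ENCRYPT('the real value', @key));
    SELECT AES_DECRYPT(`secret`, @key) FROM `t`;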

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>Thanks You.
>Vikram
>
>
>From: Johan De Meersman 
>To: Vikram A 
>Cc: MY SQL Mailing list 
>Sent: Friday, 17 June 2011 11:50 AM
>Subject: Re: Encoding "Table Name" and "Filed Name"
>
>
>- Original Message -
>> From: "Vikram A" 
>>
>> My question is, DO i face any negative project management problems by
>> doing this? Please share your experience on this aspect and commend
>> our idea.
>
>Seems... a bit pointless, no? If someone has access to the database, they can
>still see the data. If someone has access to the code, they can still figure
>out the naming scheme. If, for some reason, you have to do manual lookups
>(think debugging, custom reporting, ...) you're making your own life hard.
>
>If you don't want people to see your data, manage your accounts. If people
>leave, delete their accounts. Make sure your code prevents SQL injections 
>(use
>bind variables and so on). Firewall off your server from everything but the
>application server. Et cetera ad nauseam.
>
>There's plenty of security recommendations, but I've never heard of anyone
>encrypting their table and field names. It sounds like something upper
>management would come up with :-)
>
>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=vikkiatb...@yahoo.in




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Doubt regarding Mysqlsump

2011-06-09 Thread Jerry Schwartz


>
>A single table is always consistent. Data inconsistency occurs in sets of
>interrelated tables, in other words, on the database level.
>
[JS] Not even a single table is always consistent (unless there are 
transactions). Consider a single transaction that consists of two steps:

1. Delete record A
2. Add record B

Now consider

1. Delete record A
-Backup starts-
2. Add record B

You have no idea whether or not record B will be in your backup.
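
As plain statements, a sketch (the table and values are made up):

    DELETE FROM `t` WHERE `id` = 'A';
    -- a backup that starts here sees record A already gone and record B not yet added
    INSERT INTO `t` (`id`) VALUES ('B');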

Worse things can happen, of course:

1. Delete record A
!!KABOOM!!

The data in the table is not going to be consistent. You'd have to analyze the 
data to find out what did and didn't happen before the crash, back out step 1, 
and re-run that transaction in the application.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Data missing after field optimization

2011-06-09 Thread Jerry Schwartz


>> If that's all you did, you indeed 'removed the default NULL' but did not
>specify another default. Hence, if you don't explicitly specify a value in 
>your
>insert statement, the insert can not happen as the server doesn't know what 
>to
>put there and is explicitly disallowed from leaving the value empty.
>
>   That makes sense.  So then why does phpMyAdmin allow you to choose 
> "None"
>as a default?

[JS] There are times when you want to enforce certain conditions even though 
they might sometimes generate an error. Two examples are required fields (no 
default value) or referential integrity.

The goal is not to lose data, but to make sure your applications are doing 
what they are supposed to do. You would program your applications to trap and 
report errors.
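
The difference in behavior looks roughly like this (a sketch with hypothetical 
names; the first form rejects an INSERT that omits the column when strict SQL 
mode is in effect):

    ALTER TABLE `t` MODIFY `c` VARCHAR(20) NOT NULL;             -- required field, no default
    ALTER TABLE `t` MODIFY `c` VARCHAR(20) NOT NULL DEFAULT '';  -- omitted values silently become ''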


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Timestamp value

2011-06-06 Thread Jerry Schwartz
>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Monday, June 06, 2011 12:57 PM
>To: Jerry Schwartz
>Cc: mysql@lists.mysql.com
>Subject: Re: Timestamp value
>
>
>I may be mistaken, but isn't UTC pretty much GMT if you don't want subsecond
>precision? Set your server's timezone to GMT and you should get what you 
>want.

[JS] I don't want to affect everything on the server (scheduled events, file 
modification dates, etc.) I only want one UTC (or GMT, I don't really care) 
field.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Timestamp value

2011-06-06 Thread Jerry Schwartz
When you UPDATE a record, a timestamp field (`t`) is set to the current time 
in the time zone given by @@time_zone, correct? That will usually be the local 
time.

If somebody in another time zone needs to compare `t` against //their own// 
local time, they need to use

CONVERT_TZ(`t`,'my_local_time zone','their_local_time_zone`)

Am I right?

So, what if they do not know my local time zone? Is there a way to create a 
timestamp field that is always in UTC? I don't think there is, without using a 
trigger.

Am I right about that?
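
The trigger route I have in mind would look something like this (just a sketch, 
with a separate UTC column and hypothetical names):

    CREATE TRIGGER `set_utc` BEFORE UPDATE ON `t`
    FOR EACH ROW SET NEW.`t_utc` = UTC_TIMESTAMP();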



Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: timezone

2011-06-03 Thread Jerry Schwartz
From: John Daisley [mailto:daisleyj...@googlemail.com] 
Sent: Friday, June 03, 2011 6:09 AM
To: Rocio Gomez Escribano
Cc: mysql@lists.mysql.com
Subject: Re: timezone

 

now() returns the current system time which doesn't really have a great deal to 
do with time zones.

 

[JS] I think that statement is confusing – at least, it is to me. Unless you 
specify otherwise, NOW() returns the current system time in the system’s time 
zone. Your comment could be interpreted to mean that it doesn’t use any time 
zone: i.e., that it returns UTC.

 

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail: je...@gii.co.jp 

Web site: www.the-infoshop.com

 

 

 

You can check what the current time zone is set to with the following command

 

show variables like 'time_zone';

 

but that is likely to return the value 'SYSTEM' which means it takes the value 
from the host operating system ( usually set in /etc/timezone ). 

 

You can set the time_zone variable either globally or per session to an offset 
of UTC as follows

 

SET time_zone='+00:00';

SET GLOBAL time_zone='+00:00';

 

Or you can specify a 'default_time_zone' in your my.cnf/my.ini options file.

 

You can also set the time_zone variables to a named time zone which will then take 
account of daylight savings times but to do this you must first load the mysql 
time zone tables. 

 

I strongly suggest you read the manual section relating to time zone support 
which you can find here 

 

http://dev.mysql.com/doc/refman/5.5/en/time-zone-support.html

 

 

 

On 3 June 2011 10:27, Rocio Gomez Escribano  
wrote:

I’m afraid I don’t understand you:

 

 

mysql> select count(*) from mysql.time_zone_name;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

 

But, when I execute:

 

 

mysql> select now();
+---------------------+
| now()               |
+---------------------+
| 2011-06-03 11:28:00 |
+---------------------+
1 row in set (0.00 sec)

 

That’s correct, in Spain it’s that time. So, mysql is using the timezone 
correctly, isn’t it?

 

Thanks!

 

Rocío Gómez Escribano

r.go...@ingenia-soluciones.com

 


Polígono Campollano C/F, nº21T

02007 Albacete (España)

Tlf:967-504-513  Fax: 967-504-513

www.ingenia-soluciones.com

 

De: John Daisley [mailto:daisleyj...@googlemail.com] 
Enviado el: viernes, 03 de junio de 2011 11:18
Para: Rocio Gomez Escribano
CC: mysql@lists.mysql.com
Asunto: Re: timezone

 

Have you populated the timezone tables? Run this query if you are not sure.

 

SELECT COUNT(*) FROM mysql.time_zone_name;

 

If it returns 0 then you need to populate the them as per the instructions here 
http://dev.mysql.com/doc/refman/5.5/en/time-zone-support.html

 

Default timezone in mysql is set at server startup to SYSTEM, which means so 
long as your system clock is correct the MySQL server should be correct.

 

 

 

On 3 June 2011 09:55, Rocio Gomez Escribano  
wrote:

Hello! I’m having trouble with timezones.

 

I’m in Spain; we have two different time zones: now we are in GMT+2, and in 
winter it is GMT+1.

 

I’m looking for an instruction which gives me the current timezone, but I can’t 
find it! Do you know how I can find it?

 

Thanks!

 

Rocío Gómez Escribano

r.go...@ingenia-soluciones.com

 


Polígono Campollano C/F, nº21T

02007 Albacete (España)

Tlf:967-504-513  Fax: 967-504-513

www.ingenia-soluciones.com

 




-- 
John Daisley

Certified MySQL 5 Database Administrator
Certified MySQL 5 Developer
Cognos BI Developer

Telephone: +44 (0)7918 621621
Email: john.dais...@butterflysystems.co.uk




-- 
John Daisley

Butterfly Information Systems

Microsoft SQL Server Database Administrator

Certified MySQL 5 Database Administrator & Developer
Cognos BI Developer \ Administrator 

Available for short & long term contracts


Telephone: +44 (0)7918 621621
Email: john.dais...@butterflysystems.co.uk

 



RE: Renaming a database?

2011-05-24 Thread Jerry Schwartz
 

From: Andrew Moore [mailto:eroomy...@gmail.com] 
Sent: Tuesday, May 24, 2011 2:56 PM
To: Jerry Schwartz
Cc: mysql@lists.mysql.com
Subject: Re: Renaming a database?

 

Could you try restoring the other db with another name and changing the 
connection string in the app?

[JS] I could, but it would be a nuisance. The app is MS Access, so there isn’t 
just one string to change.

Fortunately my database only takes about 30 minutes to load, so this wasn’t a 
huge problem. I just wanted to make sure I wasn’t missing something.

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail: je...@gii.co.jp 

Web site: www.the-infoshop.com

 

 

On 24 May 2011 16:20, "Jerry Schwartz"  wrote:
> It looks like there’s no way to rename a database. Is that true?
> 
> 
> 
> I have two reasons for wanting to do this:
> 
> 
> 
> - I want to get a current version of a database out of the way so that I can 
> load in an older version, without having to pay the penalty of reloading the 
> current version.
> 
> - I think I have a database that isn’t being used (don’t ask), and I want to 
> hide it to see if anything blows up.
> 
> 
> 
> Suggestions?
> 
> 
> 
> Regards,
> 
> 
> 
> Jerry Schwartz
> 
> Global Information Incorporated
> 
> 195 Farmington Ave.
> 
> Farmington, CT 06032
> 
> 
> 
> 860.674.8796 / FAX: 860.674.8341
> 
> E-mail: je...@gii.co.jp 
> 
> Web site: www.the-infoshop.com
> 
> 
> 



RE: Renaming a database?

2011-05-24 Thread Jerry Schwartz

>-Original Message-
>From: Andrew Moore [mailto:eroomy...@gmail.com]
>Sent: Tuesday, May 24, 2011 11:31 AM
>To: Jerry Schwartz
>Cc: mysql@lists.mysql.com
>Subject: Re: Renaming a database?
>
>Instead of renaming it, revoke permissions to it. If it's being used you
>should see some problems in the application due to access denied.
>
[JS] That's a good suggestion. It takes care of one case, but not the other.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>Andy
>
>On Tue, May 24, 2011 at 4:19 PM, Jerry Schwartz  wrote:
>
>> It looks like there's no way to rename a database. Is that true?
>>
>>
>>
>> I have two reasons for wanting to do this:
>>
>>
>>
>> - I want to get a current version of a database out of the way so that I
>> can load in an older version, without having to pay the penalty of 
>> reloading
>> the current version.
>>
>> - I think I have a database that isn't being used (don't ask), and I want
>> to hide it to see if anything blows up.
>>
>>
>>
>> Suggestions?
>>
>>
>>
>> Regards,
>>
>>
>>
>> Jerry Schwartz
>>
>> Global Information Incorporated
>>
>> 195 Farmington Ave.
>>
>> Farmington, CT 06032
>>
>>
>>
>> 860.674.8796 / FAX: 860.674.8341
>>
>> E-mail: je...@gii.co.jp
>>
>> Web site: www.the-infoshop.com
>>
>>
>>
>>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Renaming a database?

2011-05-24 Thread Jerry Schwartz
>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Tuesday, May 24, 2011 11:52 AM
>To: Jerry Schwartz
>Cc: mysql@lists.mysql.com
>Subject: Re: Renaming a database?
>
>- Original Message -
>> From: "Jerry Schwartz" 
>>
>> It looks like there's no way to rename a database. Is that true?
>
>Yes. Silly oversight, although there's probably complexity reasons behind it.
>
[JS] According to the documentation, this was implemented at one time but it 
caused problems so the feature was removed.
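
For reference, the table-by-table workaround mentioned in the reply below looks 
like this (a sketch; the target database must already exist, and things like 
views, triggers, and privileges need separate handling):

    RENAME TABLE `olddb`.`t1` TO `newdb`.`t1`,
                 `olddb`.`t2` TO `newdb`.`t2`;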

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>> - I want to get a current version of a database out of the way so
>> that I can load in an older version, without having to pay the
>> penalty of reloading the current version.
>
>That precludes "rename table db1.blah to db2.blah", I guess - it basically
>copies the whole thing and then kills the old one. Sloww for big tables 
>:-)
>
>> - I think I have a database that isn't being used (don't ask), and I
>> want to hide it to see if anything blows up.
>
>If you are in a position to shut your server down, you may rename the 
>database
>directory while it's down IF you only use MyISAM tables.
>
>If you have InnoDB tables, you're stuck with dump/reload or rename table.
>
>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Renaming a database?

2011-05-24 Thread Jerry Schwartz
It looks like there’s no way to rename a database. Is that true?

 

I have two reasons for wanting to do this:

 

- I want to get a current version of a database out of the way so that I can 
load in an older version, without having to pay the penalty of reloading the 
current version.

- I think I have a database that isn’t being used (don’t ask), and I want to 
hide it to see if anything blows up.

 

Suggestions?

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail: je...@gii.co.jp 

Web site: www.the-infoshop.com

 



RE: Join based upon LIKE

2011-05-05 Thread Jerry Schwartz
>-Original Message-
>From: Nuno Tavares [mailto:nuno.tava...@dri.pt]
>Sent: Tuesday, May 03, 2011 6:21 PM
>To: mysql@lists.mysql.com
>Subject: Re: Join based upon LIKE
>
>Dear Jerry,
>
>I've been silently following this discussion because I've missed the
>original question.
>
>But from your last explanation, now it really looks you have a "data
>quality" kind of issue, which is by far related with MySQL.
>
[JS] Definitely -- but I have to work with the tools available. This is only 
one part of the process, there is more trouble further on that is not related 
to our database at all.

>Indeed, in Data Quality, there is *never* a ready solution, because the
>source is tipically chaotic
>
>May I suggest you to explore Google Refine? It seems to be able to
>address all those issues quite nicely, and the clustering might solve
>your problem at once. You shall know, however, how to export the tables
>(or a usable JOIN) as a CSV, see SELECT ... INTO OUTFILE for that.
>
[JS] I never heard of Google Refine. Thanks for bringing to my attention.

>Hope it helps,
>-NT
[JS] Thank you.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>Em 03-05-2011 21:34, Jerry Schwartz escreveu:
>> My situation sounds rather simple. All I am doing is matching a
>spreadsheet
>> of products against our database. My job is to find any matches against
>> existing products and determine which ones are new, which ones are
>> replacements for older products, and which ones just need to have the
>> publication date (and page count, price, whatever) refreshed.
>>
>> Publisher is no problem. What I have for each "feed" is a title and (most 
>> of
>> the time) an ISBN or other identification assigned by the publisher.
>>
>> Matching by product ID is easy (assuming there aren't any mistakes in the
>> current or previous feeds); but the publisher might or might not change the
>> product ID when they update a report. That's why I also run a match by 
>> title,
>> and that's where all the trouble comes from.
>>
>> The publisher might or might not include a mix of old and new products in a
>> feed. The publisher might change the title of an existing product, either 
>> on
>> purpose or by accident; they might simply be sloppy about their spelling; 
>> or
>> (and this is where it is critical) the title might include a reference to
>some
>> time period such as a year or a quarter.
>>
>> I think we'd better pull the plug on this discussion. It doesn't seem like
>> there's a ready solution. Fortunately our database is small, and most feeds
>> are only a few hundred products.
>>
>> Regards,
>>
>> Jerry Schwartz
>> Global Information Incorporated
>> 195 Farmington Ave.
>> Farmington, CT 06032
>>
>> 860.674.8796 / FAX: 860.674.8341
>> E-mail: je...@gii.co.jp
>> Web site: www.the-infoshop.com
>>
>>
>>> -Original Message-
>>> From: shawn wilson [mailto:ag4ve...@gmail.com]
>>> Sent: Tuesday, May 03, 2011 4:08 PM
>>> Cc: mysql mailing list
>>> Subject: Re: Join based upon LIKE
>>>
>>> I'm actually enjoying this discussion because I have the same type of 
>>> issue.
>>> However, I have done away with trying to do a full text search in favor of
>>> making a table with unique fields where all fields should uniquely 
>>> identify
>>> the group. If I get a dupe, I can clean it up.
>>>
>>> However, like you, they don't want me to mess with the original data. So,
>>> what I have is another table with my good data that my table with my 
>>> unique
>>> data refers to. If a bad record is created, I don't care; I just create my
>>> relationship to the table of data I know (read think - I rarely look at 
>>> this
>>> stuff) is good.
>>>
>>> So, I have 4 fields that should be unique for a group. Two chars and two
>>> ints. If three of these match a record in the 'good data' table - there's 
>>> my
>>> relationship. If two or less match, I create a new record in my 'good 
>>> data'
>>> table and log the event. (I haven't gotten to the logging part yet though,
>>> easy enough just to look sense none of the fields in 'good data' should
>>> match)
>>>
>>> I'm thinking you might have to dig deeper than me 

RE: DBA Mentor?

2011-05-05 Thread Jerry Schwartz
Folks, I have no skin in this game whatsoever; but I thought the original 
question was reasonable (given that the person was new to the list), and I 
thought the first answer was rude.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Join based upon LIKE

2011-05-03 Thread Jerry Schwartz
My situation sounds rather simple. All I am doing is matching a spreadsheet 
of products against our database. My job is to find any matches against 
existing products and determine which ones are new, which ones are 
replacements for older products, and which ones just need to have the 
publication date (and page count, price, whatever) refreshed.

Publisher is no problem. What I have for each "feed" is a title and (most of 
the time) an ISBN or other identification assigned by the publisher.

Matching by product ID is easy (assuming there aren't any mistakes in the 
current or previous feeds); but the publisher might or might not change the 
product ID when they update a report. That's why I also run a match by title, 
and that's where all the trouble comes from.

The publisher might or might not include a mix of old and new products in a 
feed. The publisher might change the title of an existing product, either on 
purpose or by accident; they might simply be sloppy about their spelling; or 
(and this is where it is critical) the title might include a reference to some 
time period such as a year or a quarter.

I think we'd better pull the plug on this discussion. It doesn't seem like 
there's a ready solution. Fortunately our database is small, and most feeds 
are only a few hundred products.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: shawn wilson [mailto:ag4ve...@gmail.com]
>Sent: Tuesday, May 03, 2011 4:08 PM
>Cc: mysql mailing list
>Subject: Re: Join based upon LIKE
>
>I'm actually enjoying this discussion because I have the same type of issue.
>However, I have done away with trying to do a full text search in favor of
>making a table with unique fields where all fields should uniquely identify
>the group. If I get a dupe, I can clean it up.
>
>However, like you, they don't want me to mess with the original data. So,
>what I have is another table with my good data that my table with my unique
>data refers to. If a bad record is created, I don't care; I just create my
>relationship to the table of data I know (read think - I rarely look at this
>stuff) is good.
>
>So, I have 4 fields that should be unique for a group. Two chars and two
>ints. If three of these match a record in the 'good data' table - there's my
>relationship. If two or less match, I create a new record in my 'good data'
>table and log the event. (I haven't gotten to the logging part yet though,
>easy enough just to look sense none of the fields in 'good data' should
>match)
>
>I'm thinking you might have to dig deeper than me to find 'good data' but I
>think its there. Maybe isbn, name, publisher + address, price, average
>pages, name of sales person, who you guys pay for the material, etc etc etc.
>
>
>On May 3, 2011 10:59 AM, "Johan De Meersman"  wrote:
>>
>>
>> - Original Message -
>> > From: "Jerry Schwartz" 
>> >
>> > I'm not sure that I could easily build a dictionary of non-junk
>> > words, since
>>
>> The traditional way is to build a database of junk words. The list tends
>to be shorter :-)
>>
>> Think and/or/it/the/with/like/...
>>
>> Percentages of mutual and non-mutual words between two titles should be a
>reasonable indicator of likeness. You could conceivably even assign value to
>individual words, so "polypropylbutanate" is more useful than "synergy" for
>comparison purposes.
>>
>> All very theoretical, though, I haven't actually done much of it to this
>level. My experience in data mangling is limited to mostly
>should-be-fixed-format data like sports results.
>>
>>
>> --
>> Bier met grenadyn
>> Is als mosterd by den wyn
>> Sy die't drinkt, is eene kwezel
>> Hy die't drinkt, is ras een ezel
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=ag4ve...@gmail.com
>>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Join based upon LIKE

2011-05-03 Thread Jerry Schwartz
>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Tuesday, May 03, 2011 5:31 AM
>To: Jerry Schwartz
>Cc: Jim McNeely; mysql mailing list; Johan De Meersman
>Subject: Re: Join based upon LIKE
>
>
>http://www.gedpage.com/soundex.html offers a simple explanation of what it
>does.
>
>One possibility would be building a referential table with only a recordID 
>and
>soundex column, unique over both; and filling that with the soundex of
>individual nonjunk words.
>
>So, from the titles
>
>1 | Rain in Spain
>2 | Spain's Rain
>
>you'd get
>
>1 | R500
>1 | S150
>2 | S150
>2 | R500
>
>From thereon, you can see that all the same words have been used - ignoring a
>lot of spelling errors like Spian. Obviously not a magic solution, but it's a
>start.
>
[JS] Thanks.

I'm not sure that I could easily build a dictionary of non-junk words, since 
some of these reports have titles like "Toluene Diisocyanate Market Outlook 
2008", "Toluene Market Outlook 2008", and "Toluene: 2009 World Market Outlook 
And Forecast (Special Crisis Edition)".

I shall ponder this when I am caught up, or (more likely) in the afterlife.
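
For whoever picks this up later, here is a minimal sketch of the soundex table 
Johan describes; the table and column names are mine, not from the thread, and 
splitting each title into its non-junk words would still happen in application 
code:

CREATE TABLE title_soundex (
    prod_num MEDIUMINT UNSIGNED NOT NULL,
    sndx     VARCHAR(32)        NOT NULL,
    UNIQUE KEY (prod_num, sndx)
) ENGINE=MyISAM;

-- one row per non-junk word of each catalogue title,
-- e.g. product 1 = "Rain in Spain":
INSERT IGNORE INTO title_soundex VALUES
    (1, SOUNDEX('Rain')), (1, SOUNDEX('Spain'));

-- which catalogue titles share the most word-sounds with an incoming title?
-- (the misspelling 'Spian' still maps to S150, so it still matches)
SELECT prod_num, COUNT(*) AS words_shared
FROM title_soundex
WHERE sndx IN (SOUNDEX('Rain'), SOUNDEX('Spian'))
GROUP BY prod_num
ORDER BY words_shared DESC;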

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>- Original Message -
>> From: "Jerry Schwartz" 
>> To: "Johan De Meersman" 
>> Cc: "Jim McNeely" , "mysql mailing list"
>
>> Sent: Monday, 2 May, 2011 4:09:36 PM
>> Subject: RE: Join based upon LIKE
>>
>> [JS] I've thought about using soundex(), but I'm not quite sure how.
>>
>> I didn't pursue it much because there are so many odd terms such as
>> chemical
>> names, but perhaps I should give it a try in my infinite free time.
>>
>>
>> [JS] Thanks for your condolences.
>>
>> Regards,
>>
>> Jerry Schwartz
>> Global Information Incorporated
>> 195 Farmington Ave.
>> Farmington, CT 06032
>>
>> 860.674.8796 / FAX: 860.674.8341
>> E-mail: je...@gii.co.jp
>> Web site: www.the-infoshop.com
>>
>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Join based upon LIKE

2011-05-02 Thread Jerry Schwartz
>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Sunday, May 01, 2011 4:01 AM
>To: Jerry Schwartz
>Cc: Jim McNeely; mysql mailing list
>Subject: Re: Join based upon LIKE
>
>
>- Original Message -----
>> From: "Jerry Schwartz" 
>>
>> I shove those modified titles into a table and do a JOIN ON
>> `prod_title` LIKE
>> `wild_title`.
>
>Roughly what I meant with the shadow fields, yes - keep your own set of data
>around :-)
>
>I have little more to offer, then, I'm afraid. The soundex() algorithm may or
>may not be of some use to you; it offers comparison based (roughly) on
>pronunciation instead of spelling.
>
[JS] I've thought about using soundex(), but I'm not quite sure how.

I didn't pursue it much because there are so many odd terms such as chemical 
names, but perhaps I should give it a try in my infinite free time.

>Apart from that, you have my deepest sympathy. I hope you can wake up from 
>the
>nightmare soon :-)
>
[JS] Thanks for your condolences.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: ERROR 2006 (HY000)

2011-04-29 Thread Jerry Schwartz
>-Original Message-
>From: Larry Martell [mailto:larry.mart...@gmail.com]
>Sent: Friday, April 29, 2011 5:31 PM
>To: Jerry Schwartz
>Cc: mysql mailing list
>Subject: Re: ERROR 2006 (HY000)
>
>On Fri, Apr 29, 2011 at 3:23 PM, Jerry Schwartz  wrote:
>> I'm getting
>>
>> ERROR 2006 (HY000) at line 10: MySQL server has gone away
>>
>> while trying to do an insert through the MySQL CLI. Our database is so 
>> small
>> that I haven't gotten around to doing any tuning, so this came out of the
>> blue. I'm not quite sure where to start.
>>
>> Here's what things look like at my end:
>>
>> =
>>
>> select version();
>> +--+
>> | version()|
>> +--+
>> | 5.1.36-community |
>> +--+
>>
>>
>> DROP TEMPORARY TABLE IF EXISTS new_titles;
>>
>> CREATE TEMPORARY TABLE new_titles (
>>new_title VARCHAR(255), INDEX (new_title),
>>new_title_like VARCHAR(255), INDEX (new_title_like)
>>) ENGINE MYISAM;
>>
>> INSERT INTO new_titles
>> VALUES
>>
>> ('(I.Z) Queenco Ltd. (QNCO) - Financial and Strategic SWOT Analysis 
>> Review',
>> '(I%Z) Queenco Ltd%(QNCO)%Financial%Strategic SWOT Analysis Review'),
>> ('@Comm Corporation (ATCM) - Strategic SWOT Analysis Review', '@Comm
>> Corporation (ATCM)%Strategic SWOT Analysis Review'),
>> ('010017 Telecom GmbH - Strategic SWOT Analysis Review', '010017 Telecom
>> GmbH%Strategic SWOT Analysis Review'),
>> ('1 & 1 Internet AG - Strategic SWOT Analysis Review', '1%1 Internet
>> AG%Strategic SWOT Analysis Review'),
>> ('1,618 STRICT AB (STRI B) - Financial and Strategic SWOT Analysis Review',
>> '1%618 STRICT AB (STRI B)%Financial%Strategic SWOT Analysis Review'),
>> ('1199SEIU Benefit and Pension Funds - Strategic SWOT Analysis Review',
>> '1199SEIU Benefit%Pension Funds%Strategic SWOT Analysis Review'),
>> ('1300 Smiles Limited (ONT) - Financial and Strategic SWOT Analysis 
>> Review',
>> '1300 Smiles Limited (ONT)%Financial%Strategic SWOT Analysis Review'),
>> ('141 Capital Inc. - Strategic SWOT Analysis Review', '141 Capital
>> Inc%Strategic SWOT Analysis Review'),
>> ('1855 SA (AL185) - Financial and Strategic SWOT Analysis Review', '1855 SA
>> (AL185)%Financial%Strategic SWOT Analysis Review'),
>> ('1pm plc (OPM) - Financial and Strategic SWOT Analysis Review', '1pm plc
>> (OPM)%Financial%Strategic SWOT Analysis Review'),
>> ('1st Century Bancshares, Inc. (FCTY) - Financial and Strategic SWOT 
>> Analysis
>> Review', '1st Century Bancshares%Inc%(FCTY)%Financial%Strategic SWOT 
>> Analysis
>> Review'),
>> ('1st NRG Corp. (FNRC) - Financial and Strategic SWOT Analysis Review', 
>> '1st
>> NRG Corp%(FNRC)%Financial%Strategic SWOT Analysis Review'),
>> ('1st RED AG (SXL) - Financial and Strategic SWOT Analysis Review', '1st 
>> RED
>> AG (SXL)%Financial%Strategic SWOT Analysis Review'),
>> ('20 Microns Limited (533022) - Financial and Strategic SWOT Analysis
>Review',
>> '20 Microns Limited (533022)%Financial%Strategic SWOT Analysis Review'),
>> ('21 Holdings Limited (1003) - Financial and Strategic SWOT Analysis 
>> Review',
>> '21 Holdings Limited (1003)%Financial%Strategic SWOT Analysis Review'),
>> ('21LADY Co., Ltd. (3346) - Financial and Strategic SWOT Analysis Review',
>> '21LADY Co%Ltd%(3346)%Financial%Strategic SWOT Analysis Review'),
>>
>> ==
>>
>> ... and so on for about 15000 rows.
>>
>> Before I go through changing this to 15000 separate inserts, I'd like to 
>> know
>> how to "fix" the problem the right way. I looked at all of the system
>> variables, and none of them seemed to apply. I probably missed something
>> obvious.
>
>This is most likely due to your query exceeding the max packet size.
>Break it up into smaller pieces or increase max_allowed_packet.
[JS] You hit the nail on the head.

I wondered about max_allowed_packet, but the documentation says that it 
applies to BLOBs or long strings. It didn't occur to me that a query could, 
itself, be a long string.

My query is a tad more than 2 MB, so setting max_allowed_packet to 3 MB fixed it 
right up.
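
For anyone who trips over the same thing, the check and the fix look like this; 
16M is only an example, and the client program has its own max_allowed_packet 
setting as well:

-- what the server currently allows
SHOW VARIABLES LIKE 'max_allowed_packet';

-- raise it for the running server (needs SUPER); to make it permanent,
-- set max_allowed_packet under [mysqld] in my.cnf
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;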

Thanks!

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com








-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



ERROR 2006 (HY000)

2011-04-29 Thread Jerry Schwartz
I'm getting

ERROR 2006 (HY000) at line 10: MySQL server has gone away

while trying to do an insert through the MySQL CLI. Our database is so small 
that I haven't gotten around to doing any tuning, so this came out of the 
blue. I'm not quite sure where to start.

Here's what things look like at my end:

=

select version();
+--+
| version()|
+--+
| 5.1.36-community |
+--+


DROP TEMPORARY TABLE IF EXISTS new_titles;

CREATE TEMPORARY TABLE new_titles (
new_title VARCHAR(255), INDEX (new_title),
new_title_like VARCHAR(255), INDEX (new_title_like)
) ENGINE MYISAM;

INSERT INTO new_titles
VALUES

('(I.Z) Queenco Ltd. (QNCO) - Financial and Strategic SWOT Analysis Review', 
'(I%Z) Queenco Ltd%(QNCO)%Financial%Strategic SWOT Analysis Review'),
('@Comm Corporation (ATCM) - Strategic SWOT Analysis Review', '@Comm 
Corporation (ATCM)%Strategic SWOT Analysis Review'),
('010017 Telecom GmbH - Strategic SWOT Analysis Review', '010017 Telecom 
GmbH%Strategic SWOT Analysis Review'),
('1 & 1 Internet AG - Strategic SWOT Analysis Review', '1%1 Internet 
AG%Strategic SWOT Analysis Review'),
('1,618 STRICT AB (STRI B) - Financial and Strategic SWOT Analysis Review', 
'1%618 STRICT AB (STRI B)%Financial%Strategic SWOT Analysis Review'),
('1199SEIU Benefit and Pension Funds - Strategic SWOT Analysis Review', 
'1199SEIU Benefit%Pension Funds%Strategic SWOT Analysis Review'),
('1300 Smiles Limited (ONT) - Financial and Strategic SWOT Analysis Review', 
'1300 Smiles Limited (ONT)%Financial%Strategic SWOT Analysis Review'),
('141 Capital Inc. - Strategic SWOT Analysis Review', '141 Capital 
Inc%Strategic SWOT Analysis Review'),
('1855 SA (AL185) - Financial and Strategic SWOT Analysis Review', '1855 SA 
(AL185)%Financial%Strategic SWOT Analysis Review'),
('1pm plc (OPM) - Financial and Strategic SWOT Analysis Review', '1pm plc 
(OPM)%Financial%Strategic SWOT Analysis Review'),
('1st Century Bancshares, Inc. (FCTY) - Financial and Strategic SWOT Analysis 
Review', '1st Century Bancshares%Inc%(FCTY)%Financial%Strategic SWOT Analysis 
Review'),
('1st NRG Corp. (FNRC) - Financial and Strategic SWOT Analysis Review', '1st 
NRG Corp%(FNRC)%Financial%Strategic SWOT Analysis Review'),
('1st RED AG (SXL) - Financial and Strategic SWOT Analysis Review', '1st RED 
AG (SXL)%Financial%Strategic SWOT Analysis Review'),
('20 Microns Limited (533022) - Financial and Strategic SWOT Analysis Review', 
'20 Microns Limited (533022)%Financial%Strategic SWOT Analysis Review'),
('21 Holdings Limited (1003) - Financial and Strategic SWOT Analysis Review', 
'21 Holdings Limited (1003)%Financial%Strategic SWOT Analysis Review'),
('21LADY Co., Ltd. (3346) - Financial and Strategic SWOT Analysis Review', 
'21LADY Co%Ltd%(3346)%Financial%Strategic SWOT Analysis Review'),

==

... and so on for about 15000 rows.

Before I go through changing this to 15000 separate inserts, I'd like to know 
how to "fix" the problem the right way. I looked at all of the system 
variables, and none of them seemed to apply. I probably missed something 
obvious.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Join based upon LIKE

2011-04-29 Thread Jerry Schwartz
>-Original Message-
>From: Jim McNeely [mailto:j...@newcenturydata.com]
>Sent: Thursday, April 28, 2011 6:43 PM
>To: Jerry Schwartz
>Subject: Re: Join based upon LIKE
>
>It just smells wrong, a nicer system would have you joining on ID's of some
>kind so that spelling wouldn't matter. I don't know the full situation for 
>you
>though.
>
[JS] That would be nice, wouldn't it.

In a nutshell, we sell publications. Publishers send us lists of publications. 
Some are new, some replace previous editions. (Think of books, almanacs, and 
newsletters.) Some publishers make do without any product IDs at all, but most 
do use product IDs of some kind.

The problem is that the March edition of a publication might or might not have 
the same product ID as the February edition. I try to match them both by 
product ID and by title. Sometimes the title will "fuzzy match", but the ID 
won't; sometimes the ID will match but the title won't; sometimes (if I'm 
really lucky) they both match; and sometimes the ID matches one product and 
the title matches another.

It's the fuzzy match by title that gives me fits:

- The title might have a date in it ("Rain in Spain in 2010 Q2"), but not 
necessarily in a uniform way ("Rain in Spain Q3 2010").
- The title might have differences in wording or punctuation ("Rain in Spain - 
2010Q2").
- The title might have simple misspellings ("Rain in Spian - Q2 2010").

I've written code that looks for troublesome constructs and replaces them with 
"%": " in ", "-", " to ", "Q2", "2Q", and more and more. So "Rain in Spain - 
2010 Q2" becomes "Rain%Spain%".

I shove those modified titles into a table and do a JOIN ON `prod_title` LIKE 
`wild_title`.

This will miss actual misspellings ("Spain", "Spian"). It will also produce a 
large number of false positives.

On the back end, I have other code that compares the new titles against the 
titles retrieved by that query and decides if they are exact matches, 
approximate matches (here I do use regular expressions, as well as lists of 
known "bad boys"), or false positives. From there on, it's all hand work.

Pretty big nut, eh?

So that's why I need to use LIKE in my JOIN.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>Jim McNeely
>
>On Apr 28, 2011, at 12:28 PM, Jerry Schwartz wrote:
>
>> No takers?
>>
>> -Original Message-
>> From: Jerry Schwartz [mailto:je...@gii.co.jp]
>> Sent: Monday, April 25, 2011 2:34 PM
>> To: 'Mailing-List mysql'
>> Subject: Join based upon LIKE
>>
>> I have to match lists of new publications against our database, so that I 
>> can
>> replace the existing publications in our catalog. For example,
>>
>> "The UK Market for Puppies in February 2011"
>>
>> would be a replacement for
>>
>> "The UK Market for Puppies in December 2010"
>>
>> Unfortunately, the publishers aren't particularly careful with their 
>> titles.
>> One might even say they are perverse. I am likely to get
>>
>> "UK Market: Puppies - Feb 2011"
>>
>> as replacement for
>>
>> "The UK Market for Puppies in December 2010"
>>
>> You can see that a straight match by title is not going to work.
>>
>> Here's what I've been doing:
>>
>> =
>>
>> SET @PUBID = (SELECT pub.pub_id FROM pub WHERE pub.pub_code = 'GD');
>>
>> CREATE TEMPORARY TABLE new_titles (
>>  new_title VARCHAR(255), INDEX (new_title),
>>  new_title_like VARCHAR(255), INDEX (new_title_like)
>>  );
>>
>> INSERT INTO new_titles
>> VALUES
>>
>> ('Alternative Energy Monthly Deal Analysis - M&A and Investment Trends, 
>> April
>> 2011', 'Alternative Energy Monthly Deal Analysis%M&A%Investment Trends%'),
>> ('Asia Pacific Propylene Industry Outlook to 2015 - Market Size, Company
>> Share, Price Trends, Capacity Forecasts of All Active and Planned Plants',
>> 'Asia Pacific Propylene Industry Outlook to%Market Size%Company Share%Price
>> Trends%Capacity Forecasts of All Active%Planned Plants'),
>> ...
>> ('Underground Gas Storage Industry Outlook in North America, 2011 - Details
>of
>> All Operating and Planned Gas Storage Sites to 2014', 'Underground Gas
>Storage
>> Industry Outlook%North America%Details of Al

RE: Join based upon LIKE

2011-04-29 Thread Jerry Schwartz
>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Friday, April 29, 2011 5:56 AM
>To: Jerry Schwartz
>Cc: mysql mailing list
>Subject: Re: Join based upon LIKE
>
>
>- Original Message -----
>> From: "Jerry Schwartz" 
>>
>> [JS] This isn't the only place I have to deal with fuzzy data. :-(
>> Discretion prohibits further comment.
>
>Heh. What you *really* need, is a LART. Preferably one of the spiked variety.
>
[JS] Unless a LART is a demon of some kind, I don't know what it is.

>> A full-text index would work if I were only looking for one title at
>> a time, but I don't know if that would be a good idea if I have a list of
>> 15000 titles. That would pretty much require either 15000 separate queries
>> or a very, very long WHERE clause.
>
>Yes, unfortunately. You should see if you can introduce a form of data
>normalisation - say, shadow fields with corrected entries, or functionality 
>in
>the application that suggests correct entries based on what the user typed.
>
[JS] Except for obvious misspellings and non-ASCII characters, I do not have 
the freedom to muck with the text. If the data were created in-house, I could 
correct it on the way in; but it comes from myriad other companies.

>Or, if the money's there, you could have a look at Amazon Mechanical Turk 
>(yes,
>really) for cheap-ish data correction.
>
[JS] Again, I can't change the data. The titles are assigned by the 
publishers. Think what would happen if Amazon decided to "fix" the titles of 
books. "Ain't Misbehavin" would, at best, turn into "I am not misbehaving".

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Join based upon LIKE

2011-04-28 Thread Jerry Schwartz
>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Thursday, April 28, 2011 4:18 PM
>To: Jerry Schwartz
>Cc: mysql mailing list
>Subject: Re: Join based upon LIKE
>
>
>- Original Message -----
>> From: "Jerry Schwartz" 
>>
>> No takers?
>
>Not willingly, no :-p
>
>This is a pretty complex problem, as SQL itself isn't particularly well-
>equipped to deal with fuzzy data. One approach that might work is using a
>fulltext indexing engine (MySQL's built-in ft indices, or an external one 
>like
>Solr or something) and doing best-fit matches on the keywords of the title
>you're looking for.
>
[JS] This isn't the only place I have to deal with fuzzy data. :-( Discretion 
prohibits further comment.

A full-text index would work if I were only looking for one title at a time, 
but I don't know if that would be a good idea if I have a list of 15000 
titles. That would pretty much require either 15000 separate queries or a 
very, very long WHERE clause.
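
That said, for a single incoming title a probe against the FULLTEXT index that 
already exists on prod.prod_title might look something like this (the search 
string is illustrative):

SELECT prod_num, prod_title,
       MATCH(prod_title) AGAINST ('UK Market Puppies') AS score
FROM prod
WHERE MATCH(prod_title) AGAINST ('UK Market Puppies')
ORDER BY score DESC
LIMIT 10;
-- natural-language mode ignores words that appear in more than half the rows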


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



FW: Join based upon LIKE

2011-04-28 Thread Jerry Schwartz
No takers?

-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Monday, April 25, 2011 2:34 PM
To: 'Mailing-List mysql'
Subject: Join based upon LIKE

I have to match lists of new publications against our database, so that I can 
replace the existing publications in our catalog. For example,

"The UK Market for Puppies in February 2011"

would be a replacement for

"The UK Market for Puppies in December 2010"

Unfortunately, the publishers aren't particularly careful with their titles. 
One might even say they are perverse. I am likely to get

"UK Market: Puppies - Feb 2011"

as replacement for

"The UK Market for Puppies in December 2010"

You can see that a straight match by title is not going to work.

Here's what I've been doing:

=

SET @PUBID = (SELECT pub.pub_id FROM pub WHERE pub.pub_code = 'GD');

CREATE TEMPORARY TABLE new_titles (
new_title VARCHAR(255), INDEX (new_title),
new_title_like VARCHAR(255), INDEX (new_title_like)
);

INSERT INTO new_titles
VALUES

('Alternative Energy Monthly Deal Analysis - M&A and Investment Trends, April 
2011', 'Alternative Energy Monthly Deal Analysis%M&A%Investment Trends%'),
('Asia Pacific Propylene Industry Outlook to 2015 - Market Size, Company 
Share, Price Trends, Capacity Forecasts of All Active and Planned Plants', 
'Asia Pacific Propylene Industry Outlook to%Market Size%Company Share%Price 
Trends%Capacity Forecasts of All Active%Planned Plants'),
...
('Underground Gas Storage Industry Outlook in North America, 2011 - Details of 
All Operating and Planned Gas Storage Sites to 2014', 'Underground Gas Storage 
Industry Outlook%North America%Details of All Operating%Planned Gas Storage 
Sites to%'),
('Uveitis Therapeutics - Pipeline Assessment and Market Forecasts to 2017', 
'Uveitis Therapeutics%Pipeline Assessment%Market Forecasts to%');

SELECT prod.prod_title AS `Title IN Database`,
new_titles.new_title AS `Title IN Feed`,
prod.prod_num AS `ID`
FROM new_titles JOIN prod ON prod.prod_title LIKE (new_titles.new_title_like)
AND prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY new_titles.new_title;
==

(I've written code that substitutes "%" for certain strings that I specify, 
and there is some trial and error involved.)

Here's how MySQL handles that SELECT:

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: new_titles
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 47
Extra: Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id
  key: pub_id
  key_len: 48
  ref: const
 rows: 19607
Extra: Using where
=

Here's the important part of the table `prod`:

=

   Table: prod
Create Table: CREATE TABLE `prod` (
  `prod_id` varchar(15) NOT NULL DEFAULT '',
  `prod_num` mediumint(6) unsigned DEFAULT NULL,
  `prod_title` varchar(255) DEFAULT NULL,
  `prod_type` varchar(2) DEFAULT NULL,
  `prod_vat_pct` decimal(5,2) DEFAULT NULL,
  `prod_discont` tinyint(1) DEFAULT NULL,
  `prod_replacing` mediumint(6) unsigned DEFAULT NULL,
  `prod_replaced_by` mediumint(6) unsigned DEFAULT NULL,
  `prod_ready` tinyint(1) DEFAULT NULL,
  `pub_id` varchar(15) DEFAULT NULL,
...
  PRIMARY KEY (`prod_id`),
  UNIQUE KEY `prod_num` (`prod_num`),
  KEY `prod_pub_prod_id` (`prod_pub_prod_id`),
  KEY `pub_id` (`pub_id`),
  KEY `prod_title` (`prod_title`),
  FULLTEXT KEY `prod_title_fulltext` (`prod_title`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

=

This works reasonably well for a small number (perhaps 200-300) of new 
products; but now I've been handed a list of over 15000 to stuff into the 
table `new_titles`! This motivates me to wonder if there is a better way, 
since I expect this to take a very long time.

Suggestions?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Error in accept: Too many open files

2011-04-26 Thread Jerry Schwartz
You are seeing *NIX error messages. You need to increase the operating system 
settings that control the number of files that a process can open.

I'm very rusty, and never really used Linux, so I can't tell you the exact 
parameters. There are probably two: total number of open files, and files per 
process.
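
On the MySQL side the relevant settings can at least be inspected from SQL; the 
OS limits themselves (ulimit -n for the mysqld user, or limits.conf) are set 
outside MySQL:

SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS LIKE 'Open_files';
-- open_files_limit in my.cnf and the OS per-process limit usually
-- have to be raised together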

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: Brent Clark [mailto:brentgclarkl...@gmail.com]
>Sent: Tuesday, April 26, 2011 3:44 AM
>To: mysql mailing list
>Subject: Error in accept: Too many open files
>
>Hiya
>
>I recently imported data on a new server (5.1). But before the importing
>I added the feature 'innodb_file_per_table'.
>
>I now want to enable Master - Master replication, but the problem is, im
>seeing the following
>
>---
>---
>
>Apr 24 23:32:50 maj-web01 mysqld: 110424 23:32:50 [ERROR] Error in
>accept: Too many open files
>Apr 24 23:35:03 maj-web01 mysqld: 110424 23:35:03 [ERROR]
>/usr/sbin/mysqld: Can't open file: './maj_fs2/sites.frm' (errno: 24)
>---
>---
>
>
>Ive been going through High Performance MySQL, and a key area the book
>address is the config option 'innodb_open_files'.
>
>If I do a file count for *.ibd files, I only have 147 files, and I see
>the limit is 300
>
>mysql> show global variables like 'innodb_open_files'\G;
>*** 1. row ***
>Variable_name: innodb_open_files
> Value: 300
>1 row in set (0.00 sec)
>
>Could the other option to look at be 'open_files_limit'?
>
>The database is a mixture of InnoDB and MyISAM.
>
>I really need to get replication working, if someone could help my
>understand this issue, it would be appreciated.
>
>Regards
>Brent Clark
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Join based upon LIKE

2011-04-25 Thread Jerry Schwartz
I have to match lists of new publications against our database, so that I can 
replace the existing publications in our catalog. For example,

"The UK Market for Puppies in February 2011"

would be a replacement for

"The UK Market for Puppies in December 2010"

Unfortunately, the publishers aren't particularly careful with their titles. 
One might even say they are perverse. I am likely to get

"UK Market: Puppies - Feb 2011"

as replacement for

"The UK Market for Puppies in December 2010"

You can see that a straight match by title is not going to work.

Here's what I've been doing:

=

SET @PUBID = (SELECT pub.pub_id FROM pub WHERE pub.pub_code = 'GD');

CREATE TEMPORARY TABLE new_titles (
new_title VARCHAR(255), INDEX (new_title),
new_title_like VARCHAR(255), INDEX (new_title_like)
);

INSERT INTO new_titles
VALUES

('Alternative Energy Monthly Deal Analysis - M&A and Investment Trends, April 
2011', 'Alternative Energy Monthly Deal Analysis%M&A%Investment Trends%'),
('Asia Pacific Propylene Industry Outlook to 2015 - Market Size, Company 
Share, Price Trends, Capacity Forecasts of All Active and Planned Plants', 
'Asia Pacific Propylene Industry Outlook to%Market Size%Company Share%Price 
Trends%Capacity Forecasts of All Active%Planned Plants'),
...
('Underground Gas Storage Industry Outlook in North America, 2011 - Details of 
All Operating and Planned Gas Storage Sites to 2014', 'Underground Gas Storage 
Industry Outlook%North America%Details of All Operating%Planned Gas Storage 
Sites to%'),
('Uveitis Therapeutics - Pipeline Assessment and Market Forecasts to 2017', 
'Uveitis Therapeutics%Pipeline Assessment%Market Forecasts to%');

SELECT prod.prod_title AS `Title IN Database`,
new_titles.new_title AS `Title IN Feed`,
prod.prod_num AS `ID`
FROM new_titles JOIN prod ON prod.prod_title LIKE (new_titles.new_title_like)
AND prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY new_titles.new_title;
==

(I've written code that substitutes "%" for certain strings that I specify, 
and there is some trial and error involved.)

Here's how MySQL handles that SELECT:

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: new_titles
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 47
Extra: Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id
  key: pub_id
  key_len: 48
  ref: const
 rows: 19607
Extra: Using where
=

Here's the important part of the table `prod`:

=

   Table: prod
Create Table: CREATE TABLE `prod` (
  `prod_id` varchar(15) NOT NULL DEFAULT '',
  `prod_num` mediumint(6) unsigned DEFAULT NULL,
  `prod_title` varchar(255) DEFAULT NULL,
  `prod_type` varchar(2) DEFAULT NULL,
  `prod_vat_pct` decimal(5,2) DEFAULT NULL,
  `prod_discont` tinyint(1) DEFAULT NULL,
  `prod_replacing` mediumint(6) unsigned DEFAULT NULL,
  `prod_replaced_by` mediumint(6) unsigned DEFAULT NULL,
  `prod_ready` tinyint(1) DEFAULT NULL,
  `pub_id` varchar(15) DEFAULT NULL,
...
  PRIMARY KEY (`prod_id`),
  UNIQUE KEY `prod_num` (`prod_num`),
  KEY `prod_pub_prod_id` (`prod_pub_prod_id`),
  KEY `pub_id` (`pub_id`),
  KEY `prod_title` (`prod_title`),
  FULLTEXT KEY `prod_title_fulltext` (`prod_title`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

=

This works reasonably well for a small number (perhaps 200-300) of new 
products; but now I've been handed a list of over 15000 to stuff into the 
table `new_titles`! This motivates me to wonder if there is a better way, 
since I expect this to take a very long time.

Suggestions?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Memory Usage.

2011-04-25 Thread Jerry Schwartz
>-Original Message-
>From: Andrés Tello [mailto:mr.crip...@gmail.com]
>Sent: Monday, April 25, 2011 10:24 AM
>To: Mailing-List mysql
>Subject: Memory Usage.
>
>How can I know how memory is being used by Mysql?
>
>I have 32GB Ram, but I can't make mysql to use more than 12GB Ram , and even
>that I have tables over 40GB...
>
>Thanks! xD
[JS] The amount of memory used will be the smallest of

1. Available physical memory
2. Usable memory (this is a limitation built into the code design, both at OS 
level and at the MySQL/storage engine level)
3. Allowed memory (a configuration setting)
4. Needed memory

1 and 2 are usually easy to find out.

When it comes to 4, things get very murky. As someone else said, the goal is 
to use memory wisely. That means finding a trade-off between efficient use of 
memory and speed. Once you understand and make some choices with 4, you can go 
back and tinker with 3.
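
The "allowed" settings in 3 are easy to read back from the server; which ones 
matter depends on your storage engines, and these are only the usual suspects:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';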

As any performance consultant worth his salt will tell you, "It depends."

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com










-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: mysql deal with specail character problem

2011-04-25 Thread Jerry Schwartz
From: 赵琦 [mailto:tyzha...@gmail.com] 
Sent: Thursday, April 21, 2011 9:19 PM
To: Jerry Schwartz
Cc: Johan De Meersman; sstap...@mnsi.net; mysql
Subject: Re: mysql deal with specail character problem

 

thanks for reply!

 

The character set is latin1, and the key field is mangled by the email client.

 

[JS] When you say that the character set is latin1, do you mean that the 
database is using latin1? That would definitely cause your problem, because the 
data would be truncated at the first character that isn't latin1.

 

Here’s a simple experiment: Add one row to your table, and then SELECT LENGTH() 
of your field. What comes back?
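
Something along these lines, with `a` being the column from your original post 
and the table name a placeholder:

SELECT a, HEX(a), LENGTH(a), CHAR_LENGTH(a)
FROM your_table
LIMIT 5;
-- LENGTH() counts bytes, CHAR_LENGTH() counts characters;
-- if HEX() shows 3F the server stored a literal '?', i.e. the original
-- character was already lost on the way in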

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail: je...@gii.co.jp 

Web site: www.the-infoshop.com <http://www.the-infoshop.com/> 

 

 

 

I uploaded the actual character of '?' as an attachment.

2011/4/22 Jerry Schwartz 



>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Thursday, April 21, 2011 9:56 AM
>To: sstap...@mnsi.net
>Cc: ??; mysql; Jerry Schwartz
>Subject: Re: mysql deal with specail character problem
>
>- Original Message -
>> From: "Steve Staples" 
>>
>> Doesn't the '?-1-1'  mean that it's a joined key?  so the 3
>
>That's what I thought, but I *can* see the characters he's typed, and the last
>of what you see as ? is definitely different.
>

[JS] That's because his font (or e-mail client) mangled the text, and so his
reply wasn't legible.

Our main office is in Japan, so I go through this all the time.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com <http://www.the-infoshop.com/> 
>
>--

>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




 



RE: mysql deal with specail character problem

2011-04-21 Thread Jerry Schwartz


>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Thursday, April 21, 2011 9:56 AM
>To: sstap...@mnsi.net
>Cc: ??; mysql; Jerry Schwartz
>Subject: Re: mysql deal with specail character problem
>
>- Original Message -
>> From: "Steve Staples" 
>>
>> Doesn't the '?-1-1'  mean that it's a joined key?  so the 3
>
>That's what I thought, but I *can* see the characters he's typed, and the last
>of what you see as ? is definitely different.
>
[JS] That's because his font (or e-mail client) mangled the text, and so his 
reply wasn't legible.

Our main office is in Japan, so I go through this all the time.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com
>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: mysql deal with specail character problem

2011-04-21 Thread Jerry Schwartz
>-Original Message-
>From: ?? [mailto:tyzha...@gmail.com]
>Sent: Thursday, April 21, 2011 7:09 AM
>To: mysql
>Subject: mysql deal with specail character problem
>
>Hi:
>I have a table that the field a is primary key;
>I insert a record like this
>insert into table(a,b,c) values('?',1,1)
>Then i insert a record like this
> insert into table(a,b,c) values('?',1,1)
>
>mysql post a error execute failed: Duplicate entry '?-1-1' for key 1;
>
>it is strange, the primary key field is not the same, but i get this error.
>
>any one can help me?
>
[JS] What is the character set and collation for that field? Please post the 
output of

SHOW CREATE TABLE x\G

(Please use the \G, so that the output will fit in your message better.)

There is something wrong with the way your data is going in, because the 
commas are not being seen properly. You are putting in

'?',1,1

but MySQL is showing

'?-1-1'

in the error message! That is a clue.

I have put plenty of Japanese and Chinese characters into my database, so 
don't give up hope.

>Thanks




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: efficient use of varchar?

2011-04-07 Thread Jerry Schwartz
By the way, the weird-looking title is in Japanese (the database/table/field 
are UTF-8).



Some of you might be able to read that.

Is it possible that this is upsetting the ANALYSE procedure?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>Sent: Thursday, April 07, 2011 12:09 PM
>To: 'petya'; 'John G. Heim'
>Cc: mysql@lists.mysql.com
>Subject: RE: efficient use of varchar?
>
>>-Original Message-
>>From: petya [mailto:pe...@petya.org.hu]
>>Sent: Wednesday, April 06, 2011 3:55 PM
>>To: John G. Heim
>>Cc: mysql@lists.mysql.com
>>Subject: Re: efficient use of varchar?
>>
>>Hi,
>>
>>There is a difference between varchar(63) and varchar(38). Instead of
>>selecting MAX(LENGTH()) you can do PROCEDURE ANALYZE() on the table,
>>which will tell you about the optimal record type for the data you
>>currently have in the table.
>>
>[JS] Okay, it's my turn to be puzzled. I never used PROCEDURE ANALYSE (and 
>you
>**have** to use the UK English spelling, apparently), so I decided to try it.
>I have a VARCHAR(255) field named `prod_title`. This is what came out:
>
>==
>us-gii >SELECT prod_title FROM prod PROCEDURE ANALYSE()\G
>*** 1. row ***
> Field_name: giiexpr_db.prod.prod_title
>  Min_value:
>
>High Throughput Screening 2003: Improving Strategies, Technologies, and
>Productivity
>  Max_value:
>?½ÿ?¡ë?¡ªµáí?fÑ?»å??ºµ¥â?«í?Éå??Ä?ê???ª«??Åè?Çâµá??»ä«??èíµëï?åî
> Min_length: 2
> Max_length: 255
>   Empties_or_zeros: 0
>  Nulls: 0
>Avg_value_or_avg_length: 54.0701
>Std: NULL
>  Optimal_fieldtype: TINYTEXT NOT NULL
>1 row in set (0.23 sec)
>
>==
>
>Aside from Max_value, which I'll wonder about later, why is the
>Optimal_fieldtype TINYTEXT?
>
>Regards,
>
>Jerry Schwartz
>Global Information Incorporated
>195 Farmington Ave.
>Farmington, CT 06032
>
>860.674.8796 / FAX: 860.674.8341
>E-mail: je...@gii.co.jp
>Web site: www.the-infoshop.com
>
>
>
>
>>When you are using unreasonably large varchar columns, and mysql decides
>>to create an in-memory temporary table with the MEMORY storage engine,
>>practically all varchar(n) fields will be converted to char(n) fields,
>>because memory storage engine doesn't support variable length data.
>>
>>Peter Boros
>>
>>On 04/06/2011 08:28 PM, John G. Heim wrote:
>>> Does it make any difference if I allocate a particular number of bytes
>>> for a varchar? I mean, just as an example, maybe its more efficient to
>>> use a power of 2 as the field length. Or maybe power of 2 minus 1. I'm
>>> guessing not and that anything less than 255 is the same.
>>>
>>> I'm converting some data in a spreadsheet and I have to create a table
>>> with about 150 columns. So I created all the columns varchar(255) and
>>> now I'm asking mysql to show me the longest value in each column.
>>> "SELECT MAX(LENGTH(column99)) FROM TABLE99". Then I've been modifying
>>> the "CREATE TABLE" code to accomodate the longest value plus a little
>>> more. So if the longest value in the column is 38 characters, I'd
>>> probably make that a VARCHAR(50). But maybe I might as well make that 63
>>> or 64. Or maybe I'm wasting my time and should leave them all 255.
>>>
>>>
>>>
>>
>>--
>>MySQL General Mailing List
>>For list archives: http://lists.mysql.com/mysql
>>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp
>
>
>
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: efficient use of varchar?

2011-04-07 Thread Jerry Schwartz
>-Original Message-
>From: petya [mailto:pe...@petya.org.hu]
>Sent: Wednesday, April 06, 2011 3:55 PM
>To: John G. Heim
>Cc: mysql@lists.mysql.com
>Subject: Re: efficient use of varchar?
>
>Hi,
>
>There is a difference between varchar(63) and varchar(38). Instead of
>selecting MAX(LENGTH()) you can do PROCEDURE ANALYZE() on the table,
>which will tell you about the optimal record type for the data you
>currently have in the table.
>
[JS] Okay, it's my turn to be puzzled. I never used PROCEDURE ANALYSE (and you 
**have** to use the UK English spelling, apparently), so I decided to try it. 
I have a VARCHAR(255) field named `prod_title`. This is what came out:

==
us-gii >SELECT prod_title FROM prod PROCEDURE ANALYSE()\G
*** 1. row ***
 Field_name: giiexpr_db.prod.prod_title
  Min_value:

High Throughput Screening 2003: Improving Strategies, Technologies, and 
Productivity
  Max_value: 
?½ÿ?¡ë?¡ªµáí?fÑ?»å??ºµ¥â?«í?Éå??Ä?ê???ª«??Åè?Çâµá??»ä«??èíµëï?åî
 Min_length: 2
 Max_length: 255
   Empties_or_zeros: 0
  Nulls: 0
Avg_value_or_avg_length: 54.0701
Std: NULL
  Optimal_fieldtype: TINYTEXT NOT NULL
1 row in set (0.23 sec)

==

Aside from Max_value, which I'll wonder about later, why is the 
Optimal_fieldtype TINYTEXT?
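
(For what it's worth, ANALYSE also accepts two optional arguments, the maximum 
number of distinct values it will track and the maximum memory it may use while 
deciding whether to propose an ENUM, so its suggestions can be reined in a 
little:)

SELECT prod_title FROM prod PROCEDURE ANALYSE(10, 256)\G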

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>When you are using unreasonably large varchar columns, and mysql decides
>to create an in-memory temporary table with the MEMORY storage engine,
>practically all varchar(n) fields will be converted to char(n) fields,
>because memory storage engine doesn't support variable length data.
>
>Peter Boros
>
>On 04/06/2011 08:28 PM, John G. Heim wrote:
>> Does it make any difference if I allocate a particular number of bytes
>> for a varchar? I mean, just as an example, maybe its more efficient to
>> use a power of 2 as the field length. Or maybe power of 2 minus 1. I'm
>> guessing not and that anything less than 255 is the same.
>>
>> I'm converting some data in a spreadsheet and I have to create a table
>> with about 150 columns. So I created all the columns varchar(255) and
>> now I'm asking mysql to show me the longest value in each column.
>> "SELECT MAX(LENGTH(column99)) FROM TABLE99". Then I've been modifying
>> the "CREATE TABLE" code to accomodate the longest value plus a little
>> more. So if the longest value in the column is 38 characters, I'd
>> probably make that a VARCHAR(50). But maybe I might as well make that 63
>> or 64. Or maybe I'm wasting my time and should leave them all 255.
>>
>>
>>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: "IF"

2011-03-15 Thread Jerry Schwartz
>-Original Message-
>From: Sándor Halász [mailto:h...@tbbs.net]
>Sent: Monday, March 14, 2011 8:52 PM
>To: Johan De Meersman
>Cc: mysql@lists.mysql.com
>Subject: Re: "IF"
>
>>>>> 2011/03/14 16:08 +0100, Johan De Meersman >>>>
>I'm afraid I'm not authoritative on this, but it seems to me that it would be
>very very bad if the third, unused expression were to be evaluated - not only
>from a performance point of view, but who is to say that that expression is 
>not
>a user-defined function that modifies the database ?
><<<<<<<<
>Yes, but Access's "IIF", of the same use, evaluates all three, and the
>documentation explicitly says so. MySQL's, that I have seen, says neither.
>Assuming the worst is safer, and then one uses CASE ..., but if not, 
>
>
[JS] Also some optimizers (and I don't know if MySQL is that clever) will 
figure out which term is the cheapest to evaluate, and will change the order 
of evaluation.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: how to RE-add innoDB storage?

2011-03-04 Thread Jerry Schwartz
>-Original Message-
>From: ed [mailto:eth...@earthlink.net]

[JS] 

>
>I guess wordwrap is going to mess this up;
>
>
>mysql> show engines ;
[JS] Next time, try

SHOW ENGINES\G

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: best practice: mysql_multi, VMs w/single instance per or doesn't matter?

2011-03-04 Thread Jerry Schwartz
The biggest issue for me is that you want your development environment to be 
as identical as possible to the production environment (to avoid mistakes when 
you move your application over). You don't want your production environment to 
accidentally access your development data; but at the same time you want to 
make sure that your development isn't accidentally playing with the "live" 
data.

Since I'm running a *AMP application, I can just use "localhost" and not worry 
about forgetting to change the database name references.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: Johan De Meersman [mailto:vegiv...@tuxera.be]
>Sent: Friday, March 04, 2011 6:21 AM
>To: Sid Lane
>Cc: MySql
>Subject: Re: best practice: mysql_multi, VMs w/single instance per or doesn't
>matter?
>
>
>Other people have answered with pros and cons of virtualisation, but I would
>rather ask another question: why do you feel it necessary to split up the
>database?
>
>If it's only used for QC, it's probably not in intensive use. Why would you 
>go
>through the bother of splitting it up? You're staying on the same server,
>apparently, so you'll have to decide which instance gets what part of cpu,
>memory and other resources, you'll have to provide separate backup for all
>instances, et cetera; while leaving things as they are is zero effort.
>
>What is the problem with the current setup that will be resolved by 
>splitting?
>
>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Get MySQL server IP address in SQL

2011-03-03 Thread Jerry Schwartz
Do you have to worry about named pipes?
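
For what it's worth, the bits that are visible from SQL itself (including the 
server_id idea that comes up further down the thread) can be pulled in one 
statement; these are standard system variables:

SELECT @@global.server_id AS server_id,
       @@port             AS port,
       @@hostname         AS host;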

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: Claudio Nanni [mailto:claudio.na...@gmail.com]
>Sent: Thursday, March 03, 2011 4:21 PM
>To: sstap...@mnsi.net
>Cc: mysql@lists.mysql.com
>Subject: Re: Get MySQL server IP address in SQL
>
>Hi Steve,
>
>Thanks for your tip.
>
>It's not a cluster,
>I am building a tool to monitor mysql performances and this information(*IP*)
>is required in a stored procedure on the mysql-agents installations.
>Being it possible to have multiple mysql instances on the same server the
>only unique identifier would be the couple [IP-PORT],
>the port you can get it easily, the IP I did not find it.
>I don't trust the [hostname] since it is slightly more likely to be changed,
>while the IP on eth0(:0) is a bit less likely to be changed.
>I would like this information when then you collect data in the same vault.
>Also the IP has some risky values:
>*IP*:  127.0.0.1, 192.x.x.x, 10.x.x.x
>While the hostname risky/not unique values would be:
>*HOSTNAME*: any non official DNS name
>
>While IP/HOSTNAME should be unique on any two systems in the "world", the
>PORT is used to distinguish instances on the same host.
>
>Thanks
>
>Claudio
>
>
>2011/3/3 Steve Staples 
>
>> On Thu, 2011-03-03 at 13:43 -0600, Chris W wrote:
>> >
>> > On 3/2/2011 5:59 PM, Reindl Harald wrote:
>> > > Am 03.03.2011 00:31, schrieb Claudio Nanni:
>> > >> Anyone knows how to get the server* IP address* thru SQL?
>> > > no, because it is nonsense and has nothing to do with a db-server
>> > >
>> > > if you connect via tcp you know the ip
>> > >
>> >
>> > Isn't that kind of like going to someones home, knocking on their door,
>> > and asking, "Where do you live?"
>> >
>> > Chris W
>> >
>>
>> What if this is a "load balanced" cluster?  Doesn't that setup query
>> "serverA", and in turn, "serverA" finds the least busiest server in the
>> "cluster", which could be "serverY", therefore you would have no idea
>> which server the query was run on?
>>
>> But, as far as I can tell, you could only get the "server_id" (which
>> would have to be unique anyway in the cluster), so you could just add
>> this to the query:
>> SELECT @@global.server_id
>>
>> Then you can figure out elsewhere what 'server_id' corresponds to what
>> server ip address.
>>
>> Just trying to think of other solutions on why the OP would want this
>> data...
>>
>> Steve
>>
>>
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:
>> http://lists.mysql.com/mysql?unsub=claudio.na...@gmail.com
>>
>>
>
>
>--
>Claudio




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Query help

2011-03-02 Thread Jerry Schwartz
If you want one row for each combination, you'll need either a temporary table 
or a sub-query. Try this:

SELECT ip_address, login_id
FROM basic_table
JOIN
 (SELECT
  ip_address
  FROM
  basic_table
 GROUP BY
  ip_address
 HAVING
 COUNT(*) > 1) AS x
ON basic_table.ip_address = x.ip_address;
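
A variant that keys on distinct email addresses, which is closer to what the 
follow-up question asks for; column names are as given in the thread:

SELECT b.ip_address, b.email_address
FROM basic_table AS b
JOIN (SELECT ip_address
      FROM basic_table
      GROUP BY ip_address
      HAVING COUNT(DISTINCT email_address) > 1) AS x
  ON b.ip_address = x.ip_address
ORDER BY b.ip_address, b.email_address;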

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
>Sent: Wednesday, March 02, 2011 10:12 AM
>To: Jerry Schwartz
>Cc: [MySQL]
>Subject: Re: Query help
>
>Thanks for the response.  This is what I was after.  Although, I am looking
>to find out the email addresses used to login from the same IP ?
>
>On Wed, Mar 2, 2011 at 2:49 PM, Jerry Schwartz  wrote:
>
>>
>> >-Original Message-
>> >From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
>> >Sent: Wednesday, March 02, 2011 6:00 AM
>> >To: [MySQL]
>> >Subject: Query help
>> >
>> >Hi
>> >
>> >I've the following basic table
>> >
>> >login_id
>> >email_address
>> >ip_address
>> >
>> >I want to extract all records from this table in which a user has used the
>> >same IP address but different email address to login ?
>> >
>> >Thanks,
>> >Neil
>> [JS] I haven't looked at my code lately, but I'm pretty sure that
>>
>> SELECT
>>ip_address
>> FROM
>>basic_table
>> GROUP BY
>>    ip_address
>> HAVING
>>COUNT(*) > 1;
>>
>> is what you want. You don't need to group on login_id. And, as Claudio
>> said,
>>
>> SELECT
>>ip_address, GROUP_CONCAT(login_id SEPARATOR ', ') AS list_of_login_ids
>>
>> will give you the login IDs for each such address as well.
>>
>> Regards,
>>
>> Jerry Schwartz
>> Global Information Incorporated
>> 195 Farmington Ave.
>> Farmington, CT 06032
>>
>> 860.674.8796 / FAX: 860.674.8341
>> E-mail: je...@gii.co.jp
>> Web site: www.the-infoshop.com
>>
>>
>>
>>
>>
>>
>>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Query help

2011-03-02 Thread Jerry Schwartz

>-Original Message-
>From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
>Sent: Wednesday, March 02, 2011 6:00 AM
>To: [MySQL]
>Subject: Query help
>
>Hi
>
>I've the following basic table
>
>login_id
>email_address
>ip_address
>
>I want to extract all records from this table in which a user has used the
>same IP address but different email address to login ?
>
>Thanks,
>Neil
[JS] I haven't looked at my code lately, but I'm pretty sure that

SELECT
ip_address
FROM
basic_table
GROUP BY
ip_address
HAVING
COUNT(*) > 1;

is what you want. You don't need to group on login_id. And, as Claudio said,

SELECT
ip_address, GROUP_CONCAT(login_id SEPARATOR ', ') AS list_of_login_ids

will give you the login IDs for each such address as well.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: How to export data with column names?

2011-02-14 Thread Jerry Schwartz
>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Sunday, February 13, 2011 4:50 PM
>To: mysql@lists.mysql.com
>Subject: How to export data with column names?
>
>I want to use
>
>select * into outfile "myfile.txt" from table1;
>
>and have it export the data as tab delimited but with the column names. I
>need the column names because the import utility will use that to create
>the table in another (non-MySQL) database.
>
[JS] >mysql -hxxx -uyyy -pzzz -e "select * from table1" > myfile.txt
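
If it has to be SELECT ... INTO OUTFILE, the usual workaround is to prepend the 
header row with a UNION; the column names and file path below are placeholders:

SELECT 'col_a', 'col_b'
UNION ALL
SELECT col_a, col_b
FROM table1
INTO OUTFILE '/tmp/myfile.txt'
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
-- the constant row normally comes out first; add a sort key if you
-- need that guaranteed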

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>As it stands, I can't get the column names to appear.
>
>Mike
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Replacing MS SQL with MySql

2011-02-09 Thread Jerry Schwartz
 

From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Wednesday, February 09, 2011 1:10 PM
To: Jerry Schwartz
Cc: Y z; mysql@lists.mysql.com
Subject: Re: Replacing MS SQL with MySql

 

 

On Wed, Feb 9, 2011 at 7:02 PM, Jerry Schwartz  wrote:

[JS] Actually, I've done a lot of tracing recently (to solve my own
performance problems), and Access 2007 is very clever at pulling parts of a
dataset and a number of other things. For example, when you are browsing a
dataset Access will pre-fetch records 20 at a time.


Good to know that performance enhancements have been made, but what I'm 
specifically thinking of is joins between remote tables - have you got data on 
how it handles those ? 

 

[JS] I don’t have any data at the moment. I know that I tried outsmarting 
Access with pass-through queries, with little luck.

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail: je...@gii.co.jp 

Web site: www.the-infoshop.com <http://www.the-infoshop.com/> 

 

 

-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel



RE: Replacing MS SQL with MySql

2011-02-09 Thread Jerry Schwartz
>-Original Message-
>From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
>Meersman
>Sent: Wednesday, February 09, 2011 1:18 AM
>To: Y z
>Cc: mysql@lists.mysql.com
>Subject: Re: Replacing MS SQL with MySql
>
>No way to do that directly; however, using the MySQL ODBC connector you can
>get at least a) and c) to play passthrough. Performance will likely suffer,
>though; especially Access' Jet Engine has a tendency to pull in full remote
>datasets instead of passing through the query.
>
>
[JS] Actually, I've done a lot of tracing recently (to solve my own 
performance problems), and Access 2007 is very clever at pulling parts of a 
dataset and a number of other things. For example, when you are browsing a 
dataset Access will pre-fetch records 20 at a time.

There are some places where it bogs down badly, mostly when it needs to make 
sure that its local data cache is synchronized with the MySQL database. That's 
where I'm having trouble.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>On Wed, Feb 9, 2011 at 6:36 AM, Y z  wrote:
>
>>
>> I have a windows app that wants to talk to either a) an access database, b)
>> a MS Sql Express database, or c) a MS Sql 2008 database.
>>
>> Can anyone please point me in the direction of configuring My Sql to
>> imitate any of the above?
>>
>> Thanks!
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>>
>>
>
>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Replacing MS SQL with MySql

2011-02-09 Thread Jerry Schwartz
You should probably take this over to the MyODBC list.

I suggest that you get the Bullzip programs and look at what they do by way of 
transferring the database structure. You'll learn a lot from that.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: Y z [mailto:yan...@hotmail.com]
>Sent: Wednesday, February 09, 2011 12:37 AM
>To: mysql@lists.mysql.com
>Subject: Replacing MS SQL with MySql
>
>
>I have a windows app that wants to talk to either a) an access database, b) a
>MS Sql Express database, or c) a MS Sql 2008 database.
>
>Can anyone please point me in the direction of configuring My Sql to imitate
>any of the above?
>
>Thanks!
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: How do increase memory allocated to MySQL?

2011-02-04 Thread Jerry Schwartz
>-Original Message-
>From: Yannis Haralambous [mailto:yannis.haralamb...@telecom-bretagne.eu]
>Sent: Thursday, February 03, 2011 10:18 PM
>To: David Brian Chait
>Cc: mysql@lists.mysql.com; y...@mpi-inf.mpg.de
>Subject: Re: How do increase memory allocated to MySQL?

[JS] 


>
>the query was just
>
>SELECT * FROM wasfoundin WHERE yakoright LIKE '%geography%'
>
[JS] I think it is the leading "%" that is causing the problem.

I don't have a ready solution. Perhaps a full-text index would help, but I'm 
not sure.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>Le 4 févr. 2011 à 11:14, David Brian Chait a écrit :
>
>> Yannis,
>>  How is the data structured? Can you give us an example of the queries
>that you are trying to run? Do you have indexes in place? A very inefficient
>query, or poorly structured database can lead to this type of timeout issue 
>on
>the type of low end hardware that you are using.
>>
>> Thanks,
>> David
>>
>> -Original Message-
>> From: Yannis Haralambous [mailto:yannis.haralamb...@telecom-bretagne.eu]
>> Sent: Thursday, February 03, 2011 5:43 PM
>> To: mysql@lists.mysql.com
>> Cc: y...@mpi-inf.mpg.de
>> Subject: How do increase memory allocated to MySQL?
>>
>> Hi everybody,
>>
>> I have loaded a very big amount of data in my MySQL database (coming from 
>> the
>YAGO project):
>>
>> -rw-rw  1 yannis  admin   65  3 fév 16:07 db.opt
>> -rw-rw  1 yannis  admin   6392030392  3 fév 21:35 wasfoundin.MYD
>> -rw-rw  1 yannis  admin  11085793280  4 fév 04:54 wasfoundin.MYI
>> -rw-rw  1 yannis  admin 8668  3 fév 16:09 wasfoundin.frm
>>
>> as you can see the MYI file is 11Gb and the MYD file 6.4Gb.
>>
>> Whenever I try to send a query through phpmyadmin, it crashes and I get
>returned to the initial page of phpmyadmin.
>> When I send an SQL query directly, it works, but it takes many minutes (a
>simple SELECT will take about 10 minutes).
>>
>> I'm using the latest version of MAMP under MacOS X (on a MacBook Pro 2.6 
>> GHz
>Intel Core 2 with 4Gb of RAM).
>>
>> What can I do to make phpmyadmin work? Is it a chance to gain more speed 
>> when
>interacting through regular SQL queries, or is it hopeless?
>> Where can I find more information about using MySQL with such big tables?
>>
>> Thanks in advance
>>
>> --
>> ---
>> Yannis Haralambous
>> Directeur d'études
>> ADRESSE TEMPORAIRE :
>> University of Aizu
>> Aizu-Wakamatsu, Fukushima-ken  965-8580, Japon ADRESSE PERMANENTE :
>> Institut Télécom, Télécom Bretagne
>> Département Informatique
>> UMR CNRS 3192 Lab-STICC
>> Technopôle Brest Iroise
>> CS 83818, 29238 Brest Cedex 3, France
>> Tel: +33 2 29 00 14 27
>> Fax: +33 2 29 00 12 82
>> Email: yannis.haralamb...@telecom-bretagne.eu
>> Internet: http://omega.enstb.org/yannis
>> ICBM address: 48°21'31.57"N 4°34'16.76"W
>> Twitter: y_haralambous
>> ---
>> ...pour distinguer l'extérieur d'un aquarium, mieux vaut n'être pas poisson
>>
>> ...the ball I threw while playing in the park has not yet reached the 
>> ground
>>
>> Es gab eine Zeit, wo ich nur ungern über Schubert sprechen, nur Nächtens 
>> den
>Bäumen und Sternen von ihm vorerzählen mögen.
>>
>
>--
>---
>Yannis Haralambous
>Directeur d'études
>ADRESSE TEMPORAIRE :
>University of Aizu
>Aizu-Wakamatsu, Fukushima-ken  965-8580, Japon
>ADRESSE PERMANENTE :
>Institut Télécom, Télécom Bretagne
>Département Informatique
>UMR CNRS 3192 Lab-STICC
>Technopôle Brest Iroise
>CS 83818, 29238 Brest Cedex 3, France
>Tel: +33 2 29 00 14 27
>Fax: +33 2 29 00 12 82
>Email: yannis.haralamb...@telecom-bretagne.eu
>Internet: http://omega.enstb.org/yannis
>ICBM address: 48°21'31.57"N 4°34'16.76"W
>Twitter: y_haralambous
>---
>...pour distinguer l'extérieur d'un aquarium,
>mieux vaut n'être pas poisson
>
>...the ball I threw while playing in the park
>has not yet reached the ground
>
>Es gab eine Zeit, wo ich nur ungern über Schubert sprechen,
>nur Nächtens den Bäumen und Sternen von ihm vorerzählen mögen.





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: map polygon data for popular us cities

2011-02-03 Thread Jerry Schwartz
I remember a few years back I was doing some terrain modeling, and I think 
there was a repository of files on a US government web site. It's been so long 
that I don't remember what the file format was, but I was importing them into 
a 3D graphics program. There might be flattened versions.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: Geert-Jan Brits [mailto:gbr...@gmail.com]
>Sent: Thursday, February 03, 2011 3:44 AM
>To: viraj
>Cc: Gavin Towey; Johan De Meersman; mysql@lists.mysql.com
>Subject: Re: map polygon data for popular us cities
>
>Besides the above,
>I recall Flickr (you know the photo-sharing site)  being able to extract
>(kind of rough) polygons from exif-data of photos (containing lat/longs) .
>Since often these photo's are tagged by city they could extract city
>boundaries.
>
>See this old posts I dug up quickly, I'm not sure if development is still
>active, etc.
>http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/
>http://biodivertido.blogspot.com/2008/11/flickr-creating-polygons-out-of-
>points.html
>
>hth,
>Geert-Jan
>
>2011/2/3 viraj 
>
>> thank you all for the links and notes.
>>
>> ~viraj
>>
>> On Thu, Feb 3, 2011 at 7:08 AM, Gavin Towey  wrote:
>> > Openstreetmap.org is as close as you'll get. I'd be surprised if they
>> have shapes for cities beyond just lat/lon point data, but they should have
>> shapes data for zips, counties, states and countries if I recall correctly.
>> >
>> > -Original Message-
>> > From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan
>> De Meersman
>> > Sent: Wednesday, February 02, 2011 7:59 AM
>> > To: viraj
>> > Cc: mysql@lists.mysql.com
>> > Subject: Re: map polygon data for popular us cities
>> >
>> > On Wed, Feb 2, 2011 at 11:30 AM, viraj  wrote:
>> >
>> >> dear list,
>> >> where can i find a list of map polygons for united states cities? any
>> >> open database? or tool to obtain correct coordinates?
>> >>
>> >
>> > A bit offtopic here, but I suspect that most such databases will be
>> > proprietary and thus payable through the nose. Have a look at the
>> > OpenStreetMap project, I suspect their database might be accessible under
>> > some open license.
>> >
>> > --
>> > Bier met grenadyn
>> > Is als mosterd by den wyn
>> > Sy die't drinkt, is eene kwezel
>> > Hy die't drinkt, is ras een ezel
>> >
>> > IMPORTANT: This email message is intended only for the use of the
>> individual to whom, or entity to which, it is addressed and may contain
>> information that is privileged, confidential and exempt from disclosure
>> under applicable law. If you are NOT the intended recipient, you are hereby
>> notified that any use, dissemination, distribution or copying of this
>> communication is strictly prohibited.  If you have received this
>> communication in error, please reply to the sender immediately and
>> permanently delete this email. Thank you.
>> >
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=gbr...@gmail.com
>>
>>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Jerry Schwartz
>-Original Message-
>From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
>Meersman
>Sent: Tuesday, January 25, 2011 9:56 AM
>To: Reindl Harald
>Cc: Robinson, Eric; mysql@lists.mysql.com
>Subject: Re: InnoDB and rsync
>
>> jesus christ nobody cares if they are binary replica as long
>> as the data is consistent and ident
>>
>
>Actually, I can see this being an issue if you're using LVM snapshot backups
>or another similar technique - if the datafiles aren't all identical you
>won't be able to restore to any machine from a single backup.
>
[JS] I don't get it. Isn't this like saying that you can't substitute a 
dictionary with a red cover for a dictionary with a blue cover?

If you do a complete LVM restore, then you have a complete and (hopefully) 
consistent copy of your files, even if the volume images were not identical; 
in fact, they almost certainly will NOT be identical at the disk-image level.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: CURRENT insert ID

2011-01-24 Thread Jerry Schwartz
>-Original Message-
>From: Donovan Brooke [mailto:li...@euca.us]
>Sent: Friday, January 21, 2011 7:28 PM
>Cc: mysql@lists.mysql.com
>Subject: Re: CURRENT insert ID
>
>Just an idear..
>
>Don't auto_increment the main table.. create a unique Id table,
>auto_increment that, and grab that value first for use with both fields
>in your main table.
>
[JS] I've thought of that, but it creates another problem.

Let's say I add a record to the ID table, thereby auto-incrementing its key. 
Now I need to retrieve that key value. How do I do that while retaining some 
semblance of data integrity? I'd have to do something like "SELECT MAX()", 
which fails to retrieve "my" value if someone else has inserted a record in 
the meantime.

I don't, from Access, have the ability to throw a lock on the table (so far as 
I know). I guess maybe I could do that with pass-through queries, but I'm not 
sure.
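
For what it's worth, LAST_INSERT_ID() is scoped to the connection, so in plain 
SQL the race never arises; here is a minimal sketch, assuming a hypothetical 
`id_pool` table and that both statements run on the same session:

INSERT INTO id_pool () VALUES ();
SELECT LAST_INSERT_ID();  -- per-connection value, so another client's INSERT can't leak in

Whether the Access/ODBC layer actually keeps those two statements on the same 
connection is the part I'd have to verify.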

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>Donovan
>
>
>--
>D Brooke
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: CURRENT insert ID

2011-01-24 Thread Jerry Schwartz
I'll have to investigate how to do a transaction from Access. I guess 
pass-through queries might do it, but I'm not sure.

>-Original Message-
>From: Carsten Pedersen [mailto:cars...@bitbybit.dk]
>Sent: Sunday, January 23, 2011 9:36 AM
>To: Jerry Schwartz
>Cc: 'mysql.'
>Subject: Re: CURRENT insert ID
>
>Seeing from later posts that you're using InnoDB, why don't you simply
>wrap the INSERT/UPDATE into a transaction? You don't avoid the UPDATE,
>but I'm not sure I understand the need to mess w/ triggers.
>
>BEGIN
>INSERT INTO t(id) NULL
>UPDATE t SET xxx=last_insert_id()
>COMMIT
>
[JS] I'll have to investigate how to do a transaction from Access. I guess 
pass-through queries might do it, but I'm not sure.
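
If pass-through queries do keep everything on one connection, I imagine the 
batch would look something like this (just a sketch, untested from Access):

START TRANSACTION;
INSERT INTO t (xxx) VALUES (NULL);
UPDATE t SET xxx = LAST_INSERT_ID() WHERE id = LAST_INSERT_ID();
COMMIT;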

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>Best,
>
>/ Carsten
>
>Den 21-01-2011 17:41, Jerry Schwartz skrev:
>> Here it is in a nutshell:
>>
>>
>>
>> I have a field that needs to be set equal to the auto-increment ID as a
>record is entered. I don't know how to do this without a subsequent UPDATE
>(which I can do with a trigger). Is there any way to avoid the cost of an
>UPDATE?
>>
>>
>>
>> Here's a more concrete description of the problem:
>>
>>
>>
>> CREATE TABLE t (
>>
>> id INT(11) AUTO-INCREMENT PRIMARY,
>>
>> xxx INT(11)
>>
>> );
>>
>>
>>
>> When a record is added to table `t`, I need to set `xxx` to the value
>generated for `id`. (`xxx` might be changed later.)
>>
>>
>>
>> Is there anything clever I can do?
>>
>>
>>
>> Regards,
>>
>>
>>
>> Jerry Schwartz
>>
>> Global Information Incorporated
>>
>> 195 Farmington Ave.
>>
>> Farmington, CT 06032
>>
>>
>>
>> 860.674.8796 / FAX: 860.674.8341
>>
>> E-mail:<mailto:je...@gii.co.jp>  je...@gii.co.jp
>>
>> Web site:<http://www.the-infoshop.com/>  www.the-infoshop.com
>>
>>
>>
>>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: CURRENT insert ID

2011-01-21 Thread Jerry Schwartz
>-Original Message-
>From: Michael Dykman [mailto:mdyk...@gmail.com]
>Sent: Friday, January 21, 2011 1:27 PM
>To: Jerry Schwartz
>Cc: MySql
>Subject: Re: CURRENT insert ID
>
>You don't need to do an update:
>
>...
>
>new.xxx = new.id
>...
>
[JS] I wish it were that easy. new.id is null until after the INSERT has 
completed:

SHOW CREATE TABLE xxx\G
*** 1. row ***
   Table: xxx
Create Table: CREATE TABLE `xxx` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `vv` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

SHOW CREATE TRIGGER foo\G
*** 1. row ***
   Trigger: foo
  sql_mode: 
STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
SQL Original Statement: CREATE DEFINER=`access`@`%` TRIGGER foo BEFORE UPDATE 
ON xxx
FOR EACH ROW
SET NEW.vv = NEW.id
  character_set_client: utf8
  collation_connection: utf8_general_ci
Database Collation: utf8_general_ci

INSERT INTO xxx VALUES (NULL,NULL);
SELECT * FROM xxx;
++--+
| id | vv   |
++--+
|  1 | NULL |
++--+
1 row in set (0.00 sec)

I'm tearing my hair out.
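
As far as I can tell, the trigger above only fires on UPDATE, and the obvious 
BEFORE INSERT variant is no better, because the auto-increment value has not 
been generated yet when the trigger body runs (sketch only, against the same 
table):

CREATE TRIGGER foo_ins BEFORE INSERT ON xxx
FOR EACH ROW
SET NEW.vv = NEW.id;  -- NEW.id is still 0 here, not the value that will be assigned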

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: CURRENT insert ID

2011-01-21 Thread Jerry Schwartz
>-Original Message-
>From: João Cândido de Souza Neto [mailto:j...@consultorweb.cnt.br]
>Sent: Friday, January 21, 2011 12:47 PM
>To: mysql@lists.mysql.com
>Subject: Re: CURRENT insert ID
>
>Ok, you must have your own reasons to do that.
>
>The fact is: You can´t set the auto_incremente value field to another field
>in the same table and record even in a trigger.
>
>So, the best way is a second update.
>
[JS] That's what I'm trying to do now, but I'm using MS Access through an ODBC 
connection and I haven't figured out how to retrieve last_insert_id.

I should ask in the myodbc forum.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>--
>João Cândido de Souza Neto
>
>"Darryle Steplight"  escreveu na mensagem
>news:AANLkTim+fjYUoU+1A5RG9eFS+NnaZXT7K+Ho-q=f-...@mail.gmail.com...
>@Joao - I'm currently building a database out right now that has this
>scenario. One field can be the primary key, that has a purpose for holding
>the record id, another field can hold the value. Let say there are two
>fields, id, s_id. Initially, you insert a record and `id` is now 100 and you
>update s_id to be 100.  But for whatever reason, later down the road you
>need s_id to be 200. You can just update the s_id field instead of deleting
>the entire record and inserting an entire new one with X amount of fields.
>Updating one field is a lot less work than deleting and inserting. I have my
>tables set up so I won't have to use the primary key for queries, I will
>only use the s_id field.
>
>2011/1/21 João Cândido de Souza Neto 
>
>> I can´t think about how useful for you would be to have two fields with
>> the
>> same value.
>>
>> --
>> João Cândido de Souza Neto
>>
>> ""Jerry Schwartz""  escreveu na mensagem
>> news:007501cbb98a$177acba0$467062e0$@co.jp...
>> Here it is in a nutshell:
>>
>>
>>
>> I have a field that needs to be set equal to the auto-increment ID as a
>> record is entered. I don't know how to do this without a subsequent UPDATE
>> (which I can do with a trigger). Is there any way to avoid the cost of an
>> UPDATE?
>>
>>
>>
>> Here's a more concrete description of the problem:
>>
>>
>>
>> CREATE TABLE t (
>>
>> id INT(11) AUTO-INCREMENT PRIMARY,
>>
>> xxx INT(11)
>>
>> );
>>
>>
>>
>> When a record is added to table `t`, I need to set `xxx` to the value
>> generated for `id`. (`xxx` might be changed later.)
>>
>>
>>
>> Is there anything clever I can do?
>>
>>
>>
>> Regards,
>>
>>
>>
>> Jerry Schwartz
>>
>> Global Information Incorporated
>>
>> 195 Farmington Ave.
>>
>> Farmington, CT 06032
>>
>>
>>
>> 860.674.8796 / FAX: 860.674.8341
>>
>> E-mail:  <mailto:je...@gii.co.jp> je...@gii.co.jp
>>
>> Web site:  <http://www.the-infoshop.com/> www.the-infoshop.com
>>
>>
>>
>>
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=dstepli...@gmail.com
>>
>>
>
>
>--
>--
>"May the Source be with you."
>
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: CURRENT insert ID

2011-01-21 Thread Jerry Schwartz
I made a typo in my previous message.

>-Original Message-
>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>Sent: Friday, January 21, 2011 12:20 PM
>To: 'Jerry Schwartz'; 'Michael Dykman'; 'MySql'
>Subject: RE: CURRENT insert ID
>
>>-Original Message-
>>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>>Sent: Friday, January 21, 2011 11:56 AM
>>To: 'Michael Dykman'; 'MySql'
>>Subject: RE: CURRENT insert ID
>>
>>>-Original Message-
>>>From: Michael Dykman [mailto:mdyk...@gmail.com]
>>>Sent: Friday, January 21, 2011 11:50 AM
>>>To: MySql
>>>Subject: Re: CURRENT insert ID
>>>
>>>I think an ON INSERT TRIGGER would take care of this; can't think of
>>>any other way.  Using last_insert_id() in the argument list would
>>>likely yield you the previous value (which might not even related to
>>>your table.
>>>
>[JS] Alas, you cannot update a record in a trigger if the record is in the
>same table as the trigger.
>
>
>CREATE TRIGGER xx AFTER UPDATE ON t
>FOR EACH ROW
> UPDATE t SET f1 = 7;
>
>
[JS] That should have read "AFTER INSERT"; but it's still illegal.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com








-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: CURRENT insert ID

2011-01-21 Thread Jerry Schwartz
>-Original Message-
>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>Sent: Friday, January 21, 2011 11:56 AM
>To: 'Michael Dykman'; 'MySql'
>Subject: RE: CURRENT insert ID
>
>>-Original Message-
>>From: Michael Dykman [mailto:mdyk...@gmail.com]
>>Sent: Friday, January 21, 2011 11:50 AM
>>To: MySql
>>Subject: Re: CURRENT insert ID
>>
>>I think an ON INSERT TRIGGER would take care of this; can't think of
>>any other way.  Using last_insert_id() in the argument list would
>>likely yield you the previous value (which might not even related to
>>your table.
>>
[JS] Alas, you cannot update a record in a trigger if the record is in the 
same table as the trigger.


CREATE TRIGGER xx AFTER UPDATE ON t
FOR EACH ROW
 UPDATE t SET f1 = 7;


That's illegal.

Right now, I'm stumped.

>>Having siad that..   odd requirement.
>>
>[JS] You don't know the half of it.
>
>Thanks.
>
>Regards,
>
>Jerry Schwartz
>Global Information Incorporated
>195 Farmington Ave.
>Farmington, CT 06032
>
>860.674.8796 / FAX: 860.674.8341
>E-mail: je...@gii.co.jp
>Web site: www.the-infoshop.com
>
>
>
>>  - michael dykman
>>
>>ps  -- sorry for the duplicate Jerry, reply-to policy on this list is
>>forever tripping me up.
>>
>>
>>>
>>> On Fri, Jan 21, 2011 at 11:41 AM, Jerry Schwartz  wrote:
>>>> Here it is in a nutshell:
>>>>
>>>>
>>>>
>>>> I have a field that needs to be set equal to the auto-increment ID as a
>>record is entered. I don't know how to do this without a subsequent UPDATE
>>(which I can do with a trigger). Is there any way to avoid the cost of an
>>UPDATE?
>>>>
>>>>
>>>>
>>>> Here's a more concrete description of the problem:
>>>>
>>>>
>>>>
>>>> CREATE TABLE t (
>>>>
>>>> id INT(11) AUTO-INCREMENT PRIMARY,
>>>>
>>>> xxx INT(11)
>>>>
>>>> );
>>>>
>>>>
>>>>
>>>> When a record is added to table `t`, I need to set `xxx` to the value
>>generated for `id`. (`xxx` might be changed later.)
>>>>
>>>>
>>>>
>>>> Is there anything clever I can do?
>>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>>
>>>>
>>>> Jerry Schwartz
>>>>
>>>> Global Information Incorporated
>>>>
>>>> 195 Farmington Ave.
>>>>
>>>> Farmington, CT 06032
>>>>
>>>>
>>>>
>>>> 860.674.8796 / FAX: 860.674.8341
>>>>
>>>> E-mail:  <mailto:je...@gii.co.jp> je...@gii.co.jp
>>>>
>>>> Web site:  <http://www.the-infoshop.com/> www.the-infoshop.com
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>>  - michael dykman
>>>  - mdyk...@gmail.com
>>>
>>>  May the Source be with you.
>>>
>>
>>
>>
>>--
>> - michael dykman
>> - mdyk...@gmail.com
>>
>> May the Source be with you.
>>
>>--
>>MySQL General Mailing List
>>For list archives: http://lists.mysql.com/mysql
>>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp
>
>
>
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: CURRENT insert ID

2011-01-21 Thread Jerry Schwartz
>-Original Message-
>From: Michael Dykman [mailto:mdyk...@gmail.com]
>Sent: Friday, January 21, 2011 11:50 AM
>To: MySql
>Subject: Re: CURRENT insert ID
>
>I think an ON INSERT TRIGGER would take care of this; can't think of
>any other way.  Using last_insert_id() in the argument list would
>likely yield you the previous value (which might not even related to
>your table.
>
>Having siad that..   odd requirement.
>
[JS] You don't know the half of it.

Thanks.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>  - michael dykman
>
>ps  -- sorry for the duplicate Jerry, reply-to policy on this list is
>forever tripping me up.
>
>
>>
>> On Fri, Jan 21, 2011 at 11:41 AM, Jerry Schwartz  wrote:
>>> Here it is in a nutshell:
>>>
>>>
>>>
>>> I have a field that needs to be set equal to the auto-increment ID as a
>record is entered. I don't know how to do this without a subsequent UPDATE
>(which I can do with a trigger). Is there any way to avoid the cost of an
>UPDATE?
>>>
>>>
>>>
>>> Here's a more concrete description of the problem:
>>>
>>>
>>>
>>> CREATE TABLE t (
>>>
>>> id INT(11) AUTO-INCREMENT PRIMARY,
>>>
>>> xxx INT(11)
>>>
>>> );
>>>
>>>
>>>
>>> When a record is added to table `t`, I need to set `xxx` to the value
>generated for `id`. (`xxx` might be changed later.)
>>>
>>>
>>>
>>> Is there anything clever I can do?
>>>
>>>
>>>
>>> Regards,
>>>
>>>
>>>
>>> Jerry Schwartz
>>>
>>> Global Information Incorporated
>>>
>>> 195 Farmington Ave.
>>>
>>> Farmington, CT 06032
>>>
>>>
>>>
>>> 860.674.8796 / FAX: 860.674.8341
>>>
>>> E-mail:  <mailto:je...@gii.co.jp> je...@gii.co.jp
>>>
>>> Web site:  <http://www.the-infoshop.com/> www.the-infoshop.com
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>>  - michael dykman
>>  - mdyk...@gmail.com
>>
>>  May the Source be with you.
>>
>
>
>
>--
> - michael dykman
> - mdyk...@gmail.com
>
> May the Source be with you.
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: best way to have a unique key

2011-01-21 Thread Jerry Schwartz
>-Original Message-
>From: Michael Dykman [mailto:mdyk...@gmail.com]
>Sent: Friday, January 21, 2011 11:35 AM
>To: Johan De Meersman
>Cc: Anthony Pace; mysql.
>Subject: Re: best way to have a unique key
>
>One of the components of the UUID is drawn form the mac address of the
>server..  While in practice this is not true of all systems
>
>(except from http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-
>functions.html#function_uuid)
>Currently, the MAC address of an interface is taken into
>account only on FreeBSD and Linux. On other operating systems, MySQL
>uses a randomly generated 48-bit number.
>(end except)
>
>that potentially 48-bit random portion has 281474976710656
>possibilities, which makes in far, far more likely that your server is
>hit by a meteor during a snowstorm in August while the Dalai Lama is
>doing an Elvis impression for the Chinese premier.
>
[JS] Boy, if I had a nickel for every time I've heard that...

Seriously, though, I've written and used enough pseudo-random number generators 
to have a tiny nagging doubt. I didn't realize that MySQL didn't use the MAC 
address on Windows platforms.

Also, it is possible to programmatically change a MAC address (for system 
failover, for example).

If you're not the kind of person who stays up nights worrying about proton 
decay, you're probably justified in shrugging this off. You'll likely be 
collecting your pension before anything nasty happens.
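
For the belt-and-braces crowd, the cheap insurance is to let the server enforce 
uniqueness and treat a duplicate-key error as the (vanishingly unlikely) 
collision; a sketch, with a made-up `orders` table:

CREATE TABLE orders (
order_uuid CHAR(36) NOT NULL PRIMARY KEY
);

INSERT INTO orders (order_uuid) VALUES (UUID());
-- a duplicate-key error here is the collision case the application can simply retry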


> - michael dykman
>
>On Fri, Jan 21, 2011 at 1:22 AM, Johan De Meersman  
>wrote:
>> I have to say, something similar was my first thought, too - you never
>> mention uuid in your original post. As already stated, uuid() should be a
>> Universal Unique IDentifier. It's afaik a random 128-bit number; given the
>> space to choose from it should be rather unique. I have to admit that I'm
>> not entirely confident about that myself, either, though: as Pratchett put
>> it, one-in-a-million chances tend to pop up nine times out of ten.
>>
>> The code should have bits for handling duplicate primaries regardless of 
>> the
>> method used to generate it, tough, so there's no reason to not do it. 
>> Having
>> two subsequent UUID() calls generate pre-existing numbers seems to me to be
>> likely in the same way as having Bush return his dirty oil dollars to Irak.
>>
>> On Thu, Jan 20, 2011 at 8:10 PM, Anthony Pace
>wrote:
>>
>>> Dude, come on.  I know that all primary keys have to be unique; however, I
>>> was obviously referring to the use of uuid over auto incrementation.
>>>
>>> On 1/20/2011 1:36 PM, Michael Dykman wrote:
>>>
 It is axiomatic in the relational model that a primary must be unique.
  This is not a quirk put forth by your current employer.  Neither
 MySQL nor any other RDBMS will allow you to establish a primary key
 that is not unique.

  - michael dykman

 On Thu, Jan 20, 2011 at 1:32 PM, Anthony Pace
  wrote:

> Due to certain reasons, the company I am doing business with has decided
> that the primary key, for an orders table, be a unique key; however, I
> don't
> like the possibility of it conflicting if moved to another machine.
>
> What are some pitfalls of using a unique key, that is generated by a
> server
> side script, rather than by mysql?
> What are the best ways to do this?
>
> Please keep in mind this variable will also be displayed on the
> customer's
> Receipt, but again, since it's random, it doesn't have to mean anything.
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com
>
>
>


>>>
>>> --
>>> MySQL General Mailing List
>>> For list archives: http://lists.mysql.com/mysql
>>> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>>>
>>>
>>
>>
>> --
>> Bier met grenadyn
>> Is als mosterd by den wyn
>> Sy die't drinkt, is eene kwezel
>> Hy die't drinkt, is ras een ezel
>>
>
>
>
>--
> - michael dykman
> - mdyk...@gmail.com
>
> May the Source be with you.
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



CURRENT insert ID

2011-01-21 Thread Jerry Schwartz
Here it is in a nutshell:

 

I have a field that needs to be set equal to the auto-increment ID as a record 
is entered. I don’t know how to do this without a subsequent UPDATE (which I 
can do with a trigger). Is there any way to avoid the cost of an UPDATE?

 

Here’s a more concrete description of the problem:

 

CREATE TABLE t (

id INT(11) AUTO_INCREMENT PRIMARY KEY,

xxx INT(11)

);

 

When a record is added to table `t`, I need to set `xxx` to the value generated 
for `id`. (`xxx` might be changed later.)

 

Is there anything clever I can do?

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail:  <mailto:je...@gii.co.jp> je...@gii.co.jp 

Web site:  <http://www.the-infoshop.com/> www.the-infoshop.com

 



RE: best way to have a unique key

2011-01-21 Thread Jerry Schwartz
>-Original Message-
>From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
>Meersman
>Sent: Friday, January 21, 2011 1:22 AM
>To: Anthony Pace
>Cc: Michael Dykman; mysql.
>Subject: Re: best way to have a unique key
>
>I have to say, something similar was my first thought, too - you never
>mention uuid in your original post. As already stated, uuid() should be a
>Universal Unique IDentifier. It's afaik a random 128-bit number; given the
>space to choose from it should be rather unique. I have to admit that I'm
>not entirely confident about that myself, either, though: as Pratchett put
>it, one-in-a-million chances tend to pop up nine times out of ten.
>
[JS] A UUID (what Microsoft calls a GUID) is based in part on the MAC address 
of the generating device. Since MAC addresses are supposed to be unique across 
the known universe, a UUID should be unique as well.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>The code should have bits for handling duplicate primaries regardless of the
>method used to generate it, tough, so there's no reason to not do it. Having
>two subsequent UUID() calls generate pre-existing numbers seems to me to be
>likely in the same way as having Bush return his dirty oil dollars to Irak.
>
>On Thu, Jan 20, 2011 at 8:10 PM, Anthony Pace 
>wrote:
>
>> Dude, come on.  I know that all primary keys have to be unique; however, I
>> was obviously referring to the use of uuid over auto incrementation.
>>
>> On 1/20/2011 1:36 PM, Michael Dykman wrote:
>>
>>> It is axiomatic in the relational model that a primary must be unique.
>>>  This is not a quirk put forth by your current employer.  Neither
>>> MySQL nor any other RDBMS will allow you to establish a primary key
>>> that is not unique.
>>>
>>>  - michael dykman
>>>
>>> On Thu, Jan 20, 2011 at 1:32 PM, Anthony Pace
>>>  wrote:
>>>
>>>> Due to certain reasons, the company I am doing business with has decided
>>>> that the primary key, for an orders table, be a unique key; however, I
>>>> don't
>>>> like the possibility of it conflicting if moved to another machine.
>>>>
>>>> What are some pitfalls of using a unique key, that is generated by a
>>>> server
>>>> side script, rather than by mysql?
>>>> What are the best ways to do this?
>>>>
>>>> Please keep in mind this variable will also be displayed on the
>>>> customer's
>>>> Receipt, but again, since it's random, it doesn't have to mean anything.
>>>>
>>>> --
>>>> MySQL General Mailing List
>>>> For list archives: http://lists.mysql.com/mysql
>>>> To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com
>>>>
>>>>
>>>>
>>>
>>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>>
>>
>
>
>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: [PHP] RE: Organisational question: surely someone has implemented many Boolean values (tags) and a solution exist

2011-01-20 Thread Jerry Schwartz
>> [JS] I disagree. The method I proposed can be extended to any depth, and 
>> any
>> leaf or branch can be retrieved with a single query.
>>
>
>I suppose for retrievals this structure has advantages, but unless
>MySQL has a ++ operator (or better yet, one that adds or subtracts 2
>from an int) then it looks to be a pain to add nodes.
>
[JS] Not at all. Somebody, somehow, has to assign a name to the tag and 
designate its place in the hierarchy. I don't see how you can avoid that being 
done by a human.
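
Once a human has picked the slot, adding the node is just an ordinary insert; a 
sketch, reusing the tags/tag_number layout from elsewhere in this thread, with 
a made-up value:

INSERT INTO tags (tag_name, tag_number) VALUES ('Scandinavian authors', '10.05.08');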

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>But I will play with the idea. Maybe after I write the code (I'm
>saving that for tomorrow) I'll see it differently. Thanks.
>
>--
>Dotan Cohen
>
>http://gibberish.co.il
>http://what-is-what.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Organisational question: surely someone has implemented many Boolean values (tags) and a solution exist

2011-01-20 Thread Jerry Schwartz
>-Original Message-
>From: Dotan Cohen [mailto:dotanco...@gmail.com]
>Sent: Thursday, January 20, 2011 11:25 AM
>To: Jerry Schwartz
>Cc: mysql.; php-general.
>Subject: Re: Organisational question: surely someone has implemented many
>Boolean values (tags) and a solution exist
>
>
>> As for setting up a hierarchy, that's trickier. One way to handle that is 
>> to
>> work like libraries do: 10 is "fiction", 10.05 is "crime novels", 10.05.07 
>> is
>> "British authors", and so forth. Your `tags` table then looks like
>>
>
>Thanks. I prefer the "parent tag" field, though, I feel that it is
>more flexible.
>
>
[JS] I disagree. The method I proposed can be extended to any depth, and any 
leaf or branch can be retrieved with a single query.
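
For instance, pulling every item under the "crime novels" branch is a single 
statement (a sketch, assuming the items/tags/item_tags tables from my other 
message):

SELECT items.item_id, items.item_name
FROM tags
JOIN item_tags ON item_tags.tag_id = tags.tag_id
JOIN items ON items.item_id = item_tags.item_id
WHERE tags.tag_number LIKE '10.05%';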

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>--
>Dotan Cohen
>
>http://gibberish.co.il
>http://what-is-what.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Can't read dir of '.' (errno: 13)

2011-01-20 Thread Jerry Schwartz
>-Original Message-
>From: Joerg Bruehe [mailto:joerg.bru...@oracle.com]
>Sent: Thursday, January 20, 2011 6:54 AM
>To: Jerry Schwartz
>Cc: mysql@lists.mysql.com
>Subject: Re: Can't read dir of '.' (errno: 13)
>
>Hi!
>
>
>Jerry Schwartz wrote:
>> [[...]]
>>
>>> [[...]]
>>>
>> [JS] Sorry, that still doesn't make sense to em. To authenticate the user,
>> mysqld needs to read the mysql database. That is also where the databases 
>> are
>> listed (in `mysql`.`db`). If the daemon can read `mysql`.`user`, why can't 
>> it
>> read `mysql`.`db`? It's a MyISAM database, so everything is in the same 
>> file.
>
>Now I understand how your question was meant - sorry to explain a
>completely different issue first.
>
[JS] No problem. To me, knowledge is never useless.

>About reading files: MyISAM is stored file-per-table, so the table
>"mysql.user" is stored in the three files "mysql/user.{frm,MYD,MYI}";
>table "mysql.db" is stored in other files which might have got different
>permissions. (This would be a handling error, obviously.)
>
[JS] Stranger things have happened. I could tell tales that would turn your 
hair white (mine already is).
>>
>> What am I missing? Does SHOW DATABASES do something other than pull the
>> database names out of `mysql`.`db`?
>
>AIUI, there are operations for which the server wants to read the
>directory, and "show databases" seems to be one of them.
>For examples, see these bug reports:
>  http://bugs.mysql.com/bug.php?id=22615
>  http://bugs.mysql.com/bug.php?id=42676
>  http://bugs.mysql.com/bug.php?id=51399
>Sorry - I would tell you more details, if I knew them for sure.
>
[JS] It never occurred to me that SHOW DATABASES would scan the file directory. 
Johan De Meersman came up with the same explanation.

>Now, consider a case like this:
>  drwx--x--x   user-A group-B   .
>  -rw-rw-rw-   user-A group-B   file
>
>In this case, "file" would be readable (and even writable) for everybody
>knowing the name, but a "ls ." would fail for everybody but "user-A":
>There is no "read" permission on the directory for anybody but the owner.
>
[JS] Got it.
>
>Regards,
>Jörg
>
>--
>Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
>ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
>Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
>Amtsgericht Muenchen: HRA 95603





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Organisational question: surely someone has implemented many Boolean values (tags) and a solution exist

2011-01-20 Thread Jerry Schwartz
I think the canonical way would be to have one table for your items, one table 
for your tags, and one table for your tag assignments.

CREATE TABLE items (
item_id INT(11) AUTO_INCREMENT PRIMARY KEY,
item_name VARCHAR(100) NOT NULL,
...
KEY (item_name)
);

CREATE TABLE tags (
tag_id INT(11) AUTO_INCREMENT PRIMARY KEY,
tag_name VARCHAR(100) NOT NULL,
...
KEY (tag_name)
);

CREATE TABLE item_tags (
item_id INT(11) NOT NULL,
tag_id INT(11) NOT NULL,
PRIMARY KEY (item_id, tag_id),
KEY (tag_id)
);

This way you could do

SELECT items.item_id, items.item_name FROM
tags JOIN item_tags ON tags.tag_id = item_tags.tag_id
JOIN items ON item_tags.item_id = items.item_id
WHERE ...
;

to get all of the items with a particular tag, or

SELECT tags.tag_id, tags.tag_name FROM
items JOIN item_tags ON items.item_id = item_tags.item_id
JOIN tags ON item_tags.tag_id = tags.tag_id
WHERE ...
;

with equal ease and efficiency.
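
Populating the link table is just as mechanical; a sketch, with made-up values:

INSERT INTO items (item_name) VALUES ('The Big Sleep');
SET @item = LAST_INSERT_ID();
INSERT INTO tags (tag_name) VALUES ('crime novels');
SET @tag = LAST_INSERT_ID();
INSERT INTO item_tags (item_id, tag_id) VALUES (@item, @tag);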

Using an ever-lengthening bitmap for the tag assignments is a trap for the 
unwary. The path to perdition is lined with the bodies of those who believed 
"We'll never need more than x..."

As for setting up a hierarchy, that's trickier. One way to handle that is to 
work like libraries do: 10 is "fiction", 10.05 is "crime novels", 10.05.07 is 
"British authors", and so forth. Your `tags` table then looks like

CREATE TABLE tags (
tag_id INT(11) AUTO_INCREMENT PRIMARY KEY,
tag_name VARCHAR(100) NOT NULL,
tag_number VARCHAR(100) NOT NULL,
...
KEY (tag_name),
KEY (tag_number)
);

Then you can search for tags by

tag_number LIKE ('10.%') or
tag_number LIKE ('10.05%')

and so forth. This scheme is infinitely extendable. To get the entire 
hierarchy, you simply

SELECT tag_number, tag_name FROM tags ORDER BY tag_number;

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: Dotan Cohen [mailto:dotanco...@gmail.com]
>Sent: Thursday, January 20, 2011 9:32 AM
>To: mysql.; php-general.
>Subject: Organisational question: surely someone has implemented many Boolean
>values (tags) and a solution exist
>
>I am designing an application that make heavy usage of one-to-many
>tags for items. That is, each item can have multiple tags, and there
>are tens of tags (likely to grow to hundreds). Most operation on the
>database are expected to be searches for the items that have a
>particular tag. That is, users will search per tags, not per items.
>
>These are the ways that I've thought about storing the tags, some bad
>and some worse. If there is a better way I'd love to know.
>
>1) Each item will get a row in a "tags" table, with a column for each tag.
>mysql> CREATE TABLE tags (
>id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
>item VARCHAR(100),
>tag1 bool,
>tag2 bool,
>
>tagN bool
>);
>
>With this approach I would be adding a new column every time a new
>category is added. This looks to me a good way given that users will
>be searching per tag and a simple "SELECT item FROM tags WHERE
>tag1=true;" is an easy, inexpensive query. This table will get very
>large, there will likely be literally thousands of items (there will
>exist more items than tags).
>
>
>
>2) Store the applicable tags one per line in a text field in the items table.
>mysql> CREATE TABLE items (
>id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
>item VARCHAR(100),
>tags text,
>);
>
>This looks like a bad idea, searching by tag will be a mess.
>
>
>
>3) Store the tags in a table and add items to a text field. For instance:
>mysql> CREATE TABLE tags (
>id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
>tagName VARCHAR(100),
>items text,
>);
>
>This looks to be the best way from a MySQL data retrieval perspective,
>but I do not know how expensive it will be to then split the items in
>PHP. Furthermore, adding items to tags could get real expensive.
>
>
>
>Caveat: at some point in the future there may be added the ability to
>have a tag hierarchy. For instance, there could exist a tag
>"restaurant" that will get the subtags "italian" and "french". I could
>fake this with any approach by having a table of existing tags with a
>"parentTag" field, so if I plan on having this table anyway would
>method 3 above be preferable?
>
>Note: this message is cross-posted to the MySQL and the PHP lists as I
>am really not sure where is the best place to do the logic. My
>apologies to those who receive the message twice.
>
>Thanks!
>
>--
>Dotan Cohen
>
>http://gibberish.co.il
>http://what-is-what.com
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Can't read dir of '.' (errno: 13)

2011-01-19 Thread Jerry Schwartz
Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: Joerg Bruehe [mailto:joerg.bru...@oracle.com]
>Sent: Wednesday, January 19, 2011 2:56 PM
>To: Jerry Schwartz
>Cc: mysql@lists.mysql.com
>Subject: Re: Can't read dir of '.' (errno: 13)
>
>Hi !
>
>
>Jerry Schwartz wrote:
>>> -Original Message-
>>> From: Joerg Bruehe [mailto:joerg.bru...@oracle.com]
>>> Sent: Wednesday, January 19, 2011 10:43 AM
>>> To: mysql@lists.mysql.com
>>> Cc: Jerry Schwartz
>>> Subject: Re: Can't read dir of '.' (errno: 13)
>>>
>>> Hi!
>>>
>>>
>>> Jerry Schwartz wrote:
>>>>> -[[...]]
>>>>>
>>>> [JS] I have a related question: how can MySQL authenticate a user if file
>>>> system permissions won't let it read any databases?
>>> To me, your question sounds like you forgot the difference between
>>> server and client:
>>>
>> [JS] I don't think so, but perhaps I misunderstood the source of the error
>> message.
>>
>> mysql> show databases ;
>> ERROR 1018 (HY000): Can't read dir of '.' (errno: 13)
>>
>> Are you saying that the mysql CLI needs to read some directory ('.', in 
>> this
>> case) in order to send mysqld a simple query like SHOW DATABASES? I would
>hope
>> it doesn't need a temporary file for that.
>
>No, I didn't say that.
>
>You say that "the mysql CLI ... send(s) mysqld a simple query ...", and
>that is correct: the client ("mysql") sends the command "show databases"
>to the server ("mysqld").
>To process this, the server has to check the file(s) and directory(ies)
>used to store the database contents, and one of those actions is to read
>the current directory.
>
>>
>> Who is issuing the errno 13, the client or the daemon?
>
>AIUI, it is the server, encountering a permission problem.
>So the server encounters a problem, and it reports this back to the
>client who will then output the message.
>
>
>Your question that triggered my original reply was:
>   how can MySQL authenticate a user if file system permissions
>   won't let it read any databases?
>
>My original reply tried to make it obvious that the user (who is to be
>authenticated) is just a database concept, so this (maybe non-existing)
>user's (possibly missing) permissions in the operating system to read
>any OS object (file, directory) do not matter at all when the MySQL
>server does the authentication.
>It is the server (process) that needs the permissions to read (and
>write) database directories and files.
>
>Of course, if these permissions are missing, then the server cannot
>operate (and so also cannot authenticate a user), but that is a question
>of server setup and not of database user privileges.
>
[JS] Sorry, that still doesn't make sense to me. To authenticate the user, 
mysqld needs to read the mysql database. That is also where the databases are 
listed (in `mysql`.`db`). If the daemon can read `mysql`.`user`, why can't it 
read `mysql`.`db`? It's a MyISAM database, so everything is in the same file.

What am I missing? Does SHOW DATABASES do something other than pull the 
database names out of `mysql`.`db`?

>
>HTH,
>Jörg
>
>--
>Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
>ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
>Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
>Amtsgericht Muenchen: HRA 95603





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Can't read dir of '.' (errno: 13)

2011-01-19 Thread Jerry Schwartz
>-Original Message-
>From: Joerg Bruehe [mailto:joerg.bru...@oracle.com]
>Sent: Wednesday, January 19, 2011 10:43 AM
>To: mysql@lists.mysql.com
>Cc: Jerry Schwartz
>Subject: Re: Can't read dir of '.' (errno: 13)
>
>Hi!
>
>
>Jerry Schwartz wrote:
>>> -[[...]]
>>>
>> [JS] I have a related question: how can MySQL authenticate a user if file
>> system permissions won't let it read any databases?
>
>To me, your question sounds like you forgot the difference between
>server and client:
>
[JS] I don't think so, but perhaps I misunderstood the source of the error 
message.

mysql> show databases ;
ERROR 1018 (HY000): Can't read dir of '.' (errno: 13)

Are you saying that the mysql CLI needs to read some directory ('.', in this 
case) in order to send mysqld a simple query like SHOW DATABASES? I would hope 
it doesn't need a temporary file for that.

Who is issuing the errno 13, the client or the daemon?


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>- The MySQL server processes always run as the same operating system
>  user (typically: "mysql"), and basic setup must ensure they have
>  access rights to all files storing the database contents.
>
>- The client processes run on any machine (local or remote) as any user,
>  and that OS user may be totally unrelated to the database user that
>  will be authenticated.
>
>Just assume some application (with privilege checks!) accessed via the
>web: The web server (Apache, ...) executing the PHP (or other) code is
>running with some OS user ID which is the same for all application
>(database) users, and there is no need to define those application users
>on the machine running the server processes of Apache or MySQL.
>
>
>HTH,
>Jörg
>
>--
>Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
>ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
>Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
>Amtsgericht Muenchen: HRA 95603





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Can't read dir of '.' (errno: 13)

2011-01-19 Thread Jerry Schwartz
>-Original Message-
>From: Adarsh Sharma [mailto:adarsh.sha...@orkash.com]
>Sent: Wednesday, January 19, 2011 12:45 AM
>To: mysql@lists.mysql.com
>Subject: Can't read dir of '.' (errno: 13)
>
>Dear all,
>
>I am facing the issue mentioned below while issuing show databases command.
>
>
>root@s7-dt-bse:~# mysql -uroot -p
>Enter password:
>Welcome to the MySQL monitor.  Commands end with ; or \g.
>Your MySQL connection id is 5
>Server version: 5.1.41-3ubuntu12.8 (Ubuntu)
>
>Type 'help;' or '\h' for help. Type '\c' to clear the current input
>statement.
>
>mysql> show databases ;
>ERROR 1018 (HY000): Can't read dir of '.' (errno: 13)
>mysql> show databases;
>ERROR 1018 (HY000): Can't read dir of '.' (errno: 13)
>mysql>
>
>My Mysql data dir is /hdd2-1/myisam_data/*
>
>Mysql is the owner and group of myisam_data folder.
>
>Can I know why it occurs and how to debug it.
>
[JS] I have a related question: how can MySQL authenticate a user if file 
system permissions won't let it read any databases?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>Thanks & Regards
>
>Adarsh Sharma
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Rewrite SQL to stop table scan

2011-01-17 Thread Jerry Schwartz

>>
>> SELECT COUNT(*) AS num FROM alerts WHERE stamp > DATE_SUB(NOW(),
>> interval 300 second) AND devid=244;
>>
>> With this query, MySQL will run DATE_SUB() once, and then use the
>> index on stamp (which I assume you have) to narrow down the result set.
>>
[JS] I don't understand how an index on a timestamp would help. Theoretically, 
each record could have a unique value for the timestamp; so the index would 
have an entry for each record. Would MySQL really use that in preference to, 
or in combination with, an index on devid?
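
My hunch is that a composite index would settle it either way; something along 
these lines (a sketch, guessing at suitable names):

ALTER TABLE alerts ADD KEY idx_devid_stamp (devid, stamp);

With that, the server can seek straight to devid = 244 and then range-scan only 
the stamp portion of the index, rather than having to choose between two 
single-column indexes.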

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>> Steve
>>
>That did it!  Thank you.
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Rewrite SQL to stop table scan

2011-01-14 Thread Jerry Schwartz
>-Original Message-
>From: Bruce Ferrell [mailto:bferr...@baywinds.org]
>Sent: Friday, January 14, 2011 5:53 AM
>To: mysql@lists.mysql.com
>Subject: Rewrite SQL to stop table scan
>
>How would you rewrite the following SQL so that is doesn't do a full
>table scan.   It does in fact do a full scan in spite of the time
>clause.  It's been
>making me nuts for months.
>
>select count(*) as count
>from alerts where (unix_timestamp(stamp) > (unix_timestamp(now()) -
>'300' ) )
>and devid = '244';
>
>Thanks in advance,
>
[JS] I didn't see a key on `devid`. That's the only thing I can think of, 
since putting a key on `stamp` doesn't make sense.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>Bruce
>
>P.S.
>
>I've tried it this way:
>
>select count(*) as count  from alerts where ((unix_timestamp(stamp) >
>(unix_timestamp(now()) - '300' ) )) and devid = '244';
>
>and explain always says this:
>
>+----+-------------+--------+------+---------------+------+---------+------+---------+-------------+
>| id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
>+----+-------------+--------+------+---------------+------+---------+------+---------+-------------+
>|  1 | SIMPLE      | alerts | ALL  | NULL          | NULL | NULL    | NULL | 2041284 | Using where |
>+----+-------------+--------+------+---------------+------+---------+------+---------+-------------+
>
>And it's structured this way:
>
>| Field | Type        | Null | Key | Default           | Extra                        |
>| id    | varchar(60) | NO   | MUL |                   |                              |
>| stamp | timestamp   | NO   | MUL | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP  |
>
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Incorrect key file for table

2011-01-14 Thread Jerry Schwartz

>-Original Message-
>From: Jørn Dahl-Stamnes [mailto:sq...@dahl-stamnes.net]
>Sent: Friday, January 14, 2011 7:31 AM
>To: mysql@lists.mysql.com
>Subject: Incorrect key file for table
>
>Hello,
>
>While doing a select query I  got the following error in the error-log file:
>
>Incorrect key file for table '/tmp/#sql_5f8_0.MYI'; try to repair it
>
>It seem rather meaningless to try to repair a temporary table... So is this a
>bug in MySQL, or?
>
[JS] Check for space in /tmp. I've seen temporary files that are 10x the size 
of the actual table(s).

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>The database I'm using has only InnoDb tables and I'm using version
>5.1.42-log.
>The query is build using Propel (from a web-application made by Symfony
>framework).
>
>--
>Jørn Dahl-Stamnes
>homepage: http://www.dahl-stamnes.net/dahls/
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: export result from select statement

2011-01-10 Thread Jerry Schwartz
>-Original Message-
>From: Michael Dykman [mailto:mdyk...@gmail.com]
>Sent: Sunday, January 09, 2011 3:02 PM
>To: LAMP; mysql@lists.mysql.com
>Subject: Re: export result from select statement
>
>I  second Jerry's recommendation.  It always works like a charm for me.
>
[JS] Thanks.

By the way, if your SELECT is easy to type, you can use the -e parameter on the 
MySQL command line.
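
Something along these lines (a sketch, with placeholder credentials and paths):

mysql -u username -p -e "SELECT * FROM table_name WHERE id = 123" dbname > /home/me/output.txt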

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



> - michael dykman
>
>On Sun, Jan 9, 2011 at 2:44 PM, Jerry Schwartz  wrote:
>> The technique I've settled on is this:
>>
>> mysql blah blah blah < the_select_query.sql > the_output_i_want.txt
>>
>> That gives you a tab-delimited text file with column headings.
>>
>> Regards,
>>
>> Jerry Schwartz
>> Global Information Incorporated
>> 195 Farmington Ave.
>> Farmington, CT 06032
>>
>> 860.674.8796 / FAX: 860.674.8341
>> E-mail: je...@gii.co.jp
>> Web site: www.the-infoshop.com
>>
>>>-Original Message-
>>>From: LAMP [mailto:l...@afan.net]
>>>Sent: Saturday, January 08, 2011 6:05 PM
>>>To: mysql@lists.mysql.com
>>>Subject: export result from select statement
>>>
>>>Hi guys,
>>>I wonder how to store to csv or txt file result from SELECT query?
>>>not a whole table nor database. Just results from SELECT query.
>>>Usually I use MySQL Query Browser and Export feature, but in this
>>>case I don't have access with MySQL Query Browser. Though, I have a
>>>command line access.
>>>I found on one place something like
>>>#SELECT 2+2 into outfile '/path/to/specific/directory/test.csv';
>>>Though, it doesn't work?!?
>>>
>>>Thanks.
>>>
>>>--
>>>MySQL General Mailing List
>>>For list archives: http://lists.mysql.com/mysql
>>>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp
>>
>>
>>
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com
>>
>>
>
>
>
>--
> - michael dykman
> - mdyk...@gmail.com
>
> May the Source be with you.
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: export result from select statement

2011-01-10 Thread Jerry Schwartz
>-Original Message-
>From: LAMP [mailto:l...@afan.net]
>Sent: Sunday, January 09, 2011 9:46 PM
>To: mysql@lists.mysql.com
>Subject: Re: export result from select statement
>
>maybe it's "clear" to others, but to me it's pretty unclear.
>#mysql -username -p "select * from table_name where id=123" >
>'/home/me/test/test.txt'
>actually doesn't work?!?
>
>
[JS] The command you wrote doesn't redirect the input of the client. It should 
read

#mysql -username -pxxx dbname < /home/me/testing/text.sql > 
/home/me/testing/text.txt

By the way, it is a bad idea to use "test" as a file or directory name. It is 
very easy to accidentally invoke the "test" command, which can really make you 
scratch your head.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>Jerry Schwartz wrote:
>> The technique I've settled on is this:
>>
>> mysql blah blah blah < the_select_query.sql > the_output_i_want.txt
>>
>> That gives you a tab-delimited text file with column headings.
>>
>> Regards,
>>
>> Jerry Schwartz
>> Global Information Incorporated
>> 195 Farmington Ave.
>> Farmington, CT 06032
>>
>> 860.674.8796 / FAX: 860.674.8341
>> E-mail: je...@gii.co.jp
>> Web site: www.the-infoshop.com
>>
>>
>>> -Original Message-
>>> From: LAMP [mailto:l...@afan.net]
>>> Sent: Saturday, January 08, 2011 6:05 PM
>>> To: mysql@lists.mysql.com
>>> Subject: export result from select statement
>>>
>>> Hi guys,
>>> I wonder how to store to csv or txt file result from SELECT query?
>>> not a whole table nor database. Just results from SELECT query.
>>> Usually I use MySQL Query Browser and Export feature, but in this
>>> case I don't have access with MySQL Query Browser. Though, I have a
>>> command line access.
>>> I found on one place something like
>>> #SELECT 2+2 into outfile '/path/to/specific/directory/test.csv';
>>> Though, it doesn't work?!?
>>>
>>> Thanks.
>>>
>>> --
>>> MySQL General Mailing List
>>> For list archives: http://lists.mysql.com/mysql
>>> To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp
>>>
>>
>>
>>
>>
>>
>>





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: export result from select statement

2011-01-09 Thread Jerry Schwartz
The technique I've settled on is this:

mysql blah blah blah < the_select_query.sql > the_output_i_want.txt

That gives you a tab-delimited text file with column headings.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: LAMP [mailto:l...@afan.net]
>Sent: Saturday, January 08, 2011 6:05 PM
>To: mysql@lists.mysql.com
>Subject: export result from select statement
>
>Hi guys,
>I wonder how to store to csv or txt file result from SELECT query?
>not a whole table nor database. Just results from SELECT query.
>Usually I use MySQL Query Browser and Export feature, but in this
>case I don't have access with MySQL Query Browser. Though, I have a
>command line access.
>I found on one place something like
>#SELECT 2+2 into outfile '/path/to/specific/directory/test.csv';
>Though, it doesn't work?!?
>
>Thanks.
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: This just seems to slow

2011-01-04 Thread Jerry Schwartz
I did wind up using LOAD DATA INFILE.

When I started, I was afraid that I was going to process about 20 tables every 
day; but I redid the data exchange to avoid that.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Tuesday, January 04, 2011 12:26 PM
>To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>At 05:08 PM 1/3/2011, Jerry Schwartz wrote:
>>Folks, this is getting creepy. It seems like each of the variations you've
>>given me works some times and not others. I haven't found the pattern yet.
>
>The SQL I sent you works on my server just fine with your table and your
>data.  BTW, there is no way to get a list of warnings produced from
>MySQLImport. You will need to switch to SQL and execute "Load Data InFile
>" and after that has executed, do a "Show Warnings" to display the
>warnings. I always use "Load Data Infile" and not MySQLImport because I
>have more control over its execution. The Load Data should be 10x faster
>than using Inserts so it is worth considering.
>
>The MySQL server prefers the import file to be in the server's data
>directory. If you are trying to import it from another location you need to
>change your my.ini file. Please see the article
>http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
>
>Mike
>
>
>>During the hours it took my initial data loads to finish, I rewrote the
>>import
>>process so that (I hope) I won't have to go through this again.
>>
>>Thanks for your help.
>>
>>Regards,
>>
>>Jerry Schwartz
>>Global Information Incorporated
>>195 Farmington Ave.
>>Farmington, CT 06032
>>
>>860.674.8796 / FAX: 860.674.8341
>>E-mail: je...@gii.co.jp
>>Web site: www.the-infoshop.com
>>
>>
>> >-Original Message-
>> >From: mos [mailto:mo...@fastmail.fm]
>> >Sent: Monday, January 03, 2011 3:25 PM
>> >To: Jerry Schwartz; mysql@lists.mysql.com
>> >Subject: RE: This just seems to slow
>> >
>> >Jerry,
>> > Try this:
>> >
>> >mysqlimport -uusername -ppassword --verbose --debug-info --delete
>> >--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local
>> >--fields-terminated-by=","  --fields-optionally-enclosed-by="\""
>> >--lines-terminated-by="\r\n" --host=localhost yourdbname t_dmu_history.txt
>> >
>> >
>> >I use Load Data Infile all the time, usually inside of a program like
>> Delphi.
>> >
>> >Mike
>> >
>> >At 09:56 AM 1/3/2011, Jerry Schwartz wrote:
>> >>Okay, I have a confession to make: I have never gotten Load Data Infile 
>> >>or
>> >>mysqlimport to work.
>> >>
>> >>Here's my CSV file, named "t_dmu_history.txt":
>> >>
>> >>13071,299519
>> >>13071,299520
>> >>13071,299521
>> >>13071,299522
>> >>13071,299524
>> >>13071,299526
>> >>13071,299527
>> >>...
>> >>
>> >>Here's my mysqlimport command:
>> >>
>> >>mysqlimport -uaccess -pxxx --delete
>> >>--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
>> >>  --local --silent
>> >>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>> >>maintable_usa t_dmu_history.txt
>> >>
>> >>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31,
>> >>for
>> >>Win32 (ia32)"
>> >>
>> >>It runs for awhile, but I wind up with only one record:
>> >>
>> >>localhost >select * from t_dmu_history;
>> >>+------------------+------------------+------------------------+
>> >>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>> >>+------------------+------------------+------------------------+
>> >>|                1 |            13071 |                   NULL |
>> >>+------------------+------------------+------------------------+
>> >>1 row in set (0.00 sec)
>> >>
>> >>Obviously mysqlimport is parsing the input file incorrectly, but I don't
>> >>know
>> >>why.
>> >>
>> >>Here's the table itself:
>> 

RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
Folks, this is getting creepy. It seems like each of the variations you've 
given me works some times and not others. I haven't found the pattern yet.

During the hours it took my initial data loads to finish, I rewrote the import 
process so that (I hope) I won't have to go through this again.

Thanks for your help.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Monday, January 03, 2011 3:25 PM
>To: Jerry Schwartz; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>Jerry,
> Try this:
>
>mysqlimport -uusername -ppassword --verbose --debug-info --delete
>--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local
>--fields-terminated-by=","  --fields-optionally-enclosed-by="\""
>--lines-terminated-by="\r\n" --host=localhost yourdbname t_dmu_history.txt
>
>
>I use Load Data Infile all the time, usually inside of a program like Delphi.
>
>Mike
>
>At 09:56 AM 1/3/2011, Jerry Schwartz wrote:
>>Okay, I have a confession to make: I have never gotten Load Data Infile or
>>mysqlimport to work.
>>
>>Here's my CSV file, named "t_dmu_history.txt":
>>
>>13071,299519
>>13071,299520
>>13071,299521
>>13071,299522
>>13071,299524
>>13071,299526
>>13071,299527
>>...
>>
>>Here's my mysqlimport command:
>>
>>mysqlimport -uaccess -pxxx --delete
>>--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
>>  --local --silent
>>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>>maintable_usa t_dmu_history.txt
>>
>>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, 
>>for
>>Win32 (ia32)"
>>
>>It runs for awhile, but I wind up with only one record:
>>
>>localhost >select * from t_dmu_history;
>>+------------------+------------------+------------------------+
>>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>>+------------------+------------------+------------------------+
>>|                1 |            13071 |                   NULL |
>>+------------------+------------------+------------------------+
>>1 row in set (0.00 sec)
>>
>>Obviously mysqlimport is parsing the input file incorrectly, but I don't 
>>know
>>why.
>>
>>Here's the table itself:
>>
>>+------------------------+---------+------+-----+---------+----------------+
>>| Field                  | Type    | Null | Key | Default | Extra          |
>>+------------------------+---------+------+-----+---------+----------------+
>>| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
>>| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
>>| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
>>+------------------------+---------+------+-----+---------+----------------+
>>
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
>>
>>Regards,
>>
>>Jerry Schwartz
>>Global Information Incorporated
>>195 Farmington Ave.
>>Farmington, CT 06032
>>
>>860.674.8796 / FAX: 860.674.8341
>>E-mail: je...@gii.co.jp
>>Web site: www.the-infoshop.com
>>
>>
>> >-Original Message-
>> >From: mos [mailto:mo...@fastmail.fm]
>> >Sent: Sunday, January 02, 2011 11:42 PM
>> >To: mysql@lists.mysql.com
>> >Subject: Re: This just seems to slow
>> >
>> >Jerry,
>> >   Use "Load Data Infile" when loading a lot of data. Whoever is 
>> > giving
>> >you the data should be able to dump it to a CSV file. Your imports will be
>> >much faster.
>> >
>> >Mike
>> >
>> >At 07:51 PM 1/2/2011, you wrote:
>> >>I'm trying to load data into a simple table, and it is taking many hours
>> >>(and
>> >>still not done). I know hardware, etc., can have a big effect, but 
>> >>NOT

RE: mysqlimport doesn't work for me

2011-01-03 Thread Jerry Schwartz
>-Original Message-
>From: Carsten Pedersen [mailto:cars...@bitbybit.dk]
>Sent: Monday, January 03, 2011 1:48 PM
>To: Jerry Schwartz
>Cc: 'mos'; mysql@lists.mysql.com
>Subject: Re: mysqlimport doesn't work for me
>
>It's been a long time since I used mysqlimport, but you might want to try:
>
>- using "--fields-terminated-by" rather than "--fields-terminated"


[JS] Good catch! Unfortunately, it didn't fix the problem:

Connecting to localhost
Selecting database maintable_usa
Deleting the old data from table t_dmu_history
Loading data from LOCAL file: C:/Users/Jerry/Documents/Access_MySQL 
Tests/t_dmu_history.txt into t_dmu_history
maintable_usa.t_dmu_history: Records: 1  Deleted: 0  Skipped: 0  Warnings: 2
Disconnecting from localhost

I don't know if there's any way to find out what those warnings are.
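
(One way, as Mike pointed out in the other thread, is to run the equivalent
LOAD DATA from the mysql client and then ask for them; a sketch using the same
file and columns:

LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history
FIELDS TERMINATED BY ',' (`dm_history_dm_id`,`dm_history_customer_id`);
SHOW WARNINGS;

mysqlimport itself only reports the warning count.)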


>- losing (or escaping) the backticks in --columns=
>
[JS] The loons who designed this system were fond of putting spaces in the 
table names. I've tried this particular table with and without the back-ticks.

Many of the table and field names are in Japanese, too. I shudder to think how 
that will work out.

>- checking my.cnf to see if the client settings are the same for mysql>
>and mysqlimport


[JS] Good thought.

>- checking user privileges (are you using the same account in both
>instances?)


[JS] Yes.

>- checking the line delimiter and --lines-terminated-by
>
>FWIW, I always prefer tab-delimited files over comma-separated ones.
>This gets around a lot of i18n issues.
>
[JS] No doubt.
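
Tab is in fact the default field separator for LOAD DATA, so a tab-delimited
file needs no --fields-terminated-by at all. Spelled out for this table, and
assuming the export file were switched to tabs, a sketch would be:

LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history
FIELDS TERMINATED BY '\t' (`dm_history_dm_id`,`dm_history_customer_id`);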

>/ Carsten
>
>Den 03-01-2011 19:33, Jerry Schwartz skrev:
>> 
>>
>> This works:
>>
>> localhost>TRUNCATE t_dmu_history;
>> Query OK, 0 rows affected (0.41 sec)
>>
>> localhost>LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE 
>> t_dmu_history
>> FIELDS TERMINATED BY "," (`dm_history_dm_id`,`dm_history_customer_id`);
>>
>> Query OK, 876211 rows affected (25.16 sec)
>> Records: 876211  Deleted: 0  Skipped: 0  Warnings: 0
>>
>> localhost>SELECT * FROM t_dmu_history LIMIT 4;
>> +------------------+------------------+------------------------+
>> | t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>> +------------------+------------------+------------------------+
>> |                1 |            13071 |                 299519 |
>> |                2 |            13071 |                 299520 |
>> |                3 |            13071 |                 299521 |
>> |                4 |            13071 |                 299522 |
>> +------------------+------------------+------------------------+
>> 4 rows in set (0.03 sec)
>> 
>> This does not work:
>>
>> localhost>TRUNCATE t_dmu_history;
>> localhost>quit
>>
>> C:\Users\Jerry\Documents\Access MySQL
>> Production>mysqlimport --columns=`dm_history_dm_id`,`dm_history_customer_id`
>--fields-terminated=','
>>   --local --password=xxx --pipe --user=access --verbose maintable_usa
>> t_dmu_history.txt
>> Connecting to localhost
>> Selecting database maintable_usa
>> Loading data from LOCAL file: C:/Users/Jerry/Documents/Access MySQL
>> Production/t_dmu_history.txt into t_dmu_history
>> maintable_usa.t_dmu_history: Records: 876211  Deleted: 0  Skipped: 0
>> Warnings: 1752422
>> Disconnecting from localhost
>>
>> localhost>SELECT * FROM t_dmu_history LIMIT 4;
>> +------------------+------------------+------------------------+
>> | t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>> +------------------+------------------+------------------------+
>> |                1 |            13071 |                   NULL |
>> |                2 |            13071 |                   NULL |
>> |                3 |            13071 |                   NULL |
>> |                4 |            13071 |                   NULL |
>> +------------------+------------------+------------------------+
>> 4 rows in set (0.00 sec)
>> =
>>
>> Before you ask, the mysql CLI is also using a named pipe.
>>
>> Windows Vista 32-bit
>> MySQL version 5.1.31-community
>> Mysqlimport Ver 3.7 Distrib 5.1.31, for Win32 (ia32)
>>
>> What am I missing?
>>
>> Regards,
>>
>> Jerry Schwartz
>> Global Information Incorporated
>> 195 Farmington Ave.
>> Farmington, CT 06032
>>
>> 860.674.8796 / FAX: 860.674.8341
>> E-mail: je...@gii.co.jp
>> Web site: www.the-infoshop.com
>>
>>
>>
>>
>>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
>-Original Message-
>From: Gavin Towey [mailto:gto...@ffn.com]
>Sent: Monday, January 03, 2011 1:47 PM
>To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>I much prefer LOAD DATA INFILE to mysqlimport.  The issue looks like you have 
>a
>file with two columns, and a table with three.  You will probably need to be
>more specific about which columns map to which fields in the file.  Please
>report the error with any commands you run.
>
[JS] I gave this information in a new thread that I started, "mysqlimport 
doesn't work for me."

>Also, most importantly, how slow is "slow?"  Have you measured the import 
>speed
>in terms of rows per second?
>
[JS] Let's just say you could use tree rings as a timer.

>The largest factor I have found that influences overall import speed is the
>innodb_buffer_pool_size.  Make sure you're not running with the default size.
>A buffer pool that's large enough to contain the secondary indexes on the 
>table
>will also help a lot.
>
[JS] I haven't done any tuning. I expected to do that later, when the 
application went live. (Don't worry about that.)
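
For reference, the current value can be checked with

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

and raised through my.ini under [mysqld], e.g. innodb_buffer_pool_size=512M
(the 512M here is only an illustrative figure, not a recommendation).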

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>
>
>-Original Message-
>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>Sent: Monday, January 03, 2011 7:56 AM
>To: 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>Okay, I have a confession to make: I have never gotten Load Data Infile or
>mysqlimport to work.
>
>Here's my CSV file, named "t_dmu_history.txt":
>
>13071,299519
>13071,299520
>13071,299521
>13071,299522
>13071,299524
>13071,299526
>13071,299527
>...
>
>Here's my mysqlimport command:
>
>mysqlimport -uaccess -pxxx --delete --
>columns=`dm_history_dm_id`,`DM_History_Customer_ID`
> --local --silent
>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>maintable_usa t_dmu_history.txt
>
>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for
>Win32 (ia32)"
>
>It runs for awhile, but I wind up with only one record:
>
>localhost >select * from t_dmu_history;
>+------------------+------------------+------------------------+
>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>+------------------+------------------+------------------------+
>|                1 |            13071 |                   NULL |
>+------------------+------------------+------------------------+
>1 row in set (0.00 sec)
>
>Obviously mysqlimport is parsing the input file incorrectly, but I don't know
>why.
>
>Here's the table itself:
>
>+------------------------+---------+------+-----+---------+----------------+
>| Field                  | Type    | Null | Key | Default | Extra          |
>+------------------------+---------+------+-----+---------+----------------+
>| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
>| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
>| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
>+------------------------+---------+------+-----+---------+----------------+
>
>   Table: t_dmu_history
>Create Table: CREATE TABLE `t_dmu_history` (
>  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>  `DM_History_DM_ID` int(11) DEFAULT NULL,
>  `DM_History_Customer_ID` int(11) DEFAULT NULL,
>  PRIMARY KEY (`t_dmu_history_id`),
>  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
>
>Regards,
>
>Jerry Schwartz
>Global Information Incorporated
>195 Farmington Ave.
>Farmington, CT 06032
>
>860.674.8796 / FAX: 860.674.8341
>E-mail: je...@gii.co.jp
>Web site: www.the-infoshop.com
>
>
>>-Original Message-
>>From: mos [mailto:mo...@fastmail.fm]
>>Sent: Sunday, January 02, 2011 11:42 PM
>>To: mysql@lists.mysql.com
>>Subject: Re: This just seems to slow
>>
>>Jerry,
>>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>>you the data should be able to dump it to a CSV file. Your imports will be
>>much faster.
>>
>>Mike
>>
>>At 07:51 PM 1/2/2011, you wrote:
>>>I'm trying to load data into a simple table, and it is taking many hours
>>>(and
>>>still not done). I know hardware, etc., can have a big eff

mysqlimport doesn't work for me

2011-01-03 Thread Jerry Schwartz


This works:

localhost >TRUNCATE t_dmu_history;
Query OK, 0 rows affected (0.41 sec)

localhost >LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history 
FIELDS TERMINATED BY "," (`dm_history_dm_id`,`dm_history_customer_id`);

Query OK, 876211 rows affected (25.16 sec)
Records: 876211  Deleted: 0  Skipped: 0  Warnings: 0

localhost >SELECT * FROM t_dmu_history LIMIT 4;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                 299519 |
|                2 |            13071 |                 299520 |
|                3 |            13071 |                 299521 |
|                4 |            13071 |                 299522 |
+------------------+------------------+------------------------+
4 rows in set (0.03 sec)

This does not work:

localhost >TRUNCATE t_dmu_history;
localhost >quit

C:\Users\Jerry\Documents\Access MySQL 
Production>mysqlimport --columns=`dm_history_dm_id`,`dm_history_customer_id` 
--fields-terminated=',' 
 --local --password=xxx --pipe --user=access --verbose maintable_usa 
t_dmu_history.txt
Connecting to localhost
Selecting database maintable_usa
Loading data from LOCAL file: C:/Users/Jerry/Documents/Access MySQL 
Production/t_dmu_history.txt into t_dmu_history
maintable_usa.t_dmu_history: Records: 876211  Deleted: 0  Skipped: 0 
Warnings: 1752422
Disconnecting from localhost

localhost >SELECT * FROM t_dmu_history LIMIT 4;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
|                2 |            13071 |                   NULL |
|                3 |            13071 |                   NULL |
|                4 |            13071 |                   NULL |
+------------------+------------------+------------------------+
4 rows in set (0.00 sec)
=

Before you ask, the mysql CLI is also using a named pipe.

Windows Vista 32-bit
MySQL version 5.1.31-community
Mysqlimport Ver 3.7 Distrib 5.1.31, for Win32 (ia32)

What am I missing?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
Okay, I have a confession to make: I have never gotten Load Data Infile or 
mysqlimport to work.

Here's my CSV file, named "t_dmu_history.txt":

13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
...

Here's my mysqlimport command:

mysqlimport -uaccess -pxxx --delete 
--columns=`dm_history_dm_id`,`DM_History_Customer_ID` 
 --local --silent
--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost 
maintable_usa t_dmu_history.txt

I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for 
Win32 (ia32)"

It runs for awhile, but I wind up with only one record:

localhost >select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)

Obviously mysqlimport is parsing the input file incorrectly, but I don't know 
why.

Here's the table itself:

+------------------------+---------+------+-----+---------+----------------+
| Field                  | Type    | Null | Key | Default | Extra          |
+------------------------+---------+------+-----+---------+----------------+
| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
+------------------------+---------+------+-----+---------+----------------+

   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Sunday, January 02, 2011 11:42 PM
>To: mysql@lists.mysql.com
>Subject: Re: This just seems to slow
>
>Jerry,
>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>you the data should be able to dump it to a CSV file. Your imports will be
>much faster.
>
>Mike
>
>At 07:51 PM 1/2/2011, you wrote:
>>I'm trying to load data into a simple table, and it is taking many hours 
>>(and
>>still not done). I know hardware, etc., can have a big effect, but NOTHING
>>should have this big an effect.
>>
>>=
>>us-gii >show create table t_dmu_history\G
>>*** 1. row ***
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>>=
>>
>>Here's a snip of what the input file looks like:
>>=
>>SET autocommit=1;
>>
>>#
>># Dumping data for table 'T_DMU_History'
>>#
>>
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299519);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299520);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299521);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299522);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299524);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299526);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299527);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299528);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299529);
>>INSERT INTO `T_DMU_History` (`DM_Histor

RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
>-Original Message-
>From: Daevid Vincent [mailto:dae...@daevid.com]
>Sent: Sunday, January 02, 2011 11:49 PM
>To: mysql@lists.mysql.com
>Cc: 'mos'
>Subject: RE: This just seems to slow
>
>Another option would be to mangle your insert statement with some other
>language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
>instead. Something like:
>
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299519),
>       (13071, 299520),
>       (13071, 299521),
>...
>
>That will radically speed up the inserts.
>
[JS] I thought of that, but unfortunately this is just one of 25-odd tables 
(each in a different format, of course).

>Also delete your INDEX / KEYs and add them at the very end instead.
>
[JS] Wouldn't it take as long to build the indices? I guess it probably 
wouldn't.
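
For this particular table, a sketch of that approach, using the key names from
the CREATE TABLE shown earlier:

ALTER TABLE t_dmu_history
  DROP KEY `DM_History_DM_ID`,
  DROP KEY `DM_History_Customer_ID`;

-- run the bulk load here

ALTER TABLE t_dmu_history
  ADD KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  ADD KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`);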

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>-Original Message-
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Sunday, January 02, 2011 8:42 PM
>To: mysql@lists.mysql.com
>Subject: Re: This just seems to slow
>
>Jerry,
>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>you the data should be able to dump it to a CSV file. Your imports will be
>much faster.
>
>Mike
>
>At 07:51 PM 1/2/2011, you wrote:
>>I'm trying to load data into a simple table, and it is taking many hours
>(and
>>still not done). I know hardware, etc., can have a big effect, but NOTHING
>>should have this big an effect.
>>
>>=
>>us-gii >show create table t_dmu_history\G
>>*** 1. row ***
>>Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>>=
>>
>>Here's a snip of what the input file looks like:
>>=
>>SET autocommit=1;
>>
>>#
>># Dumping data for table 'T_DMU_History'
>>#
>>
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299519);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299520);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299521);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299522);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299524);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299526);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299527);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299528);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299529);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299531);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299532);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299533);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299534);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299535);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298880);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298881);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298882);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298883);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298884);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298885);
>>INSERT INTO `T_DMU_History` (`DM_Hi

RE: This just seems to slow

2011-01-03 Thread Jerry Schwartz
>-Original Message-
>From: Wagner Bianchi [mailto:wagnerbianch...@gmail.com]
>Sent: Monday, January 03, 2011 4:42 AM
>To: Daevid Vincent
>Cc: mysql@lists.mysql.com; mos
>Subject: Re: This just seems to slow
>
>A multiple-row insert is the better choice...it will be organized into
>transaction blocks of many rows, and that will speed up data insertion.
>
>[bian...@mysql.com]# mysqldump -u root -p --all-databases -e > file.dump
>
[JS] If only I were using mysqldump :-(.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>-e: extended-inserts
>
>Best regards.
>--
>Wagner Bianchi
>
>2011/1/3 Daevid Vincent 
>
>> Another option would be to mangle your insert statement with some other
>> language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
>> instead. Something like:
>>
>> INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> VALUES (13071, 299519),
>>        (13071, 299520),
>>        (13071, 299521),
>> ...
>>
>> That will radically speed up the inserts.
>>
>> Also delete your INDEX / KEYs and add them at the very end instead.
>>
>> -Original Message-
>> From: mos [mailto:mo...@fastmail.fm]
>> Sent: Sunday, January 02, 2011 8:42 PM
>> To: mysql@lists.mysql.com
>> Subject: Re: This just seems to slow
>>
>> Jerry,
>>   Use "Load Data Infile" when loading a lot of data. Whoever is giving
>> you the data should be able to dump it to a CSV file. Your imports will be
>> much faster.
>>
>> Mike
>>
>> At 07:51 PM 1/2/2011, you wrote:
>> >I'm trying to load data into a simple table, and it is taking many hours
>> (and
>> >still not done). I know hardware, etc., can have a big effect, but NOTHING
>> >should have this big an effect.
>> >
>> >=
>> >us-gii >show create table t_dmu_history\G
>> >*** 1. row ***
>> >Table: t_dmu_history
>> >Create Table: CREATE TABLE `t_dmu_history` (
>> >   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>> >   `DM_History_DM_ID` int(11) DEFAULT NULL,
>> >   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>> >   PRIMARY KEY (`t_dmu_history_id`),
>> >   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>> >   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>> >) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>> >=
>> >
>> >Here's a snip of what the input file looks like:
>> >=
>> >SET autocommit=1;
>> >
>> >#
>> ># Dumping data for table 'T_DMU_History'
>> >#
>> >
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299519);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299520);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299521);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299522);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299524);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299526);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299527);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299528);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299529);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299531);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299532);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299533);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299534);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13071, 299535);
>> >INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >VALUES (13073, 298880);
>&

This just seems to slow

2011-01-02 Thread Jerry Schwartz
I'm trying to load data into a simple table, and it is taking many hours (and 
still not done). I know hardware, etc., can have a big effect, but NOTHING 
should have this big an effect.

=
us-gii >show create table t_dmu_history\G
*** 1. row ***
   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
=

Here's a snip of what the input file looks like:
=
SET autocommit=1;

#
# Dumping data for table 'T_DMU_History'
#

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299520);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299521);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299522);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299524);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299526);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299527);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299528);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299529);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299531);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299532);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299533);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299534);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13071, 299535);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298880);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298881);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298882);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298883);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298884);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298885);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298886);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298887);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298889);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298890);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) 
VALUES (13073, 298891);
=

There are about 876,000 records.

I realize that using one INSERT per row is going to hurt, but I don't control 
the format of the incoming data.

Besides, I'd have thought this would be pretty quick regardless of how clumsy 
the method was.

Is that "autocommit" a problem? This is a bulk load into an empty table, so 
I'm not worried about ACID.
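
(It might be: with autocommit on, InnoDB commits, and by default flushes its
log, after every single-row INSERT. A sketch of wrapping the whole load in one
transaction, assuming the incoming file can be edited:

SET autocommit=0;
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519);
-- ... the rest of the INSERT statements ...
COMMIT;

Committing once at the end avoids a log flush per row.)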

Any suggestions?





Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Corrupted TRG file?

2010-12-21 Thread Jerry Schwartz
Version 5.1.31 Community (Win32)

 

I created a trigger named TC_Products_Date_Trigger on the table TC_Products.

 

I typed in 

 

SHOW CREATE TRIGGER tc_products_date_trigger;

 

and this came back at me:

 

ERROR 1602 (HY000): Corrupted TRG file for table `subtable_usa`.`tc_products`

 

I almost jumped out of my skin. Then I typed in

 

SHOW CREATE TRIGGER TC_Products_Date_Trigger;

 

and that worked just fine. I didn’t find any bug reports that seemed relevant, 
but I can’t believe I’m the first one to stumble on this.

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail:  <mailto:je...@gii.co.jp> je...@gii.co.jp 

Web site:  <http://www.the-infoshop.com/> www.the-infoshop.com

 



RE: Trigger?

2010-12-21 Thread Jerry Schwartz
Aha! That was the clue I needed. Thank you so much.

 

So, to make sure I understand:

 

A “BEFORE” trigger is executed **between** the time that the record is 
assembled and the time that the action occurs. That’s why the constraints on 
the field value were being applied before my trigger was triggered.

 

Contrariwise, I assume that an “AFTER” trigger would be executed last, after 
everything has been done.

 

Am I correct?

 

By the way, 

 

SET NEW.foo = IFNULL(NEW.foo, 'ok')

 

works just fine.

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail: je...@gii.co.jp 

Web site: www.the-infoshop.com <http://www.the-infoshop.com/> 

 

From: Wagner Bianchi [mailto:wagnerbianch...@gmail.com] 
Sent: Monday, December 20, 2010 6:44 PM
To: Jerry Schwartz
Cc: mysql@lists.mysql.com
Subject: Re: Trigger?

 

Well, to produce this result, the first thing we have to do is get rid 
of the NOT NULL constraint on the column `foo`. After that, a NULL can be 
sent within an INSERT statement, as below:

  mysql> show create table testtrigger\G
*** 1. row ***
   Table: testtrigger
Create Table: CREATE TABLE `testtrigger` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `foo` char(10) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8
1 row in set (0.05 sec)

so, after to create table, we create the trigger:

mysql> create trigger trg_test
-> before insert on testtrigger
-> for each row
-> begin
->   if(NEW.foo IS NULL || NEW.foo = '') then
-> set NEW.foo = 'Ok';
->   end if;
-> end;
-> //
Query OK, 0 rows affected (0.04 sec)

mysql> insert into testtrigger set id =100, foo =null;
Query OK, 1 row affected (0.03 sec)

mysql> select * from testtrigger;
+-----+------+
| id  | foo  |
+-----+------+
| 100 | Ok   |
+-----+------+
1 row in set (0.00 sec)

The way your table is now, with `foo` NOT NULL, you can't send foo = null 
in a query, because the column doesn't accept NULL values: it was defined as 
NOT NULL.

Look this:

mysql> alter table testtrigger modify foo char(10) not null;
Query OK, 1 row affected (0.10 sec)
Records: 1  Duplicates: 0  Warnings: 0

mysql> insert into testtrigger set id =100, foo =null;
ERROR 1048 (23000): Column 'foo' cannot be null

Did you get?



Best regards.

--

Wagner Bianchi

 

2010/12/20 Jerry Schwartz 

I've never used a trigger before, and I want to make one that sounds like it
should be simple.

Create Table: CREATE TABLE `testtrigger` (
 `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
 `foo` char(10) NOT NULL,
 PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8

Here's what I want to do: if no value is supplied for `foo`, or if a NULL
value is supplied for `foo`, I want to set it to a particular value.

I tried things like this:

SET NEW.foo = IFNULL(NEW.foo,'ok')

But that didn't work.

If you point me in the right direction, I'll be okay from there (I hope).

Thanks.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=wagnerbianch...@gmail.com

 



RE: Trigger?

2010-12-21 Thread Jerry Schwartz
Here's my latest attempt:

localhost >CREATE TRIGGER makefoo BEFORE INSERT ON testtrigger
-> FOR EACH ROW
-> SET NEW.foo = IFNULL(NEW.foo, 'ok')
-> |
Query OK, 0 rows affected (0.00 sec)

As you can see, the trigger syntax is correct; but it doesn't do what I want.

localhost >INSERT INTO testtrigger (id, foo) VALUES (NULL, 'xxx');
Query OK, 1 row affected (0.00 sec)

localhost >INSERT INTO testtrigger (id, foo) VALUES (NULL, NULL);
ERROR 1048 (23000): Column 'foo' cannot be null

localhost >INSERT INTO testtrigger (id) VALUES (NULL);
ERROR 1364 (HY000): Field 'foo' doesn't have a default value

So I'm missing something important.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: Michael Dykman [mailto:mdyk...@gmail.com]
>Sent: Monday, December 20, 2010 6:25 PM
>To: Jerry Schwartz
>Cc: mysql@lists.mysql.com
>Subject: Re: Trigger?
>
>The expression you supplied looks right enough..  how was it declared?
> as an on UPDATE/on INSERT trigger or just a single case?
>
>
> - michael dykman
>
>On Mon, Dec 20, 2010 at 5:21 PM, Jerry Schwartz  wrote:
>> I've never used a trigger before, and I want to make one that sounds like 
>> it
>> should be simple.
>>
>> Create Table: CREATE TABLE `testtrigger` (
>>  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
>>  `foo` char(10) NOT NULL,
>>  PRIMARY KEY (`id`)
>> ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8
>>
>> Here's what I want to do: if no value is supplied for `foo`, or if a NULL
>> value is supplied for `foo`, I want to set it to a particular value.
>>
>> I tried things like this:
>>
>> SET NEW.foo = IFNULL(NEW.foo,'ok')
>>
>> But that didn't work.
>>
>> If you point me in the right direction, I'll be okay from there (I hope).
>>
>> Thanks.
>>
>> Regards,
>>
>> Jerry Schwartz
>> Global Information Incorporated
>> 195 Farmington Ave.
>> Farmington, CT 06032
>>
>> 860.674.8796 / FAX: 860.674.8341
>> E-mail: je...@gii.co.jp
>> Web site: www.the-infoshop.com
>>
>>
>>
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com
>>
>>
>
>
>
>--
> - michael dykman
> - mdyk...@gmail.com
>
> May the Source be with you.




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Trigger?

2010-12-20 Thread Jerry Schwartz
I've never used a trigger before, and I want to make one that sounds like it 
should be simple.

Create Table: CREATE TABLE `testtrigger` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `foo` char(10) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8

Here's what I want to do: if no value is supplied for `foo`, or if a NULL 
value is supplied for `foo`, I want to set it to a particular value.

I tried things like this:

SET NEW.foo = IFNULL(NEW.foo,'ok')

But that didn't work.

If you point me in the right direction, I'll be okay from there (I hope).

Thanks.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Access to MySQL

2010-12-17 Thread Jerry Schwartz

>-Original Message-
>From: Shawn Green (MySQL) [mailto:shawn.l.gr...@oracle.com]
>Sent: Friday, December 17, 2010 10:35 AM
>To: Jerry Schwartz
>Cc: j...@consultorweb.cnt.br; mysql@lists.mysql.com
>Subject: Re: Access to MySQL
>
>Hi Jerry,
>
>On 12/17/2010 09:34, Jerry Schwartz wrote:
>>> -Original Message-
>>> From: Jo?o C?ndido de Souza Neto [mailto:j...@consultorweb.cnt.br]
>>> Sent: Friday, December 17, 2010 6:11 AM
>>> To: mysql@lists.mysql.com
>>> Subject: Re: Access to MySQL
>>>
>>> What about this?
>>>
>>> date_format(now(), "%Y/%m/%d")
>>>
>> [JS] I don't think you can use anything but a constant as a default value.
>>
>
>You are correct with one exception that was already mentioned earlier:
>the TIMESTAMP storage type.
>
>from http://dev.mysql.com/doc/refman/5.1/en/create-table.html
>###
>The DEFAULT clause specifies a default value for a column. With one
>exception, the default value must be a constant; it cannot be a function
>or an expression. This means, for example, that you cannot set the
>default for a date column to be the value of a function such as NOW() or
>CURRENT_DATE. The exception is that you can specify CURRENT_TIMESTAMP as
>the default for a TIMESTAMP column. See Section 10.3.1.1, "TIMESTAMP
>Properties".
>###
>
>However, nothing says you can't use a function or other computation in a
>TRIGGER to set the default value to an empty column of a new row to
>whatever you wanted it to be.
>
>http://dev.mysql.com/doc/refman/5.1/en/create-trigger.html
>
>I know it's a workaround but it will keep the default value management
>out of your application and inside the database.
>
[JS] Thanks. I did have that in the back of my mind, but to be honest I never 
used a trigger.

I'll have to think about this.
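
For the record, a minimal sketch of that kind of trigger; `some_table` and
`date_col` are placeholders rather than the real names, and the column would
have to arrive as NULL for the IFNULL to kick in:

CREATE TRIGGER some_table_date_default
BEFORE INSERT ON some_table
FOR EACH ROW
SET NEW.date_col = IFNULL(NEW.date_col, DATE_FORMAT(NOW(), '%Y/%m/%d'));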


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>--
>Shawn Green
>MySQL Principal Technical Support Engineer
>Oracle USA, Inc.
>Office: Blountville, TN
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Access to MySQL

2010-12-17 Thread Jerry Schwartz
>-Original Message-
>From: João Cândido de Souza Neto [mailto:j...@consultorweb.cnt.br]
>Sent: Friday, December 17, 2010 6:11 AM
>To: mysql@lists.mysql.com
>Subject: Re: Access to MySQL
>
>What about this?
>
>date_format(now(), "%Y/%m/%d")
>
[JS] I don't think you can use anything but a constant as a default value.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>--
>João Cândido de Souza Neto
>
>""Jerry Schwartz""  escreveu na mensagem
>news:011201cb9d6c$eccc1e60$c6645b...@co.jp...
>I have to move the back-end of an Access application to MySQL, and I've run
>into one issue that I haven't been able to solve yet.
>
>
>
>The Access database stores dates as text in a "yyyy/mm/dd" format. The
>problem is that the default value is a formula that generates the current
>date, formatted as text. In Access, it looks like
>
>
>
>'=Format$(Now(),\"/mm/dd\")'
>
>
>
>This construct is used throughout the table definitions.
>
>
>
>Is there any alternative to setting the default to something else (NULL, for
>example) and moving the "default" into the application code? That would be a
>significant PITA.
>
>
>
>Regards,
>
>
>
>Jerry Schwartz
>
>Global Information Incorporated
>
>195 Farmington Ave.
>
>Farmington, CT 06032
>
>
>
>860.674.8796 / FAX: 860.674.8341
>
>E-mail:  <mailto:je...@gii.co.jp> je...@gii.co.jp
>
>Web site:  <http://www.the-infoshop.com/> www.the-infoshop.com
>
>
>
>
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Access to MySQL

2010-12-17 Thread Jerry Schwartz
>-Original Message-
>From: Jesper Wisborg Krogh [mailto:my...@wisborg.dk]
>Sent: Friday, December 17, 2010 6:07 AM
>To: Jerry Schwartz
>Cc: 'MySQL'
>Subject: Re: Access to MySQL
>
>On 17/12/2010, at 9:02 AM, Jerry Schwartz wrote:
>
>> I have to move the back-end of an Access application to MySQL, and I've run
>into one issue that I haven't been able to solve yet.
>>
>> The Access database stores dates as text in a "yyyy/mm/dd" format. The
>problem is that the default value is a formula that generates the current 
>date,
>formatted as text. In Access, it looks like
>>
>> '=Format$(Now(),\"yyyy/mm/dd\")'
>>
>> This construct is used throughout the table definitions.
>>
>> Is there any alternative to setting the default to something else (NULL, 
>> for
>example) and moving the "default" into the application code? That would be a
>significant PITA.
>
>If a 32-bit date range is enough, then you can use the timestamp data type.
>That supports having the current time as the default value. See also
>http://dev.mysql.com/doc/refman/5.1/en/timestamp.html
>
[JS] Unfortunately, I have to keep that field as a text field.

Also, a timestamp would change every time a record is updated and you can only 
have one per record.

Thanks for trying.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>Regards,
>Jesper




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Access to MySQL

2010-12-16 Thread Jerry Schwartz
I have to move the back-end of an Access application to MySQL, and I’ve run 
into one issue that I haven’t been able to solve yet.

 

The Access database stores dates as text in a “yyyy/mm/dd” format. The problem 
is that the default value is a formula that generates the current date, 
formatted as text. In Access, it looks like

 

'=Format$(Now(),\"yyyy/mm/dd\")'

 

This construct is used throughout the table definitions.

 

Is there any alternative to setting the default to something else (NULL, for 
example) and moving the “default” into the application code? That would be a 
significant PITA.

 

Regards,

 

Jerry Schwartz

Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

E-mail:  <mailto:je...@gii.co.jp> je...@gii.co.jp 

Web site:  <http://www.the-infoshop.com/> www.the-infoshop.com

 



RE: Design: how to prioritise 1-to-many fields

2010-12-09 Thread Jerry Schwartz
>From: gvim [mailto:gvi...@gmail.com]
>Sent: Thursday, December 09, 2010 9:33 AM
>To: MySQL
>Subject: Design: how to prioritise 1-to-many fields
>
>I have a typical contact database which caters for multiple email addresses
>with a distinct Email table keyed to a foreign key inside the Contact table,
>ie. a 1-to-many relationship. However, I want to prioritise these Email 
>entries
>for a given Contact entry so all I can think of is to add a numeric Priority
>field alongside the Address field inside the Email table. Is this the
>best/standard solution or is there another way?
>
[JS] Sounds about right to me, but don't use a sequence like 1, 2, 3. Use 10, 
20, 30. You never know when you'll have to squeeze in a new priority level, 
and you don't want to have to renumber them all.
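
A sketch of that layout, with made-up table and column names:

CREATE TABLE email (
  email_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  contact_id INT UNSIGNED NOT NULL,
  address    VARCHAR(255) NOT NULL,
  priority   SMALLINT NOT NULL DEFAULT 10,
  KEY (contact_id)
) ENGINE=InnoDB;

SELECT address FROM email WHERE contact_id = 42 ORDER BY priority;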


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>gvim
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: localhost vs domain for connection string

2010-11-29 Thread Jerry Schwartz
>-Original Message-
>From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
>Meersman
>Sent: Thursday, November 25, 2010 3:29 AM
>To: Jerry Schwartz
>Cc: Brent Clark; mysql mailing list
>Subject: Re: localhost vs domain for connection string
>
>On Wed, Nov 24, 2010 at 4:44 PM, Jerry Schwartz  wrote:
>
>> [JS] This might or might not be enabled by default. I'm running on Windows,
>> and I seem to remember having to change it.
>>
>> # Enable named pipe, bypassing the network stack
>> enable-named-pipe
>>
>
>Windows' named pipes are not the same as unix sockets, although the general
>idea is similar. I'm not sure, but I think the Unix socket file is always
>created.
>
[JS] I don't remember either. I also don't remember if the original question 
was about *nix or Windows.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: localhost vs domain for connection string

2010-11-24 Thread Jerry Schwartz
>-Original Message-
>From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
>Meersman
>Sent: Wednesday, November 24, 2010 2:39 AM
>To: Jerry Schwartz
>Cc: Brent Clark; mysql mailing list
>Subject: Re: localhost vs domain for connection string
>
>On Tue, Nov 23, 2010 at 7:55 PM, Jerry Schwartz  wrote:
>
>> >IIRC, "localhost" is seen by the client as a magic word to mean "use the
>> >UNIX socket, not 127.0.0.1".
>> >
>> [JS] IF it is enabled in my.cnf.
>>
>Hmm, didn't know that bit. What's the option called ?
>
[JS] This might or might not be enabled by default. I'm running on Windows, 
and I seem to remember having to change it.

# SERVER SECTION
# --
#
# The following options will be read by the MySQL Server. Make sure that
# you have installed the server correctly (see above) so it reads this
# file.
#
[mysqld]

# The TCP/IP Port the MySQL Server will listen on
port=3306

# Enable named pipe, bypassing the network stack
enable-named-pipe

=
Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




>--
>Bier met grenadyn
>Is als mosterd by den wyn
>Sy die't drinkt, is eene kwezel
>Hy die't drinkt, is ras een ezel




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org


