On 12/31/2015 0:51, Larry Martell wrote:
I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing
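The requirement can be sketched like this (table and column names are invented, the threshold is illustrative, and SQLite stands in for MySQL; the SQL itself should also run on MySQL):

```python
import sqlite3

# Count rows grouped by (a, b), but first drop every row belonging to an
# (a, c) group that has more than LIMIT rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT, c TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    ("x", "b1", "c1"), ("x", "b1", "c1"), ("x", "b1", "c1"),  # (x, c1) has 3 rows
    ("x", "b2", "c2"),
    ("y", "b1", "c3"),
])
LIMIT = 2  # exclude rows whose (a, c) group exceeds this size
rows = conn.execute("""
    SELECT t.a, t.b, COUNT(*)
    FROM t
    JOIN (SELECT a, c FROM t GROUP BY a, c HAVING COUNT(*) <= ?) ok
      ON ok.a = t.a AND ok.c = t.c
    GROUP BY t.a, t.b""", (LIMIT,)).fetchall()
print(sorted(rows))
```

The inner GROUP BY ... HAVING picks the acceptable (a, c) groups; the join then restricts the outer count to rows in those groups.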
From: Olivier Nicole olivier.nic...@cs.ait.ac.th
Paul,
You could look for a tool called The Regex Coach. While it is mainly
for Windows, it runs very well in Wine. I find it highly useful to debug
regexps.
Best regards,
Olivier
On the Mac, look for RegExRx. It lets you paste in text to work on, build a
regex, and see
Trying to pattern match IP addresses is a famous anti-pattern; it's one of
those things that feels like it should work, but it won't.
Your case, however, is pretty specific. Taking advantage of the limited
range (I will assume you only wanted 4 sections of IPv4),
this should come close:
I don't think it accepts \d, or much of anything else I am used to
putting in expressions :)
This is what I ended up with and it appears to be working:
REGEXP '10.[[:alnum:]]{1,3}.(22[4-9]|23[0-9]).[[:alnum:]]{1,3}'
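For anyone testing such a pattern outside MySQL, here is the same idea in Python's re module (an approximation for illustration; note that in the REGEXP above the unescaped dots match any character, and [[:alnum:]] would also accept letters, so the pattern below is slightly stricter):

```python
import re

# Second and fourth octets as 1-3 digits, third octet restricted to 224-239,
# dots escaped so they match only literal dots.
pat = re.compile(r'^10\.\d{1,3}\.(22[4-9]|23[0-9])\.\d{1,3}$')

assert pat.match("10.0.224.1")
assert pat.match("10.255.239.255")
assert not pat.match("10.0.240.1")   # third octet out of range
assert not pat.match("11.0.230.1")   # wrong first octet
```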
On Thu, Mar 19, 2015 at 11:10 AM, Michael Dykman mdyk...@gmail.com wrote:
Hi Larry,
On 2/1/2015 4:49 PM, Larry Martell wrote:
I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching
Hello Mimiko,
Sorry for the late reply. I had a bunch of work to take care of before
vacation, then there was the vacation itself. :)
On 11/13/2014 2:34 PM, Mimiko wrote:
Hello. I have this table:
show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
id
On 15.11.2014 01:06, Peter Brawley wrote:
Let's see the results of Explain Extended this query, result of Show
Create Table cc_member_queue_end_log.
cc_member_queue_end_log is not of interest, it is used just as a series
of numbers. It may be any table with ids.
I've changed a bit the
delete b from icd9x10 a
join icd9x10 b on a.icd9 = b.icd9 and a.id < b.id
...
CREATE TABLE `ICD9X10` (
...
id icd9 icd10
25 29182 F10182
26 29182 F10282
...
Good luck,
Bob
On 3/29/2014 2:26 PM, william drescher wrote:
I am given a table: ICD9X10 which is a mapping of ICD9 codes to
ICD10 codes. Unfortunately the table contains duplicate entries
that I need to remove.
CREATE TABLE `ICD9X10` (
`id` smallint(6) NOT NULL AUTO_INCREMENT,
`icd9` char(8) NOT NULL,
Hi Bill,
How big is your table? It seems to me that you might want to change your
unique keys to something like (icd9, icd10), thus guaranteeing that every
mapping will exist only once in your table. You could create a new table
with that constraint and copy all your data to it:
CREATE TABLE
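The copy-into-a-constrained-table idea can be sketched like so (SQLite here; in MySQL, INSERT OR IGNORE is spelled INSERT IGNORE, and the column types from the thread's CREATE TABLE are simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE icd9x10 (id INTEGER PRIMARY KEY, icd9 TEXT, icd10 TEXT)")
conn.executemany("INSERT INTO icd9x10 (icd9, icd10) VALUES (?,?)",
                 [("29182", "F10182"), ("29182", "F10182"), ("29182", "F10282")])

# New table with a UNIQUE (icd9, icd10) key; duplicate mappings are
# silently skipped during the copy.
conn.execute("""CREATE TABLE icd9x10_clean (
    id INTEGER PRIMARY KEY, icd9 TEXT, icd10 TEXT,
    UNIQUE (icd9, icd10))""")
conn.execute("INSERT OR IGNORE INTO icd9x10_clean SELECT * FROM icd9x10")
print(conn.execute("SELECT COUNT(*) FROM icd9x10_clean").fetchone()[0])
```

One duplicate of ("29182", "F10182") is dropped, leaving two distinct mappings.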
On 29-03-2014 19:26, william drescher wrote:
I am given a table: ICD9X10 which is a mapping of ICD9 codes to ICD10
codes. Unfortunately the table contains duplicate entries that I need
to remove.
...
I just can't think of a way to write a query to delete the duplicates.
Does anyone have a
Bill, here is one approach:
The following query will return the id's that should NOT be deleted:
Select min (id) from icd9x10 group by icd9, icd10
Once you run it and are happy with the results, you subquery it in a DELETE
statement. Something like:
Delete from icd9x10 A where A.id not in
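That keep-the-lowest-id delete, end to end on the thread's sample rows (SQLite accepts a subquery on the table being deleted from; MySQL raises ERROR 1093 unless the subquery is wrapped in one more derived table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE icd9x10 (id INTEGER PRIMARY KEY, icd9 TEXT, icd10 TEXT)")
conn.executemany("INSERT INTO icd9x10 VALUES (?,?,?)", [
    (25, "29182", "F10182"), (26, "29182", "F10182"), (27, "29182", "F10282")])

# Keep only the minimum id of each (icd9, icd10) group.
conn.execute("""DELETE FROM icd9x10 WHERE id NOT IN
                (SELECT MIN(id) FROM icd9x10 GROUP BY icd9, icd10)""")
print(conn.execute("SELECT id FROM icd9x10 ORDER BY id").fetchall())
```

Row 26 (a duplicate of row 25's mapping) goes away; rows 25 and 27 survive.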
Whoa, you *must not* remove ibdata1;
it contains the global tablespace even with file_per_table.
ib_logfile0 and ib_logfile1 may be removed, but make sure you have
as consistent as possible a backup of the whole datadir
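That consistent-copy advice, sketched with stand-in paths (the real datadir is typically /var/lib/mysql, and the server must be stopped before the copy):

```shell
# Stand-in datadir with empty files in place of the real InnoDB files;
# the point is that ibdata1 and both ib_logfiles travel together.
DATADIR=/tmp/demo-datadir
BACKUP=/tmp/datadir-backup.tar.gz
mkdir -p "$DATADIR"
touch "$DATADIR/ibdata1" "$DATADIR/ib_logfile0" "$DATADIR/ib_logfile1"
# In real life: stop mysqld here (e.g. `systemctl stop mysql`) so nothing
# writes while the copy runs.
tar czf "$BACKUP" -C "$DATADIR" .
tar tzf "$BACKUP" | sort
```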
Subject: Re: help: innodb database cannot recover
I removed ib_logfile0 and ib_logfile1 and restarted mysql with
innodb_force_recovery=1,
mysql keeps crashing and restarting:
thd: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died
On 20.06.2013 10:11, Peter wrote:
130620 00:47:08 mysqld_safe Starting mysqld daemon with databases from
/var/lib/mysql
InnoDB: Error: tablespace size stored in header is 456832 pages, but
InnoDB: the sum of data file sizes is only 262080 pages
InnoDB: Cannot start InnoDB. The tail of
2013/6/20 Peter one2001...@yahoo.com
Hello,
I copied innodb database (ib_logfile0 ib_logfile1 ibdata1 and the whole
database directory) from one crashed machine to another.
I find that I cannot start database to get the database data any more.
How did you copy the database?
Manuel
On 20.06.2013 15:18, Peter wrote:
I copied innodb database (ib_logfile0 ib_logfile1 ibdata1 and the whole
database directory) from one crashed machine to another.
I find that I cannot start database to get the database data any more.
How did you copy the database?
Manuel
I copy the files ib_logfile0 ib_logfile1
On 19.04.2013 06:49, Kapil Karekar wrote:
snip
Though I would recommend not using such names. Some poor guy working
on your application six months down the line is going to wonder why
his queries are failing, spend a day trying to figure out and will
post the same question again to this list
On 19-Apr-2013, at 9:14 AM, Doug d...@hacks.perl.sh wrote:
why do these db names fail to be created while the last one succeeds?
mysql> create database 3208e1c6aa32;
mysql> create database 208e1c6aa32;
mysql> create database 08e1c6aa32;
mysql> create database 8e1c6aa32;
These are not working because
On 2013-04-19, Doug d...@hacks.perl.sh wrote:
why do these db names fail to be created while the last one succeeds?
[snips]
mysql> create database 3208e1c6aa32;
mysql> create database 208e1c6aa32;
mysql> create database 08e1c6aa32;
mysql> create database 8e1c6aa32;
mysql> create database e1c6aa32;
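A plausible reading, consistent with MySQL's identifier rules: an unquoted name that begins like a float literal (digits, then 'e', then a digit, as in 3208e1...) is lexed as a number, so the statement fails, while e1c6aa32 starts with a letter and parses as an identifier. Backtick-quoting such names sidesteps the issue. A quick checker for that reading (the function is hypothetical, not a MySQL API):

```python
import re

def parses_as_number_prefix(name: str) -> bool:
    # True when the name starts like a float literal (e.g. "3208e1..."),
    # which MySQL's lexer reads as a number rather than an identifier.
    return re.match(r'\d+e\d', name) is not None

names = ["3208e1c6aa32", "208e1c6aa32", "08e1c6aa32", "8e1c6aa32", "e1c6aa32"]
for n in names:
    verdict = "fails unquoted" if parses_as_number_prefix(n) else "ok"
    print(n, "->", verdict)
# Quoting with backticks, e.g. CREATE DATABASE `3208e1c6aa32`;, avoids the
# problem for all of them.
```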
Okay, panic over. I recursively stripped the ACLs and things are working.
Next time I drop a table from phpMyAdmin, I'll carefully read the little thing
that pops up saying I'm about to drop an entire database... :-( One gets so
"yeah, whatever" about warning notifiers...)
Thanks to all who sent
On 09.01.2013 16:33, Jan Steinman wrote:
I accidentally dropped a crucial database. My only backup is via Apple's Time
Machine.
First, I stopped mysqld and copied (via tar) the database in question from
the backup. Restarted, but drat -- most of the tables were apparently using
Hi,
It is not very surprising that the database cannot recover from a Time Machine
backup. This generally applies to any software that is running at the moment
the backup is taken. InnoDB is especially sensitive to taking what is
called a 'dirty' backup because it has a cache. You
On 12/11/2012 7:22 PM, h...@tbbs.net wrote:
... (Are all the distinct ids really needed? When one joins on a
field with the same name in both tables, one may use 'USING', and
only the common field, with neither NULL, shows up in the output.)
This is a perfectly acceptable naming convention to
On Tue, Dec 11, 2012 at 8:48 PM, Peter Brawley
peter.braw...@earthlink.net wrote:
ERROR 1054 (42S22): Unknown column 'data_tool.category_id' in 'on clause'
But category_id is a column in data_tool.
Then a bit of reordering is required ...
SELECT data_target.name, ep, wafer_id, lot_id,
On 2012-12-11 3:19 PM, Larry Martell wrote:
I have this query:
SELECT data_target.name, ep, wafer_id, lot_id,
date_time, data_file_id, data_cstimage.name,
bottom, wf_file_path_id, data_measparams.name,
vacc, data_category.name
FROM data_cst, data_target, data_cstimage,
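The usual cause of that 1054 error is mixing comma-joins with JOIN ... ON: since MySQL 5.0, the JOIN operator binds tighter than the comma, so a table listed before a comma is not visible inside a later ON clause. Writing every join explicitly avoids it; a toy sketch (invented columns, SQLite standing in for MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data_cst (tool_id INTEGER, target_name_id INTEGER)")
conn.execute("CREATE TABLE data_tool (id INTEGER, category_id INTEGER)")
conn.execute("CREATE TABLE data_category (id INTEGER, name TEXT)")
conn.execute("INSERT INTO data_cst VALUES (1, 44)")
conn.execute("INSERT INTO data_tool VALUES (1, 9)")
conn.execute("INSERT INTO data_category VALUES (9, 'cat')")

# Every table joined with an explicit JOIN ... ON, so each ON clause can
# see all the tables to its left.
rows = conn.execute("""
    SELECT data_category.name
    FROM data_cst
    JOIN data_tool ON data_tool.id = data_cst.tool_id
    JOIN data_category ON data_category.id = data_tool.category_id""").fetchall()
print(rows)
```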
Off hand, I would iterate over the PRIMARY KEY, looking at a thousand rows at a
time, DELETEing any that need to be purged. I would use a Perl or PHP loop, or
write a stored procedure. More discussion of huge deletes (which this
_could_ be):
http://mysql.rjweb.org/doc.php/deletebig
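Rick's chunked walk over the PRIMARY KEY, sketched with a toy table (names invented; SQLite standing in for MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, expired INTEGER)")
conn.executemany("INSERT INTO logs VALUES (?, ?)",
                 [(i, i % 2) for i in range(1, 10001)])

CHUNK = 1000
last_id = 0
while True:
    # Find the upper id bound of the next slice of the primary key.
    row = conn.execute(
        "SELECT MAX(id) FROM (SELECT id FROM logs WHERE id > ? "
        "ORDER BY id LIMIT ?)", (last_id, CHUNK)).fetchone()
    if row[0] is None:
        break
    hi = row[0]
    # Purge only within the slice, so no single DELETE scans the whole table.
    conn.execute("DELETE FROM logs WHERE id > ? AND id <= ? AND expired = 1",
                 (last_id, hi))
    conn.commit()  # keep each transaction short
    last_id = hi

print(conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0])
```

Half the 10,000 rows are flagged expired, so 5000 remain afterward.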
-----Original Message-----
From: Rick James [mailto:rja...@yahoo-inc.com]
Sent: Thursday, October 25, 2012 1:09 PM
To: Daevid Vincent; mysql@lists.mysql.com
Subject: RE: Help with purging old logs for each customer ID
Off hand, I would iterate over the PRIMARY KEY, looking at a thousand rows
at a time
To: Rick James; mysql@lists.mysql.com
Subject: RE: Help with purging old logs for each customer ID
Well, the customer_id is relevant in that I want the last 90 days
relative to each customer.
customer_id = 123 might have logs from jan - mar customer_id = 444
might have logs
To: Daevid Vincent; mysql@lists.mysql.com
Subject: RE: Help with purging old logs for each customer ID
If the 90 days is back from MAX(created_on) for a given customer...
INDEX(customer_id, created_on)
will probably be needed. And that should replace KEY `customers_id`
(`customer_id`).
Maybe
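The per-customer, last-90-days purge can be sketched like this (toy schema and string dates; SQLite accepts a correlated subquery on the table being deleted from, whereas MySQL would need the per-customer cutoffs materialized in a temporary table first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_logs (customer_id INTEGER, created_on TEXT)")
conn.executemany("INSERT INTO customer_logs VALUES (?,?)", [
    (123, "2012-01-01"), (123, "2012-06-30"),   # Jan row is > 90 days before Jun 30
    (444, "2012-05-01"), (444, "2012-05-15"),   # both within 90 days of May 15
])

# Cutoff is 90 days back from each customer's OWN newest row.
conn.execute("""
    DELETE FROM customer_logs
    WHERE created_on < (SELECT date(MAX(c2.created_on), '-90 days')
                        FROM customer_logs c2
                        WHERE c2.customer_id = customer_logs.customer_id)""")
print(conn.execute("SELECT COUNT(*) FROM customer_logs").fetchone()[0])
```

Customer 123 loses its January row; customer 444 keeps both.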
But connect-timeout has nothing to do with the termination of a query. It is the
number of seconds that mysqld waits for a connect packet before responding
with "Bad handshake"; the default value is 10 seconds.
You should probably adjust net_read_timeout / net_write_timeout instead.
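A minimal sketch of that adjustment (the values are illustrative, not recommendations):

```sql
-- connect_timeout only governs the initial handshake; for a query that
-- loses its connection mid-run, raise the read/write timeouts instead.
SET GLOBAL net_read_timeout = 600;
SET GLOBAL net_write_timeout = 600;
SHOW VARIABLES LIKE 'net_%timeout';
```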
- Original Message -
From: indrani gorti indrani.go...@gmail.com
Later I changed it to SET GLOBAL connect_timeout=60;
However, I still find that the query loses the connection to mysql
server after about 10 mins( 600.495 secs)
I see that the connect_time is 60 though.
If I recall
Also check for any firewall or NAT
On Sep 5, 2012 5:17 PM, indrani gorti indrani.go...@gmail.com wrote:
Hi all,
I am a newbie to work with the settings in mysql although I have used in
very small applications before
I am currently experimenting with very large tables and a few complicated
On Tue, Aug 21, 2012 at 8:26 PM, Larry Martell larry.mart...@gmail.com wrote:
On Tue, Aug 21, 2012 at 8:07 PM, h...@tbbs.net wrote:
2012/08/21 16:35 -0600, Larry Martell
I am trying to write a query that selects from both a correlated
subquery and a table in the main query, and I'm having a
Hello Martin,
On 8/22/2012 8:30 AM, Martin Gainty wrote:
assign realistic alias names
OuterJoin should be called OuterJoin
InnerJoin should be called InnerJoin
Almost! MySQL does not have a simple OUTER JOIN command (some RDBMSes
call this a FULL OUTER JOIN). What we do have is the option
select count(*), target_name_id, ep, avg(bottom), avg(averages)
from
( SELECT avg(bottom) as averages, target_name_id as t,
ep as e
from data_cst
where target_name_id = 44
group by target_name_id, ep, wafer_id) x,
On Tue, Aug 21, 2012 at 5:39 PM, Martin Gainty mgai...@hotmail.com wrote:
a look at the first query:
select count(*), target_name_id, ep, wafer_id from data_cst
where target_name_id = 44 group by target_name_id, ep, wafer_id;
| id | select_type | table | type |
2012/08/21 16:35 -0600, Larry Martell
I am trying to write a query that selects from both a correlated
subquery and a table in the main query, and I'm having a lot of
trouble getting the proper row count. I'm sure this is very simple,
and I'm just missing it. I'll try and present a simple
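What the thread converges on can be sketched end to end (toy rows, assumed column meanings, SQLite standing in for MySQL): per-wafer averages in a derived table, then the count and average-of-averages per (target_name_id, ep) outside it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE data_cst (
    target_name_id INTEGER, ep TEXT, wafer_id INTEGER, bottom REAL)""")
conn.executemany("INSERT INTO data_cst VALUES (?,?,?,?)", [
    (44, 'e1', 1, 10.0), (44, 'e1', 1, 20.0),
    (44, 'e1', 2, 30.0),
    (44, 'e2', 1, 40.0),
])
rows = conn.execute("""
    SELECT target_name_id, ep,
           COUNT(*)      AS wafers,            -- one derived row per wafer
           AVG(averages) AS avg_of_wafer_avgs
    FROM (SELECT target_name_id, ep, wafer_id, AVG(bottom) AS averages
          FROM data_cst
          WHERE target_name_id = 44
          GROUP BY target_name_id, ep, wafer_id) x
    GROUP BY target_name_id, ep""").fetchall()
for r in rows:
    print(r)
```

Grouping the derived table by wafer first is what makes the outer COUNT(*) a wafer count rather than a raw row count.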
Hi Victor,
To answer your question about saving the table.
This URL http://dev.mysql.com/doc/refman/5.6/en/myisam-repair.html - *Stage
3: Difficult repair* directly addresses your concerns.
You may also want to look into the different options of the REPAIR TABLE command
It doesn't particularly say, but this:
120116 19:15:29 120116 19:15:29 InnoDB: 1.1.8 started; log sequence number
1595675
suggests to me that there's still junk from a previous install around. You
might want to clean that up.
- Original Message -
From: mantianyu mantia...@gmail.com
2012/01/16 19:37 +0800, mantianyu
but at last step I start the service by run
sudo bin/mysqld_safe --user=mysql
I got following error message
cifer@Pig:/usr/local/mysql$ 120116 19:15:28 mysqld_safe Logging to
'/var/log/mysql/error.log'.
Your means of starting does not show where the
On Mon, Jan 16, 2012 at 4:37 AM, mantianyu mantia...@gmail.com wrote:
I have download the
Linux - Generic 2.6 (x86, 32-bit), Compressed TAR Archive
binary edition
and I installed it all followed the INSTALL_BINARY
but at last step I start the service by run
sudo bin/mysqld_safe
On 30.11.2011 03:13, Karen Abgarian wrote:
The concept is not difficult to explain. Most people do not expect a gas
tank
to shrink once the gas is consumed...right?
yes, but the hard-disk is the gas tank and the data are the gas
and yes, normally everybody would expect after deleting
2011/11/29 23:19 +0100, Reindl Harald
MY only luck is that i recognized this years ago after PLAYING
with innodb and so i started with innodb_file_per_table=1 from
the begin with the first production database
And are then the table-files in the directories with frm, or in the
directory
On 29.11.2011 14:08, Luis Pugoy wrote:
Hello. I have the following problem.
I was importing a large database to mysql using mysqldump. Unfortunately this
filled up the whole disk, and
mysqldump exited with an error that the table it is currently writing to is
full. Checking df -h it
On 29.11.2011, at 5:21, Reindl Harald wrote:
ibdata1 NEVER gets smaller; this is normal, and a huge problem
in your case. Only innodb_file_per_table, which is NOT the default,
would free the space after dropping tables
why is this dumb innodb_file_per_table=0 default since
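The setting being argued about is a one-line my.cnf change (section name assumed; existing tables must still be rebuilt afterwards, e.g. by dump/restore or ALTER TABLE ... ENGINE=InnoDB, before they move out of ibdata1):

```ini
# my.cnf fragment: give each InnoDB table its own .ibd file so that
# DROP/OPTIMIZE can return space to the filesystem.
[mysqld]
innodb_file_per_table = 1
```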
This is not to say that MySQL could not have more of the file management
features. For example, the ability to add or remove datafiles on the fly
and the
ability to detach tablespaces as collections of tables.
That's where MySQL(read InnoDB) got stuck actually, it never introduced a
On 29.11.2011 20:25, Karen Abgarian wrote:
On 29.11.2011, at 5:21, Reindl Harald wrote:
why is this dumb innodb_file_per_table=0 default since MOST PEOPLE
have only troubles with it because they can not free space with
optimize table with no real benefits?
The logic behind this is
Hi... there is stuff inline there.
The logic behind this is probably that without innodb_file_per_table=1
and with several large ibdata files, the space IS freed up when one does
optimize table or drop table. The space is freed up inside the database
files and can be reused.
well,
On 30.11.2011 01:11, Karen Abgarian wrote:
MY only luck is that i recognized this years ago after PLAYING
with innodb and so i started with innodb_file_per_table=1 from
the begin with the first production database
Well, I would not base my database design on luck and playing. There
Hi... and some more stuff inline.
Well, I would not base my database design on luck and playing. There
should be good awareness
of what the features do and what would be the plan to deal with file
allocations should the database
grow, shrink or somerset
if you are working many
Hi Aveek,
You need to use something like UNION ALL and HAVING to get the desired result.
Follow the example below:
select file, digest from
(
SELECT file, digest,Count(*) as Cnt FROM A GROUP BY file, digest
union all
SELECT file, digest,Count(*) as Cnt FROM B GROUP BY file, digest
) tmp
group by
I eventually came up with a solution myself although the query is a bit
different
SELECT C.file, C.digest, (a.cnt_A + b.cnt_B) AS total_count, C.refcount FROM C,
(SELECT file, digest, COUNT(file) AS cnt_A FROM A GROUP BY file, digest) as a,
(SELECT file, digest, COUNT(file) AS cnt_B FROM B
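That combined-counts query can be sketched on toy rows (the schema is guessed from the thread): per-(file, digest) counts from A and B, stitched onto C.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t in ("A", "B"):
    conn.execute(f"CREATE TABLE {t} (file TEXT, digest TEXT)")
conn.execute("CREATE TABLE C (file TEXT, digest TEXT, refcount INTEGER)")
conn.executemany("INSERT INTO A VALUES (?,?)",
                 [("f1", "d1"), ("f1", "d1"), ("f2", "d2")])
conn.executemany("INSERT INTO B VALUES (?,?)", [("f1", "d1")])
conn.execute("INSERT INTO C VALUES ('f1', 'd1', 7)")

rows = conn.execute("""
    SELECT C.file, C.digest, a.cnt_A + b.cnt_B AS total_count, C.refcount
    FROM C
    JOIN (SELECT file, digest, COUNT(*) AS cnt_A FROM A GROUP BY file, digest) a
      ON a.file = C.file AND a.digest = C.digest
    JOIN (SELECT file, digest, COUNT(*) AS cnt_B FROM B GROUP BY file, digest) b
      ON b.file = C.file AND b.digest = C.digest""").fetchall()
print(rows)
```

For ('f1', 'd1') that is 2 rows from A plus 1 from B, so total_count is 3.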
Shawn,
Thanks for the great help! It still is not working. I did an EXPLAIN on this
query with your amended split out join statements and got this:
| id | select_type | table | type |
Rhino,
Thanks for the help and time! Actually, I thought the same thing, but what's
weird is that is the only thing that doesn't slow it down. If I take out all of
the join clauses EXCEPT that one the query runs virtually instantaneously. for
some reason it will use the index in that case and
If the optimizer chooses the wrong index, you can tell it what index to use.
SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI
from Appt_ a force index(id_patient)
LEFT JOIN patient_ p
ON a.IdPatient = p.IdPatient
WHERE a.ApptDate = '2009-03-01';
See
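FORCE INDEX is MySQL syntax; SQLite's rough analogue is INDEXED BY. A toy sketch of pinning the index (table, index, and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appt (IdAppt INTEGER, IdPatient INTEGER, ApptDate TEXT)")
conn.execute("CREATE INDEX id_patient ON appt (IdPatient)")
conn.execute("INSERT INTO appt VALUES (1, 42, '2009-03-01')")

# INDEXED BY fails with an error if the named index cannot serve the query,
# which makes the index choice explicit instead of optimizer-dependent.
rows = conn.execute(
    "SELECT IdAppt FROM appt INDEXED BY id_patient WHERE IdPatient = 42").fetchall()
print(rows)
```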
Shawn,
This is the first thing that I though as well, but here is a portion from the
show create table for patient_:
PRIMARY KEY (`zzk`),
KEY `IdPatient` (`IdPatient`),
KEY `SSN` (`SSN`),
KEY `IdLastword` (`IdLastword`),
KEY `DOB` (`DateOfBirth`),
KEY `NameFirst` (`NameFirst`),
KEY
Hi Jim,
On 3/9/2011 17:57, Jim McNeely wrote:
I am trying to set up an export query which is executing very slowly, and I was
hoping I could get some help. Here is the query:
SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI,
a.IdProcedure, a.ProcName, CAST(CONCAT(a.ApptDate, ,
SELECT name, city, state, phone, prods_done, cancels, miles FROM
(SELECT name, city, state, phone, prods_done, cancels, miles, ((prods_done -
cancels) * 100 / prods_done) reliability
FROM volunteer_search WHERE project_id = 5653) A
ORDER BY reliability DESC, miles ASC
Give it a try !!!
Rolando A.
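Rolando's derived-table trick, on toy rows: compute reliability once in an inner SELECT so the outer query can ORDER BY it by name.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE volunteer_search (
    name TEXT, prods_done INTEGER, cancels INTEGER, miles REAL, project_id INTEGER)""")
conn.executemany("INSERT INTO volunteer_search VALUES (?,?,?,?,?)", [
    ("ann", 10, 1, 5.0, 5653),   # reliability 90
    ("bob", 10, 5, 2.0, 5653),   # reliability 50
])
rows = conn.execute("""
    SELECT name, reliability, miles FROM
      (SELECT name, miles,
              (prods_done - cancels) * 100.0 / prods_done AS reliability
       FROM volunteer_search WHERE project_id = 5653) A
    ORDER BY reliability DESC, miles ASC""").fetchall()
print(rows)
```

ann (90% reliable) sorts ahead of bob (50%).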
Hi Paul!
Paul Halliday wrote:
I have a query (thanks to this list) that uses a join to add country
information to an IP. It looks like this:
SELECT COUNT(signature) AS count, INET_NTOA(src_ip), map1.cc as
src_cc, INET_NTOA(dst_ip), map2.cc as dst_cc
FROM event LEFT JOIN mappings AS map1
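The two-lookups pattern in that query can be sketched with an invented mapping table (the real one presumably maps IP ranges rather than exact addresses): join the same table twice, once per IP column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (signature TEXT, src_ip INTEGER, dst_ip INTEGER)")
conn.execute("CREATE TABLE mappings (ip INTEGER, cc TEXT)")
conn.executemany("INSERT INTO mappings VALUES (?,?)", [(1, 'US'), (2, 'DE')])
conn.execute("INSERT INTO event VALUES ('sig', 1, 2)")

# Two aliases of the lookup table: m1 resolves the source, m2 the destination.
rows = conn.execute("""
    SELECT COUNT(e.signature), m1.cc AS src_cc, m2.cc AS dst_cc
    FROM event e
    LEFT JOIN mappings m1 ON m1.ip = e.src_ip
    LEFT JOIN mappings m2 ON m2.ip = e.dst_ip
    GROUP BY src_cc, dst_cc""").fetchall()
print(rows)
```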
On Monday 31 January 2011 21:12, Phillip Baker wrote:
Greetings All,
I am looking for a little help in setting a where clause.
I have a dateAdded field that is a DATETIME field.
I am looking to pull records from Midnight to midnight the previous day.
I thought just passing the date (without
Thank you very much Jørn
Blessed Be
Phillip
Never ascribe to malice what can be explained by incompetence
-- Hanlon's Razor
On Mon, Jan 31, 2011 at 1:18 PM, Jørn Dahl-Stamnes
sq...@dahl-stamnes.net wrote:
Jørn
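The usual answer to this question is a half-open range: midnight of the previous day inclusive, midnight of today exclusive. In MySQL that is typically WHERE dateAdded >= CURDATE() - INTERVAL 1 DAY AND dateAdded < CURDATE(); the same idea in SQLite, on toy rows with a fixed "today":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (dateAdded TEXT)")
conn.executemany("INSERT INTO records VALUES (?)", [
    ("2011-01-30 23:59:59",),   # two days ago: excluded
    ("2011-01-31 00:00:00",),   # previous day, first second: included
    ("2011-01-31 15:30:00",),   # previous day: included
    ("2011-02-01 00:00:00",),   # today: excluded
])
today = "2011-02-01"
rows = conn.execute("""
    SELECT dateAdded FROM records
    WHERE dateAdded >= date(?, '-1 day') AND dateAdded < ?""",
    (today, today)).fetchall()
print(len(rows))
```

The half-open comparison avoids both the off-by-one-second trap of BETWEEN and any time-of-day arithmetic.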
Thank you, that did the trick.
Simon
On 11 January 2011 12:09, Steve Meyers steve-mysql-l...@spamwiz.com wrote:
On 1/11/11 9:31 AM, Simon Wilkinson wrote:
select users.id from users where users.id in (select newletters.user_id
from
newletters left join articles on newletters.id =
On 1/11/11 9:31 AM, Simon Wilkinson wrote:
select users.id from users where users.id in (select newletters.user_id from
newletters left join articles on newletters.id = articles.newsletter_id
where articles.newsletter_id is null);
I think this would do what you require:
SELECT
u.id AS
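Simon's subquery, reconstructed on a toy schema (the "newletters" spelling is kept from the thread): the LEFT JOIN anti-join finds newsletters with no articles, and the outer IN picks the users subscribed to them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE newletters (id INTEGER, user_id INTEGER)")
conn.execute("CREATE TABLE articles (id INTEGER, newsletter_id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO newletters VALUES (?,?)", [(10, 1), (11, 2)])
conn.execute("INSERT INTO articles VALUES (100, 10)")  # newsletter 10 has an article

# articles.newsletter_id IS NULL keeps only newsletters the LEFT JOIN
# could not match, i.e. those with no articles at all.
rows = conn.execute("""
    SELECT users.id FROM users WHERE users.id IN
      (SELECT newletters.user_id FROM newletters
       LEFT JOIN articles ON newletters.id = articles.newsletter_id
       WHERE articles.newsletter_id IS NULL)""").fetchall()
print(rows)
```

Only newsletter 11 has no articles, so only its subscriber (user 2) comes back.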
Pito,
Can you show us the InnoDB parameters in the my.cnf file?
regards
anandkl
On Sat, Jan 8, 2011 at 10:31 PM, Pito Salas r...@salas.com wrote:
I am very new to trying to solve a problem like this and have searched
and searched the web for a useful troubleshooting guide but I am
honestly
thanks
On Wed, Aug 18, 2010 at 10:42 PM, a.sm...@ukgrid.net wrote:
Quoting Norman Khine nor...@khine.net:
What is shown from show master status and show slave status after you
have made a change on the master DB?
this is the output:
http://pastie.org/1100610
it does not seem to have
Quoting Norman Khine nor...@khine.net:
hello,
i have an issue with the replication here is my procedure:
http://pastie.org/1100368
in the log it shows replication works, but when i update a record this
is not updated on the server.
What is shown from show master status and show slave status