*From:* shawn l.green
*Sent:* 13 February 2018 09:51:33 PM
*To:* mysql@lists.mysql.com
*Subject:* Re: Optimize fails due to duplicate rows error but no duplicates found

Hello Machiel,

On 2/13/2018 3:02 AM, Machiel Richards wrote:
> Good day guys,
>
> I am hoping this mail finds you well.
>
> I am at a bit of a loss here...
>
> We are trying to run optimize against a table in order to reclaim disk
> space from archived data which has been removed.
>
> However, after running for over an hour, the optimize fails, stating that
> duplicate rows were found.
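For context, the operation being discussed is a minimal sketch (table name hypothetical; on InnoDB, OPTIMIZE TABLE is executed as a table rebuild plus ANALYZE, which is what reclaims the space freed by the deleted archive rows):

```sql
-- Rebuilds the table and its indexes; InnoDB maps this to
-- "ALTER TABLE ... FORCE" followed by ANALYZE TABLE.
OPTIMIZE TABLE archive_data;
```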
Hi Chris,

On 3/24/2015 10:07 AM, Chris Hornung wrote:
> Thanks for the suggestions regarding non-printing characters, definitely
> makes sense as a likely culprit!
>
> However, the data really does seem to be identical in this case:
>
> mysql> select id, customer_id, concat('-', group_id, '-')
>        from app_customergroupmembership
>        where customer_id = 'ajEiQA';

I suspect one of those group IDs has a trailing space or similar 'invisible'
character that makes it not identical.
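One way to make such characters visible is a sketch against the thread's own table (HEX exposes trailing spaces or non-printing bytes that the concat('-', ..., '-') trick can miss):

```sql
-- LENGTH (bytes) vs CHAR_LENGTH (characters) also diverge when
-- multi-byte or stray bytes are present; HEX shows the raw storage.
SELECT id,
       group_id,
       HEX(group_id),
       LENGTH(group_id),
       CHAR_LENGTH(group_id)
FROM app_customergroupmembership
WHERE customer_id = 'ajEiQA';
```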
----- Original Message -----
From: "Chris Hornung"
To: "MySql"
Sent: Monday, 23 March, 2015 18:20:36
Subject: duplicate rows in spite of multi-column unique constraint
Hello,

I've come across a situation where a table in our production DB has a
relatively small number of duplicate rows that seemingly defy the
unique constraint present on that table.

We're running MySQL 5.6.19a via Amazon RDS. The table in question is
~250M rows.

`show create table` gives
Daevid Vincent wrote:
WOW! You are right! That's silly. It's a table with a single column. All
unique.

Without the index MySQL doesn't know they are unique.

Anyways, here's the magic incantation that worked for me:

DROP TABLE IF EXISTS `dupes`;
CREATE TEMPORARY TABLE dupes
SELECT LogID FROM buglog GROUP BY BID, TS HAVING COUNT(*) > 1 ORDER BY BID;
> -----Original Message-----
> From: Chris W [mailto:[EMAIL PROTECTED]
> Sent: Monday, February 04, 2008 9:05 PM
> To: Daevid Vincent; MYSQL General List
> Subject: Re: Deleting duplicate rows via temporary table
> either hung or taking way way too long
Daevid Vincent wrote:

DROP TABLE IF EXISTS `dupes`;
CREATE TEMPORARY TABLE dupes
SELECT LogID FROM buglog GROUP BY BID, TS HAVING COUNT(*) > 1 ORDER BY BID;
LOCK TABLES buglog WRITE;
SELECT * FROM buglog WHERE LogID IN (SELECT LogID FROM dupes) LIMIT 10;
#DELETE FROM buglog WHERE LogID IN (SELECT LogID FROM dupes);
Having a bit of trouble deleting 8645 duplicate rows...
#//mySQL is broken and you can't reference a table you're deleting from in a
subselect.
#//http://www.thescripts.com/forum/thread490831.html
#// you can't even update said table, so this elegant solution fails too...
#// up
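For reference, a minimal sketch of a temporary-table approach that keeps one copy per group (column names follow the thread; MySQL indeed cannot select from the table being deleted from in a subquery, but wrapping the subquery in a derived table works around it):

```sql
-- Record one surrogate key per (BID, TS) group: the lowest LogID.
CREATE TEMPORARY TABLE keepers
SELECT MIN(LogID) AS LogID
FROM buglog
GROUP BY BID, TS;

-- Delete every row that is not its group's keeper. The extra derived
-- table sidesteps MySQL's restriction on referencing the DELETE target.
DELETE FROM buglog
WHERE LogID NOT IN (
  SELECT LogID FROM (SELECT LogID FROM keepers) AS k
);

DROP TEMPORARY TABLE keepers;
```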
> way, I would duplicate the rows that matched the condition and field1
> would get its value automatically incremented by the system.
>
> The catch now is that I want the rows to be duplicated like before, but
> specifying field4 as "abcd" in all
At 09:39 AM 12/7/2007, rfeio wrote:
> INSERT INTO table1 (field2, field3, field4)
> SELECT field2, field3, field4 FROM table1 WHERE field2=x

Have you tried:

INSERT INTO table1 (field2, field3, field4)
SELECT field2, field3, "ABCD" FROM table1 WHERE field2=x;

Mike
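A self-contained sketch of the pattern under discussion (the table definition is hypothetical, modelled on the thread's description; `7` stands in for whatever value identifies the rows to copy):

```sql
-- Hypothetical table matching the thread: field1 is the auto-increment key.
CREATE TABLE table1 (
  field1 INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  field2 INT NOT NULL,
  field3 VARCHAR(32),
  field4 VARCHAR(32)
);

-- Duplicate every row where field2 = 7, forcing field4 to 'abcd'.
-- field1 is omitted from the column list, so each copied row
-- receives a fresh auto-increment value.
INSERT INTO table1 (field2, field3, field4)
SELECT field2, field3, 'abcd'
FROM table1
WHERE field2 = 7;
```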
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
, but
specifying field4 as "abcd" in all of them.
How can I do this?
Cheers!
--
View this message in context:
http://www.nabble.com/HELP%3A-How-to-duplicate-rows...-tf4962682.html#a14214522
Sent from the MySQL - General mailing list archive at Nabble.com.
> -----Original Message-----
> From: Christian Hammers [mailto:[EMAIL PROTECTED]
> Sent: Friday, November 10, 2006 2:57 AM
> To: Daevid Vincent
> Cc: mysql@lists.mysql.com
> Subject: Re: ORDER BY RAND() gives me duplicate rows sometimes
Add DISTINCT(primary_key) in your query?

Regards
Willy
On 2006-11-09 Daevid Vincent wrote:
> I am using this query to pull three random comments from a table:
>
> "SELECT *, DATE_FORMAT(created_on, '%b %D') as date_format FROM comments
> ORDER BY RAND() LIMIT 3";
>
> The problem is that sometimes, I get two of the same comment. How can I
> refine this query to give me 3 unique/distinct ones?
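If the table genuinely stores the same comment text more than once, Willy's suggestion can be sketched like this (the `comment` column name is an assumption; note the lenient `SELECT *` with GROUP BY relies on MySQL's historical behavior and is rejected under ONLY_FULL_GROUP_BY in modern versions):

```sql
-- Collapse duplicate comment texts to one row each,
-- then randomize and take three.
SELECT *, DATE_FORMAT(created_on, '%b %D') AS date_format
FROM comments
GROUP BY comment
ORDER BY RAND()
LIMIT 3;
```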
Philip Hallstrom wrote (Tuesday, September 12, 2006 7:10 PM,
Subject: query to find duplicate rows):
> Hi all, a tough query problem for me...
> I have a table with 2 columns that matter: url and id.
> If url and id are the same in 2 rows, then that's no good (bad data).
> I need to find all the rows that are duplicates. I can't think of how
> to approach the sql for this.. any pointers?
> Hi all, a tough query problem for me...
> I have a table with 2 columns that matter: url and id.
> If url and id are the same in 2 rows, then that's no good (bad data).
> I need to find all the rows that are duplicates. I can't think of how
> to approach the sql for this.. any pointers?

SELECT url, id, COUNT(*) AS num_entries
FROM `table`
GROUP BY url, id
HAVING num_entries > 1;

Untested, but the concept should work for you.

Steve Musumeche
CIO, Internet Retail Connection
[EMAIL PROTECTED]
Hi all, a tough query problem for me...

I have a table with 2 columns that matter: url and id.
If url and id are the same in 2 rows, then that's no good (bad data).
I need to find all the rows that are duplicates. I can't think of how
to approach the sql for this.. any pointers?

Thanks!
Peter
--
----- Original Message -----
From: "A Z" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, September 21, 2004 10:02 AM
Subject: Duplicate Rows
MySQL 4.0.14

In a scenario:

Ref  EmailAddr
1    [EMAIL PROTECTED]
2    [EMAIL PROTECTED]
3    [EMAIL PROTECTED]
4    [EMAIL PROTECTED]

how can I delete duplicate email entries (records 1,
2) leaving 4.

regards
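With the addresses redacted the data is hard to read, but assuming rows 1, 2 and 4 share one address, a common pattern (a sketch; `mail` is a hypothetical table name) deletes every row that has a twin with a higher Ref:

```sql
-- For each EmailAddr, keep only the row with the highest Ref.
-- Multi-table DELETE is available from MySQL 4.0, so this
-- works on the 4.0.14 server mentioned above.
DELETE t1
FROM mail AS t1
JOIN mail AS t2
  ON t1.EmailAddr = t2.EmailAddr
 AND t1.Ref < t2.Ref;
```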
Well, doing it on all tables at once would probably bring the server to its
knees due to the cartesian product producing a VERY large temporary table.
You can do it on one table at a time like this (if my memory serves):

SELECT * FROM mytable AS t1, mytable AS t2
WHERE t1.column1 = t2.column1
  AND t1.column2 = t2.column2
  AND t1.column3 = t2.column3
Thanks for the response, Joshua.

I am so very new to MySQL, that I am afraid I require more guidance.
Is there a way to join ALL tables in a database rather than just one table
to itself, or one particular table to another?

SELECT * FROM allTables WHERE column1=column1 AND column2=column2 AND
column3=column3?
Yes, there is a way. It's called joins. :) I don't remember the exact syntax
off the top of my head, but the approach is thus:

Do a self join on the table and select records that match in their first three
columns, but do not have the same primary key (you *do* have primary keys on
your tables, right?)
Is there a way to use a SELECT statement (or any other, for that matter)
that will look at every table in a database and return every row whose first
3 columns are duplicated in at least one other row in any of the tables?
Essentially, a command to find duplicate entries in the database . . .

Thanks
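Joshua's approach, written out for one table (a sketch; `t`, its columns, and `pk` are hypothetical names — no single statement spans every table, so the query has to be repeated per table):

```sql
-- Rows of `t` whose first three columns appear in at least one
-- other row; the pk inequality stops each row matching itself.
SELECT t1.*
FROM t AS t1
JOIN t AS t2
  ON  t1.column1 = t2.column1
  AND t1.column2 = t2.column2
  AND t1.column3 = t2.column3
  AND t1.pk <> t2.pk;
```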
On Tue, 2 Dec 2003 10:47:07 +0100 "Wouter van Vliet" <[EMAIL PROTECTED]>
wrote:
> If you want to select those who HAVE BEEN at level 2 in the year Y,
> you can just do "level_id = 2". But I guess you want to know who
> currently IS at level 2 IN the year Y? In that case, when using a

Yes, that's
On Tuesday 2 December 2003 4:07 Skippy told the butterflies:
I must admit I'm pretty stumped here. I'm using MySQL 4.0.12.

I have several tables with info, and one which serves as a link between
them (it has IDs that refer to the IDs in all the others).

The info tables hold people (table 1), which get assigned to groups (2) by
directives (3), and each time they a
handling this.
--
Peter Sap

----- Original Message -----
From: "Nathan Jones" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, November 17, 2003 4:16 AM
Subject: Merging duplicate rows
How do you merge duplicate rows? All rows involved contain identical data
in each column.
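When every column is identical, a sketch of the usual collapse-and-reload approach (assuming a hypothetical table `t`; lock the table or wrap this in maintenance downtime if it is live):

```sql
-- Copy one representative of each distinct row aside,
-- then swap the de-duplicated set back in.
CREATE TABLE t_dedup SELECT DISTINCT * FROM t;
DELETE FROM t;
INSERT INTO t SELECT * FROM t_dedup;
DROP TABLE t_dedup;
```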
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
At 23:30 -0600 2/14/03, Lewis Watson wrote:
> I need to delete duplicate rows. Each row that is in the table has an
> exact duplicate of itself. There are four columns. No one column could be
> defined as a primary key; however, two columns together could. What's
> going to be the best way to do this?
----- Original Message -----
From: "Lewis Watson" <[EMAIL PROTECTED]>
To: "mysql" <[EMAIL PROTECTED]>
Sent: Saturday, February 15, 2003 12:30 AM
Subject: Delete duplicate rows
I need to delete duplicate rows. Each row that is in the table has an
exact duplicate of itself. There are four columns. No one column could be
defined as a primary key; however, two columns together could. What's
going to be the best way to do this?
Thanks.
Lewis
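Since two columns together can act as a key, one era-appropriate sketch (hypothetical names `t`, `col_a`, `col_b`; note ALTER IGNORE was removed in MySQL 5.7, so on modern servers use the SELECT DISTINCT copy-and-swap approach instead):

```sql
-- Adding a UNIQUE index with IGNORE silently drops the rows that
-- would violate it, leaving one copy of each duplicated pair.
ALTER IGNORE TABLE t ADD UNIQUE INDEX uniq_pair (col_a, col_b);
```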
On Monday 04 February 2002 01:12 pm, Greg Bailey wrote:
I guess I'm a little confused about the MySQL versions.

What is the real "production" version? If 4.0.2 can be called a
production version, I'd gladly use it on my web site; however, it
doesn't seem to indicate that on the MySQL home page. So if I find a
bug in 3.23.47 that was fixed a "long time ago"
This would give you a list of all users that have entered things more than
once; however, it would not give you all the rows that are duplicated.

SELECT User, COUNT(User) FROM mail_form2 GROUP BY User HAVING COUNT(User) > 1

Hope this helps!
Daniel Von Fange
Tom Churm wrote:
hi,
i have 2 questions that i badly need answered. i use phpmyadmin, but
any answers containing SQL syntax should work in this app...
1)
this should be simple but i don't know it. i use the following mysql
table field as the Key for my tables:
"id int(10) unsigned NOT NULL auto_increment,"
> how do I OVERWRITE the previous entry in the
> table? ie. is there a SQL command to do like INSERT, but if duplicate
> found, overwrite with the new value.

See REPLACE INTO:
http://www.mysql.com/doc/R/E/REPLACE.html

Take care,
seth
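Both options can be sketched against a hypothetical keyed table (REPLACE deletes and re-inserts the whole row, so unlisted columns are reset; INSERT ... ON DUPLICATE KEY UPDATE, available from MySQL 4.1, updates the existing row in place):

```sql
-- Hypothetical table: id is the key being "overwritten" against.
CREATE TABLE counters (
  id INT PRIMARY KEY,
  val INT
);

-- Delete-then-insert semantics.
REPLACE INTO counters (id, val) VALUES (1, 10);

-- Update-in-place semantics: only the listed column changes.
INSERT INTO counters (id, val) VALUES (1, 20)
ON DUPLICATE KEY UPDATE val = VALUES(val);
```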
Hello,

I have a table where for each row, the tuple (col1, col2) is
unique (when creating the table, I did a UNIQUE INDEX index1 (col1, col2)).

I wrote a perl program to read lines from a file and insert accordingly
into the table.

How do I prevent insertions of duplicates? (as in during the insertion
loop, i
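With the UNIQUE index already in place, the simplest guard inside the insert loop is a sketch like the following (`the_table` is a hypothetical name; INSERT IGNORE turns the duplicate-key error into a warning and skips the row):

```sql
-- Duplicate (col1, col2) pairs are silently skipped thanks to
-- the UNIQUE INDEX index1 (col1, col2) on the table.
INSERT IGNORE INTO the_table (col1, col2) VALUES ('a', 'b');
```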