Bigint BUG

2001-02-21 Thread chchen

hi all,
I am sure there is a bug in MySQL; I have confirmed it with mysql-3.23.32.
To repeat this bug:

CREATE TABLE A (
   B bigint(20) unsigned DEFAULT '0' NOT NULL,
   value bigint(20) unsigned DEFAULT '0' NOT NULL,
   PRIMARY KEY (B)
);
p.s. The same thing happens with the primary key removed.

insert into A values(9229307903454284864,1);
insert into A values(9229307903454284864,1);

then you will see it say
MySQL said: Duplicate entry '9229307903454285824' for key 1

See? It says the duplicate is 9229307903454285824, not the
9229307903454284864 we actually inserted.
Browse the table and you can see there is one row:
  9229307903454285824   812

This is not what we inserted, either. And whether you run
select * from A where B=9229307903454285824;
or
select * from A where B=9229307903454284864;
both return zero rows.

I tried this on:
mysql-3.23.32 on FreeBSD 4.2-RELEASE
mysql-3.23.25-beta on FreeBSD 3.3-STABLE
mysql-3.22.32 on Linux test 2.4.1 #2 SMP i686 unknown
mysql-3.23.24 on FreeBSD 4.0-STABLE
All show the same behavior.

Oh, suddenly I think of another fast way to test:
select 18446744073709551615;  (the maximum of unsigned bigint)
it returns 0656

Is it an overflow?
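That would be consistent with the value being converted through a C double somewhere between the client and the table. A minimal sketch of this theory (the conversion path is an assumption; the source does not confirm it): a 64-bit double has only a 53-bit mantissa, so integers between 2^63 and 2^64 are rounded to the nearest multiple of 2048, and that rounding reproduces the number in the error message exactly:

```python
# Assumption: the bigint passes through a C double (53-bit mantissa)
# somewhere along the way. Rounding to the nearest representable double
# reproduces the values from the bug report.

orig = 9229307903454284864          # the value we inserted
rounded = int(float(orig))          # round-trip through a 64-bit double
print(rounded)                      # 9229307903454285824, the "duplicate" value

# The maximum of unsigned bigint is not representable either:
print(int(float(18446744073709551615)))   # 18446744073709551616
```

If that is what is happening, the wrong error message, the wrong stored row, and the failing selects are all explained by the same rounding.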

Regards
chChen





BigInt with primary key

2001-02-20 Thread chchen

hi all,

I use mysql-3.23.32.

My project needs to use unsigned Bigint as a primary key, but when I insert
many rows into this table, it sometimes fails with a Duplicate error, like this:

insert into Table 
values('9231852172526977164',0,0,52056,0,0,0,0,0,52056,0,0,0,0,0,0,0,11,0,'184000','184000',1),'9231898557453533324',0,0,5532,0,0,0,0,0,5532,0,0,0,0,0,0,0,11,0,'184000','184000',1),'9230422383529723532',147,0,0,0,0,0,0,0,147,0,0,0,0,0,0,0,91,0,'184000','184000',1)
 query failed
Duplicate entry '9231898557453533324' for key 1

but actually if I select * from Table where a='9231898557453533324',

it can't find anything. What's wrong with this? Is it a bug?

p.s. Unsigned Bigint should range from 0 to 18446744073709551615, so
'9231898557453533324' is within that range, right?
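It is within range, yes, but being in range is not the same as being exactly representable if the value is compared through a C double somewhere (an assumption, not something this thread confirms). A quick sketch:

```python
# Hypothetical explanation: the key is converted through a 64-bit double,
# whose 53-bit mantissa cannot hold this 63-bit value exactly.
key = 9231898557453533324

assert key <= 18446744073709551615   # within unsigned bigint range
print(int(float(key)) == key)        # False: the double round-trip changes it
```

Under that assumption, the key that triggered the duplicate error and the key the select looks up may no longer be the same number by the time they reach the index.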

regards
Allen






Re: BigInt with primary key

2001-02-20 Thread chchen

First, I am really sorry about that.

My fields are quite simple:
CREATE TABLE TEST (
   A bigint(20) unsigned DEFAULT '0' NOT NULL,
   C0 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C1 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C2 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C3 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C4 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C5 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C6 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C7 bigint(20) unsigned DEFAULT '0' NOT NULL,
   S1 bigint(20) unsigned DEFAULT '0' NOT NULL,
   S2 bigint(20) unsigned DEFAULT '0' NOT NULL,
   S3 bigint(20) unsigned DEFAULT '0' NOT NULL,
   S4 bigint(20) unsigned DEFAULT '0' NOT NULL,
   S5 bigint(20) unsigned DEFAULT '0' NOT NULL,
   S6 bigint(20) unsigned DEFAULT '0' NOT NULL,
   S7 bigint(20) unsigned DEFAULT '0' NOT NULL,
   Total bigint(20) unsigned DEFAULT '0' NOT NULL,
   sLocation smallint(5) DEFAULT '0' NOT NULL,
   dLocation smallint(5) DEFAULT '0' NOT NULL,
   FirstTime time DEFAULT '00:00:00' NOT NULL,
   LastTime time DEFAULT '00:00:00' NOT NULL,
   Times smallint(5) unsigned DEFAULT '0' NOT NULL,
   PRIMARY KEY (A),
   KEY dLocation (dLocation),
   KEY Total (Total),
   KEY Times (Times)
);

And actually what I use is bigint. The quotes are there because I tried many
possible causes and still couldn't find where the problem is, so I added
quotes to test whether that would work; unfortunately the problem remains.

And the third question: yes, when the problem occurred that was my first
thought, but apparently not: isamchk with -r, -f, even -c or -e can't find
any error. Then I tried deleting the table and inserting again, but the
error still occurs.

To summarize my question: my table uses Bigint as a primary key. I
insert/update this table starting from empty; after several rows it
sometimes reports a Duplicate key on insert, but I can't find that row in
the table by select.




- Original Message -
Firstly, why do you have quotes around the numbers? I thought you said it
was bigint, not a string?

Secondly, if you are going to reply/repost to the list, could you please
simplify it a bit. Provide field names, etc. Your insert is very difficult
to figure out, especially with unmatched brackets, which don't make sense
at all. I'm surprised the insert even works.

Thirdly, have you tried CHECK TABLE / myisamchk?

- Original Message -
From: "chchen" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, February 20, 2001 18:56
Subject: BigInt with primary key






Performance of Heap table?!

2001-02-15 Thread chchen

Hi all

I find that the performance of Heap tables seems worse than MyISAM tables.
Of course, a MyISAM table usually lives on the hard disk, so it will be
slowed down by I/O.

But I did a test: I mounted a memory disk on FreeBSD and ran update/insert
against a table stored there. It turned out to be faster than using a Heap
table with the same structure.

And the speed difference almost doubles once the table grows past 200K+ rows.

Any ideas?

Regards 
chChen



Lock with update,insert and select

2001-02-14 Thread chchen

hi all,

I have a problem. I use mysql-3.23.32 on FreeBSD 4.2-RELEASE.

My project needs to update/insert some data into MySQL every 5 minutes, but
sometimes there is a lot of data, so I need to make the update/insert time
as short as possible to keep it from running over 5 minutes.

I found that using LOCK ... WRITE improves the insert/update speed, but
during the lock I can't select until the unlock. This is a serious problem
for me. Is there a good solution?

For example, can I do a select without caring about the WRITE lock? Or is
there any other way to insert/update fast without a WRITE lock?

Regards
chchen



Speed of mysql

2001-02-10 Thread chchen

hi all,
I have a strange question.
I use FreeBSD 4.2-RELEASE + mysql-3.23.32,
Dual PIII 667 + 768MB RAM.

I have a table like this
CREATE TABLE ABC (
   A int(10) unsigned DEFAULT '0' NOT NULL,
   B int(10) unsigned DEFAULT '0' NOT NULL,
   C0 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C1 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C2 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C3 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C4 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C5 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C6 bigint(20) unsigned DEFAULT '0' NOT NULL,
   C7 bigint(20) unsigned DEFAULT '0' NOT NULL,
   Total bigint(20) unsigned DEFAULT '0' NOT NULL,
   A2 smallint(5) DEFAULT '0' NOT NULL,
   B2 smallint(5) DEFAULT '0' NOT NULL,
   PRIMARY KEY (A, B),
   KEY B (B),
   KEY B2 (B2)
);
I wrote a C program to insert data into this table; my data has about
1M+ rows.

When it starts inserting data into MySQL it seems to run at high speed,
but after about 200K+ rows it slows down more and more.

Actually I can't measure the slowdown directly, but judging by the CPU
usage of my C program and mysqld: at the beginning it is 9x%, then it gets
lower and lower after about 200K+ rows, and finally it usually uses only
1x% or even just x% of the CPU.

I tried using a memory disk to store the table, and the slowdown didn't
happen, so I think it is an I/O problem.

So I thought maybe my table was too big to search, and I split it into 256
tables to reduce the table size, but that didn't seem to solve the speed
problem; when it slows down (CPU usage drops), I checked all 256 tables and
none is bigger than 100K rows.

So... does anyone have a good idea for this situation? Please don't ask me
to put it all into memory; I tried that before, but the table is too big.
I guess about 500MB for the MYD and MYI files.
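One thing that sometimes helps with bulk MyISAM inserts (a hedged suggestion, not from this thread): batch many rows into a single multi-row INSERT, so the per-statement parse and index-maintenance overhead is paid once per batch instead of once per row. A small sketch of a helper that builds such a statement; the table name and values here are illustrative:

```python
def multi_row_insert(table, rows):
    # Build one multi-row INSERT statement from a list of value tuples.
    # Sending rows in batches amortizes per-statement parsing and index
    # update overhead compared with one INSERT per row.
    values = ", ".join(
        "(" + ", ".join(str(v) for v in row) + ")" for row in rows
    )
    return "INSERT INTO %s VALUES %s;" % (table, values)

stmt = multi_row_insert("ABC", [(1, 2, 100), (3, 4, 200)])
print(stmt)   # INSERT INTO ABC VALUES (1, 2, 100), (3, 4, 200);
```

Whether this helps depends on where the time actually goes; if the slowdown is really index I/O once the key cache is exhausted, a larger key buffer (key_buffer_size in my.cnf) may matter more.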

Regards
chChen







Re: Speed of mysql

2001-02-10 Thread chchen

Oh, no, I am not talking about the speed of SELECT; my problem is the speed
of INSERT and UPDATE. If anything, more indexes only slow down the speed of
insert/update, so I reduced the indexes to only the ones I need.
-

What sort of queries are you doing on this large table? I notice you only have
a couple of the fields indexed

jason


- Original Message -
From: "chchen" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, February 10, 2001 6:35 PM
Subject: Speed of mysql









-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php