You can probably use a trigger. Check the section of the manual that
explains triggers.
Baron
On Fri, Feb 27, 2009 at 8:04 AM, Ron wrote:
> Hi All,
>
> is it possible to auto insert to another table once a new data is inserted
> on a table? i'm using asterisk mysql cdr, what
Hi All,
is it possible to auto insert to another table once a new data is
inserted on a table? i'm using asterisk mysql cdr, what i'd like to do
is once asterisk insert new data on the cdr table, i will insert to
another table which includes already how much the call was coz i don
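A minimal sketch of such a trigger (the table and column names here are illustrative, not Asterisk's actual cdr schema, and the per-second rate is invented):

```sql
-- Hypothetical example: after each CDR row is inserted, compute a cost
-- and store it in a second table. Adjust names/columns to your schema.
DELIMITER //
CREATE TRIGGER cdr_after_insert
AFTER INSERT ON cdr
FOR EACH ROW
BEGIN
  INSERT INTO call_costs (call_id, duration, cost)
  VALUES (NEW.id, NEW.billsec, NEW.billsec * 0.02);  -- assumed rate
END//
DELIMITER ;
```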
Using the C API, I want to execute one SQL statement.
I must build one strSql = "INSERT INTO table hello (id, blob)
VALUES (?,?),(?,?),(?,?)...",
then bind every position with mysql_stmt_bind_param(),
then execute.
The C API doesn't have a batch exec.
If I can have strSQL = "INSERT INTO table hello (id
On Tue, Feb 24, 2009 at 8:53 PM, PJ wrote:
> Is there a way to insert several rows into a table with one statement?
> I have this:
> $sql2 = "INSERT INTO authors
>(first_name, last_name, ordinal) VALUES
> ('$first_nameIN', '$last_nameIN'
Is there a way to insert several rows into a table with one statement?
I have this:
$sql2 = "INSERT INTO authors
(first_name, last_name, ordinal) VALUES
('$first_nameIN', '$last_nameIN', '1')";
$result2 = mysql_query($sql2, $db
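Yes: MySQL's INSERT accepts several value lists in one statement. A sketch using the authors table from the post (the values are made up):

```sql
INSERT INTO authors (first_name, last_name, ordinal) VALUES
  ('Anne',    'Rice',    1),
  ('John',    'Grisham', 2),
  ('Umberto', 'Eco',     3);
```

In PHP you would typically build the `(...),(...),(...)` part by collecting one `('...','...',n)` string per row and joining them with `implode(',', $rows)`.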
Hi,
On Fri, Feb 13, 2009 at 4:52 PM, PJ wrote:
> I am trying to create a php-mysql page to POST new records to several
> tables from one php page and I have to retrieve records from several
> (like 4 to 8) tables in one query.
> Being quite new to php & mysql, I am wondering what is the best way
I am trying to create a php-mysql page to POST new records to several
tables from one php page and I have to retrieve records from several
(like 4 to 8) tables in one query.
Being quite new to php & mysql, I am wondering what is the best way to
go about this. My tables consist of
* Structure
On Wed, Jan 7, 2009 at 9:17 AM, David Giragosian wrote:
> On 1/7/09, Jim Lyons wrote:
>>
>> There are other factors. If a table is completely fixed in size it makes
>> for a faster lookup time since the offset is easier to compute. This is
>> true, at least, for myisam tables. All books on tun
On 1/7/09, Jim Lyons wrote:
>
> There are other factors. If a table is completely fixed in size it makes
> for a faster lookup time since the offset is easier to compute. This is
> true, at least, for myisam tables. All books on tuning that I have read
> have said the CHAR makes for more effici
There are other factors. If a table is completely fixed in size it makes
for a faster lookup time since the offset is easier to compute. This is
true, at least, for myisam tables. All books on tuning that I have read
have said the CHAR makes for more efficient lookup and comparison that
VARCHAR.
On Wed, Jan 7, 2009 at 2:26 PM, Richard Heyes wrote:
> I still think a CHAR field would be faster than a VARCHAR because of
> the fixed row length (assuming every thing else is fixed). Perhaps
> someone from the MySQL list could clarify...?
Say that your column length goes up to 2000 bytes, but
>>So where's the advantage of VARCHAR ?
>
> Less space on disc = less data retrieved from disc = faster data
> retrieval - sometimes. If you have small columns, a small number of
> rows, or both, then char columns may be faster. If you have large
> columns of varying actual length, lots of rows, or
dex on the location_id column of
the locations table is not unique in the locations_view view. Therefore,
locations is not a key-preserved table and columns from that base table are not
updatable.
INSERT INTO locations_view VALUES
(999, 'Entertainment', 87, 'Roma');
INS
I have a VIEW that is defined over two base tables. One table is subtype
of another table and it's the VIEW that connects them. Now when I want to
insert into a subtable I have to insert through the VIEW. However I am
getting an error message when I try to insert into a VIEW. I found the
sol
> Hey guys,
>
> I am trying to construct a specially crafted view for the powerdns
> DNS-Server.
> This is what I have so far:
> CREATE VIEW test4 AS SELECT nummer AS name, ip as content FROM
> jabix.spaces JOIN jabix.ves ON spaces.veid = ves.id;
Wouldn't this work?
CREATE VIEW test4 (name, co
You can add a column to a view like this:
CREATE VIEW test4 AS SELECT nummer AS name, ip as content, 1 as domain_id
FROM jabix.spaces JOIN jabix.ves ON spaces.veid = ves.id;
This will set the domain_id values to 1
Olaf
On 10/14/08 8:18 AM, "Samuel Vogel" <[EMAIL PROTECTED]> wrote:
> Hey guys,
This is exactly what I tried to avoid by using a view. I do not want to have
to take care of synchronizing two tables.
Is there any way to avoid this?
Regards,
Samy
Hi Samuel,
I am not sure if you can add a new column to a view, but why don't you create a
new table test4 as
create table test4 AS SELECT nummer AS name, ip as content FROM jabix.spaces
JOIN jabix.ves ON spaces.veid = ves.id;
And then add the new column to test4. Whenever any new data is added int
Hey guys,
I am trying to construct a specially crafted view for the powerdns
DNS-Server.
This is what I have so far:
CREATE VIEW test4 AS SELECT nummer AS name, ip as content FROM
jabix.spaces JOIN jabix.ves ON spaces.veid = ves.id;
+------+---------+
| name | content |
+------+---------+
;.
SET @cntr:=0, @lastVal:='A';
INSERT INTO tableB (LOC,DATA) SELECT CONCAT(LOC,
CONCAT( IF(@lastVal=LOC, @cntr:=@cntr+1, @cntr:=0),
IF(@lastVal:=LOC,'',''))) LOC, CONCAT(DATA, @cntr) FROM tableA ORDER
BY LOC
That should add a sequential number to LOC and DAT
Well, for your simple example, you can use query variables to add the
"counters".
SET @cntr:=0, @lastVal:='A';
INSERT INTO tableB (LOC,DATA) SELECT CONCAT(LOC,
CONCAT( IF(@lastVal=LOC, @cntr:=@cntr+1, @cntr:=0),
IF(@lastVal:=LOC,'',''))) LOC, CO
I have an existing data set - here is an example (the real one is more
complex than this)
LOC DATA
-
A 1
B 2
C 3
D 4
E 5
F 6
...
and I am looking to run some sort of INSERT ... SELECT on this to make
a new table like this:
LOC
Hi:
Using mysql 5.0.45 on linux 2.6.22-15 (ubuntu).
I can create tab-deliminated text files using mysqldump -T, but
I haven't found a way to include a header with column names.
Any ideas?
It's an easy task for me to extract them with a script, but if
mysqldump can do it, so much the better.
TI
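One common workaround (an assumption on my part, not a mysqldump feature): write the file from SQL, prepending the column names as a literal row with UNION (table and column names here are hypothetical):

```sql
-- The header row is selected first; INTO OUTFILE goes on the last SELECT.
SELECT 'col1', 'col2'
UNION ALL
SELECT col1, col2 FROM t
INTO OUTFILE '/tmp/t.tsv'
FIELDS TERMINATED BY '\t';
```

Alternatively, running the query through the client in batch mode (`mysql -e "SELECT ..." > file`) produces tab-delimited output with a header line.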
I have a table like this:
unique_serial - Auto Increment
field_1
field_2
field_3
The Primary key is a combination of field_1, field_2, and field_3.
I want to do:
INSERT IGNORE INTO table_name (field_1,field_2,field_3) VALUES
('xx','xx','xx')
Sometimes t
ed in your original query. Perhaps this
might be why your original query didn't work in the first place?
Does this not work?
insert into games2 (sea_id,date,time,loc_id,hteam,vteam,div_id)
(
select
'36' as sea_id, s1.game_date as date, s1.begin_time as time,
s1.loc_id as loc
the query
that would work with an INNER JOIN though. Just so that I can study it and
learn from it.
INSERT INTO games2( sea_id, date, time, loc_id, hteam, vteam, div_id )
(
SELECT '36' AS sea_id, s1.game_date AS date, s1.begin_time AS time,
s1.loc_id AS loc_id, s1.home_
TECTED]; [EMAIL PROTECTED]
> CC: mysql@lists.mysql.com
> Subject: RE: Multiple Query/Insert help
> Date: Sat, 16 Aug 2008 10:10:15 -0500
>
> Thank you, but I still get an error and I can't figure it out: " Unknown
> column 's1.div_id' in 'on clause'
y Shellam; Steven Buehler
Cc: mysql@lists.mysql.com
Subject: RE: Multiple Query/Insert help
the only possible suggestion i have would be to disambiguate the selected
columns with 'as'
insert into games2 (sea_id,date,time,loc_id,hteam,vteam,div_id)
(
select
'36' as sea_id,
n Buehler
Cc: mysql@lists.mysql.com
Subject: Re: Multiple Query/Insert help
Hi Steve,
You're seeing this error because this query:
select div_id
from team_season where team_id=s1.div_id
is being run independently of the rest, so it doesn't know of "s1" in
this context. You would
Hi Martin,
Good point, I normally do but was just illustrating the join. I would
also normally fully-qualify each column when using table aliases and
multiple tables to avoid ambiguity.
insert into games2 (sea_id,date,time,loc_id,hteam,vteam,div_id)
(
select
'36' as
the only possible suggestion i have would be to disambiguate the selected
columns with 'as'
insert into games2 (sea_id,date,time,loc_id,hteam,vteam,div_id)
(
select
'36' as sea_id, game_date as date, begin_time as time, loc_id as loc_id,
home_team_id as hteam, a
following (may need tweaking):
e.g.
insert into games2 (sea_id,date,time,loc_id,hteam,vteam,div_id)
(
select
'36', game_date, begin_time, loc_id, home_team_id,
away_team_id, ts.div_id
from
scheduler s1 INNER JOIN team_season ts ON ts.t
I have a query that I just can't seem to get working.
insert into games2 (sea_id,date,time,loc_id,hteam,vteam,div_id) (select
'36',game_date,begin_time,loc_id,home_team_id,away_team_id,(select div_id
from team_season where team_id=s1.div_id) from scheduler s1);
Of course, I
You can also set the bulk_insert_buffer_size=, based
on available RAM.
This would improve bulk insert on myisam tables.
regards
anandkl
On 7/26/08, mos <[EMAIL PROTECTED]> wrote:
>
> At 06:46 PM 7/25/2008, you wrote:
>
>> List,
>>
>> I am bulk inserting a
At 06:46 PM 7/25/2008, you wrote:
List,
I am bulk inserting a huge amount of data into a MyISAM table (a
wikipedia page dump).
Before I issued SOURCE filename.sql; I did an ALTER TABLE page DISABLE
KEYS; LOCK TABLES page WRITE;
The dump consists of about 1,200 bulk INSERT statements with
List,
I am bulk inserting a huge amount of data into a MyISAM table (a
wikipedia page dump).
Before I issued SOURCE filename.sql; I did an ALTER TABLE page DISABLE
KEYS; LOCK TABLES page WRITE;
The dump consists of about 1,200 bulk INSERT statements with roughly
12,000 tuples each.
For the
On Mon, Jul 21, 2008 at 2:43 PM, mos <[EMAIL PROTECTED]> wrote:
> I thought if MySQL found a duplicate key on the insert, it would
> automatically update the existing row that it found with the results from
> table1 if I left out the column expressions in the update clause. But
At 12:16 PM 7/21/2008, you wrote:
So just use REPLACE instead of INSERT...
Sure, but a Replace will delete the existing row and insert the new one
which means also maintaining the indexes. This will take much longer than
just updating the existing row. Now if there were only a couple of
So just use REPLACE instead of INSERT...
http://dev.mysql.com/doc/refman/5.0/en/replace.html
On Mon, Jul 21, 2008 at 11:44 AM, mos <[EMAIL PROTECTED]> wrote:
> At 08:23 PM 7/20/2008, Perrin Harkins wrote:
>
>> On Sun, Jul 20, 2008 at 12:12 AM, mos <[EMAIL PROTECTED]> wr
At 11:00 AM 7/21/2008, Perrin Harkins wrote:
On Mon, Jul 21, 2008 at 11:44 AM, mos <[EMAIL PROTECTED]> wrote:
> I can't specify all of the columns in a Set statement in the
> OnDuplicate clause because I don't know what the column names are and there
> could be 100 columns.
Write code to d
On Mon, Jul 21, 2008 at 11:44 AM, mos <[EMAIL PROTECTED]> wrote:
> I can't specify all of the columns in a Set statement in the
> OnDuplicate clause because I don't know what the column names are and there
> could be 100 columns.
Write code to do it. There is no way around specifying the co
At 08:23 PM 7/20/2008, Perrin Harkins wrote:
On Sun, Jul 20, 2008 at 12:12 AM, mos <[EMAIL PROTECTED]> wrote:
> Is there a way to get "Insert ... select ... On Duplicate Update" to
update
> the row with the duplicate key?
That's what it does.
> Why can't i
On Sun, Jul 20, 2008 at 12:12 AM, mos <[EMAIL PROTECTED]> wrote:
> Is there a way to get "Insert ... select ... On Duplicate Update" to update
> the row with the duplicate key?
That's what it does.
> Why can't it do this?
What makes you think it can't?
Is there a way to get "Insert ... select ... On Duplicate Update" to
update the row with the duplicate key? Otherwise I'll have to use Replace
which is inefficient because it deletes the old duplicated row and then
inserts the new row with the same key. I'd much rathe
On Wed, Jul 9, 2008 at 9:37 AM, Phil <[EMAIL PROTECTED]> wrote:
> Is it possible to do an insert into with subselect and group by with an
> additional on duplicate insert ?
>
> CREATE TABLE NEW_TABLE (
> `a` varchar(10),
> `b` double
> ) engine=MyISAM;
>
>
>
Sorry, that was just a typo,
should have been
INSERT INTO NEW_TABLE (select old.x,sum(old.y) from OLD_TABLE old group by
old.x)
on duplicate key
update b=sum(old.y);
but this gives
ERROR (HY000): Invalid use of group function
INSERT INTO NEW_TABLE (select old.x,sum(old.y) from
you should say "group by old.x" and not "old.a"
On 7/9/08, Arthur Fuller <[EMAIL PROTECTED]> wrote:
>
> I think that you need to select old.a otherwise you cannot group by it.
>
> Arthur
>
> On 7/9/08, Phil <[EMAIL PROTECTED]> wrote:
> >
>
I think that you need to select old.a otherwise you cannot group by it.
Arthur
On 7/9/08, Phil <[EMAIL PROTECTED]> wrote:
>
> Is it possible to do an insert into with subselect and group by with an
> additional on duplicate insert ?
>
> CREATE TABLE NEW_TABLE (
>
Is it possible to do an insert into with subselect and group by with an
additional on duplicate insert ?
CREATE TABLE NEW_TABLE (
`a` varchar(10),
`b` double
) engine=MyISAM;
INSERT INTO NEW_TABLE (select old.x,sum(old.y) from OLD_TABLE old group by
old.a)
on duplicate key
update b=sum
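The "Invalid use of group function" error comes from calling SUM() inside the UPDATE clause; referring to the value the SELECT produced via VALUES() avoids it. A sketch against the posted schema (assuming a unique key on `a`):

```sql
INSERT INTO NEW_TABLE (a, b)
SELECT old.x, SUM(old.y) FROM OLD_TABLE old GROUP BY old.x
ON DUPLICATE KEY UPDATE b = VALUES(b);
```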
;[EMAIL PROTECTED]>
> wrote:
> Hi,
> Currently in my production environment, mysql only allow max
> 50.000
> records per insert. Is it possible to increase the number of
> records
> inserted during insert into query? How t
Make the max_allowed_packet parameter larger.
On Mon, May 19, 2008 at 12:17 PM, sangprabv <[EMAIL PROTECTED]> wrote:
> Hi,
> Currently in my production environment, mysql only allow max 50.000
> records per insert. Is it possible to increase the number of records
> inserted
Hi,
Currently in my production environment, mysql only allow max 50.000
records per insert. Is it possible to increase the number of records
inserted during insert into query? How to do it? TIA
Regards,
Willy
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To
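If the limit really is max_allowed_packet (an assumption; the cap could also come from the client or application), it can be raised at runtime or in my.cnf:

```sql
-- Needs the SUPER privilege; the client's own limit must be raised too.
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- 64 MB
SHOW VARIABLES LIKE 'max_allowed_packet';
```

The persistent equivalent is `max_allowed_packet=64M` under the `[mysqld]` section of my.cnf.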
Hi all,
Apologies if this isn't the correct list but I couldn't see a more
suitable one.
I have 4 tables. t1 and t3 are a many to many relationship and use t2
as link table. t3 has many t4.
What I want to do is insert a new row into t3 for each row in t1. I
then want
Hi,
table table1 ( int1 int auto_increment,
date date not null default '0000-00-00' )
1) insert into table1 (date) values('. . ')
// shows Incorrect date value: '. . ' for column 'date' at row 1
// '. . ' ==>
On Thu, 3 Apr 2008, Hiep Nguyen wrote:
hi all, i have a question on insert ... select statement.
tbl_1(fld1,fld2,fld3, )
fld1 int primary key auto_increment not null
tbl_2(fld_a,fld_b,fld_c,...)
how do i construct my select statement so that fld1 is auto increment?
insert into tbl_1
The beauty of this language is exactly as Johan says, you can skip the
obvious, Just insert all the other non-obvious columns. In the event that
you have numerous defaulted columns, though, it's best to supply a NULL so
the syntax is parallel (IMO), or alternatively to name the columns. But
e
>Description:
copying a table with a fulltext index via
insert into ft1 select * from ft2;
into an identical table sometimes leads to select error 1191
when concurrent selects are running.
this happens in an environment where the searched
Hi Hiep,
Hiep Nguyen skrev:
On Thu, 3 Apr 2008, Johan Höök wrote:
Hi Hiep,
you can put in either xxx = NULL
or you can skip it completely:
insert into tbl_1(fld2,fld3) select fld_b, NOW() from tbl_2;
Regards,
/Johan
Hiep Nguyen skrev:
hi all, i have a question on insert ... select
On Thu, 3 Apr 2008, Johan Höök wrote:
Hi Hiep,
you can put in either xxx = NULL
or you can skip it completely:
insert into tbl_1(fld2,fld3) select fld_b, NOW() from tbl_2;
Regards,
/Johan
Hiep Nguyen skrev:
hi all, i have a question on insert ... select statement.
tbl_1(fld1,fld2
Hi Hiep,
you can put in either xxx = NULL
or you can skip it completely:
insert into tbl_1(fld2,fld3) select fld_b, NOW() from tbl_2;
Regards,
/Johan
Hiep Nguyen skrev:
hi all, i have a question on insert ... select statement.
tbl_1(fld1,fld2,fld3, )
fld1 int primary key
hi all, i have a question on insert ... select statement.
tbl_1(fld1,fld2,fld3, )
fld1 int primary key auto_increment not null
tbl_2(fld_a,fld_b,fld_c,...)
how do i construct my select statement so that fld1 is auto increment?
insert into tbl_1(fld1,fld2,fld3) select xxx, fld_b, NOW
update` double default NULL,
> > PRIMARY KEY (`proj`,`country`)
> > ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
> >
> > To get the data I can do the following
> >
> > select 'rsi',country,sum(metric1) as total,count(*),sum(metric2) from
> >
;rsi',country,sum(metric1) as total,count(*),sum(metric2) from
> user_table group by country;
>
> This works fine, but then I tried to populate the table with
>
> INSERT INTO countrystats (select 'rsi', country,sum(metric1) as
> total,count(*) as count,sum(metric2
double default NULL,
PRIMARY KEY (`proj`,`country`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
To get the data I can do the following
select 'rsi',country,sum(metric1) as total,count(*),sum(metric2) from
user_table group by country;
This works fine, but then I tried to populate the table
Thanks it works great ! :)
Dan Buettner wrote :
Richard, it's possible, & your syntax is pretty close.
Try this:
INSERT INTO info_stamp
(fav_colour, hobby, stamp_date, firstname,
last_name, sexe, age, username, email, insc_date)
SELECT $fav_colour, $hobby, $time, a.firstname, a.
Richard, it's possible, & your syntax is pretty close.
Try this:
INSERT INTO info_stamp
(fav_colour, hobby, stamp_date, firstname,
last_name, sexe, age, username, email, insc_date)
SELECT $fav_colour, $hobby, $time, a.firstname, a.last_name, a.sexe,
a.age, a.username
Hello,
I need to insert one entry(one line) containing 10 values, some from a
form(ok) but some from two other tables, what is the best way to do this ?
Say for example I need to enter this information :
first_name - last_name - age - sexe - username - email - favorite_colour
- hobby
Mário Gamito <[EMAIL PROTECTED]> wrote on 02/12/2008 01:00:25 AM:
> Hi,
>
> Is it possible to send an INSERT statement from a Windows server running
> MS SQL SERVER 2005 to a Linux box running MySQL ?
>
> If so, how ? Do I need any special tools ?
>
> Any help w
Save the SQL statement to a text file, then execute it inside the mysql shell.
On Feb 12, 2008 2:00 PM, Mário Gamito <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Is it possible to send an INSERT statement from a Windows server running
> MS SQL SERVER 2005 to a Linux box running MySQL ?
>
Hi,
Is it possible to send an INSERT statement from a Windows server running
MS SQL SERVER 2005 to a Linux box running MySQL ?
If so, how ? Do I need any special tools ?
Any help would be appreciated.
Warm Regards,
Mário Gamito
> On Jan 30, 2008 12:50 PM, Dmitry E. Oboukhov wrote:
> > Is it possible to add to the syntax of the INSERT operator approximately
> > in such way:
> > SELECT list INSERT [IGNORE] INTO ... - an added one.
> > SELECT list UPDATE - an added one.
>
> > PS:
On Jan 30, 2008 12:50 PM, Dmitry E. Oboukhov wrote:
> Is it possible to add to the syntax of the INSERT operator approximately
> in such way:
> SELECT list INSERT [IGNORE] INTO ... - an added one.
> SELECT list UPDATE - an added one.
> PS: I understand that adding the changes
very often the resulting
values are needed at once for report representation or for using in
the next statements INSERT)
So: the INSERT (UPDATE) statement returns the number of rows
inserted (updated) into a table.
(*) However as far as I understand at the stage of fulfilling this
operator mysql oper
On Jan 29, 2008, at 6:21 PM, BJ Swope wrote:
drop the indexes for the conversion then rebuild the indexes after the
tables are converted.
As noted in my original email, I tried that, but Jan's suggestion re:
InnoDB tuning fixed it.
Thanks for the advice, everyone!
David
On Jan 29, 2008, at 7:04 PM, Jan Kirchhoff wrote:
play around with innodb_log_buffer_size, innodb_log_file_size and
try to set innodb_flush_log_at_trx_commit=0.
Don't you have a BBU on your raid-controller?
let me know if that changes anything.
That did it! I upped the log_buffer_size
on than the O(log N) that you would expect.
>
> I have determined that this is in part due to the indexes on the
> table. I have an index on two columns, call them A and B. There is
> no relationship between the ordering in A and the ordering in B. If I
> create the InnoDB table with
mysql> create table test (id int NOT NULL auto_increment, x int NOT
NULL, primary key (id), key (x)) ENGINE=InnoDB;
Query OK, 0 rows affected (0.05 sec)
mysql> insert into test (x) select cast(rand()*1000 as unsigned)
from verybigtable limit 100;
Query OK, 100 rows affected (20.20 s
On Jan 29, 2008, at 6:09 PM, Jan Kirchhoff wrote:
what hardware are you running on and how much memory do you have?
what version of mysql?
How did you set innodb_buffer_pool_size?
Hardware:
Dual AMD Opteron 246 2.0 GHz
4 GB DDR RAM (no swap being used)
Dual 146 GB SCSI drives with a RAID
lls me that no swap
space is being used):
mysql> create table test2 (id int NOT NULL auto_increment, x int NOT
NULL, primary key (id), key (x)) Engine=InnoDB;
Query OK, 0 rows affected (0.10 sec)
mysql> insert into test2 (x) select cast(rand()*1000 as
unsigned) from big_table where
get even to the fourth
chunk, it's taking 15-20 minutes, and continuing to worsen. This is
much worse degradation than the O(log N) that you would expect.
[...]
This problem can even be reproduced in a very simple test case, where
I continuously insert approximately 1 million rows i
ue to the indexes on the
> table. I have an index on two columns, call them A and B. There is
> no relationship between the ordering in A and the ordering in B. If I
> create the InnoDB table with only index A, and insert the data into it
> in the order of A, then almost no gradual
n the ordering in A and the ordering in B. If I
create the InnoDB table with only index A, and insert the data into it
in the order of A, then almost no gradual degradation in performance
can be observed between chunks. Similarly, if I create the InnoDB
table with only index B, and inser
I think I sorted it out ...
INSERT INTO master_comments (comment_no,comment_text,language_id)
SELECT comment_no,comment_text,language_id from mComments
WHERE NOT EXISTS
(
SELECT comment_no
FROM master_comments
WHERE mComments.comment_no = master_comments.comment_no
);
Hope this helps someone
Can anyone tell me why this isn't working... v5.0
INSERT INTO master_comments (comment_no,comment_text,language_id)
SELECT comment_no,comment_text,language_id from mComments
WHERE NOT EXISTS (SELECT comment_no FROM master_comments);
I thought I had it working once but now it
In the last episode (Jan 23), [EMAIL PROTECTED] said:
> In a message dated 1/23/2008 2:02:46 AM Eastern Standard Time,
> [EMAIL PROTECTED] writes:
>
> > http://dev.mysql.com/doc/refman/5.0/en/mysql-stmt-execute.html
>
> Dan,
> Thanks very much for responding to my problem post.
> I was hoping I
Dan,
Thanks very much for responding to my problem post.
I was hoping I was just missing something in the syntax.
Looks like I made a very big mistake recommending MySql to my Client.
I have hundreds of SQL statements in the code I am converting for them. It is
C code in IBM DB/2 running on OS/2
ng?
- how can I check what ids (id1) are in a particular partition? (Remember I
use hash as the partition strategy so I don't know if I can call a partition
by name??)
- anything else I should be aware of?
Thanks,
Geert-Jan
--
View this message in context:
http://www.nabble.com/fast-i
INSERT ... ON DUPLICATE KEY UPDATE:
http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
Cheers,
Jay
J Trahair wrote:
This is a question I want to know the answer to, as well! Is there any way of
avoiding looking up a specific record in a table to see if it exists, before
This is a question I want to know the answer to, as well! Is there any way of
avoiding looking up a specific record in a table to see if it exists, before
deciding whether to INSERT INTO or UPDATE, eg:
mstrSQL = "SELECT * FROM Shops WHERE ShopReference = '" &
grd1.Tex
On Dec 16, 2007 10:29 AM, Afan Pasalic <[EMAIL PROTECTED]> wrote:
> try REPLACE
> http://dev.mysql.com/doc/refman/5.1/en/replace.html
>
> -afan
Replace will not do an update. All previous data will be lost.
A mysql-centric alternative solution would be to use INSERT ... ON
DU
possible to
insert if not there for example
Update
if anyone not found then insert new with same criteria as update
Where region_id in (2,3,4,5,6)
Thanks
Steffan
---
T E L 6 0 2 . 7 9 3 . 0 0 1 4 | F A X 6 0 2 . 9 7 1 . 1 6 9
I am trying to think of a trick way to handle something. I have been
successful in using the multiple inserts in one row by using the
,(xx,xx,xx),(xx,xx,xx),(xx,xx,xx) in a values block. BUT Is it possible to
insert if not there for example
Update
if anyone not found then insert new
: Thursday, December 13, 2007 7:09 PM
To: mysql@lists.mysql.com
Subject: before insert trigger
Is it possible for a trigger to test the data to be inserted and if it doesn't
meet specific criteria not insert the data at all?
I have an application that writes a lot of data to my table. I don
Is it possible for a trigger to test the data to be inserted and if it doesn't
meet specific criteria not insert the data at all?
I have an application that writes a lot of data to my table. I don't have
control over this application and it writes a lot more data than I need. So I
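MySQL triggers can't silently skip a row, but from MySQL 5.5 on a BEFORE INSERT trigger can abort the offending statement with SIGNAL (a sketch; the table, column, and filter criterion are invented). In older versions a common trick was to force an error instead, e.g. by setting a NOT NULL column to NULL inside the trigger:

```sql
DELIMITER //
CREATE TRIGGER mytable_before_insert
BEFORE INSERT ON mytable
FOR EACH ROW
BEGIN
  IF NEW.priority < 5 THEN  -- hypothetical "don't keep this row" test
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'row rejected by trigger';
  END IF;
END//
DELIMITER ;
```

Note that SIGNAL rejects the whole INSERT statement, not just the one row, so the writing application must tolerate the error.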
age-
> From: Hiep Nguyen [mailto:[EMAIL PROTECTED]
> Sent: Thursday, November 29, 2007 10:24 AM
> To: mysql@lists.mysql.com
> Subject: repost timestamp for update & insert
>
> i asked this question before, but when i tried what D.Vin suggested, i
> got
> an error.
&g
CURRENT_TIMESTAMP );
-Original Message-
From: Hiep Nguyen [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 29, 2007 8:24 AM
To: mysql@lists.mysql.com
Subject: repost timestamp for update & insert
i asked this question before, but when i tried what D.Vin suggested, i got
an error.
m
i asked this question before, but when i tried what D.Vin suggested, i got
an error.
mysql version is 4.1.12
i tried:
CREATE TABLE tbl_spdate(
spdate_ID int NOT NULL PRIMARY KEY AUTO_INCREMENT ,
spdate_date date NOT NULL ,
spdate_notes varchar( 100 ) ,
spdate_created timestamp default 0,
spdat
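In MySQL of that era only one TIMESTAMP column per table may reference CURRENT_TIMESTAMP, which is why the created column above gets a constant default. A sketch of the usual pattern (not the poster's exact schema; the created value is supplied explicitly on insert):

```sql
CREATE TABLE tbl_spdate (
  spdate_ID      int NOT NULL PRIMARY KEY AUTO_INCREMENT,
  spdate_created timestamp NOT NULL DEFAULT 0,
  spdate_updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
                 ON UPDATE CURRENT_TIMESTAMP
);
-- Set the creation time yourself when inserting:
INSERT INTO tbl_spdate (spdate_created) VALUES (NOW());
```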
On Nov 13, 2007, at 1:25 AM, yaya sirima wrote:
Hi,
CREATE TABLE Test (
COL1 INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
COL2 VARCHAR(10) NOT NULL,
COL3 VARCHAR(10) NOT NULL,
UNIQUE(COL2, COL3); --(not that)
FULLTEXT(col1,col2)
);
Hi,
>
>
> CREATE TABLE Test (
> COL1 INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
> COL2 VARCHAR(10) NOT NULL,
> COL3 VARCHAR(10) NOT NULL,
> UNIQUE(COL2, COL3); --(not that)
FULLTEXT(col1,col2)
);
>
>
Try the FULLTEXT property
--
Yaya SI
L1 INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
>>COL2 VARCHAR(10) NOT NULL,
>>COL3 VARCHAR(10) NOT NULL,
>> UNIQUE(COL2, COL3);
>> );
>>
>> There are two insert scenarios which would cause a DUPLICATE KEY
>> error - one which conta