On Sun, Aug 21, 2011 at 4:08 AM, Amy wrote:
> Hello, my computer needs to update its SQLite program but when I get to your
> site for updates there are a lot of options and I don't know which I should
>
Why are you so certain that your PC needs a new SQLite? If a specific
program is telling you
Hello, my computer needs to update its SQLite program but when I get to your
site for updates there are a lot of options and I don't know which I should
try to download. Also, I picked one randomly and it opened a window that
requires me to type some commands. I don't know the commands to type. Could you
>You can do both in one pass:
>update locations set ItemCount =
>case when Location in (select Location from hive)
> then '1' else '0'
>end;
Thanks for your help Igor, extremely elegant solution.
flakpit wrote:
> I was just using this to make sure my matches were going to be correct.
>
> select * from locations t1 where exists (
> select * from hive t2
> where t1.Location=t2.Location
> )
A slightly shorter form:
select * from locations where Location in (
select Location from hive
)
>update locations
> set ItemCount='1'
> where exists(
> select Location from hive where locations.Location=hive.Location
> )
Okay, seems that I wasn't too far off. Thank you very much for the help, I
certainly needed it:):)
>update locations
> set ItemCount='0'
>
On 10 Aug 2011, at 10:27am, Martin Engelschalk wrote:
> Am 10.08.2011 11:14, schrieb flakpit:
>> I need to update the ItemCount column in the locations table with '1'
>> when there is a corresponding match in the inventory table, but using the
>> query below marks all locations in the locations table
Hi,
Am 10.08.2011 11:14, schrieb flakpit:
> To see what matches between the locations table and the inventory table,
> I can issue the following command; it works, returning 17 matched locations,
> which is correct. There are 21 locations in the locations table but
> only 17 are used in the inventory table
To see what matches between the locations table and the inventory table,
I can issue the following command; it works, returning 17 matched locations,
which is correct. There are 21 locations in the locations table but
only 17 are used in the inventory table, so I know the query is returning correct
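Putting the thread together, a minimal self-contained sketch looks like the following. The table layout is assumed from the queries quoted above (the prose says "inventory" but the queries use a table called hive), so treat the DDL as illustrative rather than the poster's actual schema.

create table locations (Location TEXT, ItemCount TEXT);
create table hive      (Location TEXT);
-- ... populate 21 locations, 17 of which also appear in hive ...

-- Igor's one-pass version from the top of the thread:
-- flag matching locations '1' and everything else '0'.
update locations
   set ItemCount = case when Location in (select Location from hive)
                        then '1' else '0' end;

-- Sanity check: should report 17.
select count(*) from locations where ItemCount = '1';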
On Tue, Mar 29, 2011 at 11:32 AM, Simon Slavin wrote:
> On 29 Mar 2011, at 4:12pm, Sutter, Doug wrote:
>> I know how to create a unique trigger for each column where I hard-code
>> the column's name as shown below. But I would like to create a trigger
>> that would fire when any column is updated
On 29 Mar 2011, at 4:12pm, Sutter, Doug wrote:
> I know how to create a unique trigger for each column where I hard-code
> the column's name as shown below. But I would like to create a trigger
> that would fire when any column is updated and log the specific details
> for that column only.
Yo
Hi,
I am trying to create a trigger that will log changes made to my
database. I need to log only the columns that were updated (not the
entire record). So I need to be able to log the column name, old value,
new value and date/time. I also need to know which row was updated
(identified by t
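For reference, one way to get the per-column logging discussed here is one UPDATE OF trigger per column. The sketch below uses made-up names (mytable, col1, audit_log), so it illustrates the technique rather than the poster's schema.

create table mytable (col1 TEXT, col2 TEXT);
create table audit_log (
    row_id     INTEGER,
    col_name   TEXT,
    old_value  TEXT,
    new_value  TEXT,
    changed_at TEXT DEFAULT (datetime('now'))
);

create trigger log_col1_update after update of col1 on mytable
when old.col1 is not new.col1      -- fire only when this column actually changed
begin
    insert into audit_log (row_id, col_name, old_value, new_value)
    values (old.rowid, 'col1', old.col1, new.col1);
end;
-- Repeat (or generate) one such trigger per column; a single trigger has no
-- way to discover by itself which column changed.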
On Mon, 2011-02-21 at 15:13 +0100, Dietmar Hummel wrote:
> std::string strStatement( "UPDATE persistence SET
> name=\"blub\",expiration=\"2011-04-02\",value=\"?\" WHERE id=\"1\"" );
In addition to what Igor said, it isn't really proper (standard?) SQL to
put double quotes around the value li
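For comparison, the statement with standard SQL quoting and the blob left as a real parameter would look roughly like this (the table definition is assumed from the statement quoted below):

create table persistence (id INTEGER PRIMARY KEY, name TEXT, expiration TEXT, value BLOB);

update persistence
   set name = 'blub',              -- single quotes for string literals
       expiration = '2011-04-02',
       value = ?                   -- unquoted, so it is a bindable parameter
 where id = 1;
-- With the ? inside quotes, as in the original statement, SQLite stores the
-- literal text rather than the bound blob.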
Dietmar Hummel wrote:
> Maybe someone could help me with a little problem. I am trying to update
> an existing entry in the db
> with an update statement where one of the columns is a blob type. I have
> code that looks like this:
>
> sqlite3_stmt* m_pStatement = NULL;
>
> std::string st
Hi list!
Maybe someone could help me with a little problem. I am trying to update
an existing entry in the db
with an update statement where one of the columns is a blob type. I have
code that looks like this:
sqlite3_stmt* m_pStatement = NULL;
std::string strStatement( "UPDATE persi
On Mon, Feb 14, 2011 at 6:04 PM, Simon Slavin wrote:
>
> If the data is backed up when SQLite has the files closed, things are fine.
> The really hard part of this comes when you're working with systems that must
> be live at all times. Which is why you'll never see a bank use SQLite to
> mai
> I'll give you another failure point that most people never see or think of.
>
> I used to manage numerous Linux systems with RAID-5. One time I had a drive
> fail, the spare kicked in, and then during the rebuild a 2nd drive
> failed...hosing the RAID (i.e. two failed disks).
>
> The problem w
>Subject: Re: [sqlite] UPDATE WITH ORDER BY LIMIT ERROR
>
>On 02/15/2011 06:04 PM, venkat easwar wrote:
>> Forgot to mention what error I am getting.
>>
>> near "order": syntax error
>> near "limit": syntax error -
On 14 Feb 2011, at 8:50pm, Black, Michael (IS) wrote:
> And if you want to talk about data reliability...BACK UP YOUR DATA.
Oh... Thanks, Kennedy. Is there any option at run time to enable the feature?
VENKAT
From: Dan Kennedy
To: sqlite-users@sqlite.org
Sent: Tue, February 15, 2011 4:48:24 PM
Subject: Re: [sqlite] UPDATE WITH ORDER BY LIMIT ERROR
On 02/15/2011 06:04 PM, venkat
On 02/15/2011 06:04 PM, venkat easwar wrote:
> Forgot to mention what error I am getting.
>
> near "order": syntax error
> near "limit": syntax error - if i remove the order by clause
See under the "Optional LIMIT and ORDER BY Clauses" heading
on this page:
http://www.sqlite.org/lang_update.ht
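In short, the ORDER BY and LIMIT forms of UPDATE and DELETE are only available in builds compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT; there is no run-time switch. With such a build, a statement like the following (against the check_update table from the report further down the page) is accepted:

-- requires a library built with SQLITE_ENABLE_UPDATE_DELETE_LIMIT
update check_update
   set b = 'venkat2'
 order by a desc
 limit 1;               -- update only the row with the largest a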
Hi Buddies,
The SQLite documentation says that UPDATE with LIMIT and ORDER BY clauses is
supported, but I found it does not actually work. Sample DB schema:
create table chec
Hi Buddies,
The SQLite documentation says that UPDATE with LIMIT and ORDER BY clauses is
supported, but I found it does not actually work. Sample DB schema:
create table check_update( a int, b char, c int);
insert into check_update values (1,'
Hi Buddies,
The SQLite documentation says that UPDATE with LIMIT and ORDER BY clauses is
supported, but I found it does not actually work. Sample DB schema:
create table check_update( a int, b char, c int);
insert into check_update values (1,'venkat',22);
insert into check_update values (2,'venkat',23);
Now
On 14 Feb 2011, at 8:50pm, Black, Michael (IS) wrote:
> And if you want to talk about data reliability...BACK UP YOUR DATA.
And keep the backups off-site. And once a year try to actually restore one of
them. I've earned /lots/ of money from companies with backup systems that
claimed to work
On Mon, Feb 14, 2011 at 3:02 PM, Max Vlasov wrote:
> On Mon, Feb 14, 2011 at 8:42 PM, Jim Wilcoxson wrote:
>
>> On Mon, Feb 14, 2011 at 10:56 AM, Pavel Ivanov wrote:
>> >> So my question is, does it maintain the other 3 parts of ACID, so that
>> >> the database will never be in a corrupted state
On Mon, Feb 14, 2011 at 11:50 PM, Black, Michael (IS) <
michael.bla...@ngc.com> wrote:
> And if you want to talk about data reliability...BACK UP YOUR DATA.
> The likely failure points I can think of are:
> #1 Power supply (redundant supplies available)
> #2 Hard drive smoked (and your data is toast)
On Mon, Feb 14, 2011 at 8:42 PM, Jim Wilcoxson wrote:
> On Mon, Feb 14, 2011 at 10:56 AM, Pavel Ivanov wrote:
> >> So my question is, does it maintain the other 3 parts of ACID, so that
> >> the database will never be in a corrupted state after a power loss,
> >> even though it may be missing so
On Mon, Feb 14, 2011 at 10:56 AM, Pavel Ivanov wrote:
>> So my question is, does it maintain the other 3 parts of ACID, so that
>> the database will never be in a corrupted state after a power loss,
>> even though it may be missing some writes that were confirmed?
>
> Jim, I think the answer to yo
On Mon, Feb 14, 2011 at 7:48 PM, Jim Wilcoxson wrote:
> On Mon, Feb 14, 2011 at 10:56 AM, Pavel Ivanov wrote:
> >> So my question is, does it maintain the other 3 parts of ACID, so that
> >> the database will never be in a corrupted state after a power loss,
> >> even though it may be missing so
On 14 Feb 2011, at 4:48pm, Jim Wilcoxson wrote:
> I believe (though am not sure) that the drive can reorder individual
> sector/record writes between syncs however it likes, as long as it
> doesn't allow writes to cross a sync and also executes the sync groups
> (for lack of a better term - the g
On Mon, Feb 14, 2011 at 10:56 AM, Pavel Ivanov wrote:
>> So my question is, does it maintain the other 3 parts of ACID, so that
>> the database will never be in a corrupted state after a power loss,
>> even though it may be missing some writes that were confirmed?
>
> Jim, I think the answer to yo
On 14 Feb 2011, at 3:56pm, Pavel Ivanov wrote:
>> So my question is, does it maintain the other 3 parts of ACID, so that
>> the database will never be in a corrupted state after a power loss,
>> even though it may be missing some writes that were confirmed?
>
> Jim, I think the answer to your qu
> So my question is, does it maintain the other 3 parts of ACID, so that
> the database will never be in a corrupted state after a power loss,
> even though it may be missing some writes that were confirmed?
Jim, I think the answer to your question is already in Max's tests:
the USB drive is compl
On Mon, Feb 14, 2011 at 2:33 AM, Max Vlasov wrote:
>
> On Sun, Feb 13, 2011 at 8:53 PM, Jim Wilcoxson wrote:
>
> > > But I thought about how it would be possible to test this explanation.
> > > I'm going to do some tests that work like this. The same id updating, but in
> > > the middle of
Also...here's another part of the benchmark which shows triggers aren't as evil
as I thought. Trigger for this example was 2X the manual update.
F:\>d:batch 50 1000 12
using wal mode
using update trigger
Sqlite Version: 3.7.5
Inserting 50 rows using a bulk of 1000
commits per second: 714
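The pragmas behind this output are not shown, so the following is only a rough sketch of the kind of setup it implies: WAL journaling plus many inserts batched into each commit.

create table t (x INTEGER);          -- stand-in table for the benchmark
pragma journal_mode = WAL;           -- "using wal mode"
begin;
insert into t values (1);            -- repeated once per row in the batch
insert into t values (2);
-- ...
commit;                              -- one commit per batch, not per row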
SOLVED!!
Marcus Grimm and I went back and forth a number of times trying to figure out why
my benchmarks were so much faster than his.
Found it...
My SATA RAID setup had "Enable Advanced Performance" on by default (I had never
turned it on).
My secondary tests on an IDE drive showed similar performance
On 14 Feb 2011, at 10:30am, Max Vlasov wrote:
> On Mon, Feb 14, 2011 at 12:05 PM, Simon Slavin wrote:
>
>> Can you expand upon your conclusion ? I like your test, and am interested
>> to know precisely where you think the lag is happening.
>
> Simon, the conclusion came from the assumption th
On Mon, Feb 14, 2011 at 12:05 PM, Simon Slavin wrote:
> > The second variation was just unplugging the cord but keeping the power
> > intact, so if it's drive that caches, it would end its operations
> > completely. This time the results were perfect, for example 4822 -> 4822,
> > and even 5371 -
On 14 Feb 2011, at 7:33am, Max Vlasov wrote:
> The first variation was similar to the bus-powered one; this time I
> used the hard button on the box, which is equivalent to unplugging both the
> connection and the power. The difference for a single test was really impressive:
> 5355 -> 4445 = -910. And whe
On Sun, Feb 13, 2011 at 8:53 PM, Jim Wilcoxson wrote:
> > But I thought about how it would be possible to test this explanation.
> > I'm going to do some tests that work like this. The same id updating, but in
> > the middle of 10,000 operations I will unplug the cord, and sqlite will
> > say
>
On 13 Feb 2011, at 5:53pm, Jim Wilcoxson wrote:
> I also found this page, used to force a Linux system crash:
>
> http://evuraan.blogspot.com/2005/06/how-to-force-paniccrash-in-linux.html
>
> I seem to remember a post that SQLite commit/sync is tested with the kill
> command, but it seems like
On Sun, Feb 13, 2011 at 11:55 AM, Max Vlasov wrote:
> On Sun, Feb 13, 2011 at 5:31 PM, Jim Wilcoxson wrote:
>
> > On Sun, Feb 13, 2011 at 1:15 AM, Max Vlasov
> wrote:
> > > So the final results:
> > > - the db was always ok and contains the correct value (id=10001 for
> > > initial 1).
> >
>But what I postulate is that you can't physically write *the same* record
>over and over more than 90 times per second on a 5400 rpm drive, unless the
>drive, OS, or filesystem implements something like wear-leveling, where the
>physical location of sectors is constantly changing.
It's still
On Sun, Feb 13, 2011 at 5:31 PM, Jim Wilcoxson wrote:
> On Sun, Feb 13, 2011 at 1:15 AM, Max Vlasov wrote:
> > So the final results:
> > - the db was always ok and contains the correct value (id=10001 for
> > initial 1).
> > - the speed was about 227 commits per second so very close to my syst
On Sun, Feb 13, 2011 at 7:07 AM, Jean-Christophe Deschamps wrote:
>
> >So if the drive is 5400 rpm, 227 is much more than 5400/60=90 and even if
> >it's 7200 (manufacturers sometimes upgrade drives inside portable hd without
> >prior notice), it's still twice as much as 7200/60=120.
>
> 5400/6
On Sun, Feb 13, 2011 at 1:15 AM, Max Vlasov wrote:
> On Sat, Feb 12, 2011 at 7:11 PM, Jim Wilcoxson wrote:
>
> >
> >
> > Unless I'm missing something, SQLite has to update the first page of the
> > database on every commit, to update the change counter. Assuming you are
> > using rotating media
>So if the drive is 5400 rpm, 227 is much more than 5400/60=90 and even if
>it's 7200 (manufacturers sometimes upgrade drives inside portable hd without
>prior notice), it's still twice as much as 7200/60=120.
5400/60, 7200/60 ... those values rely on the assumption that
successive LBAs are ma
On Sat, Feb 12, 2011 at 7:11 PM, Jim Wilcoxson wrote:
>
>
> Unless I'm missing something, SQLite has to update the first page of the
> database on every commit, to update the change counter. Assuming you are
> using rotating media, that record can only be updated 120 times per second,
> maximum,
On Sun, Feb 13, 2011 at 12:27 AM, Marcus Grimm wrote:
> So my brain isn't that lazy, we count 3 syncs :-)
>
> When you turn journaling to off you will most likely
> see less syncs, probably 2 in your case. So that is all
> in line..
>
> Marcus
>
>
Marcus, you're right about general journaling
On February 12, 2011, Black, Michael (IS) wrote:
> Are you wrapping your data dump into a "BEGIN" "COMMIT"?
Yup.
> Also...you mention using a select...I assume you are testing first to see
> if you need to update or insert?
The select is not used when dumping data. The in memory data structure k
>>
>> Interesting, I did a test on a 7200 file and the best I could do was 50
>> commits per second (a simple base/table with only id, journalling off and no
>> extra code since the tool I use has "a repeated query" option with accurate
>> timing). You mentioned 3 syncs per commit, but I t
> On Sat, Feb 12, 2011 at 9:48 PM, Marcus Grimm
> wrote:
>
>> > I should've realized it wasn't running this fast but the small 5000
>> record
>> > size got me.
>> > Test it yourself.
>> > I do have a 7200RPM drive. My 261.4 number is still 2+X your
>> theoretical.
>>
>> I don't want to be a smart-a
> OK...I added your trigger example as option 8. And I had pasted the wrong
> version in my last email. My timings were correct. Your example also did
> sqlite3_exec instead of using prepare so it will run slower.
Yes, but that should be marginal.
Yes, but that should be marginal.
When I sent my code the trigger version wasn't fin
On 02/12/2011 11:54 AM, Simon Slavin wrote:
> Compensating for this behaviour is a big part of what SQLite does in
> journaling.
SQLite and other journalling mechanisms depend on an operating system call
fsync that flushes their relevant data to the disk
On Sat, Feb 12, 2011 at 9:48 PM, Marcus Grimm wrote:
> > I should've realized it wasn't running this fast but the small 5000
> record
> > size got me.
> > Test it yourself.
> > I do have a 7200RPM drive. My 261.4 number is still 2+X your theoretical.
>
> I don't want to be a smart-arse, but I stil
OK...I added your trigger example as option 8. And I had pasted the wrong
version in my last email. My timings were correct. Your example also did
sqlite3_exec instead of using prepare so it will run slower.
I also made this compilable on Unix too.
On Unix my timing matches the run time and there
On 12 Feb 2011, at 4:11pm, Jim Wilcoxson wrote:
> I don't understand how you can do 360K commits per second if your system is
> actually doing "to the platter" writes on every commit. Can someone clue me
> in?
My field of expertise, I'm afraid. The answer is "Hard disks lie."
Almost all hard
begin",NULL,NULL,NULL);
> }
> sqlite3_bind_int(stmt,1,n);
> rc = sqlite3_step(stmt);
> if (rc != SQLITE_DONE) {
>puts(sqlite3_errmsg(db));
> }
> sqlite3_reset(stmt);
> }
> sqlite3_exec(db,"commit",NULL,NULL,NULL);
> //printf("%.1f i
f("%.1f inserts per sec\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
printf("%.1f\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
sqlite3_close(db);
}
> On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)
> > D:\SQLite>batch 5000 1
>> 360766.6 inserts per sec
>>
>
> Unless I'm missing something, SQLite has to update the first page of the
> database on every commit, to update the change counter. Assuming you are
> using rotating media, that reco
On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS) wrote:
> Here's a little benchmark program I wrote to test a super-simple
> one-integer insert to test sql speed and commit interval behavior.
>
> Running it on my machine (Windows XP64 8-core 3Ghz gcc (tdm64-1) 4.5.1) I
> go from 320M inserts
On Fri, Feb 11, 2011 at 10:50 PM, Thomas Fjellstrom <
tfjellst...@strangesoft.net> wrote:
> I've seen numbers on the internet stating that sqlite is super fast, should
> be
> able to handle tens of thousands of inserts or updates in a second or two,
> even in sync mode. So I'm a bit stumped as to
On February 11, 2011, Teg wrote:
> Hello Thomas,
>
> I download off the net at between 400 Mbps and 1 Gbps and have to pack
> the data away into a Sqlite DB. I use an intermediate buffer on disk
> and a background thread to write the data to the DB. Obviously, I
> couldn't keep up if the data was co
Hello Thomas,
I download off the net at between 400 Mbps and 1 Gbps and have to pack
the data away into a Sqlite DB. I use an intermediate buffer on disk
and a background thread to write the data to the DB. Obviously, I
couldn't keep up if the data was continuous, but the lulls in between
the peak d
Hi, I have a small problem with a program I've developed.
It basically captures packets from a network device on linux and stores
transfer stats on each host seen. To store the stats long term I decided to
use sqlite, and when the program saves the stats every few minutes, it takes
about 4-5 se
Quoth Joe Bennett, on 2010-12-24 18:28:18 -0600:
> Have a question regarding a particular issue I am dealing with... I
> have a database in which I create a pivot table to get a no dupe list
> using a 'select distinct'. I then take that list into Python and then
> iterate over it to look up data i
On 25 Dec 2010, at 12:28am, Joe Bennett wrote:
> update_data = 'update matrix set %s = %f, %s = %f where %s = "%s"' %
> (A_B + '_Lat', Lat_Site, A_B + '_Lon',Lon_Site, A_B, Site[0])
You've used double quotes. I'm guessing that the value is a string, in which
case you mean single quotes. But t
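What the generated statement should end up looking like, with single quotes around the string value; the column and site names here are purely illustrative, following the A_B + '_Lat' pattern in the Python code. Better still would be a parameterized query with ? placeholders, which sidesteps quoting entirely.

create table matrix (NYC_BOS TEXT, NYC_BOS_Lat REAL, NYC_BOS_Lon REAL);  -- illustrative shape

update matrix
   set NYC_BOS_Lat = 40.75,
       NYC_BOS_Lon = -73.99
 where NYC_BOS = 'SITE01';          -- string literal in single quotes, not double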
Hi all,
Have a question regarding a particular issue I am dealing with... I
have a database in which I create a pivot table to get a no dupe list
using a 'select distinct'. I then take that list into Python and then
iterate over it to look up data in another table to add in the
latitude and longit
On 24 Nov 2010, at 3:08am, Pavel Ivanov wrote:
> Probably SQLite's implementation allows some other types of behavior,
> but in any case behavior would be "erroneous" and unpredictable, so
> you better avoid changing table that is currently being selected, or
> at very least avoid changing indexe
> Could there be any other
> consequences like unpredictable behavior and such?
Yes, it will be unpredictable and undefined behavior.
I can't say exactly how SQLite will behave in such a situation. What I
know is it doesn't execute the whole select at once; it fetches row by row
on each sqlite3_step call
Pavel,
What would happen if you had an index on the other_column for the
select/update you mentioned below? Is it just that your tree will be
unbalanced every time you change the other_column? Could there be any other
consequences, like unpredictable behavior and such?
Thanks
> Exactly t
Nikolaus Rath wrote:
> "Igor Tandetnik" writes:
>> Nikolaus Rath wrote:
>>> I understand that running INSERT or DELETE during an active SELECT query
>>> can get me into trouble. But is it safe to run (in pseudocode):
>>>
>>> for value in "SELECT main_column IN mytable":
>>>
>>> UPDATE mytabl
> for value in "SELECT main_column IN mytable":
> UPDATE mytable SET other_column='foobar' WHERE main_column=value
Exactly this sequence is safe. Things can go nuts if you have an
index on other_column and you do something like this:
for value in "SELECT main_column IN mytable WHERE other_
"Igor Tandetnik" writes:
> Nikolaus Rath wrote:
>> I understand that running INSERT or DELETE during an active SELECT query
>> can get me into trouble. But is it safe to run (in pseudocode):
>>
>> for value in "SELECT main_column IN mytable":
>>
>> UPDATE mytable SET other_column='foobar' WHE
Nikolaus Rath wrote:
> I understand that running INSERT or DELETE during an active SELECT query
> can get me into trouble. But is it safe to run (in pseudocode):
>
> for value in "SELECT main_column IN mytable":
>
> UPDATE mytable SET other_column='foobar' WHERE main_column=value
It should be
Hello,
I understand that running INSERT or DELETE during an active SELECT query
can get me into trouble. But is it safe to run (in pseudocode):
for value in "SELECT main_column IN mytable":
UPDATE mytable SET other_column='foobar' WHERE main_column=value
?
Thanks,
-Nikolaus
I appreciate the prompt response to my prior question about triggers,
and have another one.
I'm using a python program to query an SQLite database from realtime
data (IDS data actually). Every time I get a hit in the database, I
want to record in that record an updated hitcount, and the timestamp
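The per-hit bookkeeping itself is a single statement; a sketch with made-up table and column names:

create table signatures (id INTEGER PRIMARY KEY, hitcount INTEGER DEFAULT 0, last_hit TEXT);

update signatures
   set hitcount = hitcount + 1,
       last_hit = datetime('now')    -- timestamp of this hit
 where id = ?;                       -- bind the id of the matching record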
On Sun, Jul 11, 2010 at 01:46:19PM -0400, Sam Carleton scratched on the wall:
> I have some audit fields, one being updatedby, I would like to create an
> update trigger that would prevent the row from being updated if this was not
> set. Can I do that in sqlite?
Define "not set." You can make
On 11 Jul 2010, at 6:46pm, Sam Carleton wrote:
> I have some audit fields, one being updatedby, I would like to create an
> update trigger that would prevent the row from being updated if this was not
> set. Can I do that in sqlite?
Use a TRIGGER on BEFORE UPDATE. In the TRIGGER check that new
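A minimal sketch of that suggestion (the table name and columns are assumed):

create table audited_table (id INTEGER PRIMARY KEY, payload TEXT, updatedby TEXT);

create trigger require_updatedby
before update on audited_table
when new.updatedby is null or new.updatedby = ''
begin
    select raise(abort, 'updatedby must be set');
end;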
I have some audit fields, one being updatedby, I would like to create an
update trigger that would prevent the row from being updated if this was not
set. Can I do that in sqlite?
Great thanks.
Now why didn't I think of that, especially as I was messing around with ROWID
earlier?
Cheers
> update TABLE2 set z = @z
> where rowid in (
> select t2.rowid
> from TABLE1_2 t12, TABLE2 t2
> where t12.a = @a
> and t12.b = @b
> and t2.x = t12.x
> and t2.y = t12.y
>)
>
update TABLE2 set z = @z
where rowid in (
select t2.rowid
from TABLE1_2 t12, TABLE2 t2
where t12.a = @a
and t12.b = @b
and t2.x = t12.x
and t2.y = t12.y
)
Pavel
On Fri, Jun 25, 2010 at 9:04 AM, Matthew Jones wrote:
> I've seen various posts about how to get around the lack of UPDATE with
> a JO
I've seen various posts about how to get around the lack of UPDATE with
a JOIN but they all seem to refer to tables joined on a single column. I
need to do something very similar but with a two-column primary key. E.g.
sqlite> create table TABLE1 (a int, b int, primary key(a, b));
sqlite> create t
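The rest of the schema is cut off above. Judging from the rowid-based answer earlier on this page it presumably looks something like the following (an assumption, not the poster's actual DDL); literal values stand in for the @a, @b and @z parameters:

create table TABLE1   (a int, b int, primary key(a, b));
create table TABLE1_2 (a int, b int, x int, y int);
create table TABLE2   (x int, y int, z int, primary key(x, y));

update TABLE2 set z = 99
 where rowid in (select t2.rowid
                   from TABLE1_2 t12, TABLE2 t2
                  where t12.a = 1 and t12.b = 2
                    and t2.x = t12.x and t2.y = t12.y);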
This is SQLite 3.6.23.1 compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT
(plus a few others, which should not matter to the problem).
The UPDATE ... LIMIT clause works fine when applied to tables, but
suppresses any updates when applied to a view with an update trigger.
Here is some example SQL:
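The example SQL is cut off here; the setup being described is roughly the following sketch (not the reporter's actual schema): a view over a table with an INSTEAD OF UPDATE trigger, and UPDATE ... LIMIT issued against the view.

create table t (id INTEGER PRIMARY KEY, val TEXT);
create view v as select id, val from t;
create trigger v_update instead of update on v
begin
    update t set val = new.val where id = old.id;
end;

update t set val = 'x' limit 1;   -- fine on the table in a LIMIT-enabled build
update v set val = 'x' limit 1;   -- the reported problem: no rows are updated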
> where exists (
> select 1 from master_table where
>master_table.id=detail_table.masterid and
>masterfieldtocheck = okvalue and
>masterfield2tocheck = okvalue2);
>
> -- or
>
> where detail_table.masterid in (
> select id from master_table where
>masterfieldtocheck = okvalue and
>
Max Vlasov wrote:
> We all know UPDATE is sometimes limited, so to do some complex
> conditional updating we can only rely on the complexity of the WHERE clause.
>
> I would like to update my detail table based on master properties (so to set
> some value only if the corresponding master re
On 26 May 2010, at 9:21am, Max Vlasov wrote:
> UPDATE detail_table SET somefield=somevalue
> WHERE
> (SELECT masterfieldtocheck FROM master_table WHERE master_table.id
> =detail_table.masterid)=okvalue
>
> Is there a way to alias the master table when I need to check several fields
> of the ma
We all know UPDATE is sometimes limited, so to do some complex
conditional updating we can only rely on the complexity of the WHERE clause.
I would like to update my detail table based on master properties (so to set
some value only if the corresponding master record fits some conditions).
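Written out in full, the EXISTS form quoted further up the page checks several master-table columns without needing an alias. The table shapes and literal values below are assumptions for illustration:

create table master_table (id INTEGER PRIMARY KEY, masterfieldtocheck TEXT, masterfield2tocheck TEXT);
create table detail_table (masterid INTEGER, somefield TEXT);

update detail_table
   set somefield = 'somevalue'
 where exists (select 1
                 from master_table
                where master_table.id = detail_table.masterid
                  and master_table.masterfieldtocheck  = 'okvalue'
                  and master_table.masterfield2tocheck = 'okvalue2');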
> But it is ambiguous ...
It is not ambiguous when you know how SQL works and it's the only
standard way of doing such things.
Pavel
On Sat, May 1, 2010 at 1:31 PM, wrote:
but...
...but I LOVE my hammer! How dare every problem not be a nail?
;)
Good point. Likely all the updates can fit nicely into a transaction.
On Mon, May 10, 2010 at 5:11 PM, Simon Slavin wrote:
>
> On 10 May 2010, at 9:25pm, Adam DeVita wrote:
>
> > Simon, can you expand your syntax,
On 10 May 2010, at 9:25pm, Adam DeVita wrote:
> Simon, can you expand your syntax, or are you just saying, "get x,y,z store
> them in a set of variables, then run update with appropriate bindings"?
Just that. You have a programming language with variables, so use it. That's
what your program
Simon, can you expand your syntax, or are you just saying, "get x,y,z store
them in a set of variables, then run update with appropriate bindings"?
Hopefully this related question isn't considered hijacking the thread. I feel it
belongs here, under setting multiple values using the update query.
I'm
On 9 May 2010, at 8:41pm, Simon Hax wrote:
> I think in sqlite the following is not possible:
>
> update T
> set (a,b,c) = ( select x,y,z from ...)
>
> Does anyone know how to do this in an easy way?
Do your SELECT first, then set the multiple variables to the values retrieved
from that:
UP
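Two workarounds along the lines of the (truncated) reply above; the id join column and the source table S are assumptions for illustration:

create table T (id INTEGER PRIMARY KEY, a, b, c);
create table S (id INTEGER PRIMARY KEY, x, y, z);

-- 1. One correlated subquery per target column:
update T
   set a = (select x from S where S.id = T.id),
       b = (select y from S where S.id = T.id),
       c = (select z from S where S.id = T.id)
 where id in (select id from S);

-- 2. Or run the SELECT first in application code, then bind the results:
--    UPDATE T SET a = ?, b = ?, c = ? WHERE id = ?;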
I think in sqlite the following is not possible:
update T
set (a,b,c) = ( select x,y,z from ...)
Does anyone know how to do this in an easy way?
Simon
>
> wdl...@web.de wrote:
>> The general problem with this issue is that it is not possible
>>
>> to set an alias for the table in an update stateme
Yes that works.
But it is ambiguous ...
-----Original Message-----
From: Igor Tandetnik
Sent: 01.05.2010 16:00:01
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite update with joins
wdl...@web.de wrote:
> The general problem with this issue is that it is not possible
>
wdl...@web.de wrote:
> The general problem with this issue is that it is not possible
>
> to set an alias for the table in an update statement.
> That should be implemented because otherwise some necessary statements are
> not possible.
>
> e.g.
>
> update T x
> set x.col1 = ( select sum(c) f
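As discussed above, UPDATE here does not take a table alias, but the subquery can refer to the outer table by its own name, so the truncated example can be written along these lines (the detail table and join column are assumed for illustration):

create table T (id INTEGER PRIMARY KEY, col1 INTEGER);
create table detail (t_id INTEGER, c INTEGER);

update T
   set col1 = (select sum(c)
                 from detail
                where detail.t_id = T.id);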