On 6/17/2013 9:57 AM, Vijay Khurdiya wrote:
> Can we use the attach & detach options to read another DB file's data from
> another process?
Yes you can, but you'd have exactly the same concurrency-related
restrictions as when you open multiple connections to the same file
directly. Connecting
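For what it's worth, the ATTACH route can be sketched with Python's bundled sqlite3 module; the file names below are purely illustrative:

```python
import os
import sqlite3
import tempfile

# Two separate database files (temporary, names chosen for illustration).
tmp = tempfile.mkdtemp()
db_a = os.path.join(tmp, "a.db")
db_b = os.path.join(tmp, "b.db")

# Populate each file independently.
for path, val in ((db_a, 1), (db_b, 2)):
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (x INTEGER)")
    con.execute("INSERT INTO t VALUES (?)", (val,))
    con.commit()
    con.close()

# A connection opens one file and ATTACHes the other; both are then
# visible under separate schema names on the same connection.
con = sqlite3.connect(db_a)
con.execute("ATTACH DATABASE ? AS other", (db_b,))
rows = con.execute(
    "SELECT (SELECT x FROM main.t), (SELECT x FROM other.t)"
).fetchone()
con.close()
```

The usual locking rules still apply: each attached file is locked independently, just as if it had been opened by a separate connection.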
On Jun 17, 2013, at 6:14 PM, Roman Fleysher
wrote:
> Dear SQLiters,
First things first… don't hijack a thread… instead start a new one, with a new
subject.
> Can someone recommend an ORM?
No.
> What are the pros and cons of using them?
", and also the value of field2 that corresponds to the maximum field3"
<<< now that is useful.
Many thanks.
Dave
Ward Analytics Ltd - information in motion
Tel: +44 (0) 118 9740191
Fax: +44 (0) 118 9740192
www: http://www.ward-analytics.com
Registered office address: The Oriel, Sydenham Road,
Dear SQLiters,
I cannot add solutions, since I am a physicist designing a database for the
first time, but I would like to add questions...
Object-relational mapping (ORM) is a new and interesting concept that I have
just learned about. I will read more about it. However, I do not understand why new
On Mon, Jun 17, 2013 at 12:03 PM, Dave Wellman
wrote:
> Hi,
>
> Igor and Richard - thanks for your answers.
>
> Following up on the example below from Igor, what is the use case?
>
SELECT field1, field2, max(field3) FROM table GROUP BY field1;
The above returns the
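A quick demonstration of what that query returns, using Python's built-in sqlite3 (table contents invented for illustration). In recent SQLite versions (3.7.11 and later) this bare-column behaviour with min()/max() is documented, not arbitrary:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field1, field2, field3)")
con.executemany("INSERT INTO t VALUES (?,?,?)",
                [("a", "p", 1), ("a", "q", 5), ("b", "r", 3)])

# With a bare max() aggregate, SQLite takes field2 from the same row
# that supplied the maximum field3 within each group.
rows = con.execute(
    "SELECT field1, field2, max(field3) FROM t GROUP BY field1 ORDER BY field1"
).fetchall()
con.close()
```

So for group "a" you get field2 = 'q', the value from the row holding the maximum field3.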
Hi,
Igor and Richard - thanks for your answers.
Following up on the example below from Igor, what is the use case?
select field1, field2, sum(field3) from table group by field1;
If the answer set contains one row per field1 value and an arbitrary value
for field2 - what does this answer provide?
Yes... but it should be clear that each process will have
access only to the data it has written and it won't have
access to the data written by other processes.
> I am not sure writing from another process is possible, assuming the other
> process opens a connection to that DB file.
> Do we can
On Mon, Jun 17, 2013 at 3:08 PM, Igor Tandetnik wrote:
> On 6/17/2013 1:01 AM, Vijay Khurdiya wrote:
>>
>> In that
create index ddate_hits_a_idadvertiser_site_hostname_bench on
a_idadvertiser_site_hostname_bench (ddate, hits);
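To see why this index helps, EXPLAIN QUERY PLAN can be checked from Python's sqlite3; the table below is a simplified stand-in for the real one, with only the two indexed columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bench (ddate TEXT, hits INTEGER)")
con.execute("CREATE INDEX bench_ddate_hits ON bench (ddate, hits)")

# With (ddate, hits) indexed, the GROUP BY can walk the index in ddate
# order and read hits from it too, instead of scanning the table and
# sorting into a temporary B-tree.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT ddate, sum(hits) FROM bench "
    "WHERE ddate < '2013-08-01' GROUP BY ddate"
).fetchall()
con.close()

text = " ".join(row[-1] for row in plan)
```

The plan should mention a covering index and no "USE TEMP B-TREE FOR GROUP BY" line.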
On 6/17/2013 1:01 AM, Vijay Khurdiya wrote:
In that case can I have separate DB file associated with each process.
Of course. Just pass different file names to sqlite3_open or similar.
--
Igor Tandetnik
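One way to sketch Igor's suggestion from Python (the naming scheme here is just a hypothetical choice; the C-level equivalent of connect() is sqlite3_open()):

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch: each process derives its own file name, e.g. from
# its PID, so writers never contend for a single database file.
path = os.path.join(tempfile.mkdtemp(), "worker_%d.db" % os.getpid())

con = sqlite3.connect(path)
con.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
con.execute("INSERT INTO log VALUES (?)",
            ("written by pid %d" % os.getpid(),))
con.commit()
con.close()
```

Each process then has its private file and never blocks on another writer.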
___
sqlite-users mailing list
On 6/17/2013 8:36 AM, Dave Wellman wrote:
So I think that what this is saying is that when you execute an aggregate
query without a GROUP BY, the chosen non-aggregate values are random (i.e.
arbitrary).
This is true with GROUP BY as well - consider:
select field1, field2, sum(field3) from table group by field1;
On Mon, Jun 17, 2013 at 8:36 AM, Dave Wellman
wrote:
>
>
> So I think that what this is saying is that when you execute an aggregate
> query without a GROUP BY, the chosen non-aggregate values are random (i.e.
> arbitrary).
>
>
If there is exactly one aggregate
Hi,
The following SQL was in a recent post complaining about
performance, and it looks like a solution has been provided for that.
However, looking at the original SQL I would have expected an error message
to be generated for it because there is no "GROUP BY" clause.
SELECT
Iván de Prado wrote:
> SELECT ddate, sum(hits) from a_idadvertiser_site_hostname_bench where ddate <
> '2013-08-01' group by ddate;
>
> SCAN TABLE a_idadvertiser_site_hostname_bench (~33 rows)
> USE TEMP B-TREE FOR GROUP BY
>
> [...] means that this query is running almost 6 times slower than
It seems that using pragma temp_store=2 improves the speed. It now takes 20
seconds, so the penalty is now 9 seconds. Would it be possible to improve
it further?
Regards.
Iván
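For anyone wanting to try the same pragma from Python's sqlite3, a minimal sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# temp_store=2 (MEMORY) keeps the temporary B-trees that GROUP BY and
# ORDER BY may need in RAM instead of in temp files on disk.
con.execute("PRAGMA temp_store=2")
mode = con.execute("PRAGMA temp_store").fetchone()[0]
con.close()
```

The setting is per-connection, so it has to be issued on every connection that runs the slow query.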
2013/6/17 Iván de Prado
> I have query where adding a simple "group by" by date is slowing the query
> too
I have a query where adding a simple "group by" on date is slowing the query
down too much, from my point of view.
The following query:
SELECT ddate, sum(hits) from a_idadvertiser_site_hostname_bench where ddate
< '2013-08-01';
runs in 12 seconds. Approximately 423,327 rows scanned per second. 1 row
Hello,
Suppose one has an expression on the columns of a single table, say x+y,
and that this expression occurs in multiple queries. Then it is
attractive to define it in a single place, using a view:
create view v as select *, x+y as a from t;
I had hoped that substituting such a
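For reference, the view idea itself works as expected; here is a quick sketch with Python's built-in sqlite3, using the names from the post and invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER, y INTEGER)")
con.execute("INSERT INTO t VALUES (2, 3)")

# Define the x+y expression once in the view; every query can then
# refer to it by the single name "a".
con.execute("CREATE VIEW v AS SELECT *, x + y AS a FROM t")
row = con.execute("SELECT a FROM v").fetchone()
con.close()
```

Whether the optimizer then treats a query against v exactly like the hand-substituted expression is the open question here.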
> statement is only doing pointer arithmetic
Apparently so. But it is not obvious. This code would be cleaner and
faster:
  ...
  if( pPage ){
    pcache1PinPage(pPage);
    goto fetch_out;
  }
  if( createFlag==0 ){
    return NULL;
  }
  ...
Have a good day.
With best regards,
DIFF : EMA(CLOSE,SHORT) - EMA(CLOSE,LONG);
DEA : EMA(DIFF,M);
MACD : 2*(DIFF-DEA), COLORSTICK;
I want to use SQLite to compute the MACD value. Any ideas?
Thank you!
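One approach is to pull the closing prices out of SQLite and compute the EMAs in application code; here is a sketch with Python's sqlite3, where the quotes table, the sample prices, and the 12/26/9 parameters are all just placeholders:

```python
import sqlite3

def ema(values, n):
    # Exponential moving average with smoothing 2/(n+1),
    # seeded with the first value.
    alpha = 2.0 / (n + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

# Hypothetical table of closing prices.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quotes (day INTEGER, close REAL)")
con.executemany("INSERT INTO quotes VALUES (?, ?)",
                enumerate([10.0, 10.5, 10.2, 10.8, 11.0, 10.9], 1))
closes = [r[0] for r in
          con.execute("SELECT close FROM quotes ORDER BY day")]
con.close()

# The formulas from the post: DIFF, DEA, MACD.
SHORT, LONG, M = 12, 26, 9
diff = [s - l for s, l in zip(ema(closes, SHORT), ema(closes, LONG))]
dea = ema(diff, M)
macd = [2 * (d - e) for d, e in zip(diff, dea)]
```

Doing the recursion in SQL itself would need a recursive CTE, since EMA at each row depends on the previous row's EMA.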
Thanks Stephan for your feedback. Bad news for me.
I see two solutions:
- hire a dev for this. I'm not following enough sqlite lists to estimate if
this is feasible. Any opinion?
- hire a QGIS dev to hack an autodetection of type using the content of fields.
Ugly, probably slow and prone to errors.