> On Oct 16, 2019, at 3:37 AM, Peng Yu wrote:
>
> I will need to use the actual files to test for dependency (just as
> the dependency that can be used by GNU make)

I don’t understand what that means. You want to use a makefile that checks
the mod date of the database?

Suppose A is a sqlite3 db file, B is some other file which is

—Jens
> On Oct 16, 2019, at 6:08 AM, Mitar wrote:
>
> Quite
> some of datasets we are dealing with have 100k or so columns.
There was a thread about this a few months ago. You should not store every
number of a huge vector in a separate column. You don’t need to individually
query on every
On 12.10.2019 at 16:47, Bart Smissaert wrote:
Sorry, I forgot to tell you that. It is a date column with an integer number.
ID  xValue  xDate
1   130     40123
1   120     41232
1   140     40582
1   100     40888
1   110     42541
2   140     41225
2   130     41589
2   150
On Oct 16, 2019, at 4:08 PM, Warren Young wrote:
>
> I think this project needs someone to fork it.
Sorry, that’s immoderate. It looks like they’ve still got active committers,
so the software isn’t abandonware.
Still, that long list of old issues is a problem. I wonder if the real issue
On Oct 16, 2019, at 8:45 AM, Graham Holden wrote:
>
> ...write a pair of what could be relatively simple
> client-server programs that police access to the SQLite DB (which the
> server will be accessing as a local file).
>
> ...
>
> ** I believe someone has tried/succeeded in doing something
Hey All,
I have this state in my gram.out:
State 6:
  mem ::= idlist ptr idlist * SEMI
  idlist ::= idlist * IDENT

  IDENT  shift-reduce 3  idlist ::= idlist IDENT
  SEMI   shift-reduce 2  mem ::= idlist ptr idlist SEMI
The
Hi,
I'm having a situation where the results of a large SELECT
operation are apparently too big to fit in memory.
Obviously I could jerry-rig something to work around this, but
I have a vague recollection that SQLite provides a nice way to
get the results of a query in "chunks" so that the
On Wed, 16 Oct 2019 17:38:28, you wrote:
> I'm having a situation where the results of a large
> SELECT operation are apparently too big to fit in memory.
>
> Obviously I could jerry-rig something to work around
> this, but I have a vague recollection that SQLite
> provides a nice way to
SQLite could, in theory, be enhanced (with just a few minor tweaks) to
support up to 2 billion columns. But having a relation with a large
number of columns seems like a very bad idea stylistically. That's
not how relational databases are intended to be used. Normally when a
table acquires more
On 16 Oct 2019, at 7:03pm, Mitar wrote:
> On Wed, Oct 16, 2019 at 3:16 PM Hick Gunter wrote:
>> 100k distinct column names? Or is that 1 repeats of 10 attributes?
>
> 100k distinct names. Like each column a different gene expression.
Don't do that. It's an abuse of how relational
What language/library are you using?
In Python for example there's .fetchone() to get just the next result row,
.fetchmany(n) to get the next n rows, or .fetchall() to go get them all.
In general though at its core SQLite will get and return one row at a time.
Though if there's grouping or
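The chunked-fetching approach described above can be sketched with Python's built-in sqlite3 module (the table and column names here are made up for illustration):

```python
import sqlite3

# In-memory database with some sample rows to iterate over.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [("row%d" % i,) for i in range(10)])

cur = conn.execute("SELECT id, val FROM t")

# Pull results in chunks of 4 instead of materializing them all at once;
# fetchmany() returns an empty list once the cursor is exhausted.
total = 0
while True:
    chunk = cur.fetchmany(4)
    if not chunk:
        break
    total += len(chunk)

print(total)  # 10 rows seen, but at most 4 held per fetch
```

The same pattern works with `.fetchone()` in a loop, or by simply iterating over the cursor, since SQLite produces one row at a time under the hood.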
Hi!
On Wed, Oct 16, 2019 at 3:29 PM Richard Hipp wrote:
> Are you trying to store a big matrix with approx 100k columns? A
> better way to do that in a relational database (*any* relational
> database, not just SQLite) is to store one entry per matrix element:
Sure, this is useful for sparse
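The one-entry-per-element layout suggested above can be sketched as follows (the schema and sample values are illustrative assumptions, not from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per stored matrix element instead of a 100k-column table.
conn.execute("""
    CREATE TABLE matrix (
        row_id  INTEGER NOT NULL,   -- e.g. a sample id
        col_id  INTEGER NOT NULL,   -- e.g. a gene index
        value   REAL    NOT NULL,
        PRIMARY KEY (row_id, col_id)
    )
""")
conn.executemany(
    "INSERT INTO matrix VALUES (?, ?, ?)",
    [(1, 17, 0.5), (1, 99321, 2.25), (2, 17, 1.0)],
)

# Fetch one "column" across all rows without a 100k-column schema.
vals = conn.execute(
    "SELECT row_id, value FROM matrix WHERE col_id = 17 ORDER BY row_id"
).fetchall()
print(vals)  # [(1, 0.5), (2, 1.0)]
```

For sparse data this also avoids storing the zero entries at all; absent rows simply mean zero.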
On 16 Oct 2019, at 6:38pm, Randall Smith wrote:
> I'm having a situation where the results of a large SELECT operation are
> apparently too big to fit in memory.
SQLite only stores results if it has to. It would have to if there is no good
index for your SELECT terms.
Are you actually using
Hi!
On Wed, Oct 16, 2019 at 3:16 PM Hick Gunter wrote:
> 100k distinct column names? Or is that 1 repeats of 10 attributes?
100k distinct names. Like each column a different gene expression.
Mitar
--
http://mitar.tnode.com/
https://twitter.com/mitar_m
I was reading about Lemon, and I noticed the following trivial
typos:
1) "It's operation is similar" ->
"Its operation is similar"
2) "Lemon is also used to generate parse" ->
"Lemon is also used to generate parsers"
3) "Lemon has the concept of a "fallback" tokens." ->
"Lemon has the concept
I'm having a situation where the results of a large SELECT operation are
apparently too big to fit in memory.
Obviously I could jerry-rig something to work around this, but I have a vague
recollection that SQLite provides a nice way to get the results of a query in
"chunks" so that the memory
Off topic.
And something relatively new from the future (I hope) in the
field of DVCS/Data (like SQLite+Fossil in one).
Dolt:
https://github.com/liquidata-inc/dolt
Underlying versioned DB:
https://github.com/attic-labs/noms
Data commit sample:
On 2019-10-16 01:47, Peng Yu wrote:
Is there a solution that are known to fill in this niche? Thanks.
Would clustered SQLite (distributed SQLite instead of a central shared
DB) be a good option for your project? - Below, I am pasting my
bookmarks for a few well-established projects
Regarding:
> Why not use an actual client-server database system like MySQL? It's
> optimized for this use case, so it incurs a lot less disk (network)
I/O.
"I will need to use the actual files to test for dependency (just as
the dependency that can be used by GNU make).
I was raising/discussing a similar question. Look through the SQLite archive for:
disable file locking mechanism over the network
A client/server manager for SQLite is not enough. Internally, SQLite will still
request a lock from the file system and the overhead will still be there. Once
the
"Keith, what if one has a peanut allergy?"
Well, the maid dutifully logs the changes she makes to the tin, so that in the
event of an anaphylactic crash the tin can be returned to its original state.
This helps ensure we have ACID peanuts.
Wednesday, October 16, 2019, 1:22:58 AM, Gary R. Schmidt
wrote:
> On 16/10/2019 10:38, Jens Alfke wrote:
>>
>>> On Oct 15, 2019, at 3:47 PM, Peng Yu wrote:
>>>
>>> I'd like to use sqlite3 db files on many compute nodes. But they
>>> should access the same storage device for the sqlite3 db
On 10/16/19, Mitar wrote:
> Hi!
>
> We are considering using SQLite as a ML dataset archival format for
> datasets in OpenML (https://www.openml.org/). When investigating it,
> we noticed that it has a very low limit on number of columns. Quite
> some of datasets we are dealing with have 100k or
100k distinct column names? Or is that 1 repeats of 10 attributes?
-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Mitar
Sent: Wednesday, 16 October 2019 14:57
To: sqlite-users@mailinglists.sqlite.org
Subject:
Hi!
We are considering using SQLite as an ML dataset archival format for
datasets in OpenML (https://www.openml.org/). When investigating it,
we noticed that it has a very low limit on the number of columns. Quite
a few of the datasets we are dealing with have 100k or so columns. Are
there any fundamental
On 16/10/2019 10:38, Jens Alfke wrote:
On Oct 15, 2019, at 3:47 PM, Peng Yu wrote:
I'd like to use sqlite3 db files on many compute nodes. But they
should access the same storage device for the sqlite3 db files.
Why not use an actual client-server database system like MySQL? It's
Transaction delays apply to reads as well. SQLite places a lock while reading
too, to ensure the database is intact during the read. Otherwise reads could
see half-complete rows.
Read up on "begin transaction" and the difference between immediate and
exclusive transactions.
Roman
Sent from my T-Mobile 4G
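The immediate-transaction behaviour Roman refers to can be sketched in Python's sqlite3 module (connection setup here is an illustrative assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (msg TEXT)")
conn.commit()

# isolation_level=None puts the connection in autocommit mode, so we
# control transaction boundaries explicitly with BEGIN/COMMIT.
conn.isolation_level = None

# BEGIN IMMEDIATE takes a write (RESERVED) lock up front; other
# connections can keep reading, but a second writer would block
# until COMMIT releases the lock.
conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT INTO log VALUES ('step 1')")
conn.execute("COMMIT")

count = conn.execute("SELECT count(*) FROM log").fetchone()[0]
print(count)  # 1
```

BEGIN EXCLUSIVE goes further and blocks new readers as well, which is why the choice between the two matters for multi-process setups.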
Peng Yu, on Tuesday, October 15, 2019 06:47 PM, wrote...
>
> Hi,
>
> I'd like to use sqlite3 db files on many compute nodes. But they
> should access the same storage device for the sqlite3 db files. The
> directory storing the db files looks the same on any compute node
> logically---the storage
Wednesday, October 16, 2019, 11:43:25 AM, Peng Yu wrote:
> On 10/16/19, Simon Slavin wrote:
>> Unfortunately, no. Multiuser SQLite depends on locking being implemented
>> correctly. The developers haven't found any Network File Systems which do
>> this. Unless one of the readers of this list
On 10/16/19, Simon Slavin wrote:
> On 15 Oct 2019, at 11:47pm, Peng Yu wrote:
>
>> Is there a solution that are known to fill in this niche? Thanks.
>
> Unfortunately, no. Multiuser SQLite depends on locking being implemented
> correctly. The developers haven't found any Network File Systems
> I know for sure that IBM's GPFS guarantees locking. I think GPFS is "global
> parallel file system". It is a distributed file system. But it will be
> rather slow. If only few jobs run in parallel, all will be ok. Locking will
> always guarantee database integrity.
>
> With lots of jobs, you
> Why not use an actual client-server database system like MySQL? It's
> optimized for this use case, so it incurs a lot less disk (network) I/O.
I will need to use the actual files to test for dependency (just as
the dependency that can be used by GNU make). With just database
tables in MySQL,
On 15.10.2019 at 23:53, Simon Slavin wrote:
... There is no reason for a table to disappear.
But sometimes intent... ;-)
Maybe one of the App-Users is an xkcd-fan...
https://xkcd.com/327/
@the OP
Don't tell us now, that the table in question
was indeed named "Students"...
Olaf
Then the first peanut may well be the last one, irrespective of the cardinality
of the tin.
-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Don V Nielsen
Sent: Tuesday, 15 October 2019 21:52
To: SQLite mailing list
The order of rows returned by a query is undefined - i.e. from the point of
view of the application, a random member of the result set will be returned
last - unless you include an ORDER BY clause that uniquely defines the order of
the records to be returned. Given the latter, it is easy to
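The point about ORDER BY pinning down the result order can be sketched like this (table and data are illustrative; the ordering key is assumed to be unique for these rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, xvalue INTEGER)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(2, 140), (1, 130), (1, 120), (2, 130)])

# Without ORDER BY, the row order is whatever the query plan happens to
# produce. An ORDER BY over a combination of columns that is unique for
# the data pins the order down completely.
rows = conn.execute(
    "SELECT id, xvalue FROM readings ORDER BY id, xvalue"
).fetchall()
print(rows)  # [(1, 120), (1, 130), (2, 130), (2, 140)]
```

If the ORDER BY key is not unique, rows that tie on it may still come back in any relative order.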