___
sqlite-users mailing list
sqlite-users@sqlite.org
Thu, 1 Mar 2012 19:14:07
__
" In modern times walls are always attacked with mortars and cannon" (c)
>
> I would recommend SQLite Studio without hesitation. I think it pretty
> much covers your criteria, have a look:
>
> http://sqlitestudio.one.pl/index.rvt?act=about
>
> It is fast, graphical, a single executable install (eg: trivial), and
> works well with existing databases ...
> Thank you for this message. We've just fixed the bug you mentioned and
> uploaded an updated version of SQLite Maestro at
> http://www.sqlmaestro.com/products/sqlite/maestro/download/
>
Wow. Nice to see you folks monitoring this list.
> > (Support for visual relation design would be
Hello,
Can anyone recommend a free, or reasonably priced non-free, GUI tool for
creating and maintaining SQLite databases that can run on both Windows and
Linux?
(Support for visual relation design would be great, too.)
I found a list at:
> Simon Slavin wrote:
>
>
> I'm not sure you appreciate what Roger (please be more careful about
> your quoting, by the way) is telling you. SQL is not a programming
> language. It's a way of accessing a database. The two are not at all
> equivalent: everything you can do in SQL you
> >
> > What am I missing here? Am I doing the query wrong?
>
> Yes. The "group by" doesn't know which rows to use for columns that
> are not either aggregate functions (such as min) or grouped columns
> (such as name). You know what min() does, but the query processor
> doesn't.
>
>
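To make the point above concrete, here is a small sketch (table and column names are invented) of the portable way to fetch the whole row that holds a group's minimum: compute MIN() in a subquery and join back, rather than relying on a bare column sitting next to GROUP BY.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE readings(name TEXT, value INTEGER, note TEXT);
INSERT INTO readings VALUES
  ('a', 3, 'three'), ('a', 1, 'one'), ('b', 5, 'five');
""")

# Portable way to get, per name, the row holding the minimum value:
# aggregate in a subquery, then join back to the table.
rows = con.execute("""
    SELECT r.name, r.value, r.note
    FROM readings AS r
    JOIN (SELECT name, MIN(value) AS mv FROM readings GROUP BY name) AS m
      ON r.name = m.name AND r.value = m.mv
    ORDER BY r.name
""").fetchall()
print(rows)   # [('a', 1, 'one'), ('b', 5, 'five')]
```

The join-back pattern works the same way for MAX() or any other aggregate where you want the rest of the winning row.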
> I'd recommend continuing down the path you are exploring which is having
> test data and tweaking/tuning/correcting your queries until they are
> acceptable.
>
> You'll probably find it easier to write the processing algorithm in the
> programming language of your choice. This will
>
> You could be storing event duration, not stop time. Or perhaps store
> both.
>
Here is what I have so far:
sqlite> create table events (id INTEGER PRIMARY KEY AUTOINCREMENT, name, kind,
start, end);
# Now add some events for "tomorrow"
sqlite>
insert into events values (null,
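The INSERT in the transcript is cut off in the archive. A self-contained sketch of the same schema, with invented sample rows and an overlap query for "events on a given day", might look like this (ISO-8601 text timestamps compare correctly as strings; `end` is quoted since it is an SQL keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('''CREATE TABLE events (id INTEGER PRIMARY KEY AUTOINCREMENT,
                                    name, kind, start, "end")''')

# Invented sample events; ISO-8601 local times sort correctly as text.
con.executemany(
    "INSERT INTO events VALUES (NULL, ?, ?, ?, ?)",
    [("standup", "meeting",  "2012-03-02 09:00", "2012-03-02 09:15"),
     ("review",  "meeting",  "2012-03-02 14:00", "2012-03-02 15:00"),
     ("dinner",  "personal", "2012-03-03 19:00", "2012-03-03 21:00")])

# All events overlapping a given day: they start before the day ends
# and end after the day starts.
day_start, day_end = "2012-03-02 00:00", "2012-03-03 00:00"
names = [r[0] for r in con.execute(
    'SELECT name FROM events WHERE start < ? AND "end" > ? ORDER BY start',
    (day_end, day_start))]
print(names)   # ['standup', 'review']
```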
> What are you doing about timezones and DST? Are "start" and "end" UTC?
>
For v1, all local times. UTC is not a requirement yet, but if it can be added
without hassle, then why not.
> Is a location (and by extension a timezone) associated with events like
> face-to-face meetings?
>
>
> One table of the events with fields you need (eg description, start and
> end, repeating rule). A second table with the exceptions, or depending
> on how much you want to normalize a table per exception type.
>
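A minimal sketch of that two-table layout (all names invented; the repeat rule here is just a text tag, with the actual expansion of occurrences left to the application):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript('''
-- One row per event series; repeat_rule NULL means a one-off event.
CREATE TABLE events (
    id          INTEGER PRIMARY KEY,
    description TEXT,
    start       TEXT,
    "end"       TEXT,
    repeat_rule TEXT
);
-- Occurrences that deviate from the rule: moved or cancelled.
CREATE TABLE exceptions (
    event_id  INTEGER REFERENCES events(id),
    occurs_on TEXT,      -- the date being overridden
    new_start TEXT,      -- NULL = occurrence cancelled
    new_end   TEXT
);
''')

con.execute("INSERT INTO events VALUES (1, 'standup', "
            "'2012-03-01 09:00', '2012-03-01 09:15', 'daily')")
# Cancel the March 5th occurrence of the daily standup.
con.execute("INSERT INTO exceptions VALUES (1, '2012-03-05', NULL, NULL)")

cancelled = con.execute("SELECT occurs_on FROM exceptions "
                        "WHERE event_id = 1 AND new_start IS NULL").fetchall()
print(cancelled)   # [('2012-03-05',)]
```

Normalizing further, with one exception table per exception type, trades query simplicity for stricter schemas, as the post suggests.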
> > Where is this calculation being done? In SQL? At the app level? How?
> > I'm looking for suggestions on how to store and retrieve events for a
> > calendaring system in SQLite.
> >
> > For each user there must be:
> >
> > 1) All day events on a specific day.
> > 2) All day events that are repeated over a given date range.
> > 3) All day events that are repeat
Hello,
I'm looking for suggestions on how to store and retrieve events for a
calendaring system in SQLite.
For each user there must be:
1) All day events on a specific day.
2) All day events that are repeated over a given date range.
3) All day events that are repeat each day from until
>
> My experience has been that VMs strongly focus on correctness and
> reliability, and will obey sync orders and everything else databases
> depend on.
>
>
This is true at the CPU level.
However, since I/O is a major bottleneck for VMs, things can get more complex
inside the
>
> Since my code works in blocks, read/compress/encrypt/write, loop. Almost
> all the real data was being written to the compressed file, however any
> finalization and flushing of the stream wasn't occurring (since the encrypt
> was failing) so the last bit of any SQLite database wouldn't
> I think I found my defect: my old stress test was based on doing
> compression/encryption/decryption/decompression passes on files of random
> sizes; so I would do about 10 million passes or so and say... that's pretty
> good.
>
> Well...a more structured test exposed the problem and it
> just for anybody who is interested:
>
> I translated Jim's function into Windows code and added
> a page of 1024 bytes that will be written, instead of a single byte.
> On my Win-XP system I got 55 TPS, much faster than SQLite
> seems to write a page, but that might be related to the
> additional
> Thank you.
>
> I missed the EXCLUSIVE clause in the docs; comes with the newbie
> territory, I guess.
>
> So to confirm, would something like this work?
>
> Tables:
> task_log => (id, task_data, time_stamp)
> task_fifo => (id, fk_task_log)
> task_status_log => (id, fk_task_log,
> Wrap the above two statements in:
>
> 0) BEGIN EXCLUSIVE
> ...
> 3) COMMIT
>
> The BEGIN EXCLUSIVE above is all you need (and more, a simple BEGIN
> may be enough).
>
> > Can someone with more knowledge of SQLite internals explain the
> > right way to "atomic"-lly "pop"-off an item
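A sketch of such an atomic pop using Python's sqlite3 module and the table names from the thread (status codes and sample data are invented). Passing `isolation_level=None` lets us issue BEGIN EXCLUSIVE ourselves:

```python
import sqlite3

# isolation_level=None: we manage transactions by hand.
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
CREATE TABLE task_log (id INTEGER PRIMARY KEY, task_data TEXT, time_stamp TEXT);
CREATE TABLE task_fifo (id INTEGER PRIMARY KEY, fk_task_log INTEGER);
CREATE TABLE task_status_log (id INTEGER PRIMARY KEY, fk_task_log INTEGER,
                              status_code TEXT, time_stamp TEXT);
INSERT INTO task_log VALUES (1, 'work unit A', '2012-03-01 10:00');
INSERT INTO task_fifo VALUES (1, 1);
""")

def pop_task(con):
    """Atomically take the oldest task off the FIFO, logging its status."""
    cur = con.cursor()
    cur.execute("BEGIN EXCLUSIVE")   # no other writer can interleave
    try:
        row = cur.execute(
            "SELECT id, fk_task_log FROM task_fifo ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:              # queue is empty
            cur.execute("COMMIT")
            return None
        fifo_id, task_id = row
        cur.execute("INSERT INTO task_status_log (fk_task_log, status_code, "
                    "time_stamp) VALUES (?, 'taken', datetime('now'))",
                    (task_id,))
        cur.execute("DELETE FROM task_fifo WHERE id = ?", (fifo_id,))
        cur.execute("COMMIT")
        return task_id
    except Exception:
        cur.execute("ROLLBACK")
        raise

print(pop_task(con))   # 1
print(pop_task(con))   # None (queue is now empty)
```

Because the select-insert-delete sequence runs inside one exclusive transaction, two workers calling pop_task concurrently can never take the same item.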
> Have you considered using a more generic message queuing program?
> Wikipedia has a good page about it:
>
> http://en.wikipedia.org/wiki/Message_queue
>
> There is even a standardised protocol - AMQP:
>
> http://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol
>
> You could
> >The simple solution would just create a race condition... i think:
> >
> >1) INSERT INTO status_table FROM SELECT oldest task in queue
> >2) DELETE oldest task in queue
> >
> >Right?
>
> It might work fine if you wrap it in an exclusive
> transaction.
>
"exclusive transaction"? Great!
> > I have several CGI and cron scripts that I would like to coordinate via a
> > "First In / First Out" style buffer. That is, some processes are adding work
> > units, and some take the oldest and start work on them.
> >
> >Could SQLite be used for this?
> >
>
> For what it's worth, here
> One thing to watch out for - using SQLITE for a FIFO will have limited
> throughput, because commits will have to be done after inserting or removing
> each entry.
This is fine for now. Willing to migrate to MySQL, etc. if needed for speed.
> This might not be an issue in some
[Typo fix]
Thanks for the help
Though, I am not quite clear on how to get the FIFO aspect of it.
Assuming three tables:
task_log => (id, task_data, time_stamp)
task_fifo => (id, fk_task_log)
task_status_log => (id, fk_task_log, status_code, time_stamp)
How do I create the correct stored
Thanks for the help
Though, I am not quite clear on how to get the FIFO aspect of it.
Assuming three tables:
task_log => (id, code, time_stamp)
task_fifo => (id, fk_task_log)
task_status_log => (id, fk_task_incoming_log, status_code, time_stamp)
How do I create the correct stored procedures
> > I have several CGI and cron scripts that I would like to coordinate via a
> > "First In / First Out" style buffer. That is, some processes are adding work
> > units, and some take the oldest and start work on them.
> >
> > Could SQLite be used for this?
> >
> > It would seem very
Hello,
I have several CGI and cron scripts that I would like to coordinate via a
"First In / First Out" style buffer. That is, some processes are adding work
units, and some take the oldest and start work on them.
Could SQLite be used for this?
It would seem very complex to use SQL for