Re: [sqlite] calculation of a fraction stored in a text column

2014-02-20 Thread Patrick Proniewski
Simon,

> Date: Thu, 20 Feb 2014 14:04:59 +
> From: Simon Slavin 
> 
> On 20 Feb 2014, at 1:56pm, Patrick Proniewski  wrote:
> 
>> Thanks for your reply. In fact I would like the script to remain portable. 
>> I'm running all this on Mac OS X 10.6.8 (sqlite3 3.6.12) but I plan to share 
>> it and use it on FreeBSD too. And developing an extension is probably out of 
>> my reach :)
> 
> I don't think the thing you want to do can be done easily within SQLite.  I 
> would expect to see it done in whatever programming language you're using.
> 
> One approach would be to save two columns when you extract from EXIF.  One 
> has exactly the text from the EXIF column.  Another would be to save the 
> factor as a REAL number.  Do some string processing to get the bit before the 
> '/' and the bit after it, and divide one by the other.
> 
> You might be able to do the same calculation inside SQLite but it'll be 
> horrid to look at.  Something like
> 
> SELECT expTime,
>        substr(expTime, 1, instr(expTime, '/')) / substr(expTime, instr(expTime, '/') + 1) AS etAsReal
> FROM photos
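Spelled out with a CASE guard so the plain decimal values ("0.5", "6") also come through, the quoted approach might look like the sketch below. Python's built-in sqlite3 module is used only as a convenient harness; the table and column names are illustrative, and note that instr() only exists in SQLite 3.7.15 and later.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE photos (expTime TEXT)")
con.executemany("INSERT INTO photos VALUES (?)",
                [("1/6000",), ("1/250",), ("0.5",), ("6",)])

# Fractions are split on '/' and divided; anything without a '/' is
# simply cast to REAL as-is.
rows = con.execute("""
    SELECT expTime,
           CASE WHEN instr(expTime, '/') > 0
                THEN CAST(substr(expTime, 1, instr(expTime, '/') - 1) AS REAL)
                     / CAST(substr(expTime, instr(expTime, '/') + 1) AS REAL)
                ELSE CAST(expTime AS REAL)
           END AS etAsReal
    FROM photos
""").fetchall()
print(rows)
```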


I'm going to rework my database to include proper "already calculated" columns, 
but using bash to do the math would have been a little unpleasant (not 
streamlined enough for my liking). Fortunately exiftool has an option to 
extract raw values for EXIF tags, and I've found a way to read them exactly as 
I need, "one-pot".

thanks,
Patrick
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] calculation of a fraction stored in a text column

2014-02-20 Thread Patrick Proniewski
Donald,

On 20 févr. 2014, at 15:16, Donald Griggs wrote:

> Am I right that fractional exposures will always have one as the numerator?   
> I.e., you might record an exposure as "1.5" seconds, but never as "3/2" 
> seconds?   If so, then that might simplify things.

Yes, no 3/2, only 1/x and regular REALs.


> The example below creates a column named "canon" to hold the canonical 
> exposure string value for sorting and grouping.
> It assumes any non-decimal fractions will begin with the string "1/"

Thank you for this example. The idea of storing the computed number in the 
database is very good and made me rethink the initial database feeding. I'm 
using exiftool to script EXIF reading from my files. exiftool has a very nice 
option that allows reading raw data: Exposure Time, displayed "raw", is the 
REAL equivalent of my strings; 1/200 is read as 0.005. Very handy.
I'm going to redesign my database to include raw data alongside the 
human-readable data where I need it.
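The redesign described above might be sketched as keeping both representations side by side: the EXIF string for display, and the raw REAL (as produced by exiftool with print conversion disabled, via its -n option, if memory serves) for math, sorting, and binning. Table and column names below are illustrative, not taken from the original script:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE exifdata (
    name         TEXT,
    ExposureTime TEXT,  -- "1/250", the way photographers read it
    ExposureRaw  REAL   -- 0.004, the way SQLite can sort and bin it
)""")
con.executemany("INSERT INTO exifdata VALUES (?, ?, ?)", [
    ("a.jpg", "1/6000", 1 / 6000),
    ("b.jpg", "0.5", 0.5),
    ("c.jpg", "1/250", 0.004),
])

# Sort by the raw value while displaying the familiar string.
rows = con.execute(
    "SELECT ExposureTime FROM exifdata ORDER BY ExposureRaw").fetchall()
print([r[0] for r in rows])  # ['1/6000', '1/250', '0.5']
```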

thanks,
Patrick


Re: [sqlite] calculation of a fraction stored in a text column

2014-02-20 Thread Patrick Proniewski
hi Kevin,

On 20 févr. 2014, at 14:11, Kevin Martin wrote:

> 
> On 20 Feb 2014, at 12:54, Patrick Proniewski  wrote:
> 
>> My problem is that my database holds too many different values for 
>> ExposureTime, so the resulting plot is unreadable. I want to be able to 
>> "bin" those values to create a proper histogram. It's not possible to "bin" 
>> string values, because SQLite has no idea that "1/60" is nowhere near 
>> "1/6000".
> 
> Are you able to use an extension? A custom collation on the ExposureTime 
> column seems pretty simple (although not thought about it in detail). You 
> should then be able to bin the values as they are.


Thanks for your reply. In fact I would like the script to remain portable. I'm 
running all this on Mac OS X 10.6.8 (sqlite3 3.6.12) but I plan to share it and 
use it on FreeBSD too. And developing an extension is probably out of my reach :)

Patrick
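For what it's worth, no compiled extension is required when the database is driven from a scripting host: a custom collation like the one Kevin suggests can be registered at connect time. A hedged sketch using Python's built-in sqlite3 module (table and column names are illustrative):

```python
import sqlite3

def exposure_value(s):
    # Numeric value of an exposure string such as "1/250" or "0.5".
    if "/" in s:
        num, den = s.split("/", 1)
        return float(num) / float(den)
    return float(s)

def exposure_cmp(a, b):
    # A collation must return -1, 0, or 1.
    x, y = exposure_value(a), exposure_value(b)
    return (x > y) - (x < y)

con = sqlite3.connect(":memory:")
con.create_collation("exposure", exposure_cmp)
con.execute("CREATE TABLE photos (ExposureTime TEXT)")
con.executemany("INSERT INTO photos VALUES (?)",
                [("1/60",), ("1/6000",), ("0.5",), ("1/250",)])

rows = con.execute("""
    SELECT ExposureTime FROM photos
    ORDER BY ExposureTime COLLATE exposure
""").fetchall()
print([r[0] for r in rows])  # shortest to longest exposure
```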


[sqlite] calculation of a fraction stored in a text column

2014-02-20 Thread Patrick Proniewski
Hello,

I'm coming back with my EXIF database. I have a TEXT column, ExposureTime, 
where I store the EXIF representation of a photograph's exposure time. Sample 
values: "1/6000", "1/250", "0.5", "1", "6"...
I need to retain this representation, because it's how photographers deal with 
exposure time. 0.004 would be great for math, sorting, etc., but the real-life 
thing is "1/250".

My problem is that my database holds too many different values for 
ExposureTime, so the resulting plot is unreadable. I want to be able to "bin" 
those values to create a proper histogram. It's not possible to "bin" string 
values, because SQLite has no idea that "1/60" is nowhere near "1/6000".

I need to convert strings like "1/6000" and "1/250" into their REAL 
counterparts, "0.000166" and "0.004", during my SELECT query, for 
"binning"/sorting and counting purposes. I've started work on an over-complex 
substr()+instr() combo that isn't finished yet, and will probably be a dead end.

Is there a straightforward way to do this, instead of conditionally decomposing 
the TEXT and recomposing a mathematical expression that SELECT can calculate?

regards,
Patrick

(you can Cc me, I'm subscribed to digest)


Re: [sqlite] select/count/group by question

2014-02-19 Thread Patrick Proniewski
> Message: 10
> Date: Wed, 19 Feb 2014 09:11:27 -0500
> From: Igor Tandetnik 
> 
> select strftime(...) as year, sum(ISO=100)
> FROM exifdata GROUP BY year ORDER BY year;
> 
> --
> 
> Message: 11
> Date: Wed, 19 Feb 2014 09:16:56 -0500
> From: Richard Hipp 

> Maybe this:
> 
> SELECT strftime('%Y', CreateDate, 'unixepoch') AS year,
> sum(iso==100)
> FROM exifdata WHERE year NOT NULL
> GROUP BY year
> ORDER BY year;
> 
> The main trick is to use "sum(iso==100)" instead of "count(iso)".  Note
> also that SQL strings use a single quote, not a double-quote.  And the
> datetime() function call is redundant as strftime can do the unixepoch
> conversion for you.  And you don't need to GROUP BY iso, since you are only
> interested in a single iso value.
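The sum(boolean) trick is worth a minimal illustration: the comparison evaluates to 1 or 0 per row, so a group with no matches still produces a 0 instead of vanishing the way it would under WHERE iso = 100. A sketch with an illustrative schema, using Python's built-in sqlite3 module as a harness:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE exifdata (year TEXT, iso INTEGER)")
con.executemany("INSERT INTO exifdata VALUES (?, ?)", [
    ("2013", 100), ("2013", 100), ("2013", 400),
    ("2014", 400), ("2014", 800),   # no ISO 100 shots in 2014
])

# Each row contributes 1 when iso = 100 is true, 0 otherwise,
# so 2014 still shows up, with a count of 0.
rows = con.execute("""
    SELECT year, sum(iso = 100)
    FROM exifdata
    GROUP BY year
    ORDER BY year
""").fetchall()
print(rows)  # [('2013', 2), ('2014', 0)]
```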


Thank you Igor and Richard for your fast replies.
It solved my problem.

I thought about using sum() or total(), but the ISO==100 part is something I 
would never have thought of.

thanks again,
Patrick


[sqlite] select/count/group by question

2014-02-19 Thread Patrick Proniewski
Hello,

I'm currently designing a small tool with shell scripts, sqlite3 backend, and 
gnuplot for graphical output.
My SQLite database has one table, each row being a picture name and a subset of 
its EXIF tags (date, ISO, metering mode...).
In order to process data into gnuplot, I use SELECT to output what I need in a 
flat file.

Example: 

For ISO = 100, I issue this request: 

SELECT strftime("%Y", datetime(CreateDate, "unixepoch")) AS year, COUNT(ISO)
FROM exifdata
WHERE ISO = 100 AND year NOT NULL
GROUP BY ISO, year
ORDER BY year;

It returns:

2008 9
2009 581
2010 3144
2011 768
2012 1195
2013 1270

That is correct, but not enough: the database includes pictures taken in 2014, 
none of them at ISO = 100, but I need a line "2014 0" in my output (otherwise 
gnuplot mixes things up when it draws stacked histograms).

My question is: how can I write a query that will successfully pull a result 
for every year, even if the count is 0?

I've created a second table, "years", with only one column ("year"); its rows 
are "2008", "2009"..., "2014". Then I tried using JOIN to get at least a NULL 
result, without success.
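For the record, one way the helper "years" table can work is a LEFT JOIN combined with the sum(boolean) idea from the replies, wrapped in IFNULL so a year with no matching rows at all still yields 0. A hedged sketch with illustrative names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE years (year TEXT)")
con.execute("CREATE TABLE exifdata (year TEXT, ISO INTEGER)")
con.executemany("INSERT INTO years VALUES (?)", [("2013",), ("2014",)])
con.executemany("INSERT INTO exifdata VALUES (?, ?)",
                [("2013", 100), ("2013", 400), ("2014", 400)])

# LEFT JOIN keeps every year; sum(e.ISO = 100) counts matches, and
# IFNULL turns the NULL of a year with no exifdata rows into 0.
rows = con.execute("""
    SELECT y.year, IFNULL(sum(e.ISO = 100), 0)
    FROM years y LEFT JOIN exifdata e ON e.year = y.year
    GROUP BY y.year
    ORDER BY y.year
""").fetchall()
print(rows)  # [('2013', 1), ('2014', 0)]
```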

Full table with create statement is available here: 
http://patpro.net/~patpro/exifdata.sql.zip (106 kB).
Any hint appreciated!

Thanks,
Patrick


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-28 Thread Patrick Proniewski
On 28 sept. 2011, at 13:38, Paul Linehan wrote:

>> The granularity I'm looking for is between 1 second and 10 seconds. Cron is 
>> not an option here.
> 
> I woke up this morning and there is a way that cron *_could_* do what you
> want. You appear to have figured out a way that suits you, but cron could
> be used.
> 
> 10 second granularity.
> 
> You have 6 cron jobs, each launched on the minute.
> 
> The first launches iostat and puts data into SQLite.
> The second does a sleep 10, launches iostat and puts data into SQLite,
> the third sleep 20 &c.
> 
> I know it's an appalling hack, but could be useful to somebody?

That's appalling :)
Especially if you consider the fact that some systems can have crond launched 
with the -j flag (jitter: adds a random sleep before running cron jobs).

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 23:11, Scott Hess wrote:

> On Tue, Sep 27, 2011 at 2:07 PM, Patrick Proniewski wrote:
> 
>> On 27 sept. 2011, at 20:18, Gabor Grothendieck wrote:
>>> gawk has fflush()
>> 
>> none of them is available out of the box on Mac OS X, or FreeBSD. gawk can
>> be installed, but I'd rather use my "while true" loop instead of installing
>> gawk.
>> 
> 
> Did you try it?

nope, I don't have gawk, so I didn't even think of trying.

>  On my Mac fflush() fixes it.

indeed. Thanks. So it's not specific to gawk; that's great news! My problem is 
solved.

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:44, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski :
> 
>>> Take a look at a utility called dstat.
> 
>> no, it's linux only.
> 
> But it is written in Python - so it should be relatively
> transportable.

and it relies on /proc; Mac OS X does not have /proc.

patpro



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:41, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski :
> 
>> That's what I do, but I think using a loop is ugly, and I would like to find 
>> a way
>> to feed data continuously into sqlite.
> 
> I can't see why you would want to do this more than once every minute
> - or do you?

The granularity I'm looking for is between 1 second and 10 seconds. Cron is not 
an option here.

> Why, exactly, do you want to do this anyway? I'm interested because I've
> done something similar.

I have a performance issue on a file server hooked to a RAID enclosure and 
exporting the corresponding volume via NFS.
The performance problem seems to be in the RAID itself, so I'm logging I/O 
performance during production to detect anomalies.
Sample: http://perso.univ-lyon2.fr/~pproniew/kbpt-2011-09-27-22.png (Bézier 
smoothing, 24 hours of data).
We will change the storage in a few days, and this iostat logging will help 
compare before/after performance.

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:18, Gabor Grothendieck wrote:

> gawk has fflush()


On 27 sept. 2011, at 20:29, Roger Andersson wrote:

> stdbuf?
> unbuffer?


none of them is available out of the box on Mac OS X or FreeBSD. gawk can be 
installed, but I'd rather keep my "while true" loop than install gawk.

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:14, David Garfield wrote:

> Any entry in a pipe could be buffering.  In a quick test here, awk is
> buffering.  To find the buffering, try using the pieces up to a given
> stage with " | cat " added at the end.  If this buffers, you've found
> the problem.

as far as I understand, the simple fact that my last output doesn't go to a 
tty is enough to trigger buffering.


>  Unbuffered output is usually slower, so it is normally
> done only to a terminal.  I think the only easy way to externally
> disable the buffer is to wrap the program in a pseudo-tty.

apparently... not so easy by the way :)

> Alternatively, look for an option that lets you explicitly unbuffer.
> (for instance, in perl, do: $| = 1; )

nothing in awk, but I could try sed instead (-l: make output line-buffered)

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:04, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski :
> 
> 
>> I'm facing a challenging problem. I want to log some data into an SQLite3 DB.
>> Data come from a system command (iostat) in an endless steam, one row every 
>> X seconds:
> 
> 
> Take a look at a utility called dstat. 

no, it's Linux-only.

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 18:31, Roger Andersson wrote:

> I do not know if tee makes any difference or if it's available on Mac?
> http://unixhelp.ed.ac.uk/CGI/man-cgi?tee

tee is available, but no more luck here, as it doesn't allow disabling the 
buffer.


> iostat -d -w 10 disk0 | tee -a logfile
> and then
> tail -f logfile |\
> awk '!/[a-zA-Z]/ {print "INSERT INTO io VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
> sqlite3 iostat.db

same problem here ;)

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 13:44, Simon Slavin wrote:

> On 27 Sep 2011, at 12:03pm, Patrick Proniewski wrote:
> 
>> You're assuming I'm running Linux, but I'm running Mac OS X Server (or 
>> FreeBSD by the way), so no /proc here, and iostat is probably working 
>> differently too.
>> <http://developer.apple.com/library/mac/#documentation/darwin/reference/manpages/10.5/man8/iostat.8.html>
> 
> If you're using the OS X version, I don't think you need to run iostat as a 
> continuous process.  Write a shell script with a timed loop which runs iostat 
> without the '-w 10'.  So you could write a script which does

That's what I do, but I think using a loop is ugly, and I would like to find a 
way to feed data continuously into sqlite. 

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 08:31, Baptiste Daroussin wrote:

> You don't need awk :)
> 
> iostat -d -w 10 disk0 | while read a b c; do
>   case $a in
>     *[a-zA-Z]*) continue ;;
>     *) sqlite3 iostat.db "INSERT INTO io VALUES(datetime('now','localtime'), \"$a\", \"$b\", \"$c\");" ;;
>   esac
> done


Ok, this forks less, but still, you can't get rid of the loop ;) (I love awk)

thanks,
patpro



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 08:02, Stephan Beal wrote:

> That's a tricky one, it seems. If you're not restricted to shell code, you
> could possibly do this using perl, PHP, or similar. You could open a pipe
> for iostat, read a line from the pipe, and feed that line to your db (not in
> the form of a raw text line but using the script language's sqlite3 API).
> Repeat until the pipe is eof or a signal is caught or whatever.
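That suggestion might be sketched as follows in Python with its built-in sqlite3 module. The table layout, column names, and the exact iostat output format here are assumptions:

```python
import sqlite3
import subprocess  # only needed for the real, never-ending iostat pipe

def ingest(lines, con):
    # Parse iostat-style "KB/t tps MB/s" rows; header lines contain
    # letters and are skipped. Each row is inserted and committed
    # immediately, so nothing waits for the stream to end.
    for line in lines:
        fields = line.split()
        if len(fields) != 3 or any(c.isalpha() for c in line):
            continue
        con.execute(
            "INSERT INTO io VALUES (datetime('now','localtime'), ?, ?, ?)",
            fields)
        con.commit()

con = sqlite3.connect(":memory:")  # or "iostat.db"
con.execute("CREATE TABLE io (t TEXT, kbt REAL, tps INTEGER, mbs REAL)")

# Real usage (runs forever, one row per iostat interval):
#   proc = subprocess.Popen(["iostat", "-d", "-w", "10", "disk0"],
#                           stdout=subprocess.PIPE, text=True)
#   ingest(proc.stdout, con)

# Demo with canned lines, since the real stream never ends:
ingest(["   KB/t tps  MB/s", "   4.02 2318  9.09", "   4.00 1237  4.83"], con)
print(con.execute("SELECT kbt, tps, mbs FROM io").fetchall())
```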

Changing languages could be an option, but I'd rather keep my ugly while loop 
than learn Perl :)

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 12:58, Simon Slavin wrote:

> On 27 Sep 2011, at 6:48am, Patrick Proniewski wrote:
> 
>> I've tried various solutions with named pipes, file descriptor 
>> redirections… but none worked, because they all seem to require the data 
>> stream to end before feeding data into the DB.
> 
> Most of your problems are caused because you're using iostat.  Can you 
> instead read the data directly out of /proc ?  Take a look at the end of the 
> 'man iostat' page for details.

You're assuming I'm running Linux, but I'm running Mac OS X Server (or FreeBSD 
by the way), so no /proc here, and iostat is probably working differently too.
<http://developer.apple.com/library/mac/#documentation/darwin/reference/manpages/10.5/man8/iostat.8.html>

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-26 Thread Patrick Proniewski
On 27 sept. 2011, at 08:21, Roger Binns wrote:

> The easiest solution is to just be patient and accept the data will be a
> little delayed.

that won't work for me, because my SQL command includes a datetime('now'): any 
row whose input is delayed won't be recorded with the proper datetime. That's 
one of the reasons why I must use tail -1 in my infinite loop; when I send more 
than one line, they all get the same datetime.


> Other solutions involve various helper programs such as using a pty so that
> the programs think they are using terminals:
> 
>  http://stackoverflow.com/questions/1000674/turn-off-buffering-in-pipe

I have neither unbuffer nor socat available on my system, but I'll read the 
full thread to grab info.

thanks,
patpro



[sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-26 Thread Patrick Proniewski
Hello,

I'm facing a challenging problem. I want to log some data into an SQLite3 DB. 
The data come from a system command (iostat) in an endless stream, one row 
every X seconds:

 disk0 
   KB/t tps  MB/s 
   4.02 2318  9.09 
   4.00 1237  4.83 
   6.63 979  6.34 
  46.30  15  0.69 
  30.58  23  0.69 
  12.90  32  0.41 
 107.85  55  5.75 

I thought I could easily pipe the data into SQLite:

iostat -d -w 10 disk0 |\
awk '!/[a-zA-Z]/ {print "INSERT INTO io VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
sqlite3 iostat.db

but it won't work, because sqlite3 won't record any data until the iostat 
command ends. And of course, this iostat command will never end.
So I'm stuck with a working but very ugly script:

while true; do
  iostat -c 2 -d -w 10 disk0 |\
  tail -1 |\
  awk '!/[a-zA-Z]/ {print "INSERT INTO io VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
  sqlite3 iostat.db
done

It's an endless loop, forking iostat for 2 rows of data (-c 2) and keeping 
only the last row, because the first one is an artifact (tail -1).
I've tried various solutions with named pipes, file descriptor redirections… 
but none worked, because they all seem to require the data stream to end before 
feeding data into the DB.

Any idea?

regards,
patpro