Re: [GENERAL] Data Guard for Postgres?

2013-12-13 Thread AI Rumman
PostgreSQL 9.2 streaming replication is very similar to Oracle
Data Guard.
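A minimal sketch of that setup on 9.2, with illustrative host and role names (not from the original post):

```
# Primary, postgresql.conf:
wal_level = hot_standby
max_wal_senders = 3

# Primary, pg_hba.conf: allow the standby to connect for replication
# host  replication  repuser  192.168.1.10/32  md5

# Standby, postgresql.conf (allows read-only queries, like Active Data Guard):
hot_standby = on

# Standby, recovery.conf (after taking a base backup from the primary):
standby_mode = 'on'
primary_conninfo = 'host=primary-host port=5432 user=repuser'
```

pg_basebackup (available since 9.1) can take the initial base backup over the same replication connection.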


On Fri, Dec 13, 2013 at 1:57 PM, Roy Anderson wrote:

> Good day. I am inquiring as to whether there is a free solution
> available that approximates, in Postgres, what Data Guard does for
> Oracle DB. Can anyone advise?
>
> Thank you,
>
> Roy
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] out of memory issue

2013-12-13 Thread Jamin Shanti
I wanted to answer this for you but I didn't see a reply button on the site.


In pgadmin,

it's File ==> Options ==> Query tool ==> History file ==> the default is 1024.
Try 4096 if you have more than 8 GB of RAM on your PC.


[GENERAL] Data Guard for Postgres?

2013-12-13 Thread Roy Anderson
Good day. I am inquiring as to whether there is a free solution
available that approximates, in Postgres, what Data Guard does for
Oracle DB. Can anyone advise?

Thank you,

Roy




[GENERAL] Unexpected update behaviour

2013-12-13 Thread Victor Yegorov
Greetings.

I've hit a strange issue. This is a simplified setup.
First, run create.sql to get a couple of tables.
Then, run the update query.

Tested on 9.2.6 and 9.3.2.

Now:
- if I remove the UNION ALL part of the inner query, UPDATE works;
- if I move the `raw` subquery into the CTE, UPDATE works (but takes
  hours on my full data);
- if I convert this UPDATE into a SELECT, I get the row to be updated.

Could you kindly explain to me why the query as it is updates no records?


-- 
Victor Y. Yegorov


create.sql
Description: Binary data


update.sql
Description: Binary data



Re: [GENERAL] Grouping, Aggregate, Min, Max

2013-12-13 Thread Kevin Grittner
Misa Simic  wrote:

> So I wonder - is there some kind of aggregate window function
> what does desired results?

Not built in, but PostgreSQL makes it pretty easy to do so.  With a
little effort to define your own aggregate function, your query can
look like this:

SELECT
    thing_id,
    category,
    int4range_list(period_id)
  FROM thing
  GROUP BY thing_id, category
  ORDER BY 1, 2;

I've attached a couple files -- one which creates the desired
aggregate function, and the other loads a table with two of your
sample data sets and runs the above.  This is just intended as a
quick example of the capabilities available to you.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

CREATE OR REPLACE FUNCTION int4range_list_accum(rangelist_in int4range[], newvalue int)
  RETURNS int4range[]
  LANGUAGE plpgsql
  STRICT
  IMMUTABLE
AS $$
DECLARE
  range  int4range;
  index  int;
  x  boolean;
BEGIN
  index = 0;
  FOREACH range IN ARRAY rangelist_in LOOP
index := index + 1;
IF newvalue <@ range THEN
  RETURN rangelist_in;
END IF;
IF newvalue = (lower(range) - 1) THEN
  -- extend range lower
  RETURN rangelist_in[1:index - 1]
  || int4range(newvalue, upper(range))
  || rangelist_in[index + 1:array_upper(rangelist_in, 1)];
END IF;
IF newvalue < (lower(range)) THEN
  -- add range before
  RETURN rangelist_in[1:index - 1]
  || int4range(newvalue, newvalue + 1)
  || rangelist_in[index:array_upper(rangelist_in, 1)];
END IF;
IF newvalue = (upper(range)) THEN
  x := CASE
 WHEN array_upper(rangelist_in, 1) <= index THEN FALSE
 WHEN newvalue < lower(rangelist_in[index + 1]) - 1 THEN FALSE
 ELSE TRUE
   END;
  IF x THEN
-- combine adjacent ranges
RETURN rangelist_in[1:index - 1]
|| int4range(lower(range), upper(rangelist_in[index + 1]))
|| rangelist_in[index + 2:array_upper(rangelist_in, 1)];
  ELSE
-- extend range higher
RETURN rangelist_in[1:index - 1]
|| int4range(lower(range), newvalue + 1)
|| rangelist_in[index + 1:array_upper(rangelist_in, 1)];
  END IF;
END IF;
  END LOOP;
  -- add range at end
  RETURN rangelist_in || int4range(newvalue, newvalue + 1);
END;
$$;

CREATE OR REPLACE FUNCTION int4range_list_final(rangelist int4range[])
  RETURNS text
  LANGUAGE plpgsql
  IMMUTABLE
AS $$
DECLARE
  range int4range;
  result text := '';
  separator text := '';
BEGIN
  IF coalesce(array_upper(rangelist, 1), 0) = 0 THEN
RETURN NULL;
  END IF;
  FOREACH range IN ARRAY rangelist LOOP
result = result
|| separator
|| lower(range)::text
|| CASE
 WHEN upper(range) = lower(range) + 1 THEN ''::text
 ELSE '-'::text || (upper(range) - 1)::text
   END;
separator := ', ';
  END LOOP;
  RETURN result;
END;
$$;

DROP AGGREGATE IF EXISTS int4range_list(int);

CREATE AGGREGATE int4range_list(int)
  (
SFUNC = int4range_list_accum,
STYPE = int4range[],
FINALFUNC = int4range_list_final,
INITCOND = '{}'
  );
DROP TABLE IF EXISTS thing;

CREATE TABLE thing
  (
    id         int        NOT NULL,
    thing_id   int        NOT NULL,
    category   varchar(1) NOT NULL,
    period_id  int        NOT NULL
  );

COPY thing FROM STDIN;
1	1	A	1
2	1	A	2
3	1	A	3
4	1	A	4
5	1	B	5
6	1	B	6
7	1	B	7
8	1	A	8
9	1	A	9
10	2	A	1
11	2	A	2
12	2	A	3
13	2	A	4
\.

SELECT
thing_id,
category,
int4range_list(period_id)
  FROM thing
  GROUP BY thing_id, category
  ORDER BY 1, 2;
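For reference, with the first data set loaded above, this query should produce output along these lines (matching the expected results in the original question):

```
 thing_id | category | int4range_list
----------+----------+----------------
        1 | A        | 1-4, 8-9
        1 | B        | 5-7
        2 | A        | 1-4
```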

TRUNCATE TABLE thing;

COPY thing FROM STDIN;
1	1	A	1
2	1	A	2
3	1	A	3
4	1	A	7
5	1	A	8
6	1	A	9
7	2	A	1
8	2	A	2
9	2	A	3
10	2	A	4
\.

SELECT
thing_id,
category,
int4range_list(period_id)
  FROM thing
  GROUP BY thing_id, category
  ORDER BY 1, 2;



Re: [GENERAL] design for multiple time series

2013-12-13 Thread salah jubeh
>On 12/13/2013 4:46 AM, rob stone wrote:
>> The only fly in the ointment with this is a rain gauge. If you don't
>> empty it each day the actual rainfall is the difference between
>> readings.

>(somewhat off topic)

>The electronic rain gauges I've seen have all been tip-bucket. they 
>measure each 0.01" (or equiv metric unit) at a time, and the rain total 
>is counter based, the weather station software automatically rolls over 
>daily, weekly, monthly totals, also tracks 'last 24 hours' which is a 
>rolling total.

>the one I have has a 'teeter totter' rocker, each side of it is a 
>'bucket', when 0.01" of precip. comes down the funnel into the currently 
>high side of the teeter, it flips over to the other side, dumping that 
>0.01" and a magnet on the side of the teeter registers a pulse on a 
>nearby coil.  now the other side fills, and it tips back

>we've had 1.40" total at my house since the rainy season started on
>August 1.  we should be getting that much or more weekly this time of
>year.  this will be the 3rd year of drought on the central coast ;(((

>best design for multiple time series *I've* seen is RRDTOOL, which is an 
>updated version of the classic MRTG.   this is /not/ SQL.

Hello, 

(out of the topic also)


NetCDF , SCIDB

Regards





Re: [GENERAL] design for multiple time series

2013-12-13 Thread John R Pierce

On 12/13/2013 4:46 AM, rob stone wrote:

The only fly in the ointment with this is a rain gauge. If you don't
empty it each day the actual rainfall is the difference between
readings.


(somewhat off topic)

The electronic rain gauges I've seen have all been tip-bucket. they 
measure each 0.01" (or equiv metric unit) at a time, and the rain total 
is counter based, the weather station software automatically rolls over 
daily, weekly, monthly totals, also tracks 'last 24 hours' which is a 
rolling total.


the one I have has a 'teeter totter' rocker, each side of it is a 
'bucket', when 0.01" of precip. comes down the funnel into the currently 
high side of the teeter, it flips over to the other side, dumping that 
0.01" and a magnet on the side of the teeter registers a pulse on a 
nearby coil.  now the other side fills, and it tips back


we've had 1.40" total at my house since the rainy season started on
August 1.  we should be getting that much or more weekly this time of
year.  this will be the 3rd year of drought on the central coast ;(((


best design for multiple time series *I've* seen is RRDTOOL, which is an 
updated version of the classic MRTG.   this is /not/ SQL.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Grouping, Aggregate, Min, Max

2013-12-13 Thread David Johnston
Re: custom aggregate:

I'd probably try building a two-dimensional array in the state transition
function.  Take the new value and check if it is adjacent to the last value
in the last bin of the current state.  If so, add it to that bin.  If not,
create a new bin and store it there.  Requires sorted input.

You could also just store all the values encountered and at the end group
them into bins after sorting internally.  That way you just need to get the
partition right - not the order by and sub-groups.

The final output is just a call to string_agg though you would have to
unnest the array in a custom manner since unnest() flattens
multiple-dimensional arrays.  See a recent thread for specifics.
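The flattening behaviour mentioned here is easy to demonstrate (a quick illustration, not from the original thread):

```sql
-- unnest() produces one row per element, ignoring dimensionality, so the
-- two "bins" below are not recoverable from its output alone:
SELECT unnest(ARRAY[[1,2],[3,4]]);
-- rows: 1, 2, 3, 4
```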

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Grouping-Aggregate-Min-Max-tp5783279p5783318.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] invisible dependencies on a table?

2013-12-13 Thread Adrian Klaver

On 12/12/2013 08:24 PM, Tim Uckun wrote:

I have a table foo. It has a serial column called "id".  I execute the
following statement

ALTER TABLE table_name RENAME TO archived_table_name;
CREATE TABLE table_name (LIKE archived_table_name INCLUDING DEFAULTS
  INCLUDING CONSTRAINTS INCLUDING INDEXES);

... archive the table here ...

DROP TABLE archived_table_name;

This doesn't work because the archived table name has a dependency on
the sequence created by the serial field.  So I try to remove that
dependency by doing this.

alter table "archived_table_name" alter column id drop default;
ALTER TABLE "archived_table_name" DROP CONSTRAINT systemevents_pkey;

So by now there should not be a dependency on the sequence, but I still
can't drop the table, and pgadmin tells me it's still depending on
the sequence.

When I look at the table definition it doesn't seem to have any
reference to the sequence at all.

How can I drop this table and leave the sequence alone? Obviously the
newly created table needs it.


In addition to what David said here is another option, create the 
original table with a non-dependent sequence:


test=> CREATE SEQUENCE shared_seq;
CREATE SEQUENCE

test=> create table seq_test(id integer default nextval('shared_seq'), 
fld varchar);

CREATE TABLE

test=> ALTER TABLE seq_test RENAME to archived_seq_test;
ALTER TABLE

test=> CREATE TABLE seq_test (LIKE archived_seq_test INCLUDING DEFAULTS 
  INCLUDING CONSTRAINTS INCLUDING INDEXES);

CREATE TABLE

test=> DROP TABLE archived_seq_test;
DROP TABLE


When you use the serial type it creates a dependency on the serial and 
as David pointed out you can do the same thing with ALTER SEQUENCE. 
However as shown above there is no requirement that a sequence be 
dependent.  It is at its core a 'table' that is a number generator.
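A sketch of the detach-the-sequence route David alluded to (the sequence name below assumes the default serial naming convention, table_name_id_seq):

```sql
-- Break the ownership link the serial column created; the old table can
-- then be dropped without taking the sequence with it:
ALTER SEQUENCE table_name_id_seq OWNED BY NONE;
DROP TABLE archived_table_name;
```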




Thanks.



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] While only running pg_dump, postgresql performs writes inside base/ directory. How can we stop this?

2013-12-13 Thread Spiros Ioannou
- autovacuum is not off in the pre-snapshot (production) database. It
auto-runs every 2 weeks to avoid transaction ID wraparound.
- I tried modifying pg_dump to use a more relaxed transaction isolation
level (READ UNCOMMITTED) instead of the default (serializable), but this
didn't help.
thanks for the info.




On 10 December 2013 21:13, Jeff Janes  wrote:

> On Tue, Dec 10, 2013 at 12:43 AM, Spiros Ioannou wrote:
>
>> Hi Jeff,
>> autovacuum is off on the DB running on the filesystem snapshot.
>>
>
>
> Which probably makes sense on the snapshot, but is it also off on the
> pre-snapshot database?
>
>
>> What "hint bits" do you suppose it is setting? It's running only one COPY
>> command for days. Do you have any suggestions to make it more "read only" ?
>>
>
>
> When a query sees a tuple that is still listed as part of an open
> transaction, it needs to figure out whether that transaction has now
> completed, and if so whether it committed or aborted.  This can be quite a
> bit of work to do, so once complete it sets a hint bit locally to that
> tuple, so that the next visitor doesn't have to repeat the work.  I don't
> believe that there is any way to turn this off, unless you want to run your
> own custom build of PostgreSQL.
>
> Cheers,
>
> Jeff
>
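A possible workaround, assuming the snapshot is taken from this primary: set the hint bits before snapshotting, so the read-only copy has nothing left to write. A plain VACUUM (or even a full sequential scan) of the large tables does this:

```sql
-- Run on the source database shortly before the filesystem snapshot;
-- VACUUM visits every page and sets hint bits as a side effect.
-- ("bigtable" is a placeholder for the tables being dumped.)
VACUUM VERBOSE bigtable;
```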


Re: [GENERAL] design for multiple time series

2013-12-13 Thread rob stone
On Thu, 2013-12-12 at 12:45 -0600, Seb wrote:

> I'm working on the design of a database for time series data collected
> by a variety of meteorological sensors.  Many sensors share the same
> sampling scheme, but not all.  I initially thought it would be a good
> idea to have a table identifying each parameter (variable) that the
> sensors report on:
> 
> CREATE TABLE parameters (
> parameter_id serial PRIMARY KEY,
> parameter_name character varying(200) NOT NULL,
> ...
> )
> 
> and then store the data in a table referencing it:
> 
> CREATE TABLE series (
> record_id serial PRIMARY KEY,
> parameter_id integer REFERENCES parameters,
> reading 
> ...
> )
> 
> but of course, the data type for the parameters may vary, so it's
> impossible to assign a data type to the "reading" column.  The number of
> variables measured by the sensors is quite large and may grow or
> decrease over time, and grouping them into subjects (tables) is not
> clear, so it's not simple to just assign them to different columns.
> 
> I've been trying to search for solutions in various sources, but am
> having trouble finding relevant material.  I'd appreciate any advice.
> 
> Cheers,
> 
> -- 
> Seb
> 
> 

Hello Seb,
I am not a meteorologist and don't know "how" your sensors function, so
please bear with me. I am assuming each sensor reading consists of an
identifier, a timestamp and a numeric value.
As a first cut:-
1) a table to hold type of sensor and its reading metric. E.g, degrees
celsius, kph, litres/minute, etc. Maybe also hold min and max ranges for
validation purposes. E.g. wind direction would have a range of zero to
359.
2) a table to hold geographic location of sensor with a FK pointing to
its type. You could hold its latitude and longitude. Its "identifier"
matches the identifier returned by a reading. 
3) a table to hold the readings with a FK pointing to its geographical
location, with the actual reading held in a NUMERIC(7,3) column, say?
4) a view over these tables using straightforward cartesian joins.

Use the view for analysis.
E.g., if you were going to build an airfield you know the location so
you can graph your wind rose by obtaining wind direction and velocity
each day for 'n' days.
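A rough sketch of that first cut in DDL (all names and column sizes are illustrative, not from the post):

```sql
CREATE TABLE sensor_type (
    sensor_type_id serial PRIMARY KEY,
    description    text NOT NULL,    -- e.g. 'wind direction'
    unit           text NOT NULL,    -- e.g. 'degrees'
    min_valid      numeric(7,3),     -- validation range, e.g. 0
    max_valid      numeric(7,3)      -- e.g. 359
);

CREATE TABLE sensor (
    sensor_id      serial PRIMARY KEY,
    sensor_type_id int REFERENCES sensor_type,
    latitude       numeric(9,6),
    longitude      numeric(9,6)
);

CREATE TABLE reading (
    sensor_id int REFERENCES sensor,
    taken_at  timestamptz NOT NULL,
    value     numeric(7,3) NOT NULL
);

CREATE VIEW readings_v AS
SELECT r.taken_at, r.value, t.description, t.unit, s.latitude, s.longitude
  FROM reading r
  JOIN sensor s USING (sensor_id)
  JOIN sensor_type t USING (sensor_type_id);
```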

The only fly in the ointment with this is a rain gauge. If you don't
empty it each day the actual rainfall is the difference between
readings.

HTH.

Cheers,
Robert

-- 
Et in Arcadia, ego.
Floripa -- city of Land Rovers and alligators swimming in creeks.





Re: [GENERAL] Grouping, Aggregate, Min, Max

2013-12-13 Thread Misa Simic
Thanks Rémi-C,

Well, I'm not sure the goal is to avoid aggregates...

The problem/solution below would, I guess, produce (if we imagine it uses
min and max in a row instead of count; though this case is a bit more
complicated, because it should take the real values from the source
table; for simplicity the examples start from 1, which is not always the
case...):

1 A 1-4
1 B 5-7
1 A 8-9

so the next step is to aggregate 1-4, 8-9 into one row.

I think we have used a similar approach, which with a few CTEs produces
the desired result; I just think it would be simpler with one windowed
aggregate function, i.e.

for:
c1, c2
A 1
A 2
A 3

SELECT DISTINCT c1, custom_agg_function(c2) OVER (PARTITION BY c1 ORDER BY
c2)


result
a, 1 -3

in case
c1, c2
A 1
A 2
A 3
A 5

result:

A , 1-3, 5-5

thanks,

Misa

2013/12/13 Rémi Cura 

> There is a trick to simplify the thing and avoid using aggregates :
> I think it will give you your answer.
>
>
> http://postgresql.1045698.n5.nabble.com/Count-of-records-in-a-row-td5775363i20.html
>
> Cheers,
> Rémi-C
>
>
> 2013/12/13 Misa Simic 
>
>> Hi All,
>>
>> I am not sure how to define with words what I want to accomplish (so
>> can't ask google the right question :) )
>>
>> So will try to explain with sample data and expected result:
>>
>> Scenario 1)
>>
>>   id thing_id category period_id  1 1 A 1  2 1 A 2  3 1 A 3  4 1 A 4  5 1
>> A 5  6 1 A 6  7 1 A 7  8 1 A 8  9 1 A 9  10 2 A 1  11 2 A 2  12 2 A 3  13
>> 2 A 4
>>
>>
>> Expected result:
>>
>>   thing_id category periods  1 A 1-9  2 A 1-4
>> (Sounds easy, group by, thing_id, category use Min and Max for period id
>> - but further scenarios makes it a bit complicated...)
>>
>> Scenario 2)
>>
>>   id thing_id category period_id  1 1 A 1  2 1 A 2  3 1 A 3  4 1 A 4  5 1
>> B 5  6 1 B 6  7 1 B 7  8 1 A 8  9 1 A 9  10 2 A 1  11 2 A 2  12 2 A 3  13
>> 2 A 4
>> Expected result:
>>  thing_id category periods  1 A 1-4, 8-9  1 B 5-7  2 A 1-4
>> Scenario 3)
>>
>>  id thing_id category period_id  1 1 A 1  2 1 A 2  3 1 A 3  4 1 A 7  5 1
>> A 8  6 1 A 9  7 2 A 1  8 2 A 2  9 2 A 3  10 2 A 4
>> Expected result:
>>
>>  thing_id category periods  1 A 1-3, 7-9  2 A 1-4
>>
>>
>> So goal is, to group by thing_id, category id - but if period_id is
>> interupted (not in incremented by 1) to have aggregated spans...
>>
>> To desired results we have came up using several CTE's (what makes a
>> query a bit big, and more "procedural way": make cte what calculated diff
>> between current and previous row, next cte uses previous one to define
>> groupings, next cte to make aggregates etc...)
>>
>> So I wonder - is there some kind of aggregate window function what does
>> desired results?
>>
>>
>> Many Thanks,
>>
>> Misa
>>
>>
>


Re: [GENERAL] Grouping, Aggregate, Min, Max

2013-12-13 Thread Rémi Cura
There is a trick to simplify the thing and avoid using aggregates :
I think it will give you your answer.

http://postgresql.1045698.n5.nabble.com/Count-of-records-in-a-row-td5775363i20.html

Cheers,
Rémi-C


2013/12/13 Misa Simic 

> Hi All,
>
> I am not sure how to define with words what I want to accomplish (so can't
> ask google the right question :) )
>
> So will try to explain with sample data and expected result:
>
> Scenario 1)
>
>   id thing_id category period_id  1 1 A 1  2 1 A 2  3 1 A 3  4 1 A 4  5 1
> A 5  6 1 A 6  7 1 A 7  8 1 A 8  9 1 A 9  10 2 A 1  11 2 A 2  12 2 A 3  13
> 2 A 4
>
>
> Expected result:
>
>   thing_id category periods  1 A 1-9  2 A 1-4
> (Sounds easy, group by, thing_id, category use Min and Max for period id -
> but further scenarios makes it a bit complicated...)
>
> Scenario 2)
>
>   id thing_id category period_id  1 1 A 1  2 1 A 2  3 1 A 3  4 1 A 4  5 1
> B 5  6 1 B 6  7 1 B 7  8 1 A 8  9 1 A 9  10 2 A 1  11 2 A 2  12 2 A 3  13
> 2 A 4
> Expected result:
>  thing_id category periods  1 A 1-4, 8-9  1 B 5-7  2 A 1-4
> Scenario 3)
>
>  id thing_id category period_id  1 1 A 1  2 1 A 2  3 1 A 3  4 1 A 7  5 1 A
> 8  6 1 A 9  7 2 A 1  8 2 A 2  9 2 A 3  10 2 A 4
> Expected result:
>
>  thing_id category periods  1 A 1-3, 7-9  2 A 1-4
>
>
> So goal is, to group by thing_id, category id - but if period_id is
> interupted (not in incremented by 1) to have aggregated spans...
>
> To desired results we have came up using several CTE's (what makes a query
> a bit big, and more "procedural way": make cte what calculated diff between
> current and previous row, next cte uses previous one to define groupings,
> next cte to make aggregates etc...)
>
> So I wonder - is there some kind of aggregate window function what does
> desired results?
>
>
> Many Thanks,
>
> Misa
>
>


[GENERAL] Grouping, Aggregate, Min, Max

2013-12-13 Thread Misa Simic
Hi All,

I am not sure how to define with words what I want to accomplish (so can't
ask google the right question :) )

So will try to explain with sample data and expected result:

Scenario 1)

  id  thing_id  category  period_id
   1      1        A          1
   2      1        A          2
   3      1        A          3
   4      1        A          4
   5      1        A          5
   6      1        A          6
   7      1        A          7
   8      1        A          8
   9      1        A          9
  10      2        A          1
  11      2        A          2
  12      2        A          3
  13      2        A          4

Expected result:

  thing_id  category  periods
      1        A      1-9
      2        A      1-4

(Sounds easy: group by thing_id, category and use Min and Max for period_id,
but further scenarios make it a bit more complicated...)

Scenario 2)

  id  thing_id  category  period_id
   1      1        A          1
   2      1        A          2
   3      1        A          3
   4      1        A          4
   5      1        B          5
   6      1        B          6
   7      1        B          7
   8      1        A          8
   9      1        A          9
  10      2        A          1
  11      2        A          2
  12      2        A          3
  13      2        A          4

Expected result:

  thing_id  category  periods
      1        A      1-4, 8-9
      1        B      5-7
      2        A      1-4

Scenario 3)

  id  thing_id  category  period_id
   1      1        A          1
   2      1        A          2
   3      1        A          3
   4      1        A          7
   5      1        A          8
   6      1        A          9
   7      2        A          1
   8      2        A          2
   9      2        A          3
  10      2        A          4

Expected result:

  thing_id  category  periods
      1        A      1-3, 7-9
      2        A      1-4


So the goal is to group by thing_id and category, but if period_id is
interrupted (not incremented by 1), to produce aggregated spans...

To get the desired results we have come up with several CTEs (which makes
the query a bit big and more "procedural": one cte that calculates the diff
between the current and previous row, the next cte uses the previous one to
define groupings, the next cte makes the aggregates, etc...)

So I wonder: is there some kind of aggregate window function that produces
the desired results?


Many Thanks,

Misa
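For what it's worth, the several-CTE approach described above usually collapses into one pass of the standard "gaps and islands" trick, sketched here against the sample data (table and column names as in the examples):

```sql
-- period_id minus its row number within (thing_id, category) is constant
-- inside each consecutive run, so it can serve as a grouping key:
SELECT thing_id, category,
       string_agg(CASE WHEN lo = hi THEN lo::text
                       ELSE lo || '-' || hi END, ', ' ORDER BY lo) AS periods
  FROM (SELECT thing_id, category, grp,
               min(period_id) AS lo, max(period_id) AS hi
          FROM (SELECT thing_id, category, period_id,
                       period_id - row_number() OVER
                         (PARTITION BY thing_id, category
                          ORDER BY period_id) AS grp
                  FROM thing) runs
         GROUP BY thing_id, category, grp) spans
 GROUP BY thing_id, category
 ORDER BY 1, 2;
```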


Re: [GENERAL] design for multiple time series

2013-12-13 Thread Jayadevan M
On Fri, Dec 13, 2013 at 12:15 AM, Seb  wrote:

> Hi,
>
> I'm working on the design of a database for time series data collected
> by a variety of meteorological sensors.  Many sensors share the same
> sampling scheme, but not all.  I initially thought it would be a good
> idea to have a table identifying each parameter (variable) that the
> sensors report on:
>
> CREATE TABLE parameters (
> parameter_id serial PRIMARY KEY,
> parameter_name character varying(200) NOT NULL,
> ...
> )
>
> and then store the data in a table referencing it:
>
> CREATE TABLE series (
> record_id serial PRIMARY KEY,
> parameter_id integer REFERENCES parameters,
> reading 
> ...
> )
>
> but of course, the data type for the parameters may vary, so it's
> impossible to assign a data type to the "reading" column.  The number of
> variables measured by the sensors is quite large and may grow or
> decrease over time, and grouping them into subjects (tables) is not
> clear, so it's not simple to just assign them to different columns.
>
> I've been trying to search for solutions in various sources, but am
> having trouble finding relevant material.  I'd appreciate any advice.
>
> Cheers,
>
> --
> Seb
>
>
>
If you are not keen on using PostgreSQL, you could have a look at
http://opentsdb.net/
That was one project we found interesting when we were faced with a similar
problem a couple of years ago. In the end, many other factors made us opt
for Cassandra. We started with PostgreSQL, but our requirements included,
among others, the ability to add new devices/parameters quickly. So the
persistence layer was mostly a data sink, and we planned to move
cleansed/aggregated data to PostgreSQL for analysis. Most of the master
data was also in PostgreSQL - devices, parameters, units.