Re: Oracle vs. PostgreSQL - a comment

2020-05-31 Thread Stefan Keller
Hi Paul

Paul Förster  wrote:
> Also, I like the idea of global container/cluster-wide views such as 
> CDB_TABLES, etc.,
> a thing which I definitely and seriously miss about PostgreSQL.

Can you specify a little more: what's the use case for this (assuming
you know dblink and postgres_fdw)?
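
For context, the per-database workaround I have in mind is an explicit
connection per database, e.g. this minimal dblink sketch (other_db is a
placeholder, and the dblink extension must be installed):

select schemaname, tablename
from dblink('dbname=other_db',
            'select schemaname, tablename from pg_tables')
  as t(schemaname name, tablename name);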

:Stefan

On Sun., 31 May 2020 at 17:09, Paul Förster
 wrote:
>
> Hi Tim,
>
> > On 31. May, 2020, at 15:26, Tim Cross  wrote:
> > P.S. for moving Oracle databases, we use to just use sed and change the
> > paths in the control file. Worked remarkably well. Often used this
> > technique to 'refresh' our dev or testing systems to current prod data.
>
> it works well as long as the length of path+filename does not change. I had 
> bad experiences with this technique when the length changes, because 
> controlfiles are binary files, unless you do an alter database backup 
> controlfile to trace as '...'. So, as I said, you need to recreate the 
> controlfile.
>
> But this is Oracle and not PostgreSQL... let's leave it at that here.
>
> Cheers,
> Paul
>




Re: Data entry / data editing tools (more end-user focus).

2019-05-17 Thread Stefan Keller
Dear all

What about the following "Rapid App Development Tools"?
* OpenXava (Java): https://www.openxava.org/ate/visual-studio-lightswitch
* Radzen (.NET): https://www.radzen.com/visual-studio-lightswitch-alternative/
* Other: https://aurelia.io/ (JS) or CUBA https://www.cuba-platform.com/ (Java)

:Stefan

On Thu., 28 March 2019 at 15:39, Adrian Klaver
 wrote:
>
> On 3/27/19 11:49 PM, Tony Shelver wrote:
>
> Please reply to the list also, more eyes on the problem.
> Ccing list
>
> My take on the below: since you are feeding a website, why not use Web
> technologies for your data entry? My language of choice is Python. I
> have done something similar to this (on a small scale) using the Django
> framework. For something lighter weight there is Flask. Then your client
> becomes a browser and you do not have to distribute forms around. You
> could integrate with the existing Web apps you are using, e.g. SnipCart.
>
>
> > Actually I found a possibility. LibreOffice Base on top of PG lets me
> > paste photos into a PostgreSQL bytea field no problem. MS Access should
> > work well also, but I am not going to buy it, and I am running Ubuntu
> > most of the time.
> > Possibly will distribute the Base forms to select users to enter data.
> > We are a startup company, so this is an affordable temporary fix, until
> > the product I have been looking at matures, or we can roll our own.
> >
> > We are building a company website, including an eStore, and have a few
> > hundred products to load and maintain. Our product data currently isn't
> > suitable for a sales catalog.
> > (Brands, categories, products, pricing and deal info, specials, images,
> > product comparisons and so on).
> >
> > Right now I input / maintain this via CSV files maintained through a
> > spreadsheet (LibreOffice Calc) which our site generator (Jekyll) uses
> > to build out the static HTML product pages automatically.
> > This is really quick for entering basic data, but I have to maintain
> > image uploads, image names and so on manually in the spreadsheet and
> > through individual file uploads. We have at least one, preferably 3 and
> > up to 6 photos per product to maintain. Call it 1,000 images right now,
> > and that will only go up.
> > Invalid text / characters in product descriptions and so on can break
> > the CSV as well.
> >
> > There are headless CMS solutions out on the market targeting this same
> > area, but for various reasons the suitable ones are still maturing and
> > shaking out in the marketplace, so I am not in a hurry to make a choice.
> >
> > So the goal is to replace CSV with JSON file input.  This will also make
> > my life easier for more complex structures such as multiple categories
> > and specs per product.
> > I also want to migrate text that can change from the HTML pages into the
> > database for easier changes by end users. For this the users could use
> > a WYSIWYG Markdown editor, and just cut and paste the Markdown into Base
> > forms when finished. This will be converted to HTML at build time by the
> > Jekyll static site generator or a script.
> >
> > So the proposed solution:
> > 1. Create the database in Postgresql.
> > 2. Link Base or other tool to it and design input forms where necessary
> >
> > 3. Enter the data through Base into PG including images, MarkDown / HTML
> > text, long descriptions and so on.
> > 3a. If I don't get a suitable CMS going, I could spend some time
> > developing a Vue/Quasar/Nuxt (or whatever) front end to handle this, in
> > several months' time.
> >
> > 4. Pull the data from Postgres using Python (Psycopg2 will handle
> > images). Or a node.js app once my JS skills improve.
> > 4a. Optionally use PostgREST, Postgraphile, Python Graphene or another
> > tool to create an externally accessible API, and then use a Python or
> > JavaScript module to pull the data out.
> >
> > 5. This program will then write the JSON product file to the website
> > data source directory with image tags, and upload the files to the image
> > store. It will also create product item HTML page templates or modify
> > HTML content where necessary.
> > 6. At this stage the Jekyll static site generator will detect the new
> > JSON data and any changed pages, and regenerate all changed pages, move
> > images and so on.
> >
> > 7. Git commit will push the generated site to Github, and Git will then
> > send everything to our CDN.
> >
> > There is no traditional web server / app server / db server setup as you
> > would find for most websites using, for example, Wordpress, Magento
> > commerce or other tools.  Just the CDN.
> >
> > Everything is static HTML and some javascript. Because there is no
> > backend system, database or anything else at run time, just when
> > generating the site, I am not concerned about performance except at
> > site build time, which will not happen that often.  All the SEO data
> > (w3schema / Google, OG / Facebook and Twitter cards) is automatically
> > built into the templates and fleshed out by 

Re: How to set parameters in 'options'?

2019-01-29 Thread Stefan Keller
Many thanks to Igor and Adrian for your hints.

Got it to work like this:

create function link_server(_server text, _host text, _port text, _dbname text)
returns void as $$
begin
  -- %I quotes the server name as an identifier, %L the options as literals
  execute format('create server %I
foreign data wrapper postgres_fdw
options (host %L, port %L, dbname %L)', _server, _host, _port, _dbname);
end;
$$ language plpgsql;

select link_server('other_db_server', '111.11.11.11', '5432', 'other_db');
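
To check the result, a quick sanity query against the catalog:

select srvname, srvoptions from pg_foreign_server;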

But I had actually hoped to be able to avoid "execute"...

:Stefan

On Tue., 29 Jan. 2019 at 16:21, Adrian Klaver
 wrote:
>
> On 1/29/19 6:40 AM, Stefan Keller wrote:
> > Hi,
> >
> > I'd like to write a function like this:
> >
> > create function foo(_host text, _port text, _dbname text)
> > returns void as $$
> >create server _server
> >  foreign data wrapper postgres_fdw
> >  options (host _host, port _port, dbname _dbname);
> >  -- ERROR: syntax error at or near "_host"
> > $$ language sql;
> >
> > In order to e.g. do:
> > select foo('111.11.11.11', '5432', 'mydb');
> >
> > How can I set the parameters in 'options' using those variables?
>
> If you are going to use plpgsql, then format:
>
> https://www.postgresql.org/docs/10/functions-string.html#FUNCTIONS-STRING-FORMAT
>
> https://www.postgresql.org/docs/10/plpgsql-statements.html#PLPGSQL-QUOTE-LITERAL-EXAMPLE
>
> So something like:
>
> execute format('create server _server
>  foreign data wrapper postgres_fdw
>  options (host $1, port $2, dbname $3)') USING _host, _port, _dbname
>
> Not sure where the _server is coming from so that might have to be dealt
> with.
>
> >
> > :Stefan
> >
> > P.S. Actually I'll put this code in a plpgsql function later on.
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



How to set parameters in 'options'?

2019-01-29 Thread Stefan Keller
Hi,

I'd like to write a function like this:

create function foo(_host text, _port text, _dbname text)
returns void as $$
  create server _server
foreign data wrapper postgres_fdw
options (host _host, port _port, dbname _dbname);
-- ERROR: syntax error at or near "_host"
$$ language sql;

In order to e.g. do:
select foo('111.11.11.11', '5432', 'mydb');

How can I set the parameters in 'options' using those variables?

:Stefan

P.S. Actually I'll put this code in a plpgsql function later on.



Re: PG Schema to be used as log and monitoring store

2017-12-09 Thread Stefan Keller
Hi

Thanks James and Steven!
I hoped somebody would advise me not to do this.
I was just bothered by NoSQL databases.
Even TimescaleDB made me wonder, because it says it scales Postgres [1]
for IoT, which implies that Postgres does not scale...

2017-12-09 23:01 GMT+01:00 Steven Lembark <lemb...@wrkhors.com>:
> In general this is a bad idea *unless* you have benchmarked the
> database and found that the amount of space saved really does make
> some difference.

I actually made some tests of my own (using generate_series) and did
not find any disk space or performance issues yet.
I've also found this paper from 2012, "Sensor Data Storage
Performance: SQL or NoSQL, Physical or Virtual" [2], which confirms my
observations.
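
The test sketch looked roughly like this (shape assumed, close to the
packed layout from my original post; the table is renamed m1_packed
here to avoid clashing with the relational schema below, and one random
value repeated per row is good enough for a size test):

  create table m1_packed (
    id bigint,
    created timestamp,
    b20 bit(20),
    farr20 float8[]
  );
  -- load one million rows of synthetic sensor data:
  insert into m1_packed (id, created, b20, farr20)
  select g,
         now() - g * interval '1 second',
         (floor(random() * 1048576)::int)::bit(20),
         array_fill(random(), array[20])
  from generate_series(1, 1000000) as g;
  select pg_size_pretty(pg_total_relation_size('m1_packed'));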

Now, you have to know that there are about 100 attributes per
machine/table - not only 40 - so I initially thought it would be easier
to set up the schema using bit(50) and float8[50].

Below I re-modeled it to a relational schema as you suggested and also
tried to utilize the INHERITS feature.
Does that look better?

:Stefan

[1] https://blog.timescale.com/choose-postgresql-for-iot-19688efc60ca
[2] 
https://www.ceid.upatras.gr/webpages/faculty/vasilis/Courses/SpatialTemporalDM/Papers/SQLorNoSQL2012.pdf


/*  Pure relational logging and monitoring schema */

  create table m_meta (
id int primary key,
name text
  );

  drop table if exists m cascade;
  create table m (
id bigint primary key,
gid int references m_meta not null,
created timestamp,
b1 bit,
b2 bit,
b3 bit,
    -- b4 .. b20
f1 float8,
f2 float8,
f3 float8
    -- f4 .. f20
  );

  create table m1 (
    b21 bit,
    -- b22 .. b50 bit,
    f21 float8,
    -- f22 .. f50 float8,
    -- constraints are not inherited, hence the re-declared keys:
    primary key (id),
    foreign key (gid) references m_meta
  ) inherits (m);

  --create table m2 ( ... ) inherits (m);

/* end */
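
For illustration, a hypothetical insert and query against this schema
(values made up):

  insert into m_meta values (1, 'machine 1');
  insert into m1 (id, gid, created, b1, f1, b21, f21)
  values (1, 1, now(), b'1', 3.14, b'0', 2.71);
  -- selecting from the parent m also returns the rows stored in m1:
  select id, gid, created, b1, f1 from m;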


2017-12-09 23:01 GMT+01:00 Steven Lembark <lemb...@wrkhors.com>:
> On Sat, 9 Dec 2017 20:22:02 +0100
> Stefan Keller <sfkel...@gmail.com> wrote:
>
>>   create table m1 (
>> id bigint,
>> created timestamp,
>> b20 bit(20) default b'',
>> farr20 float8[20]
>>   );
>
> In general this is a bad idea *unless* you have benchmarked the
> database and found that the amount of space saved really does make
> some difference. Using the packed format makes most SQL a lot harder
> to write and makes indexing impossible (or at least messy and rather
> error prone). This also makes adding add'l fields harder.
>
> If you were really intent on doing this I'd add a few million records
> with both formats on a database tuned to handle the load and see if
> the packed bits really do make a difference. My guess is that you
> won't see all that much difference in storage and the query speed
> with effective indexing is going to be decent.
>
> Using this database might be a lot simpler with a few views that
> break the sub-fields out, or with indexes on the sub-fields within
> the packed data.
>
>
> --
> Steven Lembark   1505 National Ave
> Workhorse Computing Rockford, IL 61103
> lemb...@wrkhors.com   +1 888 359 3508
>



PG Schema to be used as log and monitoring store

2017-12-09 Thread Stefan Keller
Hi,

Given this kind of sensor (Internet-of-Things) log and monitoring scenario:

* There are 3 production machines monitored every few seconds for the
forthcoming (~2) years.
* Machine m1 emits 20 boolean and 20 float4 values captured by sensors
(m1s1..m1s40).
* Machine m2 has the same attributes as m1 plus 10+10 more (m2s1..m2s20).
* Machine m3: like m2, but half of the attributes are different.
* Queries happen once a day, like:
  SELECT m1s1, m1s2 FROM m1 WHERE logged BETWEEN '2017-11-01' AND '2017-11-30'.

So this is a kind of "immutable DB" where there are
* a rather static schema, with sources that have overlapping attributes,
* heavy writes,
* periodic reads.

Would you model this schema like my proposition below, which saves
space but makes inserts/updates a little more complex due to the
arrays?

  create table m1 (
id bigint,
created timestamp,
b20 bit(20) default b'',
farr20 float8[20]
  );
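
For reference, reading single sensors back out of the packed columns
would look something like this (a sketch; the sensor-to-position
mapping is assumed):

  -- boolean sensor 5 and float sensor 3 (i.e. m1s23) for November 2017:
  select substring(b20 from 5 for 1) as m1s5,
         farr20[3] as m1s23
  from m1
  where created between '2017-11-01' and '2017-11-30';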

:Stefan