Re: Large scale reliable software system

2023-06-27 Thread Tony Shelver
On Tue, 27 Jun 2023 at 07:08, Guyren Howe  wrote:

> Correct. It’s a tragically wrong piece of folk wisdom that’s pretty
> general across web development communities.
>
> On Jun 26, 2023, at 21:32, Michael Nolan  wrote:
>
> It's not just Ruby, dumb databases are preferred in projects like
> WordPress, Drupal and Joomla, too.
>
> Now, if it's because they're used to using MySQL, well maybe that's
> not so hard to understand.  :-)
>
> On Mon, Jun 26, 2023 at 8:05 PM Guyren Howe  wrote:
>
>
> This is a reasonable answer, but I want to offer a caveat.
>
> Likely because of the influence of the originator of Ruby on Rails, it is
> close to holy writ in the web development community that the database must
> be treated as a dumb data bucket and all business logic must be implemented
> in the Ruby or Python or whatever back end code.
>
> This heuristic is nearly always mostly wrong.
>
> Guyren G Howe
> On Jun 26, 2023 at 17:48 -0700, Adrian Klaver ,
> wrote:
>
> On 6/26/23 16:48, B M wrote:
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
>
 The accepted front-end developer wisdom of treating the DB as a dumb data
store works only under certain conditions: for example, that the DB will never
be accessed from a different ORM / framework, and that the performance
characteristics of using an ORM with 'standard' data structures are acceptable.

The moment you need to plug in something like reporting tools, or access
from a different system / API / framework / language / ORM or whatever, the
approach of not having rules / views / procedures / whatever built into the
database falls apart.

Other things to consider are performance / load / overhead: we have one
system that involves processing through large amounts of data for reports /
queries.  Shipping all that data back through the ORM / DB interface (ODBC /
JDBC / psycopg2 / whatever) for resolution / filtering in the front-end
application, where SQL / procedures / views could do the work in the DB and
ship back only the required data, seems counterproductive.
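As a minimal sketch of the idea (table and column names are illustrative,
not from any real system):

CREATE OR REPLACE VIEW vehicle_daily_distance AS
SELECT vehicle_id,
       date_trunc('day', recorded_at) AS day,
       sum(distance_m)                AS total_distance_m
FROM   vehicle_positions
GROUP  BY vehicle_id, date_trunc('day', recorded_at);

-- any client (ORM, reporting tool, psql) now ships back a few summary
-- rows instead of the raw position stream:
SELECT * FROM vehicle_daily_distance WHERE vehicle_id = 42;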

Tony Shelver

>


Re: Best Open Source OS for Postgresql

2023-01-31 Thread Tony Shelver
Copied to the list
On Wed, 1 Feb 2023 at 08:18, Tony Shelver  wrote:

>
>
> On Wed, 1 Feb 2023 at 08:04, Tony Shelver  wrote:
>
>>
>> On Tue, 31 Jan 2023 at 15:10, Marc Millas  wrote:
>>
>>> Sorry for inappropriate "reply".
>>>
>>> if you do check the debian postgis repo, you'll find that it's NOT
>>> possible to choose a postgis version.
>>> It's possible for postgis 2.4 and 2.5, but then ALL 3.x versions are
>>> inaccessible except one, which changes from time to time.
>>> (you MUST ask for postgis 3 without being able to ask for 3.0 or 3.1
>>> or...  it's like asking for postgres 9 without .5 or .6)
>>> 2 of my customers reverted to a RH-family Linux because they were
>>> unable to install the requested postgres/postgis version on debian.
>>> When I asked the team, the reply was: we cannot package for all cross
>>> possibilities (i.e. 5 postgres x 6 postgis, less some impossibilities
>>> according to the postgis support matrix)
>>>
>>>
> Maybe I am not understanding this, but have you checked the UbuntuGIS
> source?  I know not pure Debian, but...
>
> Sent before complete:
>
> UbuntuGIS stable shows PostGIS 2.4 and 2.5 available, as well as 3.1 and
> 3.2  :
> https://launchpad.net/%7Eubuntugis/+archive/ubuntu/ppa/+index?batch=75=75=75
> Got this link from https://wiki.ubuntu.com/UbuntuGIS  via
> https://postgis.net/install/
> Also note that UbuntuGIS is based off the DebianGIS project:
> https://wiki.debian.org/DebianGis
>
> We run both Ubuntu and CentOS servers.  The change to CentOS licensing has
> led us to support it only for one specific application that is embedded in
> CentOS and that we are trying to end-of-life ASAP.  At least Ubuntu server
> has a 5-year support window, and Ubuntu has now announced a limited 'free'
> Pro option for smaller businesses that provides a 10-year window.
>
>
> Regards
>
>
>>


Re: Is there a guide to use PostgresSQL as alternative to MariaDB/MySQL in OpenStack?

2022-11-05 Thread Tony Shelver
On Sat, 5 Nov 2022 at 01:15, Larry Sevilla 
wrote:

> Hi,
>
> I'm studying OpenStack Zed using Ubuntu 22.04 with the following guides:
>
> https://docs.openstack.org/install-guide/
>
> And I have successfully installed OpenStack services and tested Horizon.
>
>
> But the guide uses MariaDB/mysql.
>
> https://docs.openstack.org/install-guide/environment-sql-database-ubuntu.html
>
>
> Is there a guide for PostgreSQL?
>
>
Look at
https://docs.openstack.org/security-guide/databases/database-backend-considerations.html.
No experience with OpenStack myself, though.


Re: Fwd: Modelling a web CMS in Postgres ... a little advice needed

2022-08-11 Thread Tony Shelver
On Thu, 11 Aug 2022 at 12:00, Laura Smith <
n5d9xq3ti233xiyif...@protonmail.ch> wrote:

> Hi Tony
>
> The reason I'm looking to do it from scratch is that its a case of "once
> bitten, twice shy".
>
> This CMS will be replacing a Joomla based CMS.
>
> I can't quite say I'm enamoured by the option of trading one "baggage
> included" opinionated CMS for another.  Also in this day and age, removing
> the excess baggage means an improved security footprint.
>
>
>
Laura, fair enough.

At the time I used a static site generator (SSG), Jekyll, to generate our
website.
I could have used Joomla or WordPress as the CMS, but as you mentioned
these are terrible solutions for this, as they are not designed as pure CMS
engines, especially for use with a static site generator.

I went with my own rolled CMS as well.

One part of my requirement was for a product sales database, which ended up
being a lot more complex than your design, as I needed categories, product
groups, products, prices by area and more.  I did this in Postgresql, and
pulled the data via a GraphQL API implemented via Postgraphile, which
generates a GraphQL API from a Postgres schema automagically.

The other part was for the static web pages.  What I came up with was to
store pages and then a set of tags where I could assign content to each tag
for the page.
For example, you talked about Page Headers and PageMainLeft.  I stored
these tagged with the name of the page, and the tag name, plus the content.

The SSG then saw the tags in the HTML page template, and pulled in the
content via Postgraphile.

At the time I looked at several 'headless' CMS engines, and Strapi came
close to meeting requirements, but was still in early development stages.

Today, I would look at a different route, likely leveraging something like
Strapi.  I *may* still consider using native Postgres tables to manage the
product database, but there are a lot of other proven CMS options on the
market now without the baggage of Joomla, Wordpress et al.
Node.js I regard as a necessary evil if you are developing relatively
complex websites, integrating multiple APIs and other resources.

Very much depends on your environment though.
For example, one huge advantage of the headless CMS options is that the
whole user management and access is built in if you have a large user base
maintaining different parts of the website content.  Another is things like
formal APIs to get content, and access management / security of the content
itself.

There isn't really a one-solution-fits-all-requirements option when it
comes to content management, unfortunately.

Regards


Fwd: Modelling a web CMS in Postgres ... a little advice needed

2022-08-11 Thread Tony Shelver
From: Tony Shelver 
Date: Thu, 11 Aug 2022 at 11:47
Subject: Re: Modelling a web CMS in Postgres ... a little advice needed
To: Laura Smith 



On Thu, 11 Aug 2022 at 09:35, Laura Smith <
n5d9xq3ti233xiyif...@protonmail.ch> wrote:

> Hi
>
> I'm looking at using pgsql as a backend to a web CMS but could do with a
> little advice from the crowd on the wiseness of my schema thinking.
>
> TL;DR the design is centered around two tables "pages" and "page_content",
> where "pages" has a jsonb column that refers to "page_content" in a
> key-value style (key for content block location ID on the web page, value
> for the database lookup).  Probably both, but certainly "page_content"
> would need to be versioned.
>
> My present thinking is along the following lines (table columns minimised
> for this post):
>
> create table pages (
>   page_id uuid primary key not null,
>   page_metadata jsonb not null
> );
>
> -- btree_gist is needed for the uuid equality in the exclusion constraint
> create extension if not exists btree_gist;
>
> create table page_content(
>   content_id uuid not null,
>   content_version_id uuid not null,
>   content_valid tstzrange not null default tstzrange(now(),'infinity'),
>   content_data text,
>   EXCLUDE USING gist (content_id WITH =, content_valid WITH && ) DEFERRABLE
>   INITIALLY DEFERRED
> );
> -- "if not exists" requires an index name, so one is supplied here
> create unique index if not exists page_content_version_idx
>   on page_content(content_version_id);
> CREATE OR REPLACE VIEW current_content AS select * from page_content where
> content_valid @> now();
>
>
> An example "page_metadata" entry might look something like :
> {
> "page":"foo",
> "description":"bar",
> "content":[
> "pageHeader":"E52DD77C-F3B5-40D9-8E65-B95F54E1C76B",
> "pageMainLeft":"0BEFA002-7F9B-4A6A-AD33-CA916751B648"
> ]
> }
>
>
> So I guess my questions here are :
> Am i nuts with this thinking ? Is there a saner way to do this ? Should I
> be using pgsql at all for this, e.g. the cool kids will probably say I
> should be using a graph database ?  (N.B. I did consider a pure old-school
> relational model with no jsonb, but I considered it too difficult to model
> the dynamic nature of the fields, i.e. unknown many-many relationship
> between page content locations and content ... but I'm willing to be proven
> wrong by wiser minds)
>
> Then, on a more technical level  what would an optimal query for
> looping through the json content array look like ?  I have always been
> pretty useless when it comes to CTE expressions !
>
> Thanks all
>
>
Off the top of my head, it seems like you are reinventing the wheel.

There are some very good CMS systems out there, such as Strapi, which is an
open source, headless CMS, and which can use Postgresql as the underlying
DB.

It could be worth a while to install this and see how it generates the
underlying pages. As a benefit, it also generates the UI to maintain the
underlying data, and also provides a set of APIs that you can call to
access the data from many different front-ends.
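As for the query Laura asked about: a minimal sketch, assuming the "content"
key is stored as a jsonb object (as in the corrected metadata example above)
and reusing the current_content view from her schema:

SELECT p.page_id,
       c.key  AS block_location,
       pc.content_data
FROM   pages p
CROSS  JOIN LATERAL jsonb_each_text(p.page_metadata -> 'content') AS c(key, value)
JOIN   current_content pc ON pc.content_id = c.value::uuid
WHERE  p.page_metadata ->> 'page' = 'foo';

No CTE is needed; jsonb_each_text unnests the location / content-id pairs so
they can be joined like ordinary rows.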

Tony


Re: simple reporting tools for postgres in aws

2020-12-07 Thread Tony Shelver
LibreOffice Base?

I have used LibreOffice Base to spin up a quick data entry front end to
Postgresql.  It has some basic reporting functionality as well, and can
integrate with other LibreOffice products.

Poor man's version of MS Access.

As for other tools, https://querytreeapp.com seems to have some of the
features you are looking for, but I have never used it.

Postgres lists a variety of data reporting products :
https://www.postgresql.org/download/products/5-reporting-tools/



On Mon, 7 Dec 2020 at 17:53, Chris Stephens  wrote:

> Hello,
>
> We have a need to display some very simple reports generated from views in
> an rds postgres database. We need little more functionality beyond simple
> filtering, column hiding, sorting, grouping. Anything much beyond that
> would be a strike against the tool.
>
> i'm looking for something similar to oracle application
> express's interactive reports (i come from an oracle background).
>
> i took a look at what's available from amazon's aws marketplace but
> quickly got overwhelmed. every application i saw was overkill.
>
> users will not be tech savvy. we have a tentative fallback plan to use
> excel with pages linked to views in the database but i'm looking for
> something a little more targeted.
>
> does anyone have any suggestions that fit the description above?
>
> thanks for any input,
> chris
>


Re: New "function tables" in V13 documentation

2020-11-09 Thread Tony Shelver
On Mon, 9 Nov 2020 at 02:54, Adrian Klaver 
wrote:

> On 11/8/20 1:57 PM, Thomas Kellerer wrote:
> > In case someone is interested: there is a little discussion going on on
> > Reddit whether the new format of presenting functions in V13 is a step
> > backwards:
> >
> >
> >
> https://www.reddit.com/r/PostgreSQL/comments/jpi0rp/does_anyone_else_feel_like_the_v13_docs_are_a/
>
> Yeah, I would agree with the mobile first design comments. Then again
> that plague is hitting most sites these days. My 2 cents is it is a step
> backwards. You can cover more ground quickly and digest it faster in the
> old format.
>
> >
> >
> > Thomas
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
Agreed, old format much more readable.


Re: JSONB order?

2020-11-05 Thread Tony Shelver
On Thu, 5 Nov 2020 at 18:27, Rob Sargent  wrote:

>
>
> On Nov 5, 2020, at 8:45 AM, Tony Shelver  wrote:
>
>
>
> -- Forwarded message -----
> From: Tony Shelver 
> Date: Thu, 5 Nov 2020 at 17:45
> Subject: Re: JSONB order?
> To: Christophe Pettus 
>
>
> Thanks Christophe, that's what I thought.
> Just seemed weird that they were 'disordered' in exactly the same way
> every time.
>
> FYI, as of Python 3.7, dicts *are* ordered.
>
> The problem is that we are possibly going to have many versions of these
> forms with slightly differing keys, which will be a pain to order in some
> hard coded way.
>
>
> Sounds like you’ll need a separate mechanism for maintaining versions of
> the forms and which headers represent the same data concept.  Always access
> data via canonical header translated to current form.
>

Did a workaround.  For what I needed, I used the Python dict-to-JSON
function (json.dumps), which creates a string, and then stored that string
in a varchar column on the DB, leaving the jsonb column in place for other
use.

For reporting, I pulled the varchar back and used the JSON-to-dict function
(json.loads) in Python.  Minimal code changes required.

The output is formatted into an Excel spreadsheet, writing the column
header / title from the dict keys, and then formatting the values
underneath.


Not exactly 3rd normal form and all the other best practices, but this is a
hard prototype we have some customers for, so ease of being able to iterate
multiple forms and changes is key.


Fwd: JSONB order?

2020-11-05 Thread Tony Shelver
-- Forwarded message -
From: Tony Shelver 
Date: Thu, 5 Nov 2020 at 17:45
Subject: Re: JSONB order?
To: Christophe Pettus 


Thanks Christophe, that's what I thought.
Just seemed weird that they were 'disordered' in exactly the same way every
time.

FYI, as of Python 3.7, dicts *are* ordered.

The problem is that we are possibly going to have many versions of these
forms with slightly differing keys, which will be a pain to order in some
hard coded way.
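That consistent 'disorder' is deterministic: jsonb normalizes objects,
sorting keys by length and then byte order (which is why short keys like
"Cough" and "Phone" float to the top), while plain json preserves the input
text as-is:

SELECT '{"b": 1, "a": 2, "long key": 3}'::json;
-- {"b": 1, "a": 2, "long key": 3}
SELECT '{"b": 1, "a": 2, "long key": 3}'::jsonb;
-- {"a": 2, "b": 1, "long key": 3}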



On Thu, 5 Nov 2020 at 17:40, Christophe Pettus  wrote:

>
>
> > On Nov 5, 2020, at 07:34, Tony Shelver  wrote:
> > But...  seen above, the order gets mixed up.
> >
> > Any ideas?
>
> JSON objects, like Python dicts, are not automatically ordered by key.
> Once you move from the column space to the JSON object space, you can't
> rely on the object keys being in a consistent order.
>
> You'll want to have a step when ingesting the JSON object into a report
> that lines up the key values appropriately with the right presentation in
> the report.
> --
> -- Christophe Pettus
>x...@thebuild.com
>
>


JSONB order?

2020-11-05 Thread Tony Shelver
I am getting data out of a spreadsheet (Google API) and loading it into a
Python 3.8 dict.
I then dump it to json format. On printing, it's in the correct order:
{
"Timestamp": "05/11/2020 17:08:08",
"Site Name": "SureSecurity Calgary",
"Last Name": "Shelver",
"First Name": "Anthony",
"Middle Name(s)": "",
"Phone": 555757007,
 "Person visited": "test",
 "Body Temperature": 44,
 "Fever or chills": "No",
 "Difficulty breathing or shortness of breath": "No",
 "Cough": "No",
 "Sore throat, trouble swallowing": "No",
 "Runny nose/stuffy nose or nasal congestion": "No",
 "Decrease or loss of smell or taste": "No",
 "Nausea, vomiting, diarrhea, abdominal pain": "No",
 "Not feeling well, extreme tiredness, sore muscles":
 "Yes", "Have you travelled outside of Canada in the past 14 days?": "No",
 "Have you had close contact with a confirmed or probable case of
COVID-19?": "No"
 }

It's passed to a plpgsql function, using a jsonb parameter variable.
This inserts it into the table, into a jsonb column.
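A minimal sketch of such a function (table / parameter names are
illustrative):

CREATE OR REPLACE FUNCTION insert_form(p_form jsonb) RETURNS void AS $$
BEGIN
    INSERT INTO submissions (form_data) VALUES (p_form);
END;
$$ LANGUAGE plpgsql;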

When looking at what the column contents are, it's been rearranged.  The
order always seems to have been rearranged in the same way, as below:
{
"Cough": "No",
"Phone": 757007,
"Last Name": "Shelver",
"Site Name": "SureSecurity Calgary",
"Timestamp": "04/11/2020 17:34:48",
"First Name": "Anthony",
"Middle Name(s)": "",
"Person visited": "Many",
"Fever or chills": "No",
"Body Temperature": 44,
"Sore throat, trouble swallowing": "No",
"Decrease or loss of smell or taste": "No",
"Nausea, vomiting, diarrhea, abdominal pain": "No",
"Runny nose/stuffy nose or nasal congestion": "No",
"Difficulty breathing or shortness of breath": "No",
"Not feeling well, extreme tiredness, sore muscles": "No",
"Have you travelled outside of Canada in the past 14 days?": "No",
"Have you had close contact with a confirmed or probable case of
COVID-19?": "No"
}

If the order had remained the same, it would be child's play to pull the
data out and present it in a report, even if the data elements change.
But, as seen above, the order gets mixed up.

Any ideas?

Thanks

Tony Shelver


Re: What's your experience with using Postgres in IoT-contexts?

2020-10-09 Thread Tony Shelver
Everything is sent to the cloud from the vehicle.  The trackers we
currently run don't have the power to do much.
There are a few vehicle devices we have access to (hub systems with a
central computer / CPU / storage) which could handle a database, but in
terms of our business requirements, everything is needed at the cloud /
server level for analysis.


On Wed, 7 Oct 2020 at 08:28, Thorsten Schöning 
wrote:

> Hello Tony Shelver,
> on Tuesday, 6 October 2020 at 16:33, you wrote:
>
> > Each vehicle reports position to the database at least every 30 seconds
> > when moving, along with any other data as it happens.  So it's quite a
> bit.
>
> > H2 seemed to handle it fine for a limited number of vehicles, but we
> moved
> > it to PG once we moved to production.
>
> But where does Postgres run in your setup? :-) On the tracked vehicle
> itself with limited resources most likely or do you simply send things
> over network to some cloud/server/...? Your first paragraph reads like
> the former, but the second like the latter. :-)
>
> If it's on the vehicle, I would be interested to somewhat know which
> hardware you use, to compare what I have in mind. Thanks!
>
> Kind regards,
>
> Thorsten Schöning
>
> --
> Thorsten Schöning   E-Mail: thorsten.schoen...@am-soft.de
> AM-SoFT IT-Systeme  http://www.AM-SoFT.de/
>
> Telefon...05151-  9468- 55
> Fax...05151-  9468- 88
> Mobil..0178-8 9468- 04
>
> AM-SoFT GmbH IT-Systeme, Brandenburger Str. 7c, 31789 Hameln
> AG Hannover HRB 207 694 - Geschäftsführer: Andreas Muchow
>
>
>
>


Re: What's your experience with using Postgres in IoT-contexts?

2020-10-06 Thread Tony Shelver
On Mon, 5 Oct 2020 at 16:34, Thorsten Schöning 
wrote:

> Hello Tony Shelver,
> on Monday, 5 October 2020 at 15:44, you wrote:
>
> > Not sure about PG in that environment. Have you thought about something
> > like H2 java database?  https://www.h2database.com/html/main.html
>
> Yes, like SQLite as well. The point is that I was really interested in
> keeping as much as possible and we e.g. use some Postgres-specific
> constructs currently. Of course things can be changed, but the
> question is if it's necessary at all, especially after reading the
> PDF:
>
>
> https://www.postgresql.eu/events/pgconfeu2019/sessions/session/2781/slides/251/PostgreSQL%20for%20IoT.pdf
>
> > That is included as the standard DB in a vehicle tracking system we use,
> > although we have replaced it with PG.
>
> And your tracking system is more like the server that I already have
> or an embedded system within the vehicles themself?
>
> Kind regards,
>
> Thorsten Schöning
>
> --
> Thorsten Schöning   E-Mail: thorsten.schoen...@am-soft.de
> AM-SoFT IT-Systeme  http://www.AM-SoFT.de/
>
> Telefon...05151-  9468- 55
> Fax...05151-  9468- 88
> Mobil..0178-8 9468- 04
>
> AM-SoFT GmbH IT-Systeme, Brandenburger Str. 7c, 31789 Hameln
> AG Hannover HRB 207 694 - Geschäftsführer: Andreas Muchow
>
>
> .


My tracking system stores, analyzes and reports on multiple events
occurring in the vehicle.  This can include speed, change in GPS location
(tracking), vehicle data / events such as RPM, engine temperature, change
in direction, data from the OBDII system (vehicle diagnostics from engine,
transmission, cooling and electrical systems, chassis and so on), data from
the tracker's built-in accelerometer, connect / disconnect, ignition on /
off and so on.

This data is used to trigger operator / admin messages and warnings, report
historical vehicle routes on map, follow current vehicle position and
route, geofencing, and more.

Each vehicle reports position to the database at least every 30 seconds
when moving, along with any other data as it happens.  So it's quite a bit.

H2 seemed to handle it fine for a limited number of vehicles, but we moved
it to PG once we moved to production.


Re: What's your experience with using Postgres in IoT-contexts?

2020-10-05 Thread Tony Shelver
Not sure about PG in that environment. Have you thought about something
like H2 java database?  https://www.h2database.com/html/main.html

That is included as the standard DB in a vehicle tracking system we use,
although we have replaced it with PG.

On Mon, 5 Oct 2020 at 11:20, Thorsten Schöning 
wrote:

> Hi all,
>
> TL;DR:
>
> Does anyone actually use Postgres with ARM based low performance
> hardware and only 256 MiB of RAM? What are your experiences in other
> stripped down environments? Is there some lower RAM limit with which
> using Postgres doesn't make any sense anymore? Is Postgres able to
> compete with SQLite regaridng lower performance boundaries or is there
> some point at which SQLite will be the better choice, because Postgres
> needs some more resources to work properly?
>
> Thanks!
>
> Background:
>
> I have some server side backend exposing web services and storing data
> in Postgres. The stored data are telegrams from smart metering
> devices, lots of them, mostly small. The provided web services take
> care of generating reports etc. based on those telegrams.
>
> It's now considered to put large parts of that backend onto some ARM
> based IoT device, simply because that device would do a lot of the
> same things. The available backend is Java-based, so it would simply
> be possible as well and I already ran things on some Raspi-like
> hardware and things worked pretty well. OTOH, the targetted hardware
> is less capable than some Raspi, we talk about only 256 MiB of RAM
> currently. The exact details of the hardware are not clear yet.
>
> In the easiest case, the newly created device would only need to store
> some hundreds of telegrams per day in some known time window and
> forward those telegrams afterwards. In most cases it would be those of
> the same day only, but might be those of the last X few days as well
> in case of problems. In the long term, though, that device should be
> able to store telegrams for years and might generate reports on it's
> own as well.
>
> I've already found some interesting info like the following, but
> would be interested in other experiences as well.
>
>
> https://www.postgresql.eu/events/pgconfeu2019/sessions/session/2781/slides/251/PostgreSQL%20for%20IoT.pdf
>
> Kind regards,
>
> Thorsten Schöning
>
> --
> Thorsten Schöning   E-Mail: thorsten.schoen...@am-soft.de
> AM-SoFT IT-Systeme  http://www.AM-SoFT.de/
>
> Telefon...05151-  9468- 55
> Fax...05151-  9468- 88
> Mobil..0178-8 9468- 04
>
> AM-SoFT GmbH IT-Systeme, Brandenburger Str. 7c, 31789 Hameln
> AG Hannover HRB 207 694 - Geschäftsführer: Andreas Muchow
>
>
>
>


Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-09-24 Thread Tony Shelver
>
>
> On 9/23/20 11:51 AM, tutilu...@tutanota.com wrote:
>
> Huh? A schema is just a name space, why does it matter how the
> extension chooses to define it? I mean you could have number of
> permutations of postgis.
>
> I'm saying that PostGIS has a bug due to incorrectly constructed
> internal queries which makes it impossible to properly name the schema
> where PostGIS is to reside, causing my database to look very ugly when
> it has to say "postgis" instead of "PostGIS" for PostGIS's schema. And
> that was an example of how sloppy/bad third-party things always are, and
> is one reason why I don't like it when I have to rely on "extensions".


Um...  I have PostGIS installed in my core app schema, in part because at
the time I didn't know what I was doing.  Better to have put it in public...

You may also want to look at comparable ANSI (standards based) database
products (Oracle for example) when it comes to the use of case in naming
conventions.  Different products handle things in different ways.

You may want to google around the issue, for example
https://postgis.net/2017/11/07/tip-move-postgis-schema/ for moving schemas.

You may want to do some research on where PostGIS comes from.  Being so
domain specific, it could never have been developed as a core part of
Postgres.
The fact that the PostgreSQL extension system is so flexible and robust was
probably a key factor in its being chosen as the platform for PostGIS.

Effectively, you may lose a bit, but you gain a whole lot more.  My
suggestion is 'live with it'.

Or, move to a product that suits your use cases / desires better.  But good
luck finding another open source "free" (or any) product with the
functionality, robustness and performance of PostGIS / Postgresql.
We tried the MS SQL Server equivalent; the install and use there is way
nastier than PostGIS.  Both the usability and the performance sucked in
comparison for our use cases.
MySQL's equivalent is nowhere near as functional, robust, as well
documented or as widely used.

Oracle's equivalent is probably the closest.  Pity that installing Oracle
and their products as a whole is a nightmare, and rather wallet-draining...

Have a look at GIS / mapping projects around the world; a majority are
implemented on PostGIS.  OpenStreetMap is probably the biggest (think open
source version of Google Maps), and it moved to PostGIS from MySQL several
years ago.
We did a lot of research into PostGIS, as GIS / tracking is a core part of
our business.
We didn't find a product that could compare on

   - Maturity
   - Functionality
   - Performance
   - Cost
   - Documentation
   - Support (huge community)


Re: PostgreSQL on Windows' state

2020-09-23 Thread Tony Shelver
A quick Google search came up with this: https://www.npgsql.org.  No
experience with it myself, however.
We moved to Python just before we moved to Postgresql.  The key advocate
for C# at the time is now more of a React / JS proponent.

Everything in our stack, apart from one proprietary Java / Oracle system
which is being slowly replaced, is now open source, and just about all core
components have well outlasted several Microsoft initiatives (Silverlight,
VB, et al.).



On Thu, 24 Sep 2020 at 01:18, raf  wrote:

> On Wed, Sep 23, 2020 at 12:52:58PM +0200, Alessandro Dentella <
> sandro.dente...@gmail.com> wrote:
>
> > On Wed, Sep 23, 2020 at 12:21:39PM +0200, Pavel Stehule wrote:
> > > st 23. 9. 2020 v 10:38 odesílatel Alessandro Dentella <
> > > sandro.dente...@gmail.com> napsal:
> > >
> > > >
> > > > Hi,
> > > >
> > > > disclaimer: I've never been a Windows user and I send this email
> just on
> > > > behalf
> > > > of a friend that has problems convincing his team to use PostgreSQL.
> > > >
> > > > I'd like to understand if what the team replies to him is correct or
> not.
> > > >
> > > > The final project should work with Python (we know it just works)
> and with
> > > > c#.
> > > > The team states that:
> > > >
> > > >   "It does not exist a native driver. You need to use ODBC, that is
> an old
> > > >   technology and doesn't give warranties in terms of security,
> reliability
> > > > and
> > > >   performance."
> > > >
> > > > Is it any true? is it biased?
> > > >
> > >
> > >
> https://kb.objectrocket.com/postgresql/how-to-install-psycopg2-in-windows-1460
> > >
> > > So minimally psycopg2 - native driver for Python exists for Win
> >
> > Thanks Pavel, but psycopg2 (that I always use is just for Python). T
> > hey claim/complain that from c# there's no native solution.
> >
> > Personally I don't know even if ODBC is really to be considered a second
> class
> > choice. It sounds awkward to me that such a great db has flaws like this
> (I'm
> > NOT stating it *has* flaws)
> >
> > sandro
>
> Surely, it's a flaw in the C# ecosystem, not a flaw in the database?
> (assuming it's true, and assuming there's anything wrong with ODBC). :-)
>
> cheers,
> raf
>
>
>
>


Fwd: "Go" (lang) standard driver

2020-08-18 Thread Tony Shelver
-- Forwarded message -
From: Tony Shelver 
Date: Tue, 18 Aug 2020 at 09:33
Subject: Re: "Go" (lang) standard driver
To: Edson Richter 


A quick Google search found https://github.com/lib/pq.  It has 6.1K stars,
so I would guess it is fairly well used.

On Tue, 18 Aug 2020 at 05:56, Edson Richter 
wrote:

> Is there any standard PostgreSQL driver for Go language?
> Do members of this group could share experiences with Go + PostgreSQL
> development?
>
> Thanks,
>
> E.R.
>


Re: Querying PostgreSQL / PostGIS Databases in Python

2020-08-04 Thread Tony Shelver
We use Postgresql, python and PostGIS.
Nearly all our data access is via Postgres functions, both DML and
queries.  Our system is a vehicle GPS tracking and business
function analysis system, with quite a lot of data.

Most of our reports require accessing hundreds to hundreds of thousands of
records for each vehicle, and all geo transforms (coordinates, areas /
polygons, routes / lines and so on) are done in PostGIS; just the
reporting result set is returned to Python for formatting and some final
calculations.

Our queries run an order of magnitude faster than those over the same data
in a tracking system written in Java on top of Postgres, where the queries
are done via the Java ORM system and the GIS processing is done in Java.

This is even more true where the python client is situated on physically
separate servers to the database.

On Tue, 4 Aug 2020 at 14:09, Tony Shelver  wrote:

> We are using python on top of Postgresql / PostGIS, for a vehicle
> tracking system.
> THis is quite data intensive, and we have some 'interesting' GIS queries
> where we see where a vehicle or fleet has stopped within specific areas,
> where it has traveled, any incidents along the way and much more.
>
> Postgresql functions are used almost exclusively for DML and queries, as
> running individual SQL statements that will return all the records required
> to process a complex report was just too slow.  For even simple stuff, we
> are an order of magnitude faster than a similar system using the same data
> written in Java against Postgresql, but using the Java ORM for queries.
>
> All geographic functions are processed in PostGIS, other than the odd
> reverse geocoding call which is performed against Google Maps or similar
>
> YMMV.
>
> On Fri, 31 Jul 2020 at 12:50, Allan Kamau  wrote:
>
>> You may write stored procedures using PL/pgSQL,alternatively you may
>> write your queries in python.
>> You may use psycopg2 to query the DB from Python.
>> You may have a mix of the two, it will depend on your preference.
>> Ideally you may not want your users running queries against the data by
>> connecting to the database directly using database tools psql or pgadmin3
>> or pgadmin4.
>> This means that having a database access application written in Python to
>> restrict the and encapsulate data access may be advisable.
>> In this case you may place all the DML statements in python and execute
>> them or you may have much of the data access logic written into several
>> PL/pgSQL functions, then call these functions via Python.
>>
>>
>> Below is python code illustrating the use of psycopg2. This code has not
>> been run so expect some errors.
>> Here I am executing an SQL query on a table, you may modify this code to
>> execute a PL/pgSQL function.
>>
>>
>> import psycopg2;
>> from psycopg2 import sql;
>> import psycopg2.extras;
>> from psycopg2.extensions import AsIs;
>>
>>
>> db__pg_conn__str="host='host_name_of_computer_hosting_pg_db_service'
>> port=5432 dbname='your_pg_db_name' user='your_username'
>> password='user_password'";
>> db__pg_conn=psycopg2.connect(db__pg_conn__str);
>>
>> query_table(
>> dataset_name
>> ,some_value_2
>> ,db__pg_conn
>> );
>>
>>
>> def query_table(
>> dataset_name
>> ,some_value_2
>> ,db__pg_conn
>> ):
>> """
>> """;
>> table__id=-1;
>> _sql_query1a="""
>> SELECT {}::TEXT AS some_string,a.id AS
>> table__id,a.*,clock_timestamp() FROM {}.{} a WHERE a.dataset_name={}::TEXT
>> ;
>> """;
>> sqlSQL1a=None;
>> sqlSQL1a=sql.SQL(_sql_query1a);
>>
>> pg_cursor1a=db__pg_conn.cursor(cursor_factory=psycopg2.extras.DictCursor);
>> _sql_query1a_processed=pg_cursor1a.mogrify(
>> sqlSQL1a.format(
>>
>> sql.SQL(',').join(map(psycopg2.sql.Placeholder,['some_value_1']))
>>
>> ,psycopg2.sql.Identifier("my_schema.my_table".split(".")[0])
>>
>> ,psycopg2.sql.Identifier("my_schema.my_table".split(".")[1])
>>
>> ,sql.SQL(',').join(map(psycopg2.sql.Placeholder,['some_value_2']))
>> )
>> ,{
>> 'some_value_1':'ABC'
>> ,'some_value_2':dataset_name
>> }
>> );
>>
>> _sql_query1a_processed=_sql_query1a_processed.decode().replace("\\'","'");
>>
>> #LOGGER.info(" '{0}', -- _sql_query1a_processed
>> is:'{1}'.".format(datetime.datetime

Re: Querying PostgreSQL / PostGIS Databases in Python

2020-08-04 Thread Tony Shelver
We are using python on top of Postgresql / PostGIS, for a vehicle
tracking system.
This is quite data intensive, and we have some 'interesting' GIS queries
where we see where a vehicle or fleet has stopped within specific areas,
where it has traveled, any incidents along the way and much more.

Postgresql functions are used almost exclusively for DML and queries, as
running individual SQL statements that will return all the records required
to process a complex report was just too slow.  For even simple stuff, we
are an order of magnitude faster than a similar system using the same data
written in Java against Postgresql, but using the Java ORM for queries.

All geographic functions are processed in PostGIS, other than the odd
reverse geocoding call which is performed against Google Maps or similar

YMMV.

On Fri, 31 Jul 2020 at 12:50, Allan Kamau  wrote:

> You may write stored procedures using PL/pgSQL,alternatively you may
> write your queries in python.
> You may use psycopg2 to query the DB from Python.
> You may have a mix of the two, it will depend on your preference.
> Ideally you may not want your users running queries against the data by
> connecting to the database directly using database tools psql or pgadmin3
> or pgadmin4.
> This means that having a database access application written in Python to
> restrict the and encapsulate data access may be advisable.
> In this case you may place all the DML statements in python and execute
> them or you may have much of the data access logic written into several
> PL/pgSQL functions, then call these functions via Python.
>
>
> Below is python code illustrating the use of psycopg2. This code has not
> been run so expect some errors.
> Here I am executing an SQL query on a table, you may modify this code to
> execute a PL/pgSQL function.
>
>
> import psycopg2;
> from psycopg2 import sql;
> import psycopg2.extras;
> from psycopg2.extensions import AsIs;
>
>
> db__pg_conn__str="host='host_name_of_computer_hosting_pg_db_service'
> port=5432 dbname='your_pg_db_name' user='your_username'
> password='user_password'";
> db__pg_conn=psycopg2.connect(db__pg_conn__str);
>
> query_table(
> dataset_name
> ,some_value_2
> ,db__pg_conn
> );
>
>
> def query_table(
> dataset_name
> ,some_value_2
> ,db__pg_conn
> ):
> """
> """;
> table__id=-1;
> _sql_query1a="""
> SELECT {}::TEXT AS some_string,a.id AS
> table__id,a.*,clock_timestamp() FROM {}.{} a WHERE a.dataset_name={}::TEXT
> ;
> """;
> sqlSQL1a=None;
> sqlSQL1a=sql.SQL(_sql_query1a);
>
> pg_cursor1a=db__pg_conn.cursor(cursor_factory=psycopg2.extras.DictCursor);
> _sql_query1a_processed=pg_cursor1a.mogrify(
> sqlSQL1a.format(
>
> sql.SQL(',').join(map(psycopg2.sql.Placeholder,['some_value_1']))
>
> ,psycopg2.sql.Identifier("my_schema.my_table".split(".")[0])
>
> ,psycopg2.sql.Identifier("my_schema.my_table".split(".")[1])
>
> ,sql.SQL(',').join(map(psycopg2.sql.Placeholder,['some_value_2']))
> )
> ,{
> 'some_value_1':'ABC'
> ,'some_value_2':dataset_name
> }
> );
>
> _sql_query1a_processed=_sql_query1a_processed.decode().replace("\\'","'");
>
> #LOGGER.info(" '{0}', -- _sql_query1a_processed
> is:'{1}'.".format(datetime.datetime.now().strftime('%Y-%m-%d
> %H:%M:%S.%f')[:-1],_sql_query1a_processed));
> pg_cursor1a.execute(
> _sql_query1a_processed
> );
> rowcount1a=pg_cursor1a.rowcount;
> rows=None;
> rows=pg_cursor1a.fetchall();
> row_cnt=0;
> for row in rows:
> pass;
> row_cnt+=1;
> table__id=row["table__id"];  # do something with table__id
> # do something with rows.
> rows=None;
> db__pg_conn.commit();
> sqlSQL1a=None;
> pg_cursor1a=None;
>
>
>
> On Fri, Jul 31, 2020 at 12:30 PM Shaozhong SHI 
> wrote:
>
>> Hi,
>>
>> What is the advantage of querying in Python?
>>
>> Has anyone got much experience?
>>
>> What not just use standard query?
>>
>> What is the rationale for querying in Python?
>>
>> Would the performance be better?
>>
>> Regards,
>>
>> Shao
>>
>


Re: Return Table in StoredProceure/Function

2019-11-20 Thread Tony Shelver
Well then SQL Server breaks that rule big time :)
Most people coming from a SQL Server background expect procedures to return
a result set that can be queried, and in-out or out parameters to return
variables for further information.
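In Postgres the equivalent is a set-returning function.  Note that OUT /
INOUT parameters cannot be combined with RETURNS TABLE, which is what İlyas
is running into below; a minimal sketch (reusing his names, and assuming
tbl_employees has matching id / filesize columns) is to return everything
through the table instead:

CREATE OR REPLACE FUNCTION public.test(x integer, y text)
RETURNS TABLE (id integer, filesize character varying(36)) AS $$
BEGIN
    RETURN QUERY
    SELECT e.id, e.filesize
    FROM   public.tbl_employees e;
END;
$$ LANGUAGE plpgsql;

-- callers can then treat it like a table:
SELECT * FROM public.test(1, 'foo');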



On Wed, 20 Nov 2019 at 17:20, Thomas Kellerer  wrote:

> İlyas Derse wrote on 20.11.2019 at 09:18:
>
> > How can I return a table in a stored procedure? I can do it in a
> > function, but I have INOUT parameters, so I cannot create it as a
> > function. What can I do in this case?
> >
> > I guess it should be something like this for a function:
> >
> > CREATE or REPLACE FUNCTION public."test" (INOUT "x" integer, INOUT "y" text)
> > RETURNS TABLE ("id" integer, "filesize" character varying(36)) AS $$
> > BEGIN
> >   RETURN QUERY SELECT * FROM public."tbl_employees";
> > END;
> > $$ LANGUAGE plpgsql;
> >
>
> Procedures aren't meant to return anything (in Postgres specifically and
> in Computer Science in general).
>
> If you want to return something use a function.
>
>
>
>
>


Re:

2019-11-20 Thread Tony Shelver
Research Postgraphile, which provides an easy-to-use GraphQL interface to
Postgres.
Postgraphile.org

On Wed, 20 Nov 2019 at 17:00, Soukaina Lahchiouach 
wrote:

> Hello ,
> does postgresql with react native allow us to create chat applications?
>


Re: pgmodeler ?

2019-09-03 Thread Tony Shelver
On Sun, 1 Sep 2019 at 13:45, Thiemo Kellner 
wrote:

> Quoting Olivier Gautherot :
>
> This is the specific error message:
>
>>
>> Could not execute the SQL command.
>> Message returned: ERROR: column pr.proisagg does not exist
>> LINE 1: ...namespace AS ns ON pr.pronamespace = ns.oid WHERE pr.proisag...
>> ^
>> HINT: Perhaps you meant to reference the column
>>
>> --
>> "They that would give up essential liberty for temporary safety deserve
>> neither liberty nor safety."
>> -- Benjamin Franklin
>
>
> Can you confirm the versions of pgModeler and PG? Pgmodeler 0.91 does not
> fully support PG 11.x and there is an update in preparation in 0.92 which
> should. You may want to givi it a try.
>
>
> Actually, this is a known problem:
> https://github.com/pgmodeler/pgmodeler/issues/1281
>
> Maybe you want to give the beta a shot: https://pgmodeler.io/download

I built the 0.92 beta and have been using it quite successfully against
pg11 for a while.
>


Re: SQL equivalent of #include directive ?

2019-09-01 Thread Tony Shelver
Similar to what Francisco said.  Not exactly sure what your use case is,
though.

In the past I have usually used a DB modeling / design front end tool to
design my database, and then maintain and generate most of the build
scripts.
Datanamic Dezign used to be my go-to for SQL Server when I still paid money
for tools, and I maintained that as a central repository for all SQL
scripts for functions, tables, views, procedures and so on, and it would
generate create / diff scripts as necessary.

On Postgres I use pgmodeler; the only issue there is that I find it quicker
and easier to create procedures, functions and so on directly in code, and
then keep those in a script file that is run after the schema creation.
There are other (commercial) tools out there that will allow SQL script
preprocessing and generation.

I think you will find that most DBAs build their own scripts using tools
like Perl or Python, or a commercial product.

A similar situation exists for HTML, there is no standard pre-processor
directive.  I started off creating a Python utility to replace include
directives with input from external files, which worked fine.  Fairly
trivial if you are reasonably familiar with a scripting language.
Eventually I switched to an open source site generator that included that
capability along with a lot more functionality.
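For the psql \i route Francisco mentions below, a minimal sketch (file
names are illustrative):

-- init.sql: run with  psql -d mydb -f init.sql
\i tables.sql
\i functions.sql
\i views.sql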



On Sat, 31 Aug 2019 at 10:12, Francisco Olarte 
wrote:

> Stan:
>
> On Fri, Aug 30, 2019 at 3:49 PM stan  wrote:
> ...
> > I am defining a bunch  of functions, and I would prefer to store them in
> a
> > separate file, which then gets "source" by the main DB init file.
> > Is there a standard way to do this?
>
> I assume you mean "standard for postgres", AFAIK there is not even a
> (multidb) standard way to feed an sql script to a server (content of
> queries / results are standard, how to send them / get them from the
> server is not ).
>
> \i in psql has been pointed out. I do not know if this is one of your
> "workarounds", but what I normally do for this is treat the "init"
> script as a program and "make" it. I've done it using many of the
> preprocessors freely available around ( i.e. m4, cpp ) and also using a
> perl program ( but this is because I'm quite fluent in it and it makes
> some things easier ), and even doing "cat *.sql | psql" and naming the
> scripts appropriately. You'll probably have it easier going by that
> route ( generating the sql script from chunks using a tool of your
> choice, it's just a big text chunk and you can easily debug it with a
> text editor ), and it is not that complex.
>
> Francisco Olarte.
>
>
>


Re: How to set up PostGIS to support ArcGIS, MapInfo and QGIS desktop users?

2019-08-27 Thread Tony Shelver
This is probably the wrong place to ask.  Have a look at the PostGIS
support options page.  Stack Exchange has a lot of info.
Also go to the particular products that you are interested in; they should
have their requirements listed.

You will likely also want to set up the maps: the most commonly used
dataset is derived from (or based on) openstreetmap.org.  You will soon be
inducted into the joys of tools like osm2pgsql :)
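For reference, a typical minimal osm2pgsql invocation (database name and
extract file are illustrative):

osm2pgsql --create --slim -d gis region-latest.osm.pbf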

However, your users should define the mapping data source and boundaries for
you.
Make sure to get a really good user spec / system boundary.  Trying to
download and set up a full world map dataset can take days or weeks
depending on your hardware.

Take the recommended hardware requirement and system setup / tuning
recommendations seriously.  I was bitten by this one in the past.

Also think about a bit of future proofing.  In the years I have been
contributing to / sourcing from OSM, the amount of data has grown
exponentially.

On Mon, 26 Aug 2019 at 20:08, Shaozhong SHI  wrote:

> Hi,
>
> I should be grateful if anyone can offer insight into how to set up
> PostGIS to support ArcGIS, MapInfo and QGIS desktop users.
>
> Looking forward to hearing from you.
>
> Regards,
>
> Shao
>


Re: Recommended front ends?

2019-08-08 Thread Tony Shelver
On Wed, 7 Aug 2019 at 20:57, stan  wrote:

> I am in the process of defining an application for a very small company
> that uses Postgresql for the backend DB. This DB will eventually run on a
> hosted machine. As you imagine all of the employees have Windows machines
> for their normal work tasks. Frankly I am not very strong on Windows, so I
> am wondering what the consensus is for creating forms and reports?
>
> My first thought is LibreOffice as that is cross platform, and I can test
> on my development Linux machine. However, I am getting a bit of push-back
> from the user as he is having issues with installing LibreOffice on his
> computer; he says it does not play well with MS Office. Also we seem to be
> having some bugs with LibreOffice Base in early development.
>
>
>
I went through this some months ago, and put out a similar query to this
list.
It very much depends what type of app / data you are trying to address.
If you are considering something like Base: what is your users' level of
expertise, and what is your appetite for distributing / maintaining /
installing the app and its infrastructure?

*Options*
I looked at several options, and ended up using Base as a quick and dirty
front end to build a content management system for an eStore.
Most of the other options I ruled out as they didn't seem to have active
support communities.
One interesting-looking one I played with a little was Kexi
(https://kde.org/applications/office/org.kde.kexi), but you need to
build your database through Kexi itself.
One of my constraining factors was that I didn't want Kexi to build the
database for me, as I have a DB schema graphical design and generation tool
I prefer to use (pgmodeler).

*Bugs / features / learning curve*
I didn't find any bugs in Base that caused me problems, more like missing
features, or features that don't work the way I'd expected.
With a good YouTube series for a guide, I got a reasonable application up
and running in a weekend. I do have a basic development background dating
back to tools such as Oracle Forms, Mantis, and various other products of
that ilk. I also have a little experience with MS Access, and with Cobol
and Java app generation.

*End-user 'friendliness'*
I would be a little wary of using Base for robust end-user data
interaction, unless the users are fairly savvy on how to work with
something like Base or Access applications.  Things like deleting,
inserting, and updating data on the forms are not all that intuitive at
first, especially if you have multi-table forms (I have 4 tables embedded
on my product form).

*Some limitations*

   - You will need to be a bit aware of Base macro programming.  Base does
   not have anything like VBA to work on.
   - You will be limited on the type of application you can deliver.  Basic
   data editing, probably suitable.  Something more complex like selectable
   calendars, WYSIWYG text editors and so on, not so much. For example, I
   would love to include a markdown text editor for product descriptions
   (which Jekyll or Python can convert to HTML).  Trivial in Vue.js or Quasar,
   not so much in Base, where I have to cut and paste the text to and from an
   editor.
   - Using Base to search / navigate through large datasets is not very
   easy, and you need to think very carefully how you will design around this.


I am using my application quite regularly to maintain the data, but intend
to replace it with a JavaScript (Vue and / or Quasar) front end at some
stage.
One comment on the recommendations to create a web application: if you do
not have current skills in Python or JavaScript, and probably stuff like
CSS / HTML, the learning curve can be very time consuming.

*Security*:
Another issue is security.  If the database is accessed within your
corporate firewall, then it's not too bad.  If you need to access it from
across the internet using standard postgres drivers, then you may want to
have a careful think as to how you can lock down access to the database. I
allow postgres to communicate only to specific IP address ranges, and I
also have the firewall set up to restrict access to port 5432 to a few
specific IP ranges.
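The database-side half of that lockdown lives in pg_hba.conf; a minimal
sketch (addresses, names and auth method are illustrative):

# TYPE   DATABASE  USER     ADDRESS          METHOD
hostssl  appdb     appuser  203.0.113.0/24   scram-sha-256
host     all       all      0.0.0.0/0        reject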

When it comes to pulling the product data out of the system, that's only
through a GraphQL interface to a GraphQL server.  That was pretty easy to
generate.  I pull the data out via a Python program, which in combination
with the Jekyll static site generator generates our 3 websites.  The Python
program creates XML product file feeds for Jekyll, and in other cases
generates the basic HTML product page for Jekyll to complete site
generation.

*Most can ignore from here*
For a weekend's work and a day or two of later enhancements, the Base app
works pretty well.
To give you an idea of the complexity, it consists of:

   1. Site: The domain name, the site base directory, discount and tax
   percentages
  - Block to add / delete / display product categories applicable to
  the site
  2. Category: 

Re: Does pgadmin4 work with postgresql 8.4?

2019-08-06 Thread Tony Shelver
8.4's final release was 5 years ago, so it is unlikely anyone would know
the answer to your question.
The only way to really know is to try it.

I just had an issue with the current version (4.11) against PG11.4, where
the version of psycopg2 was out of date by a release (2.7x instead of
2.8).  Most stuff works, but there are a few things that don't.
Based on that example, I reckon any reply either way is pure guesswork.

On Tue, 6 Aug 2019 at 14:46, Benedict Holland 
wrote:

> We can't upgrade because of business reasons but that is very high on our
> agenda to fix. To me, there is a huge difference between unsupported and
> won't work. I don't expect support for this but are there significant changes
> between 8.4 and 9.2 that would prevent pgadmin4 from working?
>


Re: PGAdmin4.11.1 on Ubuntu 18.04

2019-08-02 Thread Tony Shelver
>> What repo are you getting the packages from?

Repo looks like the PostgreSQL apt repository, https://apt.postgresql.org
.../bionic-pgdg.main

> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: PGAdmin4.11.1 on Ubuntu 18.04

2019-08-01 Thread Tony Shelver
The article at stackoverflow is here
<https://stackoverflow.com/questions/57226520/select-does-not-return-values-postgres-11-4>

On Thu, 1 Aug 2019 at 17:28, Tony Shelver  wrote:

> Thanks for the replies: the version is PG11.4.
>
> As for needing to upgrade:  I just want to fix the problem.  This was a
> working environment until I updated pgAdmin4.
>
> The fix suggested on Stackoverflow was to move to psycopg2 2.8, but it
> doesn't seem to be possible.
>
> If there is another fix that anyone is aware of, I would love to know.
>
> I tried to uninstall pgAdmin4, then install psycopg2 2.8 via pip, then
> reinstall pgAdmin4 again using the Synaptic / Ubuntu package manager, but
> it just installs psycopg2 2.7 over the top.
>
> On Thu, 1 Aug 2019 at 16:50, Adrian Klaver 
> wrote:
>
>> On 8/1/19 7:39 AM, Tony Shelver wrote:
>> > I am getting an 'able oid' when querying any tables using the Query
>> tool.
>> > When using the view / edit data option, everything works fine.
>>
>> Should have asked in previous post:
>>
>> What Postgres version(s)?
>>
>> >
>> > I came across this
>> > <<<
>> >
>> > The problem is due to python3-psycopg2. The latest pgadmin4 version
>> > requires psycopg2-2.8. But if you're on Debian/Ubuntu stable, apt
>> > installed v2.7. So you need to update it with pip :
>> >
>> > sudo pip3 install -U psycopg2
>> >
>> > The problem is that the upgrade to psycopg2 2.8 doesn't work, as it's
>> > installed as part of the dtsutils package along with pgadmin4.
>> >
>> > Any ideas?
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>


Re: PGAdmin4.11.1 on Ubuntu 18.04

2019-08-01 Thread Tony Shelver
Thanks for the replies: the version is PG11.4.

As for needing to upgrade:  I just want to fix the problem.  This was a
working environment until I updated pgAdmin4.

The fix suggested on Stackoverflow was to move to psycopg2 2.8, but it
doesn't seem to be possible.

If there is another fix that anyone is aware of, I would love to know.

I tried to uninstall pgAdmin4, then install psycopg2 2.8 via pip, then
reinstall pgAdmin4 again using the Synaptic / Ubuntu package manager, but
it just installs psycopg2 2.7 over the top.

On Thu, 1 Aug 2019 at 16:50, Adrian Klaver 
wrote:

> On 8/1/19 7:39 AM, Tony Shelver wrote:
> > I am getting an 'able oid' when querying any tables using the Query tool.
> > When using the view / edit data option, everything works fine.
>
> Should have asked in previous post:
>
> What Postgres version(s)?
>
> >
> > I came across this
> > <<<
> >
> > The problem is due to python3-psycopg2. The latest pgadmin4 version
> > requires psycopg2-2.8. But if you're on Debian/Ubuntu stable, apt
> > installed v2.7. So you need to update it with pip :
> >
> > sudo pip3 install -U psycopg2
> >
> > The problem is that the upgrade to psycopg2 2.8 doesn't work, as it's
> > installed as part of the dtsutils package along with pgadmin4.
> >
> > Any ideas?
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


PGAdmin4.11.1 on Ubuntu 18.04

2019-08-01 Thread Tony Shelver
I am getting an 'able oid' when querying any tables using the Query tool.
When using the view / edit data option, everything works fine.

I came across this
<<<

The problem is due to python3-psycopg2. The latest pgadmin4 version
requires psycopg2-2.8. But if you're on Debian/Ubuntu stable, apt installed
v2.7. So you need to update it with pip :

sudo pip3 install -U psycopg2
>>>

The problem is that the upgrade to psycopg 2.8 doesn't work, as it's
installed as part of the dtsutils package along with pgadmin4.

Any ideas?


Re: DRY up GUI wiki pages

2019-07-11 Thread Tony Shelver
> I created the first one because the second one was full of old, stale,
> useless things. I believe that everything valid on the second one was added
> to the first one at the time.
>
> Also look at https://wiki.postgresql.org/wiki/Design_Tools and
> https://wiki.postgresql.org/wiki/Documentation_Tools if you're looking to
> combine / divide things.
>
> Cheers,
>   Steve
>
>
While on the subject, pgmodeler has changed its website:
https://pgmodeler.io/


Re: Data entry / data editing tools (more end-user focus).

2019-05-23 Thread Tony Shelver
I looked at quite a few options.  Some constraints on my side: our
direction is open source, with Linux development and servers.
Radzen is .NET: I could just as well use MS Access to cobble together a
front end.

CUBA and OpenXava are Java based and seem to require writing Java for
logic: I last used Java nearly 20 years ago, and 'fast' development and
Java IMHO is an oxymoron.

Aurelia looks a bit promising, but I am not sure if it gains anything over
the current crop of JS libraries and frameworks, such as Vue, React et al,
and the libraries / frameworks built on top of those, such as Nuxt / Vue,
Quasar / Vue or Vuetify / Vue, which seem to have far more activity on
GitHub.

I managed to set up a quick and dirty front end using LibreOffice Base over
a weekend; for the next iteration I will probably move over to a Vue
framework, probably Quasar.

On Sat, 18 May 2019 at 00:26, Stefan Keller  wrote:

> Dear all
>
> What about following „Rapid App Development Tools"?
> * OpenXava (Java): https://www.openxava.org/ate/visual-studio-lightswitch
> * Radzen (.NET):
> https://www.radzen.com/visual-studio-lightswitch-alternative/
> * Other: https://aurelia.io/ (JS) or CUBA https://www.cuba-platform.com/
> (Java)
>
> :Stefan
>
> Am Do., 28. März 2019 um 15:39 Uhr schrieb Adrian Klaver
> :
> >
> > On 3/27/19 11:49 PM, Tony Shelver wrote:
> >
> > Please reply to list also, more eyes on the the problem.
> > Ccing list
> >
> > My take on below is since you are feeding a Website why not use Web
> > technologies for your data entry. My language of choice is Python. I
> > have done something similar to this (on a small scale) using the Django
> > framework. For something lighter weight there is Flask. Then your client
> > becomes a browser and you do not have to distribute forms around. You
> > could integrate with the existing Web apps you are using e.g. SnipCart.
> >
> >
> > Actually I found a possibility.  LibreOffice Base on top of PG lets me
> > paste photos into a Postgresql bytea field no problem.  MS Access should
> > work well also, but I am not going to buy it, and I am running Ubuntu
> > most of the time.
> > > Possibly will distribute the Base forms to select users to enter data.
> > > We are a startup company, so this is an affordable temporary fix, until
> > > the product I have been looking at matures, or we can roll our own.
> > >
> > > We are building a company website, including an eStore, and have a few
> > > hundred products to load and maintain. Our product data currently isn't
> > > suitable for a sales catalog.
> > > (Brands, categories, products, pricing and deal info, specials, images,
> > > product comparisons and so on).
> > >
> > > Right now I input / maintain this via CSV files maintained through a
> > > spreadsheet  (LibreOffice Calc) which our site generator (Jekyll) uses
> > > to build out the static HTML product pages automatically.
> > > This is really quick to enter basic data, but I have to manually
> > > maintain image uploads, image names and so on manually in the
> > > spreadsheet and through individual file uploads. We have at least one,
> > > preferably 3 and up to 6 photos per product to maintain.  Call it a
> 1000
> > > images right now, and that will only go up.
> > > Invalid text / characters in product descriptions and so on can break
> > > the CSV as well.
> > >
> > > There are headless CMS solutions out on the market targeting this same
> > > area, but for various reasons the suitable ones are still maturing and
> > > shaking out in the marketplace, so I am not in a hurry to make a
> choice.
> > >
> > > So the goal is to replace CSV with JSON file input.  This will also
> make
> > > my life easier for more complex structures such as multiple categories
> > > and specs per product.
> > > I also want to migrate text that can change from the HTML pages into
> the
> > > database for easier changes by end users. For this the users could use
> > > a WYSIWYG MarkDown editor, and just cut and paste the MarkDown into Base
> > > forms when finished.  This will be converted to HTML at build time by
> > > Jekyll static site generator or a script.
> > >
> > > So the proposed solution:
> > > 1. Create the database in Postgresql.
> > > 2. Link Base or other tool to it and design input forms where necessary
> > >
> > > 3. Enter the data through Base into PG including images, MarkDown /
> HTML
> > > text, long descriptions and so on.
> > > 3a.

Re: bigint out of range

2019-05-16 Thread Tony Shelver
Adding to what David said, I don't see much point in having a bigint
status. Usually status attributes are a fixed set of values that can be
checked programmatically.
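A minimal sketch of what that usually looks like in practice (table name
and status values are illustrative, not from the original thread):

CREATE TABLE order_item (
    id     bigserial PRIMARY KEY,
    -- status is a small, fixed domain; smallint plus a CHECK is plenty
    status smallint NOT NULL DEFAULT 0
           CHECK (status IN (0, 1, 9))  -- e.g. new / active / closed
);

An enum type would work equally well; the point is that the domain is
tiny, so a bigint buys nothing.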



Re: Postgres for SQL Server users

2019-05-07 Thread Tony Shelver
For me, another very useful featureset in Postgres is the extensive set of
datatypes and functions, including the strong JSONB support.

Also, I would focus on the widespread support of Postgresql by services
such as Amazon, Google, Heroku and so on.

Another place to focus on would be the really extensive list of extensions
for datatypes, languages, specialized functions and so on.   Things like
PostGIS (already mentioned), PipelineDB and so on, plus Postgres-based
things like Timescale, Citusdata and so on.



On Tue, 7 May 2019 at 07:35, Tony Shelver  wrote:

> I have to agree on the geospatial (GIS) features.
> I converted from SQL Server to Postgresql for our extended tracking
> database.  The SS geospatial feature set doesn't seem nearly as robust,
> complete or performant as that supplied by PostGIS.
> The PostGIS ecosystem of open source / 3rd party tools is also far bigger,
> for anything to do with mapping.  Openstreetmaps.org stores their world
> dataset on Postgresql / PostGIS, and there are a ton of mostly open
> source-based tools and organizations that work with it or any other PostGIS
> data to provide a complete GIS solution.
>
> My first SS implementation had me backing out of storing geographic points
> in the relevant SQL Server datatype as the performance hit during loading
> was just too big.  Doing the same thing in Postgresql / PostGIS is hardly
> noticeable.
>
> Another feature in Postgres is that you are not restricted to just plpgsql
> as an internal procedural language.
>
> I am not an expert, but it also seems far easier to create, install and
> work with major extensions to Postgresql than SQL Server.  I found
> installing the GIS featureset in SS to be a bit of a pain back in the
> day.
>
> On Tue, 7 May 2019 at 00:53, Michel Pelletier 
> wrote:
>
>> On Mon, May 6, 2019 at 2:49 PM Adam Brusselback <
>> adambrusselb...@gmail.com> wrote:
>>
>>> I think the main "gotcha" when I moved from SQL Server to Postgres was I
>>> didn't even realize the amount of in-line t-sql I would use to just get
>>> stuff done for ad-hoc analysis. Postgres doesn't have a good way to emulate
>>> this. DO blocks cannot return resultsets, so short of creating a function
>>> and dropping it, it's not possible to get the same workflow.
>>>
>>
>> Just ruminating here, and this has probably already been discussed in the
>> past, but I've always wanted something like a 'SELECT DO [LANGUAGE ...]
>> RETURNS rettype | TABLE (...) $$ RETURN [NEXT | QUERY] ... $$; but haven't
>> had any serious problem with creating/dropping functions like you mentioned.
>>
>> -Michel
>>
>>
>>> The lack of GUI tooling was also a huge "whoa" moment for me, which I
>>> still grapple with.
>>>
>>


Re: Postgres for SQL Server users

2019-05-06 Thread Tony Shelver
I have to agree on the geospatial (GIS) features.
I converted from SQL Server to Postgresql for our extended tracking
database.  The SS geospatial feature set doesn't seem nearly as robust,
complete or performant as that supplied by PostGIS.
The PostGIS ecosystem of open source / 3rd party tools is also far bigger,
for anything to do with mapping.  Openstreetmaps.org stores their world
dataset on Postgresql / PostGIS, and there are a ton of mostly open
source-based tools and organizations that work with it or any other PostGIS
data to provide a complete GIS solution.

My first SS implementation had me backing out of storing geographic points
in the relevant SQL Server datatype as the performance hit during loading
was just too big.  Doing the same thing in Postgresql / PostGIS is hardly
noticeable.

Another feature in Postgres is that you are not restricted to just plpgsql
as an internal procedural language.

I am not an expert, but it also seems far easier to create, install and
work with major extensions to Postgresql than SQL Server.  I found
installing the GIS featureset in SS to be a bit of a pain back in the
day.

On Tue, 7 May 2019 at 00:53, Michel Pelletier 
wrote:

> On Mon, May 6, 2019 at 2:49 PM Adam Brusselback 
> wrote:
>
>> I think the main "gotcha" when I moved from SQL Server to Postgres was I
>> didn't even realize the amount of in-line t-sql I would use to just get
>> stuff done for ad-hoc analysis. Postgres doesn't have a good way to emulate
>> this. DO blocks cannot return resultsets, so short of creating a function
>> and dropping it, it's not possible to get the same workflow.
>>
>
> Just ruminating here, and this has probably already been discussed in the
> past, but I've always wanted something like a 'SELECT DO [LANGUAGE ...]
> RETURNS rettype | TABLE (...) $$ RETURN [NEXT | QUERY] ... $$; but haven't
> had any serious problem with creating/dropping functions like you mentioned.
>
> -Michel
>
>
>> The lack of GUI tooling was also a huge "whoa" moment for me, which I
>> still grapple with.
>>
>


Data entry / data editing tools (more end-user focus).

2019-03-27 Thread Tony Shelver
Looking for a good tool that I can give to users to enter data (for
example, products, categories, brands, price tables and so on).
Preferably it should also allow images to be copied into a bytea field but
I know I can't have everything.

Been battling with a few open source 'headless' content management systems
the last few weeks.  All they really are is a data schema designer, an API
interface (for the API / database based ones like Strapi and Directus), and
a content entry front end, along with some access management added in.
And they don't necessarily play well with the DB, or the technology stack
is something I don't want to deal with.

I figure using PostgREST or Postgraphile or Python Graphene or any of the
dedicated 3rd party REST / GraphQL APIs will probably give as good an API
as most of the new content managers.

pgModeler.io is also a way better schema design tool than anything I have
found in the CMS systems I have used so far.

Data that I would like to store (and edit) is the usual, but also images,
HTML sections, and markdown.

Any ideas?


Re: Key encryption and relational integrity

2019-03-27 Thread Tony Shelver
Not in Europe, but have worked a bit with medical records systems in the
USA, including sharing across providers.

The primary key of the user should be system-generated, and thus
meaningless from a user identity standpoint.  If you encrypt user name and
other significant personal data on the user id record, you can query the id
column all day long, and there would be no way to identify who the user is
without the encryption key.

The only other way to do it would be to store the encrypted key value in
both user.id and medications.user_id.  That would encrypt the data and
maintain relational integrity.
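As a minimal sketch of the first approach, assuming the pgcrypto extension
and a symmetric key held by the application (all table and column names
here are illustrative):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE app_user (
    id       bigserial PRIMARY KEY,  -- system generated, meaningless
    name_enc bytea NOT NULL          -- encrypted personal data
);

CREATE TABLE medication (
    id      bigserial PRIMARY KEY,
    user_id bigint NOT NULL REFERENCES app_user (id),  -- plain FK keeps RI
    drug    text NOT NULL
);

-- personal data is encrypted on the way in...
INSERT INTO app_user (name_enc)
VALUES (pgp_sym_encrypt('Jane Doe', 'key-from-application'));

-- ...and is only readable with the key; joins on id work normally
SELECT pgp_sym_decrypt(u.name_enc, 'key-from-application') AS name, m.drug
FROM medication m
JOIN app_user u ON u.id = m.user_id;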

For US medical record systems, there is a strict act (HIPAA) that specifies
both privacy and security rules, with lists of what is regarded as
sensitive or private information, what can be shared with various providers
and outside organizations, and so on..  As far as user name goes, that is
almost never a decisive form of identification for any person in a system.

While GDPR is relatively young and untested, surely someone in your
organization (or outside) has a list of the relevant rules broken down to
specific requirements.

Also, securing the data internally on a database system MAY have very
different requirements to making that data visible to applications or
transmitting it to external parties.

Storing the user id in plain text on the medications record and encrypted
on the user primary key would seem pointless, unless some meaning could be
derived from a system-generated ID.

I would suggest you sit down with the application / legal guys generating
the requirements to see what the real rules are.  if they want everything
encrypted, then automatic primary key generation in the DB has to be
invalidated, and they should provide that from the application side.



On Tue, 26 Mar 2019 at 16:19, Moreno Andreo 
wrote:

> Hello folks :-)
>
> Is there any workaround to implement key encryption without breaking
> relational integrity?
>
> Thanks
>
> Moreno.-
>
>
>
>


Re: When to store data that could be derived

2019-03-24 Thread Tony Shelver
Not the answer you are looking for, but...

I'd suggest trying to create a non-trivial set of dummy data to test your
assumptions before deciding on a route.
It's saved my (professional) life a few times over the years when dealing
with untested designs and new (to us) technology.

Some years ago we were implementing an identity management system for a
large US bank, with SQL Server as the store, with a planned integration to
the id / access / permissions of some 300+ systems, targeting 30k plus
users.

Requirements kept changing as we added new systems to the initial mix,
which the Id management package couldn't handle out of the box, so we had to
implement a custom design.  We had to choose between 2 database designs,
one being fully 'normalized' (considering that everything was an object')
and one where we made some assumptions and fixed some table structures in
the interest of performance.

Eventually we spent a few days adding non-trivial amounts of test data to
the proposed designs, and it quickly became very apparent that option 1 was
unworkable once we got beyond 10 systems or so.
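As a minimal sketch of what "non-trivial amounts of test data" can look
like in Postgres (the table and volumes are illustrative, not from the
bank project):

CREATE TABLE perm_test (user_id int, system_id int, granted_at timestamp);

-- 5 million random grants spread across ~30k users and ~300 systems
INSERT INTO perm_test
SELECT (random() * 30000)::int,
       (random() * 300)::int,
       now() - random() * interval '365 days'
FROM generate_series(1, 5000000);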


Re: Forks of pgadmin3?

2019-03-22 Thread Tony Shelver
Not sure I understand the issue.  On Ubuntu every time I open pgAdmin4, it
opens up a new tab on the browser.  If it's not in a separate browser
window, just drag the tab out and Firefox at least will open a new browser
window.
I have 3 pgA4 windows open right now.

Maybe I am missing the point, wouldn't be the first time...

On Fri, 22 Mar 2019 at 18:25,  wrote:

> This is probably my 10th attempt to move from pgadminIII to pgadmin4. At
> least the performance has significantly improved over time and seems now
> acceptable.
>
> The biggest drawback is however that all elements are locked up in one
> browser window – I cannot find any option to detach a query window and put
> it on a different monitor.
>
> 95% of my time I use pgadminIII just to type select and update statements
> and review the output rows.
>
> I know that I can do this in psql but it’s not handy with many columns.
>
> For that reason we currently stay with pgadminIII (and this is for us also
> one of several reasons to delay any move from 9.6 to a more recent version).
>
>
>
> Klaus
>
>
>
> *From:* Tony Shelver 
> *Sent:* Friday, 22 March 2019 15:34
> *Cc:* PG-General Mailing List 
> *Subject:* Re: Forks of pgadmin3?
>
>
>
> Or just persevere with pgadmin4 for a few months?   Pretty common for
> people to hate any major changes to a tool that they are very comfortable
> with.
>
>
>
> This year I've invested the time to learn a few new toolsets (not on
> Postgresql necessarily) and found it to be well worth while.
>
>
>
> At least pgAdmin4 is up to date with all the new features in 11.
>


Re: Forks of pgadmin3?

2019-03-22 Thread Tony Shelver
Or just persevere with pgadmin4 for a few months?   Pretty common for
people to hate any major changes to a tool that they are very comfortable
with.

This year I've invested the time to learn a few new toolsets (not on
Postgresql necessarily) and found it to be well worth while.

At least pgAdmin4 is up to date with all the new features in 11.

On Fri, 22 Mar 2019 at 14:04, Steve Atkins  wrote:

>
>
> > On Mar 22, 2019, at 10:56 AM, Christian Henz 
> wrote:
> >
> > I know I'm late to the party, but we're only now migrating from
> > Postgres 9.x, realizing that pgadmin3 does not support Postgres 11.
> >
> > I have checked out pgadmin4, but I don't like it at all. My colleagues
> > feel the same way, and some web searching suggests that we are not
> > alone.
> >
> > So I wonder if there are any active forks of pgadmin3?
>
> There's the BigSQL fork, which had at least some minimal support
> for 10. I've no idea whether it's had / needs anything for 11.
>
> >
> > I found some on Github with some significant changes that I assume
> > were done by people working for VK, the Russian social network. These
> > appear to be personal hacks though (monosyllabic commit messages, build
> > scripts added with hard coded local paths etc.).
> >
> > There are also the Debian packages that have patches adding Postgres
> > 10 support among other things. Not sure if there would be interest
> > there in continuing to support newer Postgres versions.
> >
> > Are there other, more organized efforts to continue pgadmin3?
> >
> > Are there technical reasons why such a continuation would not make
> > sense?
> >
>
> It's significant work, and it'd be expended maintaining a fairly mediocre
> GUI client.
>
> You might see if you like OmniDB, or one of the other GUI clients, perhaps?
>
> https://wiki.postgresql.org/wiki/PostgreSQL_Clients
>
> Cheers,
>   Steve
>
> > Cheers,
> > Christian
> >
> > --
> > Christian Henz
> > Software Developer, software & vision Sarrazin GmbH & Co. KG
> >
> >
>
>
>


Re: Tools to migrate data from Json files to PostgreSQL DB.

2019-03-07 Thread Tony Shelver
You may want to assess how you want to store and access the data in
Postgres before deciding on an import strategy.

I have a system with a mix of relational and JSON data.  The data was
originally sourced in flat file format.
I wrote a few Python programs to take the data, then format to JSON, which
I then submitted to pg functions.
To make life easier, I submitted it as 1 JSON field to be decomposed by
Postgres into relational data, and another to be loaded straight into pg
JSON columns.

The functions then either strip out the data from JSON using the PG JSON
functions and store it relationally, or plug it straight into a JSON data
element.

Not terribly difficult to do, especially if you are not navigating complex
JSON structures in pl/pgsql to strip out the data.

Plenty of python JSON examples out there. Less so for PG :)
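As a minimal sketch of the PG side, assuming a jsonb array of product
records is passed in (all names are illustrative):

CREATE TABLE product (sku text PRIMARY KEY, name text, price numeric,
                      attrs jsonb);

CREATE OR REPLACE FUNCTION load_products(payload jsonb) RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    -- jsonb_to_recordset shreds the array into relational rows
    INSERT INTO product (sku, name, price, attrs)
    SELECT r.sku, r.name, r.price, r.attrs
    FROM jsonb_to_recordset(payload)
         AS r(sku text, name text, price numeric, attrs jsonb)
    ON CONFLICT (sku) DO UPDATE
        SET name = EXCLUDED.name, price = EXCLUDED.price,
            attrs = EXCLUDED.attrs;
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n;
END;
$$ LANGUAGE plpgsql;

-- SELECT load_products(
--   '[{"sku":"A1","name":"Widget","price":9.95,"attrs":{"colour":"red"}}]');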

On Thu, 7 Mar 2019 at 23:21, github kran  wrote:

> Hello PostgreSQL Team,
>
> Are there any tools to migrate data present in JSON files to the
> PostgreSQL database?
> We have data in flat files about 2 billion records across multiple files.
>
> 1) What is the easiest way I can transfer this data to a relational
> database?
> 2) Any tools I can use? And any better ways to do it?
>
> Appreciate your help.
>
>
> Thanks
> Kran,
>


Re: Help : Update and insert record based on several value in the parameter

2019-01-29 Thread Tony Shelver
I don't know what you are using as a front end to call this update, but I
have found that passing json/jsonb parameters into a function / procedure
and then using the Postgres JSON functions to process the data is a good
solution.

If your data fits postgres array datatype, as others have mentioned, that
is also a good (and possibly simpler) solution.
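As a minimal sketch of the array route applied to the function quoted
below: string_to_array turns the space-separated parameter into a text[]
that both statements can use. This addresses the failing update, where
"rfidnumber in (znumber)" compares each rfidnumber against the whole
parameter string rather than the individual values:

update tblrfid
   set flag = '9'
 where flag = '0'
   and rfidnumber = ANY (string_to_array(znumber, ' '));

insert into tblrfid (tanggal, flag, rfidnumber)
select localtimestamp, '0', unnest(string_to_array(znumber, ' '));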

On Tue, 29 Jan 2019 at 14:50, Hengky Lie  wrote:

> Hi,
>
> I want to create a function to update my table (flag='9') and insert new
> record (flag='0') with the rfidnumber specified in a parameter.
>
> This parameter may have several values separated by spaces (i.e. 11 22 33 44)
>
> CREATE OR REPLACE FUNCTION public.fcreate_rfid (
>   znumber varchar
> )
> RETURNS boolean AS
> $body$
> BEGIN
> --update old record which has the same rfid number and flag='0' if exists
> update tblrfid set flag='9' where flag='0' and rfidnumber in (znumber);
>
> -- generate new record
> insert into tblrfid(tanggal, flag, rfidnumber)
> select localtimestamp, '0', regexp_split_to_table(znumber, ' ');
>
>  return true;
> END;
> $body$
> LANGUAGE 'plpgsql';
>
> when i called this function using command :
>
> select fcreate_rfid('11 22 33 44');
>
> This function fails to update the old record, but succeeds in inserting the
> new record.
>
> Please help me fix this problem. I know the problem is in the update
> command, but I don't know how to correct it. Googling didn't turn up any
> solution.
>
> Thank you
>
>


Re: the installation of pgadmin4 makes me weep in frustration

2019-01-19 Thread Tony Shelver
I know this has already been answered, but for those who are also python
users, there is another and probably better option:

If you already have a Python 3.6+ installation, you can download the
pgadmin4 wheel (Python installable) from https://www.pgadmin.org/download/

Then do a pip install on it as for any other python module.

I prefer this route as it avoids having multiple Python versions loaded on
my machine.

The standard install for the current version of pgadmin4 will download
Python along with it.

As far as other tools go, I prefer to design my database using a modeling
tool.

As the only database we use or will use is Postgresql, I find
https://pgmodeler.io to be pretty good for a free(ish) product.
It provides support for many Postgres-only features such as extensions,
functions, json/jsonb and so on.
And for a new Postgres user, the product manual (contribution required) is
an excellent source for Postgres-specific design topics.



On Mon, 14 Jan 2019 at 15:49, Murtuza Zabuawala <
murtuza.zabuaw...@enterprisedb.com> wrote:

> Thanks to Postgres Debian/Ubuntu packaging team, Installing pgAdmin4 is
> straightforward task,
>
> 1) Create the file /etc/apt/sources.list.d/pgdg.list and add a line for
> the repository
> deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main
>
> 2) Import the repository signing key, and update the package lists
> wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc |
> sudo apt-key add -
>
> 3) sudo apt-get update
>
> 4) sudo apt-get install pgadmin4
>
> Check https://www.postgresql.org/download/linux/ubuntu/ for more
> information.
>
> --
> Regards,
> Murtuza Zabuawala
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
>
> On Mon, Jan 14, 2019 at 1:29 PM robert  wrote:
>
>> Hi There
>>
>>
>> first, thanks a lot for the great and beautiful software we get from
>> PostgreSQL and all people around it.
>>
>> But I wonder how it comes, that installing pgadmin4 is so incredibly hard?
>>
>> And no documentation.
>>
>> I would like to install pgadmin4 to my ubuntu 18.1 laptop.
>>
>> Where do I find docs on how to install pgadmin4? Preferably I would like
>> to install it using pip?
>>
>>
>> thanks
>>
>> robert
>> --
>> Robert Rottermann CEO
>>
>> 031 333 10 20
>> rob...@redo2oo.ch
>> Sickingerstrasse 3, 3014 Bern
>> 
>>
>> *Your partner for ERP solutions.*
>>
>


Re: the installation of pgadmin4 makes me weep in frustration

2019-01-14 Thread Tony Shelver
Just over a year ago, I started to look for free / open source clients for
Postgres, and went through most of that list.

Unless you are willing to pay for a commercial license, most of them don't
come close on Postgres support, or they have limitations that made them
unusable for me.
Several 'free' tools were limited as to the number of tables, databases or
whatever that they could handle.

Currently I am on PGAdmin 4.4, and it works fine for me (on Windows 10),
and has kept up with all the functionality available in Postgres, such as
procedures.

On Mon, 14 Jan 2019 at 12:25, Steve Atkins  wrote:

>
>
> > On Jan 14, 2019, at 7:58 AM, robert  wrote:
> >
> > Hi There
> >
> >
> >
> > first, thanks a lot for the great an beautiful software we get from
> PostgreSQL and all people around it.
> >
> > But I wonder how it comes, that installing pgadmin4 is so incredibly
> hard?
> >
> > And no documentation.
> >
> > I would like to install pgadmin4 to my ubuntu 18.1 laptop.
> >
> > Where do I find doku on how to install pgadmin4. Preferably I would like
> to install it using pip?
>
> pgadmin 4 is nothing to do with the PostgreSQL project itself, it's just a
> third party client.
>
> There are many other third-party clients listed here -
> https://wiki.postgresql.org/wiki/PostgreSQL_Clients -
> most of them probably better than pgadmin4.
>
> Cheers,
>   Steve
>


Re: Benchmark of using JSON to transport query results in node.js

2019-01-11 Thread Tony Shelver
I'm fairly new to Postgres, but one question is how node.js implements the
native driver when fetching the data: fetchall, fetchmany or fetchone?
Also, which native driver is it using?
Does the native driver do a round trip for each record fetched, or can it
batch them into multiples?

For example, in the Oracle native driver (for Python, in my case), setting
the cursor arraysize makes a huge performance difference when pulling back
large datasets.
Pulling back 800k+ records through a cursor on a remote machine with the
default arraysize took way too long (3 hours before I canceled it).
Upping the arraysize to 800 dropped that to around 40 minutes, including
loading each record into a local Postgres via a function call (more complex
database structure to be handled).
This is on low-level test equipment.

This is a relevant issue for us, as we will be developing a new front end
to our application, and we still haven't finalized the architecture.
The backend build to date uses Python / Postgres.  Python/Flask is one
option, possibly serving the data to Android / web via JSON / REST.
Another option is to query directly from node.js and get JSON or native
query from the database (extensive use of functions / stored procedures).

Our application is data-intensive, involving a lot of geotracking data
across hundreds of devices at its core, and then quite a bit of
geo / mapping / analytics around that.



On Thu, 10 Jan 2019 at 23:52, Mitar  wrote:

> Hi!
>
> I made some benchmarks of using JSON to transport results to node.js
> and it seems it really makes a difference over using native or
> standard PostgreSQL. So the idea is that you simply wrap all results
> into JSON like SELECT to_json(t) FROM (... original query ...) AS t. I
> am guessing because node.js/JavaScript has really fast JSON parser but
> for everything else there is overhead. See my blog post for more
> details [1]. Any feedback welcome.
>
> This makes me wonder. If serialization/deserialization makes such big
> impact, where there efforts to improve how results are serialized for
> over-the-wire transmission? For example, to use something like
> Capnproto [2] to serialize into structure which can be directly used
> without any real deserialization?
>
> [1]
> https://mitar.tnode.com/post/181893159351/in-nodejs-always-query-in-json-from-postgresql
> [2] https://capnproto.org/
>
>
> Mitar
>
> --
> http://mitar.tnode.com/
> https://twitter.com/mitar_m
>
>


Re: Pulling data from Postgres DB table for every 5 seconds.

2019-01-10 Thread Tony Shelver
I am not familiar with Aurora, but..

What about something like https://github.com/subzerocloud/pg-amqp-bridge?

Set up a message queue in Postgres, which calls into AMQP (RabbitMQ) to
send a message for consumption by one or more clients.

This provides a function that can be called from a trigger to send the
message.

After that, you have all the goodness of a standards-based open source MQ
platform to distribute your data / notifications.
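As a minimal sketch of the database side, assuming the bridge is
configured to forward a channel named 'product_events' to an exchange
(table and column names are illustrative, not from the original post):

CREATE OR REPLACE FUNCTION notify_product_change() RETURNS trigger AS $$
BEGIN
    -- pg_notify only fires if the surrounding transaction commits
    PERFORM pg_notify('product_events',
                      json_build_object('op', TG_OP,
                                        'product_id', NEW.product_id)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_info_notify
AFTER INSERT OR UPDATE ON product_info
FOR EACH ROW EXECUTE PROCEDURE notify_product_change();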



On Wed, 9 Jan 2019 at 19:02, github kran  wrote:

>
>> Hi Postgres Team,
>>
>> I have an application using RDS Aurora Postgresql 9.6 version having 4 TB
>> of DB size. In this DB we have a table PRODUCT_INFO with around  1 million
>> rows and table size of 1 GB.
>> We are looking for an implementation where we want to pull the data in
>> real time for every 5 seconds from the DB ( Table mentioned above) and send
>> it to IOT topic whenever an event occurs for a product. ( event is any new
>> product information or change in the existing
>> product information.).
>>
>> This table has few DML operations in real time either INSERT or UPDATE
>> based on the productId. ( Update whenever there is a change in the product
>> information and INSERT when a record doesn't exist for that product).
>>
>> We have REST API's built in the backend pulling data from this backend
>> RDS Aurora POSTGRES DB and used by clients.
>>
>> *UseCase*
>> We don't want clients to pull the data for every 5 seconds from DB but
>> rather provide a service which can fetch the data from DB in real time and
>> push the data to IOT topic by pulling data for every 5 seconds from DB.
>>
>> *Questions*
>> 1) How can I get information by pulling from the DB every 5 seconds
>> without impacting the performance of the DB.
>> 2) What are the options I have pulling the data from this table every 5
>> seconds. Does POSTGRES has any other options apart from TRIGGER ?.
>>
>>
>> Any ideas would be helpful.
>>
>> Thanks !!
>> GithubKran
>>
>


Re: Importing tab delimited text file using phpPgAdmin 5.1 GUI

2018-12-08 Thread Tony Shelver
Just a side comment: Why use phpPgAdmin when pgAdmin 4.6 is current, free
and readily available?

It also has a graphical table-from-file loader.
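If the GUI route keeps fighting the encoding, a minimal fallback sketch is
psql's \copy with an explicit source encoding (table name, file name and
'SJIS' are assumptions based on the thread below):

\copy spec FROM 'spec.txt' WITH (FORMAT text, DELIMITER E'\t', ENCODING 'SJIS')

COPY converts from the named encoding to the database encoding itself, so
there is no need to re-save the file as UTF-8 first.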

On Fri, 7 Dec 2018 at 23:35, Adrian Klaver 
wrote:

> On 12/7/18 9:04 AM, s4...@yahoo.co.jp wrote:
> > I didn't specify any schema, so it was created in public schema.
> > The error message also says "public"...
> > //--
> > ERROR: column "rec_id" of relation "spec" does not exist
> > LINE 1: INSERT INTO "public"."spec" ("rec_id","title_c...
> > //--
> >
> > Output of the \d spec:
> >
> >
> >   Table "public.spec"
> > Column   |  Type   | Modifiers
> > +-+---
> >   rec_id | character varying(32)   | not null
> >   title_category | character varying(255)  |
> >   doctype| character varying(255)  |
> >   ... goes on like this for other columns.
> >
> > What are you trying to see in the output of \d spec?
>
> My basic procedure in troubleshooting is starting from the known and
> working out to the unknown. So my questions about the schema(s) and the
> table definition were to establish a known starting point. Also a common
> issue that hits this list is multiple versions (across schemas) of an
> object in a database and code hitting the wrong version. One of the
> signs of that being error messages of the form you got.
>
>
> >
> > I don't understand what you mean by the import code trying to insert
> > into the wrong version of the table.
> > I visually checked the left side "menu like" structure of the
> > phpPgAdmin- there is no other table of that name.
>
> See above.
>
> >
> > You mentioned that quoted identifiers are not the issue.
> > This prompted me to test the process in a table with a few columns and
> > ascii characters.
> > Immediately it was clear that quoted identifiers were not to blame.
> >
> > I found that I got that error when I changed the encoding of the tab
> > delimited file to UTF-8.
> > Because my data contains non-ascii characters, if I don't use UTF-8, I
> > get this error.
> >
> > ERROR:  invalid byte sequence for encoding "UTF8": 0x82
> >
> >
> > ... and I read somewhere that if I open the text file in Notepad and save
> > it with UTF-8 encoding, I can get rid of the error. (When inserting
> > using pyDev (psycopg2)/Eclipse, that does get rid of the error...
>
> Notepad is not a text editor to use in general and in particular for
> data transformation work. It has limited knowledge of the text format.
> If you need to do that on Windows use Wordpad or better yet Notepad++:
>
> https://notepad-plus-plus.org/
>
> >
> > That's why I changed encoding.
> >
> > And now I am stuck with this error.
> >
> > But at least, now I am not blaming phpPgAdmin :)
> > Thanks for the lead.
> >
> > BTW, both server and client encoding of my pg db are UTF8.
>
> The original encoding was Win-10 (Japanese), correct?
>
> >
> > testdb=# SHOW SERVER_ENCODING;
> >   server_encoding
> > -
> >   UTF8
> > (1 row)
> >
> > testdb=# SHOW CLIENT_ENCODING;
> >   client_encoding
> > -
> >   UTF8
> > (1 row)
> >
> > testdb=#
> >
> >
> > - Original Message -
> > *From:* Adrian Klaver 
> > *To:* s4...@yahoo.co.jp; rob stone ;
> > "pgsql-general@lists.postgresql.org"
> > 
> > *Date:* 2018/12/7, Fri 23:47
> > *Subject:* Re: Importing tab delimited text file using phpPgAdmin
> > 5.1 GUI
> >
> > On 12/7/18 12:28 AM, s4...@yahoo.co.jp 
> wrote:
> >  > Hello Adrian, Rob!
> >  >
> >  > Thank you for the comments.
> >  >
> >  > Oh, yes, I forgot to mention that I am using Postgresql version
> 9.6.
> >  > I had read somewhere that last supported version was 9.3 or
> > something,
> >  > could be 9.2 as you say.
> >  >
> >  > I wanted to use phpPgAdmin, that's why I went back to 9.6 even if
> > I had
> >  > installed ver. 10 first.
> >  > But if the phpPgAdmin quotes identifiers by defaults, I will need
> to
> >  > look for some other ways.
> >
> > I don't think the quoted identifiers are the issue. I am suspecting
> > that
> > the import code may be trying to INSERT into the wrong version of the
> > table. Some questions:
> >
> > 1) Which schema did you create spec in?
> >
> > 2) In psql what does \d spec show?
> >
> >
> >  >
> >  > Strictly speaking, I don't need to use the phpPgAdmin, but I am
> > trying
> >  > to find a GUI way to upload a file quickly (the file has
> > thousands of
> >  > records.)
> >  >
> >  > Now, I am using pyDev in Eclipse to insert records, but I need to
> > use a
> >  > web-based click and upload.
> >
> > ?
> > https://www.pgadmin.org/
> >
> > 

Re: querying both text and non-text properties

2018-12-06 Thread Tony Shelver
I would suggest testing out btree_gin with a non-trivial amount of data
before going ahead with it.

I did a test case last month, and the size of the generated index was
_much_ bigger than the base table.
The case involved a compound key of 1 int column and 1 timestamp range
column.
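A minimal sketch of the kind of size check that surfaced it, using the
fulltext table from the quoted message below (the index name is
illustrative):

SELECT pg_size_pretty(pg_relation_size('fulltext'))         AS table_size,
       pg_size_pretty(pg_relation_size('fulltext_gin_idx')) AS index_size;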

On Wed, 5 Dec 2018 at 16:02, Rob Nikander  wrote:

>
>
> > On Dec 4, 2018, at 4:59 PM, Laurenz Albe 
> wrote:
> >
> > You have two options:
> >
> > A combined index:
> >
> >  CREATE EXTENSION btree_gin;
> >  CREATE INDEX ON fulltext USING gin (to_tsvector('english', doc), color);
> >
> > That is the perfect match for a query with
> >
> >  WHERE color = 'red' AND to_tsvector('english', doc) @@
> to_tsquery('english', 'word');
> >
> > But you can also create two indexes:
> >
> >  CREATE INDEX ON fulltext USING gin (to_tsvector('english', doc));
> >  CREATE INDEX ON fulltext (color);
> >
> > Then you don't need the extension, and PostgreSQL can still use them for
> the search,
> > either only one of them if the condition is selective enough, or a
> "BitmapAnd" of both.
>
> Thanks! I will try both these methods and compare the performance.
>
> Rob
>


Re: Question about index on different tablespace and rebuild it

2018-11-06 Thread Tony Shelver
Did you check the documentation for alter index?
https://www.postgresql.org/docs/10/sql-alterindex.html

You could create a script file (plenty of examples on the internet on
generating these) and then run it through psql or whatever.

Also, if you just have a few indexes to move, you could use (for example)
pgadmin4 to generate and run drop / recreate scripts.

Data in the index is stored separately from the table; dropping indexes in
any SQL-standard database doesn't affect the data in the tables.
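A minimal sketch of generating such a script from the catalog, assuming
the old tablespace is 'nvme' and its replacement is named 'nvme2':

SELECT format('ALTER INDEX %I.%I SET TABLESPACE nvme2;',
              schemaname, indexname)
FROM pg_indexes
WHERE tablespace = 'nvme';

Feed the output back through psql to move the indexes.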

On Tue, 6 Nov 2018 at 21:49, Condor  wrote:

> On 05-11-2018 10:56, Condor wrote:
> > Hello,
> >
> > I have a database that use index on different table space (nvme). I
> > read documentation about table space and understand table space cannot
> > be treated as an autonomous collection of data files.
> > My question is: Is this always true ? I mean if I have table .. okay
> > here is example:
> >
> > create table t1 ( i integer);
> > create index t1_i_idx on t1 using btree (i) tablespace nvme;
> >
> > In this case only the index is on different table space nvme that is
> > not part of data stored into table, I mean if I drop the index, data
> > stored in table will still be untouched and not damaged.
> > So in this case if I lost table space nvme that is stored into
> > different nvme drive, can I insert new one and rebuild all index files
> > with reindexdb for example or some other tool like
> > mysql/mariadb for example myismcheck or something like that, that can
> > rebuild index files when DB sever is offline ?
> >
> >
> > Regards,
> > HS
>
>
> Can someone answer?
>
>


Re: editable spreadsheet style interface

2018-10-31 Thread Tony Shelver
For a quick and dirty data editor, LibreOffice Base seems to work fine.

On Tue, 30 Oct 2018 at 23:05, Tim Clarke 
wrote:

> On 30/10/2018 20:32, Martin Mueller wrote:
> >
> > I have used Aqua Data Studio for several years. Jetbrains recently
> > released a similar product. Academic licensing is affordable (~ $200 a
> > year) and very cheap if considered in terms of the time it saves you.
> >
> > *From: *David Gauthier 
> > *Date: *Tuesday, October 30, 2018 at 2:06 PM
> > *To: *"pgsql-gene...@postgresql.org" 
> > *Subject: *editable spreadsheet style interface
> >
> > I think I know the answer to this one but I'll ask anyway...
> >
> > Is there a spreadsheet style interface to a PG DB where users can...
> >
> > - lock records
> >
> > - edit records
> >
> > - submit changes (transaction)
> >
> > Is there any after-market tool for PG that does something like this ?
> >
>
> Have a look here https://www.postgresql.org/download/product-categories/
> under "Administration/Development". I'm using pgAdmin right now.
>
>
> Tim Clarke
>
>
>


Re: Function for Exception Logging

2018-10-31 Thread Tony Shelver
I'd be interested if there is an answer to this.   The big issue with
writing to a table is the interaction with rollbacks and commits.

Even more difficult in PG functions, as they have no commit / rollback
capability.  I haven't played with stored procedures in PG11 yet.

In Oracle, I wrote logging / error messages to a logging file on the file
system, directly from PL/SQL.


This file can then be monitored for error messages via a messaging
solution.
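In Postgres, the closest equivalent is a sketch like the following:
capture the details with GET STACKED DIAGNOSTICS in the handler and write
them somewhere a rollback can't undo, such as the server log (the function
and its failing body are illustrative):

CREATE OR REPLACE FUNCTION do_work() RETURNS void AS $$
DECLARE
    v_msg text;
    v_ctx text;
BEGIN
    RAISE division_by_zero;  -- stand-in for real work that fails
EXCEPTION WHEN OTHERS THEN
    GET STACKED DIAGNOSTICS v_msg = MESSAGE_TEXT,
                            v_ctx = PG_EXCEPTION_CONTEXT;
    RAISE LOG 'caught: % context: %', v_msg, v_ctx;  -- survives rollback
END;
$$ LANGUAGE plpgsql;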


Regards



On Mon, 29 Oct 2018 at 19:13, Patrick FICHE 
wrote:

> Hi community,
>
>
>
> I would like to implement a function that would log managed Exceptions
> into a dedicated table.
>
> For example, I have some code like :
>
> BEGIN
>
> Code generation exception
>
> EXCEPTION
>
>   WHEN OTHERS THEN Log_Error();
>
> END;
>
>
>
> The Log_Error function would be able to get the exception context /
> parameters to log the exception parameters into a table.
>
> Is there any way to manage this without providing the Exception parameters
> as parameters to the Log_Error function ?
>
>
>
> Thanks,
>
> Patrick
>
>
>
>
>
> *Patrick Fiche*
>
> Database Engineer, Aqsacom Sas.
>
> *c.* 33 6 82 80 69 96
>
> *e.* patrick.fi...@aqsacom.com
>
>
>
>
>
>
>


Time-based table design / index selection

2018-10-24 Thread Tony Shelver
I come from a DB background using Oracle (which is quite outdated) and more
recently SQL Server 2012 and 2016, so would like some ideas from the group.

I have 2 tables which represent a time-based relationship between various
object types, most of which also have a geographical component.
There are other tables involved obviously, but these two represent the core.

Objects can be customer accounts (each of which has its own reporting
substructure), customer branches, vehicles, telematic / GPS tracking
devices, drivers, managers, geolocations such as geofences (area
boundaries, exclusionary and inclusionary), geopoints / locations, and much
more.
Objects are stored in the to_obj_locn_attrib table.

These objects can be organized hierarchically (vehicles and drivers within
branches, branches within a customer account), or at a global level, or
both (geographical locations and areas).
These relationships / relationship types (drivers authorized to drive a
specific vehicle, drivers who have actually driven a vehicle during a
specific period, for example) are stored in the to_reln table.

The objects mutate over time, and we always need to be able to report on
the data at a point in time.

The two main tables in this implementation are to_obj_locn_attrib and
to_reln.
There are other tables containing object data (such as to_object, which is
the core id and immutable portion of the object):

CREATE TABLE teleon.to_obj_locn_attrib
(
    obj_id smallint NOT NULL,
    active_dt timestamp without time zone NOT NULL DEFAULT CURRENT_TIMESTAMP,
    active_status character(1) COLLATE pg_catalog."default" NOT NULL
        DEFAULT 'A'::bpchar,
    user_upd character varying(40) COLLATE pg_catalog."default" NOT NULL,
    obj_name character varying(150) COLLATE pg_catalog."default",
    geo_type character(1) COLLATE pg_catalog."default",
    geo_locn geometry(Polygon,4326),
    geo_area character varying COLLATE pg_catalog."default",
    time_zone character varying(15) COLLATE pg_catalog."default",
    obj_locn_attribs jsonb,
    CONSTRAINT to_obj_locn_attr_pk PRIMARY KEY (obj_id, active_dt)
        USING INDEX TABLESPACE "TeleTS1",
    CONSTRAINT to_object_obj_locn_attr_fk FOREIGN KEY (obj_id)
        REFERENCES teleon.to_object (obj_id) MATCH FULL
        ON UPDATE CASCADE
        ON DELETE RESTRICT,
    CONSTRAINT geo_type_check CHECK (geo_type = ANY (ARRAY['P'::bpchar,
        'O'::bpchar, 'L'::bpchar, 'R'::bpchar, 'C'::bpchar])),
    CONSTRAINT to_obj_locn_active_dt CHECK (active_status = ANY
        (ARRAY['A'::bpchar, 'I'::bpchar, 'D'::bpchar]))
);
There is a unique btree index on obj_id and active_dt (composite primary
key).
Any changes to the information contained in this table result in a new
record with the same obj_id and a new active_dt being inserted into the
table.
The to_obj_locn_attrib table is relatively static.

CREATE TABLE teleon.to_reln
(
    reln_id bigint NOT NULL DEFAULT
        nextval('teleon.to_reln_reln_id_seq'::regclass),
    reln_typ_id character varying(6) COLLATE pg_catalog."default" NOT NULL,
    active_dt timestamp without time zone NOT NULL,
    active_status character(1) COLLATE pg_catalog."default" NOT NULL,
    user_upd character varying(40) COLLATE pg_catalog."default" NOT NULL,
    obj_id_owner_of integer NOT NULL,
    obj_id_owned_by integer NOT NULL,
    reln_attribs jsonb,
    CONSTRAINT to_reln_pk PRIMARY KEY (reln_id)
        USING INDEX TABLESPACE "TeleTS1",
    CONSTRAINT to_object_owned_by FOREIGN KEY (obj_id_owned_by)
        REFERENCES teleon.to_object (obj_id) MATCH FULL
        ON UPDATE NO ACTION
        ON DELETE NO ACTION,
    CONSTRAINT to_object_owner_of FOREIGN KEY (obj_id_owner_of)
        REFERENCES teleon.to_object (obj_id) MATCH FULL
        ON UPDATE CASCADE
        ON DELETE RESTRICT,
    CONSTRAINT to_reln_type_reln_fk FOREIGN KEY (reln_typ_id)
        REFERENCES teleon.to_reln_type (reln_typ_id) MATCH FULL
        ON UPDATE NO ACTION
        ON DELETE NO ACTION
);

Indexes: apart from the id column reln_id, there is a btree index on
reln_typ_id, obj_id_owner_of and active_dt, and another on reln_typ_id,
obj_id_owned_by and active_dt, to cater to the most common queries.

This identifies the specific relationship of a specific relationship type
between two objects at a specific point in time.

A typical query would be something like:
SELECT p.obj_id_owner_of, active_status, ...
FROM teleon.to_reln p
WHERE p.reln_typ_id = 'DPTAST' AND
      p.obj_id_owned_by = l_id AND
      p.active_dt = (
          SELECT MAX(c.active_dt)
          FROM teleon.to_reln c
          WHERE c.reln_typ_id = p.reln_typ_id AND
                c.obj_id_owned_by = p.obj_id_owned_by );
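A minimal sketch of an alternative formulation for the "latest row"
lookup: with a btree index on (reln_typ_id, obj_id_owned_by, active_dt
DESC), DISTINCT ON can avoid the correlated MAX() subquery when a single
most recent row per owned-by object is wanted (42 stands in for l_id):

SELECT DISTINCT ON (p.obj_id_owned_by)
       p.obj_id_owner_of, p.active_status
FROM teleon.to_reln p
WHERE p.reln_typ_id = 'DPTAST' AND
      p.obj_id_owned_by = 42
ORDER BY p.obj_id_owned_by, p.active_dt DESC;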

A single object could be involved in many types of relationships with many
other objects.

Considering that this query would form the basis of a recursive query when
extracting hierarchical structures, I would like this to be as efficient as
possible.

Note that I have implemented the type of structure above in a few quite
large projects with millions of objects and 

Re: Rationale for aversion to the central database?

2018-04-24 Thread Tony Shelver
I have done some big contracts for large financial companies, and for most
of them, ANY changes to the DB structure required extensive 3rd party
testing and a change control process that sometimes took weeks.

But we did get a waiver for the use of DB 'code' like stored procedures and
views, which only had to follow the standard development test / acceptance
procedure by separate developer, end user and third party test teams.

For me, the database is more immutable than the application logic and
especially the GUI, so it pays to spend a lot of time up front on DB
design.  Past experience has also led me to expect that the DBMS will have
a much longer shelf life than the application language / toolsets used
against it, or at least, over time the languages / toolsets tend to
multiply.

For my part, I like to spend a lot of time on getting an optimal DB design,
and also putting a lot of validation logic into the DB.

I also like making extensive use of stored procedures; my experience is
that for a data-intensive multi-tool application they are faster and more
secure...



On 23 April 2018 at 19:22, Sven R. Kunze  wrote:

> So far, I have nothing to add, but just one thing. See below:
>
>
> On 09.04.2018 00:37, g...@luxsci.net wrote:
>
>> One advantage to using logic and functions in  the db is that you can fix
>> things immediately without having to make new application builds. That in
>> itself is a huge advantage, IMO.
>>
>
> This is actually not the case. You want to have that logic tested as
> thoroughly as possible, being so close to your precious data.
>
> So, you write migration code that substitutes the old logic, test the
> whole package, if successful, deploy (and thus run the migration).
>
> Cheers,
> Sven
>
>