> seconds. Then I get a line in my log saying:
>>
>> >> pg_ctl: server did not start in time
>
> Hi Adam,
>
> how did you start the server? Via pg_ctlcluster, the init system, or
> directly via pg_ctl?
>
>> > Followed by:
>> >> 2017-
> You might want to increase pg_ctl's wait timeout for this situation,
> since the default's evidently too little. However ...
Got it, thanks.
> ... pg_ctl itself wouldn't decide to forcibly shut down the server
> if the timeout expired. It merely stops waiting and tells you so.
> It seems like
> ...archive-get command end: terminated on
> signal [SIGTERM]
> 2017-11-10 20:27:35.978 UTC [7142] LOG: shutting down
> 2017-11-10 20:27:36.151 UTC [7132] LOG: database system is shut down
This happens whether I have the server configured as a standby or not.
Any help would be very much appreciated.
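For reference, the wait timeout mentioned above is just pg_ctl's -t/--timeout switch (the default wait is 60 seconds); a minimal example with a made-up data directory and a 300-second limit:

pg_ctl -D /var/lib/postgresql/10/main -w -t 300 start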
> Since you are migrating data into a staging table in PostgreSQL, you may set
> the field data type as TEXT for each field where you have noticed or
> anticipate issues.
> Then after population perform the datatype transformation query on the given
> fields to determine the actual field value
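A minimal sketch of that staging approach, with invented table and column names:

CREATE TABLE staging_contacts (
    id        text,
    joined_on text,
    balance   text
);

-- after loading, flag values that will not cast cleanly
SELECT id, joined_on
FROM staging_contacts
WHERE joined_on !~ '^\d{4}-\d{2}-\d{2}$';

-- then move the data into the properly typed target table
INSERT INTO contacts (id, joined_on, balance)
SELECT id::integer, joined_on::date, balance::numeric
FROM staging_contacts;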
> If that's the correct theory, yes. Did you match up the OID yet?
Yes, I did just now. The OID matches the TOAST table for the temp
table: contract_actual_direct.
This just really surprises me; I haven't seen it before, considering I
know for a fact that some of my other functions are way more complex. I use
this same pattern in quite a few (at least 50) functions, so it's
surprising I've not seen this issue until now.
Thanks,
-Adam
Alright I figured it out.
The OID does not match any of the temp tables, so not sure what's up there.
I have the function RETURN QUERY,
and then I drop all my temp tables.
If I don't drop the tmp_base table at the end of the function, it will
work just fine. If I keep the drop at the end in
I believe it's one of the temp tables. The oid changes each time the
function is run.
I'll put some logging in place to identify the exact temp table it is
though.
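One way to map the reported TOAST relation back to its owning table without extra logging, assuming the error names something like pg_toast_16822 (the OID here is made up):

-- the TOAST table's name normally embeds the owning table's OID,
-- so this resolves it directly to a name:
SELECT 16822::regclass AS owning_table;

-- or look it up explicitly via pg_class:
SELECT c.oid::regclass AS owning_table
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE t.relname = 'pg_toast_16822';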
ion.
Any help would be appreciated.
Thanks,
-Adam
-- Function: gosimple.contract_exposure_direct(uuid, boolean, boolean, uuid[])
-- DROP FUNCTION gosimple.contract_exposure_direct(uuid, boolean, boolean,
uuid[]);
CREATE OR REPLACE FUNCTION gosimple.contract_exposure_direct(
IN p_contract_id uuid,
Appreciate the link, didn't come up when I was googling the issue.
As you said, a mention in the release notes would have been helpful.
Thanks,
-Adam
that the syntax was incorrect and
would change later.
Thanks,
-Adam
Happy to hear jpgAgent is working alright for you. If you have any
questions about it, feel free to ask me.
If you do want to help with pgAutomator, that sounds like something
you could start to learn on. jpgAgent is pretty much feature complete
as far as my needs go, and no one has requested any
or after the call to
"REFRESH PUBLICATION?
Thanks in advance,
Adam
was that this is
something that has been a personal pain point, and I haven't found
anything I liked better up to this point in time.
I'm slowly working on remedying that, but until my
alternative is ready, I'm sticking with (j)pgAgent, and pgAdmin to
manage it.
Thanks,
-Adam
--
Sent via
Here is the last discussion I saw on it:
https://www.postgresql.org/message-id/flat/90261791-b731-a516-ab2a-dafb97df4464%40postgrespro.ru#90261791-b731-a516-ab2a-dafb97df4...@postgrespro.ru
>
> I cannot imagine a single postgres index covering more than one physical
> table. Are you really asking for that?
While not available yet, that is a feature that has had discussion before.
Global indexes are what I've seen them called in those discussions. One of
the main use cases is to
>
> Do you believe the only path is Windows 10?
Not Peter, but I believe there are better choices, considering that if this has
to be on any sort of network, you're going to be potentially vulnerable to
any number of never-to-be-patched security holes. Using XP in this day and
age is a hard argument to make.
On 2017-06-21 Adam Sjøgren <a...@novozymes.com> wrote:
> Adam Sjøgren <a...@novozymes.com> wrote:
>> Meanwhile, I can report that I have upgraded from 9.3.14 to 9.3.17 and
>> the errors keep appearing in the log.
Just to close this, for the record: We haven't seen the
out, e.g. in the PostgreSQL logs, how the read-only flag is
set up for the current transaction? We have tried to enable full logging
(postgresql.conf); however, reading it is quite tough and we did not get any
closer to the solution.
Thank you for your help so far,
Adam
--
org.springframework.orm.jpa.JpaTransactionManager
Spring @Transactional(read-only) hint -> where could we set it to "false"
Our typical @Repository extends
org.springframework.data.jpa.repository.JpaRepository, which uses
implementation from
org.springframework.data
You can use something like cron, windows task scheduler (if you're running
windows), pgagent (or jpgagent), pg_cron, or any of the others.
I personally use (and wrote) jpgagent; at the time, pgagent was the only
alternative and it was very unstable for me. Here is the link if
interested:
Adam Sjøgren <a...@novozymes.com> wrote:
> Meanwhile, I can report that I have upgraded from 9.3.14 to 9.3.17 and
> the errors keep appearing in the log.
Just a quick update with more observations:
All the errors in the postgres.log from one of the tables are triggered
by a stor
Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
> ADSJ (Adam Sjøgren) wrote:
>
>> Our database has started reporting errors like this:
>>
>> 2017-05-31 13:48:10 CEST ERROR: unexpected chunk number 0 (expected 1)
>> for toast value 14242189 in pg_toas
> Are you using any tablespaces other than pg_default?
> I can only replicate the issue when using separately mounted
> tablespaces.
No, we are using pg_default only.
I hope your finding can be reproduced, it would be really interesting to
see.
Best regards,
Adam
--
"Lägg ditt liv i min hand
> ...identify the damaged row(s). And then do some
> good update to create a new version.
Yes - we started by doing a quick pg_dump, but I guess we should switch
to something that can tell us exactly what rows hit the problem.
Does anyone have a handy little script lying around?
Thanks for the responses
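For what it's worth, a rough sketch of such a script, assuming a table big_table with primary key id and a TOASTed column payload (all names are placeholders):

DO $$
DECLARE
    r record;
    dummy int;
BEGIN
    FOR r IN SELECT id FROM big_table LOOP
        BEGIN
            -- force the suspect column to be detoasted for this row
            SELECT length(payload::text) INTO dummy FROM big_table WHERE id = r.id;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'damaged row: id = %, error: %', r.id, SQLERRM;
        END;
    END LOOP;
END
$$;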
efore.
We will be upgrading to PostgreSQL 9.3.17 during the weekend, but I'd like to
hear
if anyone has seen something like this, or has some ideas of how to
investigate / what the cause might be.
Best regards,
Adam
--
"Lägg ditt liv i min hand Adam Sjøgren
Säl
>
> there's also pg_agent which is a cron-like extension, usually bundled with
> pg_admin but also available standalone
>
> https://www.pgadmin.org/docs4/dev/pgagent.html
>
>
> --
> john r pierce, recycling bits in santa cruz
>
In addition to that, there is also jpgAgent:
? Is there a
way to get pg_dump to output the necessary statements such that
running the dump back through psql results in the same privileges that
I started with?
I am using version 9.5.6.
Thanks very much,
--
Adam Mackler
Do you have non-overlapping check constraints on the partitions by date, to
allow the planner to exclude the child tables from needing to be looked at?
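For example, constraint exclusion with inheritance partitioning relies on checks like these (table names and date ranges invented):

CREATE TABLE measurements (logdate date, reading numeric);

CREATE TABLE measurements_2017_10 (
    CHECK (logdate >= DATE '2017-10-01' AND logdate < DATE '2017-11-01')
) INHERITS (measurements);

CREATE TABLE measurements_2017_11 (
    CHECK (logdate >= DATE '2017-11-01' AND logdate < DATE '2017-12-01')
) INHERITS (measurements);

-- with constraint_exclusion = partition (the default), a query such as
--   SELECT * FROM measurements WHERE logdate = DATE '2017-11-15'
-- only has to scan measurements_2017_11.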
Oh sorry, I misunderstood. I didn't realize you meant database users and not
an application user table implemented in Postgres. I'll let others answer
that then, because I'm not aware of a way to do that.
Whoops, accidentally sent this to only Pawan instead of the list:
>
>
Hey there, so I would highly suggest you avoid arbitrary password strength
policies like that. I wrote a library for my company which we use for
password strength estimation, but it is written in Java. I've been
thinking about
someone more experienced help me to decode this value? I am talking
about this part of code:
https://github.com/xstevens/decoderbufs/blob/master/src/decoderbufs.c#L527-L528
Cheers
Adam
I for one would love having something like this available. I also know
I've seen it discussed in the past: divorcing the physical column order from
the logical column order, which seems like it'd be useful here as well, so as
not to break the workflow of those who do use ordinal positions for columns.
On Tue, Dec 13, 2016 at 3:36 PM, George Weaver wrote:
> I've never used it but what about:
>
> https://developer.sugarcrm.com/2012/08/03/like-postgresql-
> and-want-to-use-it-with-sugarcrm-check-out-a-new-community-project/
>
> Cheers,
> George
Looks like not much came out of
correctly create my
application's database?
Many Thanks,
Adam.
On Thu, Sep 29, 2016 at 8:10 AM, Nguyễn Trần Quốc Vinh wrote:
> Dear,
>
> As it was recommended, we pushed our projects into github:
> https://github.com/ntqvinh/PgMvIncrementalUpdate.
>
> 1) Synchronous incremental update
> - For-each-row triggers are generated for all
On Mon, Sep 26, 2016 at 3:21 PM, Kevin Grittner wrote:
> On Mon, Sep 26, 2016 at 2:04 PM, Rakesh Kumar
> wrote:
>
> > Does PG have a concept of MV log, from where it can detect the
> > delta changes and apply incremental changes quickly.
>
> That
On Mon, Sep 26, 2016 at 2:35 PM, Rob Sargent wrote:
> Of course 9.5 is the current release so the answer is Yes, since 9.5
>
> It seems like there is some confusion about what we're talking about. I am
talking about incremental updates to a sort of "fake" materialized view
I require eagerly refreshed materialized views for my use case, which is
something Postgres does not currently support. I need my updates to a
table the view refers to, to be visible within the same transaction, and often it
is a single change to one row which will only affect a single row in the
view.
everything I need.
I'd really appreciate any help with this, as I'd love a better way to get
eagerly refreshed materialized views in Postgres rather than doing
everything manually as I have to now.
If I can provide any more info please let me know.
Thanks,
-Adam
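For context, the kind of manual workaround being described usually looks something like this: a summary table kept current by a row-level trigger, so the change is visible in the same transaction. All names here are invented, ON CONFLICT needs 9.5+, and an UPDATE that moves a row to a different parent would also need the old parent refreshed:

CREATE TABLE line_item (
    invoice_id int     NOT NULL,
    amount     numeric NOT NULL
);

CREATE TABLE invoice_total (
    invoice_id int PRIMARY KEY,
    total      numeric NOT NULL DEFAULT 0
);

CREATE FUNCTION refresh_invoice_total() RETURNS trigger AS $$
DECLARE
    v_invoice int;
BEGIN
    -- pick the invoice touched by this row change
    IF TG_OP = 'DELETE' THEN
        v_invoice := OLD.invoice_id;
    ELSE
        v_invoice := NEW.invoice_id;
    END IF;

    -- make sure a summary row exists, then recompute it
    INSERT INTO invoice_total (invoice_id) VALUES (v_invoice)
    ON CONFLICT (invoice_id) DO NOTHING;

    UPDATE invoice_total
    SET total = COALESCE((SELECT sum(amount) FROM line_item
                          WHERE invoice_id = v_invoice), 0)
    WHERE invoice_id = v_invoice;

    RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER line_item_totals
AFTER INSERT OR UPDATE OR DELETE ON line_item
FOR EACH ROW EXECUTE PROCEDURE refresh_invoice_total();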
Yes, that very well could happen because the size of the table changed, as
well as the stats being more accurate now. Just because you have a seq scan
doesn't mean the planner is making a bad choice.
I have wondered if there were any plans to enhance fkey support for
partitioned tables now that more work is being done on partitioning (I know
there has been a large thread on declarative partitioning on hackers,
though I haven't followed it too closely).
Foreign keys are all done through
for the state of the project. The latter possibly could if the community
gets on board.
Thanks,
-Adam
Agreed with Joshua: a single SSD will have way more performance than all 15
of those for random I/O for sure, and probably be very close on sequential.
That said, a RAID controller able to handle all 15 drives (or multiple
controllers that each handle a subset of the drives) is likely to be more expensive than a
Just wondering what others have done for using enum or uuid columns in
exclusion constraints?
I have a solution now, but I just wanted to see what others have ended up
doing as well, and see if what I'm doing is sane. If I'm doing something
unsafe, or you know of a better way, please chime in.
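One common workaround, sketched here with invented names, is to cast inside the constraint expression so btree_gist's text support can be used where no native uuid or enum opclass is available:

CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE room_booking (
    room_id uuid      NOT NULL,
    during  tstzrange NOT NULL,
    -- cast to text so btree_gist's "=" support covers the uuid column
    EXCLUDE USING gist ((room_id::text) WITH =, during WITH &&)
);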
Here is an example that works in a single query. Since you have two
different orders you want the data back in, you need to use subqueries to
get the proper data back, but it works, and is very fast.
CREATE TEMPORARY TABLE foo AS
SELECT generate_series as bar
FROM generate_series(1, 100);
Is there a reason you can't do that now with a limit 1/order by/union all?
Just have it ordered one way on the first query and the other on the
bottom. That will give you two rows that are the first / last in your set
based on whatever column you order on.
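A minimal sketch of that, reusing the foo table from the example above:

(SELECT bar FROM foo ORDER BY bar ASC  LIMIT 1)
UNION ALL
(SELECT bar FROM foo ORDER BY bar DESC LIMIT 1);
-- returns 1 and 100: the first and last rows for that ordering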
On May 18, 2016 8:47 PM, "Tom Smith"
> On Tue, May 17, 2016 at 1:54 PM, Raymond O'Donnell wrote:
> > Having said all that, I've rarely had any trouble with pgAdmin 3 on
> > Windows 7 and XP, Ubuntu and Debian; just a very occasional crash (maybe
> > one every six months).
So just to chime in, it has not been at all
Yes, foreign keys are implemented using triggers. Here is a blog post
explaining a little more:
http://bonesmoses.org/2014/05/14/foreign-keys-are-not-free/
I would assume it still has to do a seq scan on every referencing
table (even if it's empty) for every record, since there are no indexes on
the referencing columns.
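The usual remedy, sketched with invented names, is to index the referencing column yourself, since PostgreSQL only creates an index automatically on the referenced side:

CREATE TABLE product (
    product_id int PRIMARY KEY
);

CREATE TABLE order_line (
    order_line_id serial PRIMARY KEY,
    product_id    int NOT NULL REFERENCES product
);

-- PostgreSQL does not create this automatically; without it, every
-- DELETE or key UPDATE on product has to scan order_line.
CREATE INDEX order_line_product_id_idx ON order_line (product_id);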
Hello Sridhar,
Have you tried the 'coalesce' function to handle the nulls?
Kind Regards,
Adam Pearson
From: pgsql-general-ow...@postgresql.org <pgsql-general-ow...@postgresql.org>
on behalf of Sridhar N Bamandlapally <sridhar@gmail.com>
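For example (table and column names invented), coalesce substitutes a fallback value wherever NULL appears:

SELECT coalesce(phone, 'n/a') AS phone
FROM customer;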
>It is not difficult to simulate column store in a row store system if
>you're willing to decompose your tables into (what is essentially)
>BCNF fragments. It simply is laborious for designers and programmers.
I could see a true column store having much better performance than
tricking a row
Rob,
I understand that if I were to replicate the logic in that view for every
use case I had for those totals, this would not be an issue. But that would
very much complicate some of my queries to the point of absurdity if I
wanted to write them in a way which would push everything down properly.
I responded yesterday, but it seems to have gotten caught up because it was
too big with the attachments... Here it is again.
Sorry about not posting correctly, hopefully I did it right this time.
So I wanted to see if SQL Server (2014) could handle this type of query
differently than Postgres
to contain that logic within a view of some sort though, as
a bunch of other stuff is built on top of that. Having to push that
aggregate query into all of those other queries would be hell.
Thanks,
-Adam
On Tue, Mar 8, 2016 at 5:17 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Adam Br
system, but it'll do. Please see attached.
Thanks,
-Adam
CREATE SCHEMA test;
SET search_path = 'test';
CREATE TABLE header (
header_id serial primary key,
description text not null,
amount numeric not null
);
CREATE TABLE detail_1 (
detail_1_id serial primary key,
header_id integer
On 24 February 2016 at 20:27, Stephen Frost wrote:
> Yeah, looks like a bug to me. My gut reaction is that we're pulling up
> a subquery in a way that isn't possible and that plan shouldn't be
> getting built/considered.
Thanks - shall I go ahead and submit a bug report?
>
ALL TABLES IN SCHEMA public TO test;
SET ROLE test;
SELECT * FROM b;
UPDATE b SET text = 'ONE' WHERE id = 1;
gives error:
psql:/tmp/test.sql:26: ERROR: plan should not reference subplan's variable
Is this a bug or am I doing something wrong?
Any help much appreciated,
Adam
reference as a foreign key.
In the case of a many-to-many situation, I prefer to use a two-column
composite key. In the case of a many-to-many, I've never run into a case
where I needed to reference a single row in that table without knowing
about both sides of that relation.
Just my $0.02
-Adam
something that did the same type of
validations / inserts, but did it row by row in a cursor (not written by
me), and that took a good 5 min (if I remember correctly) to process 20,000
lines. This was also on a server running SQL Server on a 32-core machine.
Anyways, good luck!
-Adam
On Fri, Jul 24
/camwjz6gf9tm+vwm_0ymqypi4xk_bv2nyaremwr1ecsqbs40...@mail.gmail.com
Enjoy life,
Adam
--
Adam Hooper
+1-613-986-3339
http://adamhooper.com
On Wed, Apr 15, 2015 at 9:57 AM, Andreas Joseph Krogh
andr...@visena.com wrote:
On Wednesday, 15 April 2015 at 15:50:36, Adam Hooper
a...@adamhooper.com wrote:
On Wed, Apr 15, 2015 at 4:49 AM, Andreas Joseph Krogh
andr...@visena.com wrote:
In other words: does vacuumlo cause disk space
Here you are:
do $$
declare
job_id int;
begin
/* add a job and get its id: */
insert into
pgagent.pga_job (
jobjclid
, jobname
)
values
(
1 /*1=Routine Maintenance*/
, 'DELETE_NAMES' /* job name */
)
returning
-1252/ISO-8859-1/ISO-8859-15. That's very common. MySQL in
particular is a ghastly Internet denizen, in that it defaults to
ISO-8859-15 in an apparent crusade against globalization and modern
standards.
Enjoy life,
Adam
--
Adam Hooper
+1-613-986-3339
http://adamhooper.com
.
Enjoy life,
Adam
--
Adam Hooper
+1-613-986-3339
http://adamhooper.com
On Tue, Feb 3, 2015 at 3:12 PM, Bill Moran wmo...@potentialtech.com wrote:
On Tue, 3 Feb 2015 14:48:17 -0500
Adam Hooper a...@adamhooper.com wrote:
It's doable for us to VACUUM FULL and add a notice to our website
saying, you can't upload files for the next two hours. Maybe that's
a better
Enjoy life,
Adam
--
Adam Hooper
+1-613-986-3339
http://adamhooper.com
On Tue, Feb 3, 2015 at 12:58 PM, Bill Moran wmo...@potentialtech.com wrote:
On Tue, 3 Feb 2015 10:53:11 -0500
Adam Hooper a...@adamhooper.com wrote:
This plan won't work: Step 2 will be too slow because pg_largeobject
still takes 266GB. We tested `VACUUM FULL pg_largeobject` on our
staging
On Tue, Feb 3, 2015 at 2:29 PM, Bill Moran wmo...@potentialtech.com wrote:
On Tue, 3 Feb 2015 14:17:03 -0500
Adam Hooper a...@adamhooper.com wrote:
My recommendation here would be to use Slony to replicate the data to a
new server, then switch to the new server once the data has synchronized
for the response and insight.
--
Adam Mackler
developers to change the function's name, but that
could break other code that currently uses it, and even if it didn't,
I would prefer something less intrusive on that project.
Thanks very much for any ideas about this,
--
Adam Mackler
.
This is on an i5 desktop with 16 GB of RAM and an SSD.
This is a pretty good test though, as it's a real-world use case (even if
the data was generated with pgbench). We now know that area needs some
work before it can be used for anything more than a toy database.
Thanks,
-Adam
On Thu, Oct 2, 2014 at 7:52
Ended up running for 28 min, but it did work as expected.
On Thu, Oct 2, 2014 at 10:27 AM, Adam Brusselback adambrusselb...@gmail.com
wrote:
Testing that now. Initial results are not looking too performant.
I have one single table which had 234575 updates done to it. I am rolling
back
will be easy, as it offers the same functions as JSON, afaik.
Sent: Tuesday, 30 September 2014 at 21:16
From: Adam Brusselback adambrusselb...@gmail.com
To: Felix Kunde felix-ku...@gmx.de
Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
Subject: Re: [GENERAL] table versioning
Felix, I'd love to see a single, well maintained project. For example, I
just found yours, and gave it a shot today after seeing this post. I found
a bug when an update command is issued, but the old and new values are all
the same. The trigger will blow up. I've got a fix for that, but if we
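The shape of that fix is an early-exit guard at the top of the versioning trigger function; a rough sketch with an invented function name, not the project's actual code:

CREATE OR REPLACE FUNCTION log_table_change() RETURNS trigger AS $$
BEGIN
    -- skip updates where nothing actually changed
    -- (note: whole-row comparison needs an equality operator for every column type)
    IF TG_OP = 'UPDATE' AND NEW IS NOT DISTINCT FROM OLD THEN
        RETURN NEW;
    END IF;
    -- ... the existing versioning logic (write the row into the history table) ...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;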
using 'CREATE TYPE' and excludes domains. If I am mistaken on
that point I would be grateful to learn of that mistake.
Thanks again,
--
Adam Mackler
.
Thanks again,
--
Adam Mackler
of the
query. I've also tried anyelement, but that does not work even with a
cast.
Thank you,
--
Adam Mackler
Hi,
we have had a problem since we migrated from 8.4 to 9.1.
When we run ANALYSE VERBOSE; (stats on all databases, with 500 tables
and 1 TB of data in all tables)
we now get this message:
org.postgresql.util.PSQLException: ERROR: out of shared memory
Hint: You might need to increase max_locks_per_transaction.
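The parameter the hint refers to is max_locks_per_transaction in postgresql.conf; the value below is only an example (the default is 64, and changing it requires a restart):

# postgresql.conf
max_locks_per_transaction = 256   # default is 64; sizes the shared lock table, restart required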
Hi,
thank you both for your quick answers.
On 12/05/2014 15:29, Tom Lane wrote:
Merlin Moncure mmonc...@gmail.com writes:
On Mon, May 12, 2014 at 7:57 AM, Souquieres Adam
adam.souquie...@axege.com wrote:
when we run ANALYSE VERBOSE; (stats on all databases, with 500 tables and
1 TB
just hit more than 3,200 locks owned
by the same transaction!
Can you explain what the difference is between 8.4 and 9.1 on this point,
please?
Regards,
Adam
On 12/05/2014 15:33, Souquieres Adam wrote:
Hi,
thank you both for your quick answers.
On 12/05/2014 15:29, Tom Lane wrote
I just hit the 20k locks in pg_locks, on 18k different relations owned
by the same virtual transaction and PID.
I only have like 500 tables and like 2k indexes; I must be missing something.
On 12/05/2014 15:42, Tom Lane wrote:
Souquieres Adam adam.souquie...@axege.com writes:
ANALYSE VERBOSE
On 12/05/2014 16:24, Tom Lane wrote:
Souquieres Adam adam.souquie...@axege.com writes:
When I relaunch my ANALYSE VERBOSE, the pg_locks table grows quickly from 20
rows to more than 1,000 rows and is still growing; all the rows are owned
by the same virtual transaction and the same PID.
Hm. I
Along the lines of the equality operator: I have run into issues trying to
pivot a table/result set with a json type, due to what seemed to be the lack
of an equality operator.
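One workaround, sketched with an invented events table and json column doc, is to group on a casted representation, since json itself has no equality operator (jsonb, added in 9.4, does):

-- group/pivot on the text form of the json value
-- (note: the text form is sensitive to key order and whitespace)
SELECT doc::text, count(*)
FROM events
GROUP BY doc::text;

-- on 9.4 and later, cast to jsonb, which does have equality
SELECT doc::jsonb, count(*)
FROM events
GROUP BY doc::jsonb;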
On Nov 4, 2013 10:14 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Nov 1,
Where are you measuring the connections? From your app to PGBouncer, or
from PGBouncer to PostgreSQL?
If it is from your app to PGBouncer, that sounds strange, and like the app
is not properly releasing connections as it should. If it is from
PGBouncer to PostgreSQL, that sounds normal. I
For where you are measuring, everything looks normal to me.
Your application will make connections to the pooler as needed, and the
pooler will assign the application connection to a database connection it
has available in its pool. This gets rid of the overhead of creating a
brand new connection for every request.
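For context, a minimal pgbouncer.ini showing the pool that sits between the app and the database; hosts, names, and sizes below are placeholders:

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; server connections are handed out per transaction and then reused
pool_mode = transaction
; client (application-side) connections allowed
max_client_conn = 200
; actual PostgreSQL connections kept per database/user pair
default_pool_size = 20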
I thought this was interesting, and wanted to make sure I understood what
is going on, but the more tests I run the more confused I get.
if I take the exact set up outlined by Mosche I get the same results in 9.3
(as expected) , but if I insert one row before I run the sql the CTE is
executed and
It would help to include the explain(s). Did you ANALYZE after the insert? If
not, the planner probably still thought the table was empty (thus the
matching explain), but upon execution realized it had records and thus needed
to run the CTE.
I did not do an ANALYZE after the insert; I think the plan
the specifics of what I'm trying to. It
seems like there ought to be a way, but I haven't figured it out.
Thanks very much.
--
Adam Mackler
On Thu, Oct 10, 2013 at 10:42:47AM -0700, David Johnston wrote:
Adam Mackler-3 wrote
http://stackoverflow.com/questions/19237450/can-sql-view-have-infinite-number-of-rows-repeating-schedule-each-row-a-day
Not sure how you can state "But I'm willing to agree never to query such a
view"
http://kettle.pentaho.com/ works pretty well, too.
On Mon, Oct 7, 2013 at 11:39 AM, Michal TOMA i...@webmining-systems.com wrote:
Talend?
http://talend.com/
But usually all major ETL tools do work with any database including
PostgreSQL
On Monday 07 October 2013 17:54:36 Vick Khera wrote:
I almost always alias my tables by default with something short (Usually 1
- 3 characters), but not my subselects for an in list. In this case I
would do d1, d2, ps, and p for the different tables. I then do my best to
use the same alias in all my queries. I am also big on formatting the SQL
I'm using Excel. I needed to set the MAXVARCHARSIZE parameter in the
connection string to take care of my issue (MAXVARCHARSIZE=2048 for me).
That allowed the defined size of the field to equal the actual size.
Thanks everyone for your help!
Adam
From: Vincent Veyron vv.li...@wanadoo.fr
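For anyone searching later, that parameter goes straight into the ODBC connection string; everything here except MaxVarcharSize is a placeholder:

Driver={PostgreSQL Unicode};Server=dbhost;Port=5432;Database=mydb;Uid=report;Pwd=secret;MaxVarcharSize=2048;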
the type of
the record, it was a VarBinary. Is there a way to have all of the data
returned to the recordset? Thanks for any help.
Adam
)?
Thanks again.
Adam C. Falkenberg
Quality Engineer
Great Lakes Works
Phone: (313) 749 - 3758
Cell: (313) 910 - 3195
From: Bret Stern bret_st...@machinemanagement.com
To: Adam C Falkenberg acfalkenb...@uss.com,
Cc: pgsql-general@postgresql.org
Date: 09/17/2013 10:06 AM
Subject
With
SQL = "SELECT data FROM pg_largeobject WHERE loid = " & id & " ORDER BY pageno"
rs.Open SQL, conn
stream.Type = adTypeBinary
stream.Open
' Loop through the recordset and write the binary data to the stream
While Not rs.EOF
    stream.Write rs.Fields("data").Value
    rs.MoveNext
Wend
Adam
From
the planner itself?
Regards,
Adam
given a restriction based on
one of the other two tables:
adam=# explain select * from l1, l2, foreign1 where foreign1.a = l1.a and
foreign1.b = l2.a;
QUERY PLAN
Hello,
I’m in the process of writing a Postgres FDW that can interface with web
service endpoints. Certain FDW columns would act as web service parameters,
while others would be the output.
For example:
adam=# select * from bing where query = 'xbox';
query | url
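To make the idea concrete, the SQL side would look roughly like this once such a wrapper exists; webservice_fdw and its options are entirely hypothetical:

CREATE EXTENSION webservice_fdw;  -- hypothetical wrapper, not a real extension

CREATE SERVER bing_search
    FOREIGN DATA WRAPPER webservice_fdw
    OPTIONS (endpoint 'https://api.example.com/search');  -- placeholder URL

CREATE FOREIGN TABLE bing (
    query text,   -- pushed down as a web service parameter
    url   text    -- populated from the service's response
) SERVER bing_search;

SELECT url FROM bing WHERE query = 'xbox';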
On Thu, 2012-11-01 at 14:28 -0400, Daniel Popowich wrote:
I'm making this post here in hopes I may save someone from beating
their head against the wall like I did...
I am writing a custom Name Service Switch (NSS) module to take
advantage of already existing account information in a pg