> 2) "tcp:postgresql://localhost?user=myuser=password" looks like
>
> "tcp:postgresql://localhost?user=myuser=my"
>
> and the password is parsed up to the &, and you also end up with an extra
> parameter pwd
Perhaps it helps to URL-encode the & in the password as %26?
Alban Hertroys
--
There is
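To illustrate the URL-encoding suggestion above, a minimal sketch in Python; the connection string and password here are made up, not from the original report:

```python
from urllib.parse import quote

# Hypothetical password containing '&', which would otherwise split the
# query string and produce a spurious extra parameter.
password = "my&secret"
encoded = quote(password, safe="")   # '&' becomes '%26'
conninfo = f"tcp:postgresql://localhost?user=myuser&password={encoded}"
print(conninfo)  # -> tcp:postgresql://localhost?user=myuser&password=my%26secret
```

The same applies to any other reserved character in the password (`?`, `=`, `/`).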
On Wed, 2019-08-28 at 21:44 +0530, Vijaykumar Jain wrote:
> If I change wal_level back to replica, will it corrupt the WAL? Because it
> will then contain different information (or a different format of data?)
That's why you have to restart the server when you change that
parameter. This way, there will be a
On Wed, 2019-08-28 at 12:27 -0600, Michael Lewis wrote:
> > You can do almost as good by setting "synchronous_commit = off",
> > and that is crash-safe.
>
> It seems like it depends on your definition of crash-safe. Data loss
> can occur but not data corruption, right?
Right.
> Do you know any
The server was running Moodle. The slow load time was noticed when loading
a quiz containing multiple images. All Apache log entries showed 6 seconds,
or a multiple of 6, for how long it took to retrieve each image.
Interestingly, if I did a wget, on the server, to the image link (which was
On 2019-08-27 08:16:08 -0700, Adrian Klaver wrote:
> Django takes Postgres as its reference database which makes things easier,
> especially when you add in
> contrib.postgres(https://docs.djangoproject.com/en/2.2/ref/contrib/postgres/)
Looks nice.
hp
--
_ | Peter J. Holzer|
On 2019-08-27 16:04:02 +0100, Daniele Varrazzo wrote:
> Using the Django ORM to create complex queries is a joy (especially
> nesting subqueries),
Not for me. I usually know what SQL I want to execute. Then I have to
convert that SQL into a weird[1] and limited query language composed of
method
On Tue, Aug 27, 2019 at 9:45 PM Laurenz Albe
wrote:
> Holtgrewe, Manuel wrote:
> > Switching off fsync leads to a drastic time improvement but still
> > higher wall-clock time for four threads.
>
> Don't do that unless you are ready to start from scratch with a new
> "initdb" in the case of a
Hello Team,
wal_level = logical
wal_level = replica
As per docs,
wal_level determines how much information is written to the WAL. The
default value is replica, which writes enough data to support WAL archiving
and replication, including running read-only queries on a standby server.
minimal
On Wed, 28 Aug 2019 at 01:02, Ron wrote:
> We did something similar to that, except all the columns were in one
> single table. It wasn't a data warehouse, though: the RDBMS we used could
> be coerced into using a date index when large ranges were needed in detail
> tables by joining it to
Adrian Klaver writes:
> On 8/28/19 5:36 AM, francis picabia wrote:
>> I had no clue the database was the issue because I
>> had the minimal
>> log_duration = on
>> log_line_prefix = '<%t>'
>> With those settings all queries seen were roughly 1ms
>>
>> I need this log to show the true time it
On 8/28/19 5:36 AM, francis picabia wrote:
Recently had a problem where autovacuum was accidentally left off
and the database took 6 seconds for every task from PHP.
I had no clue the database was the issue because I
had the minimal
log_duration = on
log_line_prefix = '<%t>'
With those
On 8/27/19 6:18 PM, Egashira, Yusuke wrote:
Hi, Giuseppe,
Thanks for responding to my question!
It seems to me that the ECPG documentation does not allow specifying a
username and/or password in the connection string. The correct syntax
should be:
EXEC SQL CONNECT TO
select sum(case when extract(dow from t.d) in (1,2,3,4,5) then 1 else 0
       end) * 8 as hours
  from generate_series(current_date::date, (current_date + '10
       days'::interval), '1 day'::interval) as t(d)
This calculates the working days/hours between 2 dates. You can make your
first/last day of
Sonam Sharma writes:
> Is there any option to run reindex or vacuum in background?
> Every time the session gets logged off in between.
If your session is getting killed by a network idle-time timeout,
you could probably prevent that with suitable adjustment of the
server's TCP-keepalive
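The keepalive parameters alluded to above can be set server-side; a sketch with made-up values (requires superuser, and real values depend on the network timeout you are fighting):

```sql
-- hypothetical values; takes effect for new connections after a reload
ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- seconds of idle before the first probe
ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- seconds between probes
ALTER SYSTEM SET tcp_keepalives_count = 5;      -- unanswered probes before the connection is dropped
SELECT pg_reload_conf();
```

The idle time just needs to be shorter than whatever firewall or NAT timeout is killing the session.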
On Tue, Aug 27, 2019 at 11:59 PM Sonam Sharma wrote:
> Is there any option to run reindex or vacuum in background?
> Every time the session gets logged off in between.
>
There is not - though you can make your auto-vacuum parameters super
aggressive.
I'm having trouble imagining a scenario
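"Super aggressive" could look something like the following; these values are purely illustrative, and sensible tuning depends on table sizes and write rates:

```sql
-- hypothetical, deliberately aggressive settings; reload to apply
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.01;  -- vacuum after ~1% of rows change
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 0;       -- don't throttle the workers
ALTER SYSTEM SET autovacuum_naptime = '10s';             -- check tables far more often
SELECT pg_reload_conf();
```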
Recently had a problem where autovacuum was accidentally left off
and the database took 6 seconds for every task from PHP.
I had no clue the database was the issue because I
had the minimal
log_duration = on
log_line_prefix = '<%t>'
With those settings all queries seen were roughly 1ms
I need
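To make Postgres itself report the true per-statement time, a sketch of logging settings that might help (the threshold value is an assumption, not a recommendation):

```
# postgresql.conf -- sketch; log each statement together with its duration
log_min_duration_statement = 0   # 0 logs every statement; raise (ms) in production
log_line_prefix = '<%t> '        # keep the timestamp prefix already in use
```

Unlike log_duration, log_min_duration_statement logs the statement text alongside its duration, so slow queries can be identified directly.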
On Wed, Aug 28, 2019 at 12:27 AM stan wrote:
> Any thoughts as to the best way to approach this?
I've written a couple of functions to compute working hours depending
on a possible per-day hour template.
Another possible implementation besides the other proposed solutions.
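As a cross-check for any SQL implementation, the same month-level calculation can be sketched in Python; the 8-hours-per-weekday rule is an assumption, and holidays are ignored:

```python
import calendar
import datetime

def working_hours(year: int, month: int, hours_per_day: int = 8) -> int:
    """Hours in the Mon-Fri days of a month (holidays ignored -- a simplification)."""
    days_in_month = calendar.monthrange(year, month)[1]
    weekdays = sum(
        1
        for day in range(1, days_in_month + 1)
        if datetime.date(year, month, day).weekday() < 5  # 0=Mon .. 4=Fri
    )
    return weekdays * hours_per_day

print(working_hours(2019, 8))  # -> 176 (August 2019 has 22 weekdays)
```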
On Tue, Aug 27, 2019 at 5:59 PM Laurenz Albe wrote:
>
> On Tue, 2019-08-27 at 12:00 +1000, Simon Kissane wrote:
> > We have an application that works fine with Postgres 9.6, but fails
> > with this error when we try installing it against 11.5
> >
> > I simplified the problem down to the following
On 28/08/2019 at 09:43, Luca Ferrari wrote:
I don't want to be pedantic, but I would have tried with a single
change at a time.
And my bet is: the local file would do the trick (i.e., it is a weird
share problem).
You're not, don't worry. This process is quite important in our workflow
(not
On Wed, Aug 28, 2019 at 9:09 AM Arnaud L. wrote:
> OK, so this was enough for last night's schedule to run without problem.
> I still don't get it so I'm not satisfied with this solution, but at
> least it works.
> I'll keep the list informed if something new arises.
I don't want to be pedantic,
Thanks, will try both answers.
On Wed, Aug 28, 2019, 12:38 PM ROS Didier wrote:
> Hi
>
> It is possible to use background workers with the pg_background extension.
>
>
>
> Best Regards
>
> Didier ROS
>
> EDF
>
>
>
> *From:* sonams1...@gmail.com [mailto:sonams1...@gmail.com]
> *Sent:* Wednesday
On 27/08/2019 at 13:17, Arnaud L. wrote:
I moved the offending line to the end of the script, so it will run some
minutes later; maybe this will be enough.
OK, so this was enough for last night's schedule to run without problem.
I still don't get it so I'm not satisfied with this solution,
Hi
It is possible to use background workers with the pg_background extension.
Best Regards
Didier ROS
EDF
From: sonams1...@gmail.com [mailto:sonams1...@gmail.com]
Sent: Wednesday 28 August 2019 08:59
To: pgsql-general
Subject:
Is there any option to run reindex or vacuum in background?
Every
Hi,
if you have access to the OS, then you have plenty of options.
Else, I think pg_cron might do the job
regards,
fabio pardi
On 28/08/2019 08:58, Sonam Sharma wrote:
> Is there any option to run reindex or vacuum in background?
> Every time the session gets logged off in between.
> On Aug 27, 2019, at 11:27 PM, stan wrote:
>
> I am just starting to explore the power of PostgreSQL's time and date
> functionality. I must say they seem very powerful.
>
> I need to write a function that, given a month and a year as input, returns
> the "work hours" in that month. In
Is there any option to run reindex or vacuum in background?
Every time the session gets logged off in between.