Removing contents of data directory "data-default"
$
$ sysctl -a|grep semm
kern.ipc.semmsl: 512
kern.ipc.semmnu: 256
kern.ipc.semmns: 512
kern.ipc.semmni: 256
The system is running 9.4 just fine, and the kernel configuration
requirements shouldn't have changed for semaphores, should they?
On 2/4/2016 12:47, Tom Lane wrote:
> I wrote:
>> Karl Denninger <k...@denninger.net> writes:
>>> $ initdb -D data-default
>>> ...
>>> creating template1 database in data-default/base/1 ... FATAL: could not
>>> create semaphores: Invalid argument
On 2/4/2016 12:28, Tom Lane wrote:
> Karl Denninger <k...@denninger.net> writes:
>> $ initdb -D data-default
>> ...
>> creating template1 database in data-default/base/1 ... FATAL: could not
>> create semaphores: Invalid argument
>> DETAIL: Failed syst
-- and the rest of the base of the data store -- is)
--
Karl Denninger
k...@denninger.net
/Cuda Systems LLC/
smime.p7s
Description: S/MIME Cryptographic Signature
--
Karl Denninger
k...@denninger.net
/Cuda Systems LLC/
me to quit doing what I'm doing now and do something
else; the problem only rests in one place when it comes to enticing me
to do so -- money. :-))
--
Karl Denninger
k...@denninger.net
/Cuda Systems LLC/
of the tuple and convert it */
out now contains the binary (decoded) photo data. When done with it, call
PQfreemem(out) to release the memory that was allocated.
That's the rough outline -- see here:
http://www.postgresql.org/docs/current/static/libpq-exec.html
--
Karl Denninger
k...@denninger.net
On 5/9/2013 11:12 AM, Karl Denninger wrote:
On 5/9/2013 10:51 AM, Achilleas Mantzios wrote:
Take a look here first :
http://www.postgresql.org/docs/9.2/interactive/datatype-binary.html
then here :
http://www.dbforums.com/postgresql/1666200-insert-jpeg-files-into-bytea-column.html
On 5/9/2013 11:34 AM, Alvaro Herrera wrote:
Karl Denninger wrote:
To encode:
write_conn = Postgresql communication channel in your software that is
open to write to the table
char *out;
size_t out_length, badge_length;
badge_length = function-to-get-length-of(badge_binary_data
On 5/9/2013 12:08 PM, Nelson Green wrote:
Thanks Karl, but I'm trying to do this from a psql shell. I can't use
the C functions there, can I?
On Thu, May 9, 2013 at 11:21 AM, Karl Denninger k...@denninger.net
mailto:k...@denninger.net wrote:
On 5/9/2013 11:12 AM, Karl Denninger wrote
of a disruption
between the two systems you're virtually guaranteed to suffer data
corruption which is (much worse) rather likely to go undetected.
--
-- Karl Denninger
/The Market Ticker ®/ http://market-ticker.org
Cuda Systems LLC
-ing the second
instance with a different data directory structure, and when starting it
do so with a different data directory structure?
e.g. initdb -D data
and
initdb -D data2
And that as long as there are no collisions (e.g. port numbers) this
works fine?
--
-- Karl Denninger
/The Market
if the
machines are up from a standpoint of reachability on the network as well.
--
-- Karl Denninger
/The Market Ticker ®/ http://market-ticker.org
Cuda Systems LLC
There's no status update on the pgfoundry page indicating activity or
testing with the current releases.
Thanks in advance.
--
-- Karl Denninger
/The Market Ticker ®/ http://market-ticker.org
Cuda Systems LLC
unless the offset is
breached at which point it will emit an email to the submitting owner of
the job.
--
-- Karl Denninger
/The Market Ticker ®/ http://market-ticker.org
Cuda Systems LLC
IF THEY ARE NOT then it will probably work 95% of the time, and the
other 5% it will be unrecoverable. Be very, very careful -- the
snapshot must in fact snapshot ALL of the involved database volumes (log
data included!) at the same instant.
--
Karl Denninger
k...@denninger.net
/The Market Ticker/
On 5/28/2012 11:44 AM, Tom Lane wrote:
Karl Denninger k...@denninger.net writes:
I am attempting to validate the path forward to 9.2, and thus tried the
following:
1. Build 9.2Beta1; all fine.
2. Run a pg_basebackup from the current master machine (running 9.1) to
a new directory
parallel environment instead of trying to
attach a 9.2Beta1 slave to an existing 9.1 master? (and if so, why
doesn't the code complain about the mismatch instead of the bogus WAL
message?)
--
-- Karl Denninger
/The Market Ticker ®/ http://market-ticker.org
Cuda Systems LLC
On 5/27/2012 11:08 PM, Jan Nielsen wrote:
Hi Karl,
On Sun, May 27, 2012 at 9:18 PM, Karl Denninger k...@denninger.net
mailto:k...@denninger.net wrote:
Here's what I'm trying to do in testing 9.2Beta1.
The current configuration is a master and a hot standby at a
diverse
On 10/5/2010 2:12 PM, Chris Barnes wrote:
I would like to know if there is a way to configure 9 to do this.
I have 4 unique databases running on 4 servers.
I would like to have them replicate to a remote site for disaster
recovery.
I would like to consolidate these 4 database into one
On 10/3/2010 1:34 AM, Guillaume Lelarge wrote:
On 03/10/2010 07:07, Karl Denninger wrote:
On 10/2/2010 11:40 PM, Rajesh Kumar Mallah wrote:
I hope you checked point #11
http://wiki.postgresql.org/wiki/Streaming_Replication#How_to_Use
11. You can calculate the replication lag
On 10/3/2010 3:44 PM, Karl Denninger wrote:
On 10/3/2010 1:34 AM, Guillaume Lelarge wrote:
On 03/10/2010 07:07, Karl Denninger wrote:
On 10/2/2010 11:40 PM, Rajesh Kumar Mallah wrote:
I hope you checked point #11
http://wiki.postgresql.org/wiki/Streaming_Replication#How_to_Use
I'm trying to come up with an automated monitoring system to watch the
WAL log progress and sound appropriate alarms if it gets too far behind
for some reason (e.g. communications problems, etc.) - so far without
success.
What I need is some sort of way to compute a difference between the
master
On 10/2/2010 11:40 PM, Rajesh Kumar Mallah wrote:
I hope you checked point #11
http://wiki.postgresql.org/wiki/Streaming_Replication#How_to_Use
11. You can calculate the replication lag by comparing the
current WAL write location on the primary with the last WAL
location
I am playing with the replication on 9.0 and running into the following.
I have a primary that is running at a colo, and is replicated down to a
secondary here using SLONY. This is working normally.
I decided to set up a replication of the SLONY secondary onto my
sandbox machine to see what I
On 9/29/2010 8:55 PM, Jeff Davis wrote:
On Wed, 2010-09-29 at 20:04 -0500, Karl Denninger wrote:
Sep 29 19:58:54 dbms2 postgres[8564]: [2-2] STATEMENT: update post set
views = (select views from post where number='116763' and toppost='1') +
1 where number='116763' and toppost='1'
Sep 29 20
If you use Slony, expect it to lose the replication status.
I attempted the following:
1. Master and slaves on 8.4.
2. Upgrade one slave to 9.0. Shut it down, used pg_upgrade to perform
the upgrade.
3. Restarted the slave.
Slony appeared to come up, but said it was syncing only TWO tables
Uh, is there a way around this problem?
$ bin/pg_upgrade -c -d /usr/local/pgsql-8.4/data -D data -b
/usr/local/pgsql-8.4/bin -B bin
Performing Consistency Checks
-
Checking old data directory (/usr/local/pgsql-8.4/data) ok
Checking old bin directory
On 9/21/2010 10:16 PM, Bruce Momjian wrote:
Karl Denninger wrote:
Uh, is there a way around this problem?
$ bin/pg_upgrade -c -d /usr/local/pgsql-8.4/data -D data -b
/usr/local/pgsql-8.4/bin -B bin
Performing Consistency Checks
-
Checking old data directory
So I have myself a nice pickle here.
I've got a database which was originally created with SQL_ASCII for the
encoding (anything goes text fields)
Unfortunately, I have a bunch of data that was encoded in UTF-8 that's
in an RSS feed that I need to load into said database. iconv barfs all
Peter C. Lai wrote:
The double quotes aren't UTF-8; they're from people copying and pasting from
Microsoft stuff, which is WIN-1252. So try to use that with iconv instead of utf8
On 2010-08-16 12:40:03PM -0500, Karl Denninger wrote:
So I have myself a nice pickle here.
I've got a database which
Bruce Momjian wrote:
Craig Ringer wrote:
On 13/08/2010 9:31 PM, Bruce Momjian wrote:
Karl Denninger wrote:
I may be blind - I don't see a way to enable this. OpenSSL kinda
supports this - does Postgres' SSL connectivity allow it to be
supported/enabled?
What
I may be blind - I don't see a way to enable this. OpenSSL kinda
supports this - does Postgres' SSL connectivity allow it to be
supported/enabled?
- Karl
attachment: karl.vcf
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
Tom Duffey wrote:
Hi Everyone,
I have a table with several hundred million rows of timestamped
values. Using pg_dump we are able to dump the entire table to disk no
problem. However, I would like to retrieve a large subset of data
from this table using something like:
COPY (SELECT * FROM
This may better-belong in pgsql-sql but since it deals with a function
as opposed to raw SQL syntax I am sticking it here
Consider the following DBMS schema slice
Table public.post
Column | Type |
Has there been an update on this situation?
Koichi Suzuki wrote:
I understand the situation. I'll upload the improved code ASAP.
--
Koichi Suzuki
2010/2/11 Karl Denninger k...@denninger.net:
Will this come through as a commit on the pgfoundry codebase? I've
subscribed
2010/4/19 Karl Denninger k...@denninger.net:
Has there been an update on this situation?
Koichi Suzuki wrote:
I understand the situation. I'll upload the improved code ASAP.
--
Koichi Suzuki
2010/2/11 Karl Denninger k...@denninger.net:
Will this come through
and will
upload the fix shortly.
Sorry for inconvenience.
--
Koichi Suzuki
2010/2/8 Karl Denninger k...@denninger.net:
This may belong in a bug report, but I'll post it here first...
There appears to be a **SERIOUS** problem with using pg_compresslog and
pg_uncompresslog
it during TESTING of my archives - before I
needed them.
-- Karl Denninger
the checksums.
This is VERY BAD - if pg_compresslog is damaging the files in some
instances then ANY BACKUP TAKEN USING THEM IS SUSPECT AND MAY NOT
RESTORE!!
Needless to say this is a MAJOR problem.
-- Karl Denninger
Is there a way through the libpq interface to access performance data on
a query?
I don't see an obvious way to do it - that is, retrieve the amount of
time (clock, cpu, etc) required to process a command or query, etc
Thanks in advance!
--
--
Karl Denninger
k...@denninger.net
childrensjustice=# create table petition_bail like petition_white;
ERROR: syntax error at or near like
LINE 1: create table petition_bail like petition_white;
Huh?
Yes, the source table exists and obviously as postgres superuser
(pgsql) I have select permission on the parent.
--
Karl
Douglas McNaught wrote:
On Sat, Jul 19, 2008 at 9:02 PM, Karl Denninger [EMAIL PROTECTED] wrote:
childrensjustice=# create table petition_bail like petition_white;
ERROR: syntax error at or near like
LINE 1: create table petition_bail like petition_white;
It's not super-easy to see
with this setup and
it is very fast. Really quite amazing when you get right down to it.
The latest release of the PostgreSQL code markedly improved query
optimization, by the way. The performance improvement when I migrated
over was quite stunning.
Karl Denninger ([EMAIL PROTECTED])
http
frequent inserts and/or updates.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Richard Huxton wrote:
Karl Denninger wrote:
The problem is that I was holding the ts_vector in a column in the
table with a GIST index on that column. This fails horribly under
8.3; it appears to be ok on the reload but as there is a trigger on
updates any update or insert fails
I can reproduce this as I have the dump from before conversion and can
load it on a different box and make it happen a second time.
Would you like it on the list or privately?
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Richard Huxton wrote:
Karl Denninger wrote:
Richard
both ARE loaded on the system; is there a way to do that?
Thanks in advance
--
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http
be
that the change in configure requires a gmake clean; not sure)
In any event I have another machine and will get something more detailed
ASAP - I will also try the restore program and see if that works.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Scott Marlowe wrote:
On Sun, Mar 2, 2008
Scott Marlowe wrote:
On Sun, Mar 2, 2008 at 1:41 PM, Karl Denninger [EMAIL PROTECTED] wrote:
Ugh.
I am attempting to move from 8.2.6 to 8.3, and have run into a major
problem.
The build goes fine, the install goes fine, the pg_dumpall goes fine.
However, the reload does not. I do
Tom Lane wrote:
Karl Denninger [EMAIL PROTECTED] writes:
It looks like the problem had to do with the tsearch2 module that I have
in use in a number of my databases, and which had propagated into
template1, which meant that new creates had it in there.
The old tsearch2 module isn't
Joshua D. Drake wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Sun, 02 Mar 2008 15:46:25 -0600
Karl Denninger [EMAIL PROTECTED] wrote:
I'm not quite clear what I have to do in terms of if/when I can drop
the old tsearch config stuff and for obvious reasons (like not
running
- and on
what?
--
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
I don't know. How do I check?
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Alvaro Herrera wrote:
Karl Denninger wrote:
A manual Vacuum full analyze fixes it immediately.
But... shouldn't autovacuum prevent this? Is there some way to look in a
log somewhere and see
Tom Lane wrote:
Karl Denninger [EMAIL PROTECTED] writes:
But... shouldn't autovacuum prevent this? Is there some way to look in
a log somewhere and see if and when the autovacuum is being run - and on
what?
There's no log messages (at the default log verbosity anyway). But you
Steve Crawford wrote:
Karl Denninger wrote:
Are your FSM settings enough to keep track of the dead space you have?
I don't know. How do I check?
vacuum verbose;
Toward the bottom you will see something like:
...
1200 page slots are required to track all free space.
Current
Scott Marlowe wrote:
On 8/28/07, Karl Denninger [EMAIL PROTECTED] wrote:
Am I correct in that this number will GROW over time? Or is what I see
right now (with everything running ok) all that the system
will ever need?
They will grow at first to accommodate your typical load
to the
OR and that didn't work; it excluded all of the NULL records)
--
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
things about my 7.4.1 DBMS is that I do have
significant amounts of binary data stored in the dbms itself, and in
addition I have tsearch loaded.
Any ideas?
--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant Kids Rights Activist
http://www.denninger.net  My home on the net - links
On Thu, Feb 03, 2005 at 01:03:57PM -0600, Karl Denninger wrote:
Hi folks;
Trying to move from 7.4.1 to 8.0.1
All goes well until I try to reload after installation.
Dump was done with the 8.0.1 pg_dumpall program
On restore, I get thousands of errors on the console, and of course
and cfg.oid= $2 order by
lt.tokid desc;
Ai!
A reindex did nothing.
What did I miss? Looks like there's something missing, but what?!
--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant Kids Rights Activist
http://www.denninger.net  My home on the net - links to everything I do
with the same
error; it looks like something is badly mangled internally in the tsearch2
module... even though it DOES appear that it loaded properly.
--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant Kids Rights Activist
http://www.denninger.net  My home on the net - links to everything I
to do so
without the tsearch2.sql stuff. I can then reload the tsearch2.sql
functions and re-create the indices.
Sound plausible?
-
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant Kids Rights Activist
http://www.denninger.net  My home on the net - links to everything I do!
http
and indices, but that's not a big deal.
All fixed... thanks to the pointer to the OID issue, that got me on the
right track.
--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant Kids Rights Activist
http://www.denninger.net  My home on the net - links to everything I do!
http
On Thu, Feb 03, 2005 at 06:59:55PM -0700, Michael Fuhr wrote:
On Thu, Feb 03, 2005 at 06:44:55PM -0600, Karl Denninger wrote:
As it happens, there's an untsearch2.sql script in the contrib directory.
That reminds me: it would be useful if all contributed modules had
an unmodule.sql file
On Thu, Feb 03, 2005 at 10:20:47PM -0500, Tom Lane wrote:
Karl Denninger [EMAIL PROTECTED] writes:
I agree with this - what would be even better would be a way to create
'subclasses' for things like this, which could then be 'included' easily.
We could decree that a contrib module's script