ail address
from mailing lists.
Procedures to verify that an email address works and to administer its use
under rules like CAN-SPAM cannot exist solely within the database itself.
And as others have noted, what makes for a 'well-formed' email address has
always been a bit complicated.
--
Mike Nolan
this should keep the low level disk structure the same,
> but if this has corrupted the files I can drop, dump and restore, in which
> case how do I 'drop' the DB without postgres running?
>
> Ta,
>
> Martin.
>
Was the server you were backing up shut down or in backup mode when you did
the 'dd' copy?
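For reference, a filesystem-level copy is only consistent if the server is stopped or in backup mode while it runs. A minimal sketch of the latter (this uses the older exclusive-backup API, removed in PostgreSQL 15; the label is arbitrary):

```sql
SELECT pg_start_backup('dd copy');
-- ...now run the dd/rsync copy of the data directory at the OS level...
SELECT pg_stop_backup();  -- ends backup mode and archives the needed WAL
```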
--
Mike Nolan
I also have some pre-defined percentage functions, they check the
denominator and return null if it is zero, to avoid 'divide by zero'
errors.
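A guarded helper along those lines might look like this (the name, types, and rounding are assumptions, not the poster's actual function):

```sql
CREATE OR REPLACE FUNCTION safe_pct(num numeric, den numeric)
RETURNS numeric AS $$
  SELECT CASE
           WHEN den IS NULL OR den = 0 THEN NULL  -- avoid 'divide by zero'
           ELSE round(100.0 * num / den, 2)
         END;
$$ LANGUAGE sql IMMUTABLE;
```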
--
Mike Nolan
On Sun, Apr 16, 2017 at 11:37 AM, Melvin Davidson <melvin6...@gmail.com>
wrote:
>
>
> On Sun, Apr 16, 2017 at 12:23 PM
In case it wasn't clear, the sample data was 3 rows of data. (There are
actually around 890K rows in the table pgfutter built from the JSON file.)
--
Mike Nolan
row)
The fact that the null values were stripped out is not an issue here.
But,
uscf=> insert into goldmast_test select * from
json_populate_record(NULL::"goldmast_test", (select * from gold1604_test
limit 2) )
uscf-> \g
ERROR: more than one row returned by a subquery used as an expression
Is there a way to get around the one row per subquery issue?
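One common way around it is to move the subquery into the FROM clause, where it may return any number of rows (the JSON column name `doc` is a guess, since the sample schema isn't shown):

```sql
INSERT INTO goldmast_test
SELECT (json_populate_record(NULL::goldmast_test, g.doc)).*
FROM (SELECT doc FROM gold1604_test LIMIT 2) AS g;
```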
--
Mike Nolan
On Sun, Apr 10, 2016 at 2:30 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Sat, Apr 9, 2016 at 9:48 PM, Michael Nolan <htf...@gmail.com> wrote:
>
>>
>> 2nd Followup: It turns out that loading a table from a JSON string is
>> more complic
to the table into which the
inserts are made, an insert could fail or result in incorrect data.
--
Mike Nolan
available to app developers.
My next task is to find out if validating and importing a JSON file into a
table is as easy as exporting a table in JSON turned out to be. Thanks for
the help.
--
Mike Nolan
.
--
Mike Nolan
-null?
--
Mike Nolan
no...@tssi.com
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
er the years that it is far easier to write a short PHP or
Perl program to do tasks like this. Much easier to debug, and the speed
improvement by using SQL is not important for 200,000 records.
--
Mike Nolan
. If it represented, say, Hillary
Clinton's deleted email, recovering that data might be more valuable to
some people than the data that was not deleted.
--
Mike Nolan
been working on a project that requires MySQL, their
documentation is far inferior.)
--
Mike Nolan
no...@tssi.com
(some_column) from (
select extract ('dow' from some_date) as dow, some_number from some_table
union select generate_series(0,6) as dow, null as some_number) as x
group by 1 order by 1
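Filled out, the padding idiom above might read as follows (the aggregate and the leading SELECT list are assumptions, since the first line of the original was cut off):

```sql
SELECT dow, avg(some_number) AS avg_value
FROM (
  SELECT extract('dow' FROM some_date) AS dow, some_number
  FROM some_table
  UNION
  SELECT generate_series(0, 6) AS dow, NULL AS some_number  -- pad so all 7 days appear
) AS x
GROUP BY 1
ORDER BY 1;
```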
--
Mike Nolan
no...@tssi.com
On Mon, Jul 6, 2015 at 5:50 PM, David G. Johnston
david.g.johns...@gmail.com wrote:
On Mon, Jul 6, 2015 at 6:16 PM, Michael Nolan htf...@gmail.com wrote:
But you can see it won't give correct results since (for example) Mondays
with no new users will not be counted in the average as 0
3.
--
Mike Nolan
I've needed to
redefine the type for a column, I've made a dump, edited the dump file to
change the type and then renamed the table and reloaded it. That's usually
several orders of magnitude faster.
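For comparison, the in-place route rewrites the whole table under an exclusive lock, which is what makes it slow on big tables (the names here are invented):

```sql
ALTER TABLE big_table
  ALTER COLUMN amount TYPE numeric(12,2)
  USING amount::numeric(12,2);  -- forces a full table rewrite
```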
--
Mike Nolan
no...@tssi.com
are the tables in pg_dumpall
files?
--
Mike Nolan
On Wed, May 20, 2015 at 12:40 PM, David G. Johnston
david.g.johns...@gmail.com wrote:
Yes. The entire dump is performed within a single transaction.
On Wed, May 20, 2015 at 9:24 AM, Michael Nolan htf...@gmail.com wrote:
The documentation for pg_dump says that dump files are created
One of my sons was hired by Google last year after spending the past
several years working on various open-source projects, it took 2 days of
back-and-forth with Google's legal department before he was satisfied with
the restrictions in their offer.
--
Mike Nolan
On Wed, Mar 11, 2015 at 4:46 PM
Might not do what you want, but I just change the password.
--
Mike Nolan
On Fri, Feb 6, 2015 at 4:11 PM, Melvin Davidson melvin6...@gmail.com
wrote:
Possibly,
To disable:
ALTER USER name RENAME TO xname;
To enable:
ALTER USER xname RENAME TO name;
???
On Fri, Feb 6, 2015 at 3:57 PM
On 2/6/15, David G Johnston david.g.johns...@gmail.com wrote:
On Fri, Feb 6, 2015 at 2:22 PM, Michael Nolan [via PostgreSQL]
ml-node+s1045698n5836989...@n5.nabble.com wrote:
Might not do what you want, but I just change the password.
How do you do that and re-enable using the previous
For what it's worth, this week's run covered even more months than
last week's did, and ran in about 5 1/2 hours, with no slowdowns,
under a similar system load. So, it could have been a one-time thing
or some combination of factors that will be difficult to reproduce.
--
Mike Nolan
On Mon, Jan 12, 2015 at 7:46 PM, Bob Futrelle bob.futre...@gmail.com
wrote:
You should be able to find a cloud provider that could give you many TB.
Or so they like to claim.
Nope, but you probably find one willing to SELL you access to many TB.
--
Mike Nolan
and there would not have been many
inserts or updates to the tables used by the lookup function since the
latest vacuum analyze. I think I may have even done a vacuum analyze on
the two largest tables after the first DB shutdown.
--
Mike Nolan
On Fri, Jan 9, 2015 at 7:52 PM, Tomas Vondra tomas.von...@2ndquadrant.com
wrote:
On 9.1.2015 23:14, Michael Nolan wrote:
I'm running 9.3.5 on a virtual machine with 5 cores and 24 GB of
memory. Disk is on a SAN.
I have a task that runs weekly that processes possibly as many as
120
be an even longer run than
this week's was.)
--
Mike Nolan
On Sat, Jan 10, 2015 at 12:55 PM, Andy Colson a...@squeakycode.net wrote:
On 01/09/2015 07:52 PM, Tomas Vondra wrote:
On 9.1.2015 23:14, Michael Nolan wrote:
I'm running 9.3.5 on a virtual machine with 5 cores and 24 GB of
memory. Disk
, the other virtual servers weren't being slowed
down, so I don't suspect problems with the virtual server or the SAN.
If this happens again, what sorts of settings in postgresql.conf or
other tools should I be using to try to track down what's causing
this?
--
Mike Nolan
limitation parameters in the pg_hba.conf
file, are there any simple ways to do this?
--
Mike Nolan
Yeah, a cron job to swap pg_hba.conf files is the best solution I've come
up with so far. It's not one web app, it's closer to two dozen of them, on
multiple sites.
--
Mike Nolan
On Sat, Dec 13, 2014 at 11:10 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:
On 12/13/2014 08:13 PM, Michael
is better?
--
Mike Nolan
when building complex queries (I've written some that ran over
100 lines and involved a dozen or more joined tables), I need to build
them up, testing them as I build.
--
Mike Nolan
the log_line_prefix in the postgresql.conf file and reload it.
I use:
log_line_prefix = '%m %u '
You might also want to use this, at least temporarily:
log_statement = all
--
Mike Nolan
I think that PHP has modules (eg, PEAR) that can read MS Access database
files, and once you have it in an array you can create INSERT statements
for PostgreSQL, including cleaning up any data format issues (eg, dates of
00-00-)
--
Mike Nolan
On Fri, Feb 28, 2014 at 6:21 PM, Rich Shepard
Thomas, try this:
'2013-02-31'::date
--
Mike Nolan
On Mon, Jan 20, 2014 at 7:44 AM, Thomas Kellerer spam_ea...@gmx.net wrote:
Hi,
I asked this a while back already:
select to_date('2013-02-31', 'yyyy-mm-dd');
will not generate an error (unlike e.g. Oracle)
However in the release
a cast to date instead:
select '33-oct-2013'::date throws an error.
--
Mike Nolan
You could write a plperlu function that runs a shell script to back up
your database, you can even pass it parameters and put a call to that in a
trigger.
BUT, this could result in multiple backups running at the same time and
become a performance drag.
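A sketch of the idea (assumes the plperlu language is installed; the script path and trigger names are hypothetical):

```sql
CREATE OR REPLACE FUNCTION run_backup() RETURNS trigger AS $$
  # '&' backgrounds the script so the trigger doesn't block the transaction
  system('/usr/local/bin/backup_db.sh &');
  return;
$$ LANGUAGE plperlu;

CREATE TRIGGER backup_after_change
  AFTER INSERT OR UPDATE OR DELETE ON some_table
  FOR EACH STATEMENT EXECUTE PROCEDURE run_backup();
```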
--
Mike Nolan
On Tue, Oct 22, 2013 at 9
Assuming the database hasn't changed much since the failover, doing a
fsync from the new primary back to the old primary should be fairly
quick.
--
Mike Nolan
On 9/19/13, Vick Khera vi...@khera.org wrote:
On Thu, Sep 19, 2013 at 11:31 AM, ascot.m...@gmail.com
ascot.m...@gmail.com
wrote:
I
much work you have to do to get the physical files back in sync.
--
Mike Nolan
?
--
Mike Nolan
On Sat, Sep 14, 2013 at 8:32 AM, Moshe Jacobson mo...@neadwerx.com wrote:
How do I migrate my 9.1 directory to a new file system with the least
downtime possible?
I don't know if this makes any difference, but my pg_xlog directory is on
its own volume as well, so I would have
of the field 'birthday''? My guess is there's a
null value for the field, in which case you are comparing two nulls.
--
Mike Nolan
On 8/29/13, Michael Nolan htf...@gmail.com wrote:
On 8/29/13, Andreas Kretschmer akretsch...@spamfence.net wrote:
I'm using 9.2.4.
What is the content of the field 'birthday''? My guess is there's a
null value for the field, in which case you are comparing two nulls.
Oops, missed seeing
or
last updated may help.
A true incremental backup would IMHO be a very useful tool for
database administrators, but there are a number of technical
challenges involved, especially dealing with deleted records.
--
Mike Nolan
On 8/1/13, haman...@t-online.de haman...@t-online.de wrote:
Hi,
I want to store copies of our data on a remote machine as a security
measure.
Wolfgang
2 questions:
1. How secure is the remote site?
2. How much data are we talking about?
--
Mike Nolan
already got
replication working to one server and you're apparently not planning
to have the second slave server poll the master for updates.
--
Mike Nolan
for the last 10 years or so. I'd take
PostgreSQL over the other two in a heartbeat!
Data integrity/data preservation issues (backup is just one aspect of
that) are going to be your biggest problems with VERY large databases,
no matter how much money you throw at it.
--
Mike Nolan
And then there's this:
create table wkdata
(numval numeric(5,2))
CREATE TABLE
Time: 6.761 ms
nolan= insert into wkdata
nolan- values (123.456789);
INSERT 569625265 1
Time: 4.063 ms
nolan= select * from wkdata;
select * from wkdata;
 numval
--------
 123.46
So rounding a money field doesn't seem
in pg_hba.conf.
--
Mike Nolan
It is probably not the most efficient, but I often use this syntax,
which reads better.
Select ... where col_type_timestamp::date between '2011-01-01' and
'2011-12-31'
This will use a timestamp index.
--
Mike Nolan
I'm looking to spec a new production server for a small client and
have been looking at the Drobo SAN units.
Has anybody run PG on one of these yet?
It looks like only the B1200i supports Linux operating systems.
on shipping streaming
replication data.
--
Mike Nolan
rather than checksumming the files, that did not appear to cause any problems
and it sped up the rsync considerably. (This was about a 40 GB database.)
--
Mike Nolan
On 7/16/12, Chris Angelico ros...@gmail.com wrote:
On Tue, Jul 17, 2012 at 1:40 AM, Michael Nolan htf...@gmail.com wrote:
I did several weeks of tests on 9.1.3 using mod time and file size
rather than checksumming the files, that did not appear to cause any
problems
and it sped up the rsync
On 7/16/12, Sergey Konoplev sergey.konop...@postgresql-consulting.com wrote:
On Mon, Jul 16, 2012 at 8:01 PM, Chris Angelico ros...@gmail.com wrote:
On Tue, Jul 17, 2012 at 1:58 AM, Michael Nolan htf...@gmail.com wrote:
As I understand the docs for rsync, it will use both mod time and file
On 7/16/12, Steven Schlansker ste...@likeness.com wrote:
I think it's pretty easy to show that timestamp+size isn't good enough to do
this 100% reliably.
That may not be a problem if the slave server synchronization code
always starts to play back WAL entries at a time before the worst case
for
version of postgresql are you running?
--
Mike Nolan
,
John Townsend
PL/pgSQL and PL/perlu are the only ones I use. I use PL/perlu primarily to
launch shell scripts from triggers, for example to update an external
website when a row in a table has been inserted, deleted or updated.
--
Mike Nolan
-- Forwarded message --
From: Michael Nolan htf...@gmail.com
Date: Thu, May 31, 2012 at 2:49 PM
Subject: Re: [GENERAL] Procedural Languages
To: Darren Duncan dar...@darrenduncan.net
On Thu, May 31, 2012 at 2:23 PM, Darren Duncan dar...@darrenduncan.netwrote:
Michael Nolan
-- Forwarded message --
From: Michael Nolan htf...@gmail.com
Date: Tue, May 29, 2012 at 1:37 PM
Subject: Re: [GENERAL] Disable Streaming Replication without restarting
either master or slave
To: Fujii Masao masao.fu...@gmail.com
On Tue, May 29, 2012 at 1:15 PM, Fujii Masao
need to re-think your backup protocols. You
got lucky here, because there were only index files in the tablespace you
lost. Next time you may not be so fortunate.
--
Mike Nolan
) are to be supported on a slave,
will the applications using those temporary tables expect to be able to use
'nextval' on inserts to temporary tables as well?
As a bonus question, I guess it would be the same if using synchronous
replication ?
Yes.
--
Mike Nolan
This is due to how sequences are pre-allocated in blocks to sessions
running on the master.
Since the slave is updated via the WALs, and not via 'nextval' function
calls in queries, the sequences that are actually used will remain in sync
with the master.
--
Mike Nolan
the database, that doesn't always mean it
should. Even a well-written startup script might not know enough to make
that decision for you.)
This might be good material for a tutorial on the wiki site, with some
system-specific sections.
--
Mike Nolan
themselves readily to partial recoveries.
--
Mike Nolan
' version.
Specifically, look at the way the rsyncs are done in two stages, one while
the primary database is in backup mode, and one afterwards.
--
Mike Nolan
On Thu, Apr 19, 2012 at 1:07 PM, Michael Nolan htf...@gmail.com wrote:
On Thu, Apr 19, 2012 at 12:46 PM, Jen jennifer.s...@oeconnection.comwrote:
I have been working on a hot backup for Postgres 9.1 for awhile and have
run
into a consistent issue.
The instructions in the Binary
from or how to locate where it is coming from.
According to the documentation, the current_timestamp family of functions
is stable, could that be the cause? Better yet, should it?
--
Mike Nolan
of research into efficient ways to store and search chess positions,
and some of it may have dealt with SQL database structures.
--
Mike Nolan
have negative consequences as well.
Another question:
Does master send WAL to standby before the transaction commit ?
That's another question for the core team, I suspect. A related
question is what happens
if there is a rollback?
--
Mike Nolan
On 4/11/12, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Michael Nolan htf...@gmail.com wrote:
On 4/11/12, 乔志强 qiaozhiqi...@leadcoretech.com wrote:
But when a transaction larger than 1GB...
Then you may need WAL space larger than 1GB as well. For
replication to work, it seems likely
action to take on the master? PANICing
it seems to be a bad idea, but having transactions never complete
because they never hear back from the synchronous slave (for whatever
reason) seems bad too.
--
Mike Nolan
-- Forwarded message --
From: Michael Nolan htf...@gmail.com
Date: Wed, 11 Apr 2012 14:48:18 -0400
Subject: Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3
streaming replication bug ?
To: Robert Haas robertmh...@gmail.com
On Wed, Apr 11, 2012 at 2:14 PM, Robert Haas robertmh
-- Forwarded message --
From: Michael Nolan htf...@gmail.com
Date: Tue, Apr 10, 2012 at 9:47 PM
Subject: Re: [GENERAL] [streaming replication] 9.1.3 streaming replication
bug ?
To: Fujii Masao masao.fu...@gmail.com
On Tue, Apr 10, 2012 at 9:09 PM, Fujii Masao masao.fu
?
It isn't clear what you want from synchronous streaming replication, or if
you understand the difference between synchronous streaming replication and
asynchronous streaming replication.
--
Mike Nolan
/pgsql/data
rsync -av /usr/local/pgsql/data2/ postgres@xxx:/usr/local/pgsql/data2
/usr/local/pgsql/bin/psql -c "select pg_stop_backup()" postgres postgres
rsync -av /usr/local/pgsql/data/pg_xlog postgres@xxx:/usr/local/pgsql/data/
echo ok to start standby
--
Mike Nolan
On Mon, Apr 2, 2012 at 6:19 PM, Michael Nolan htf...@gmail.com wrote:
I got similar messages the first few times I tried to start up my slave
server, I never did figure out exactly what caused it.
One possibility is that I may not have restarted the master server after
changing
.
A transaction that is rolled back (such as due to an error) after the
nextval() function has been called will not roll back the sequence value,
for example.
You cannot issue a nextval() call on a standby server, because it is in
read-only mode.
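Both behaviors are easy to demonstrate on a master (the sequence name is arbitrary):

```sql
CREATE SEQUENCE demo_seq;

BEGIN;
SELECT nextval('demo_seq');  -- returns 1
ROLLBACK;

SELECT nextval('demo_seq');  -- returns 2, not 1: the advance survived the rollback
```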
--
Mike Nolan
messy.
Are you committing each insert separately or doing them in batches using
'begin transaction' and 'commit'?
I have a database that I do inserts in from a text file. Doing a commit
every 1000 transactions cut the time by over 90%.
--
Mike Nolan
OPERATION
Select field from table1;
...
Select other_field from table2;
...
END OPERATION
How can I lock these tables to assure that the tables are not getting
INSERTS's or UPDATE's during the operation?
Best Regards,\
Isn't that what 'begin transaction' and 'commit' are for?
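If plain transactional consistency isn't enough and writes really must be held off, an explicit lock inside the transaction does it (a sketch; SHARE mode still allows concurrent reads):

```sql
BEGIN;
LOCK TABLE table1, table2 IN SHARE MODE;  -- blocks INSERT/UPDATE/DELETE until commit
SELECT field FROM table1;
SELECT other_field FROM table2;
COMMIT;  -- locks released here
```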
--
Mike Nolan
.
Benjamin, have you checked to see if your 'sudden death' problem is heat
related?
--
Mike Nolan
Fedora 15.
I found that the encrypted database ran 15-20% slower on PostgreSQL 9.0.4 on
most queries.
--
Mike Nolan
, nor can you cast an integer to an xid.
The only way I can get this to work is:
select count(*) from tablename where not xmin = 2
That seems pretty obscure.
--
Mike Nolan
no...@tssi.com
On Thu, Jul 28, 2011 at 12:23 PM, Andy Colson a...@squeakycode.net wrote:
On 7/28/2011 11:40 AM, Michael Nolan wrote:
Why does this query succeed:
select count(*) from tablename where xmin = 2
while this query fails:
select count(*) from tablename where xmin != 2
You probably want
On Thu, Jul 28, 2011 at 2:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Michael Nolan htf...@gmail.com writes:
Why does this query succeed:
select count(*) from tablename where xmin = 2
while this query fails:
select count(*) from tablename where xmin != 2
It told you why not:
ERROR
On Thu, Jul 28, 2011 at 5:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Michael Nolan htf...@gmail.com writes:
It seems like we're being inconsistent here in allowing 'where xid =
integer' but not allowing 'where xid != integer'.
Well, if you look into pg_operator you'll soon find
On Thu, Jul 28, 2011 at 5:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Michael Nolan htf...@gmail.com writes:
It also appears you cannot group on a column of type xid.
You can in 8.4 and up. Previous versions only know how to GROUP BY
sortable columns, which requires a btree opclass, which
On Tue, Jul 26, 2011 at 6:10 PM, Chris Travers chris.trav...@gmail.comwrote:
On Tue, Jul 26, 2011 at 3:48 PM, Michael Nolan htf...@gmail.com wrote:
I suggest adding the following parameter to pg_restore:
--rename-table=
When used in conjunction with the --data-only, --schema and -t
-table=xyz_copy
would restore a copy of table xyz into the existing (and presumably empty)
table xyz_copy, leaving table xyz untouched.
--
Mike Nolan
no...@tssi.com
no way to find out which compiler was used to build
PostgreSQL binaries
You can do a strings on a binary file (eg, postmaster) and search for GCC in
the output.
--
Mike Nolan
2011/7/16 - - loh@hotmail.com
The weird thing is that before I updated my server the query was about 5
times faster.
Updated it from what to what, and how?
--
Mike Nolan
no...@tssi.com
if that is what schemas are for.
The ability to do cross-database (most likely cross-server as well) queries
would address a lot of real-world problems.
--
Mike Nolan
no...@tssi.com
Utilization monitoring based on the
postgres processes
I don't think log file analysis tools can gather information about CPU
usage.. You'd need something that gathered real time data from the OS, eg
from /proc on a linux kernel.
sar doesn't tell you a lot about what postgres is up to.
--
Mike Nolan
the
source code.
--
Mike Nolan
no...@tssi.com
reports this as an error, of course.
--
Mike Nolan
no...@tssi.com
.
--
Mike Nolan
no...@tssi.com
On Wed, Jun 22, 2011 at 3:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Michael Nolan htf...@gmail.com writes:
Has anyone successfully used encfs with postgresql recently?
PANIC: could not open file pg_xlog/00010009000D (log file
9,
segment 13): Invalid argument
The database
(0,1095)
as s(a)
where to_char('2011-01-01'::date+s.a,'dd') between '01' and '07'
and to_char('2011-01-01'::date+s.a,'dy') = 'sat'
--
Mike Nolan