ddress
from mailing lists.
Procedures to verify that an email address works and to administer its use
under rules like CAN-SPAM cannot exist solely within the database itself.
And as others have noted, what makes for a 'well-formed' email address has
always been a bit complicated.
--
Mike Nolan
structure the same,
> but if this has corrupted the files I can drop, dump and restore, in which
> case how do I ‘drop’ the DB without postgres running?
>
> Ta,
>
> Martin.
>
Was the server you were backing up shut down or in backup mode when you did
the 'dd' copy?
--
Mike Nolan
I also have some pre-defined percentage functions, they check the
denominator and return null if it is zero, to avoid 'divide by zero'
errors.
--
Mike Nolan
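Mike's actual function definitions aren't shown; a minimal sketch of the idea, with illustrative names, could look like:

```sql
-- Hypothetical helper: returns NULL instead of raising "division by zero".
CREATE OR REPLACE FUNCTION safe_pct(num numeric, denom numeric)
RETURNS numeric AS $$
    SELECT CASE
               WHEN denom IS NULL OR denom = 0 THEN NULL
               ELSE round(100.0 * num / denom, 2)
           END;
$$ LANGUAGE sql IMMUTABLE;
```

safe_pct(5, 0) then yields NULL, which aggregates like avg() simply skip, so the error never surfaces in reports.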
On Sun, Apr 16, 2017 at 11:37 AM, Melvin Davidson
wrote:
>
>
> On Sun, Apr 16, 2017 at 12:23 PM, Adrian Klaver >
In case it wasn't clear, the sample data was 3 rows of data. (There are
actually around 890K rows in the table pgfutter built from the JSON file.)
--
Mike Nolan
|
(1 row)
The fact that the null values were stripped out is not an issue here.
But,
uscf=> insert into goldmast_test select * from
json_populate_record(NULL::"goldmast_test", (select * from gold1604_test
limit 2) )
uscf-> \g
ERROR: more than one row returned by a subquery used as an expression
Is there a way to get around the one row per subquery issue?
--
Mike Nolan
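One common workaround (a sketch only; the JSON column name `jdata` is assumed, since the real schema isn't shown) is to move the function call into the FROM clause, where it runs once per source row instead of inside a scalar subquery:

```sql
-- Hypothetical: gold1604_test is assumed to hold one json column, jdata.
INSERT INTO goldmast_test
SELECT p.*
FROM gold1604_test g
CROSS JOIN LATERAL json_populate_record(NULL::goldmast_test, g.jdata) AS p;
```

If the rows can be gathered into a single JSON array, json_populate_recordset() is another option.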
On Sun, Apr 10, 2016 at 2:30 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Sat, Apr 9, 2016 at 9:48 PM, Michael Nolan wrote:
>
>>
>> 2nd Followup: It turns out that loading a table from a JSON string is
>> more complicated than going from a t
o the table into which the
inserts are made, an insert could fail or result in incorrect data.
--
Mike Nolan
make available to app developers.
My next task is to find out if validating and importing a JSON file into a
table is as easy as exporting a table in JSON turned out to be. Thanks for
the help.
--
Mike Nolan
setting them to null.
--
Mike Nolan
e non-null?
--
Mike Nolan
no...@tssi.com
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
far easier to write a short PHP or
Perl program to do tasks like this. Much easier to debug and the speed
improvement by using SQL is not important for 200,000 records.
--
Mike Nolan
ary
Clinton's deleted email, recovering that data might be more valuable to
some people than the data that was not deleted.
--
Mike Nolan
t is
first-rate. (I've been working on a project that requires MySQL, their
documentation is far inferior.)
--
Mike Nolan
no...@tssi.com
Sat 3.0000
--
Mike Nolan
On Mon, Jul 6, 2015 at 5:50 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Mon, Jul 6, 2015 at 6:16 PM, Michael Nolan wrote:
>
>> > But you can see it wont give correct results since (for example)
>> Monday's
>> > with no new user
hat and your original query as a subquery
and do your averages, since nulls are not included in either count()
or average() aggregates:
select dow, count(*), avg(some_column) from (
select extract ('dow' from some_date) as dow, some_number from some_table
union select generate_series(0,6) as
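Filled out with placeholder names (the original tables aren't shown), the same idea can also be written as a left join against generate_series, which makes the NULL-skipping behavior of count() and avg() explicit:

```sql
SELECT d.dow,
       count(t.some_number) AS n,       -- counts only real rows
       avg(t.some_number)   AS avg_val  -- NULLs from empty days are ignored
FROM generate_series(0, 6) AS d(dow)
LEFT JOIN (
    SELECT extract(dow FROM some_date)::int AS dow, some_number
    FROM some_table
) t ON t.dow = d.dow
GROUP BY d.dow
ORDER BY d.dow;
```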
y.
>
> Cheers,
> Casey
>
Probably too late for this time, but in the past when I've needed to
redefine the type for a column, I've made a dump, edited the dump file to
change the type and then renamed the table and reloaded it. That's usually
several orders of magnitude faster.
--
Mike Nolan
no...@tssi.com
On Wed, May 20, 2015 at 12:40 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> Yes. The entire dump is performed within a single transaction.
>
> On Wed, May 20, 2015 at 9:24 AM, Michael Nolan wrote:
>
>> The documentation for pg_dump says that dum
consistent are the tables in pg_dumpall
files?
--
Mike Nolan
One of my sons was hired by Google last year after spending the past
several years working on various open-source projects, it took 2 days of
back-and-forth with Google's legal department before he was satisfied with
the restrictions in their offer.
--
Mike Nolan
On Wed, Mar 11, 2015 at 4:
On 2/6/15, David G Johnston wrote:
> On Fri, Feb 6, 2015 at 2:22 PM, Michael Nolan [via PostgreSQL] <
> ml-node+s1045698n5836989...@n5.nabble.com> wrote:
>
>> Might not do what you want, but I just change the password.
>>
>>
> How do you do that and r
Might not do what you want, but I just change the password.
--
Mike Nolan
On Fri, Feb 6, 2015 at 4:11 PM, Melvin Davidson
wrote:
> Possibly,
>
> To disble:
> ALTER USER name RENAME TO xname;
>
> To enable
> ALTER USER xname RENAME TO name;
>
> ???
>
>
> On
For what it's worth, this week's run covered even more months than
last week's did, and ran in about 5 1/2 hours, with no slowdowns,
under a similar system load. So, it could have been a one-time thing
or some combination of factors that will be difficult to reproduce.
--
Mike N
On Mon, Jan 12, 2015 at 7:46 PM, Bob Futrelle
wrote:
> You should be able to find a cloud provider that could give you many TB.
> Or so they like to claim.
>
>
> Nope, but you probably find one willing to SELL you access to many TB.
--
Mike Nolan
would not have been many
inserts or updates to the tables used by the lookup function since the
latest vacuum analyze. I think I may have even done a vacuum analyze on
the two largest tables after the first DB shutdown.
--
Mike Nolan
On Fri, Jan 9, 2015 at 7:52 PM, Tomas Vondra
wrote:
> On 9.1.2015 23:14, Michael Nolan wrote:
> > I'm running 9.3.5 on a virtual machine with 5 cores and 24 GB of
> > memory. Disk is on a SAN.
> >
> > I have a task that runs weekly that processes possibly as many
data so it'll be an even longer run than
this week's was.)
--
Mike Nolan
On Sat, Jan 10, 2015 at 12:55 PM, Andy Colson wrote:
> On 01/09/2015 07:52 PM, Tomas Vondra wrote:
>
>> On 9.1.2015 23:14, Michael Nolan wrote:
>>
>>> I'm running 9.3.5 on a virtual m
r as I can tell, the other virtual servers weren't being slowed
down, so I don't suspect problems with the virtual server or the SAN.
If this happens again, what sorts of settings in postgresql.conf or
other tools should I be using to try to track down what's causing
this?
--
Mike
Yeah, a cron job to swap pg_hba.conf files is the best solution I've come
up with so far. It's not one web app, it's closer to two dozen of them, on
multiple sites.
--
Mike Nolan
On Sat, Dec 13, 2014 at 11:10 PM, Adrian Klaver
wrote:
>
> On 12/13/2014 08:13 PM, Michael Nol
day access limitation parameters in the pg_hba.conf
file, are there any simple ways to do this?
--
Mike Nolan
Mike Nolan
e which is better?
--
Mike Nolan
thing_id
etc.
I find when building complex queries (I've written some that ran over
100 lines and involved a dozen or more joined tables), I need to build
them up, testing them as I build.
--
Mike Nolan
line_prefix in the postgresql.conf file and reload it.
I use:
log_line_prefix = '%m %u '
You might also want to use this, at least temporarily:
log_statement = all
--
Mike Nolan
I think that PHP has modules (eg, PEAR) that can read MS Access database
files, and once you have it in an array you can create INSERT statements
for PostgreSQL, including cleaning up any data format issues (eg, dates of
00-00-)
--
Mike Nolan
On Fri, Feb 28, 2014 at 6:21 PM, Rich Shepard
Thomas, try this:
'2013-02-31'::date
--
Mike Nolan
On Mon, Jan 20, 2014 at 7:44 AM, Thomas Kellerer wrote:
> Hi,
>
> I asked this a while back already:
>
>select to_date('2013-02-31', 'yyyy-mm-dd');
>
> will not generate an error (unlike
the data is not proper.
Try using a cast to date instead:
select '33-oct-2013'::date throws an error.
--
Mike Nolan
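To make the contrast concrete (behavior as of the releases current at the time; PostgreSQL 10 later made to_date() validate its input as well):

```sql
-- to_date() silently rolled the bad date forward on old releases:
SELECT to_date('2013-02-31', 'yyyy-mm-dd');   -- no error

-- The cast validates:
SELECT '2013-02-31'::date;
-- ERROR:  date/time field value out of range: "2013-02-31"
```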
You could write a plperlu function that runs a shell script to back up
your database, you can even pass it parameters and put a call to that in a
trigger.
BUT, this could result in multiple backups running at the same time and
become a performance drag.
--
Mike Nolan
On Tue, Oct 22, 2013 at 9
ssues that need to be
resolved before putting it back online, and fixing them could affect
how much work you have to do to get the physical files back in sync.
--
Mike Nolan
Assuming the database hasn't changed much since the failover, doing a
fsync from the new primary back to the old primary should be fairly
quick.
--
Mike Nolan
On 9/19/13, Vick Khera wrote:
> On Thu, Sep 19, 2013 at 11:31 AM, ascot.m...@gmail.com
> > wrote:
>
>> I use
?
--
Mike Nolan
On Sat, Sep 14, 2013 at 8:32 AM, Moshe Jacobson wrote:
> How do I migrate my 9.1 directory to a new file system with the least
> downtime possible?
>
> I don't know if this makes any difference, but my pg_xlog directory is on
> its own volume as well, so I would ha
On 8/29/13, Michael Nolan wrote:
> On 8/29/13, Andreas Kretschmer wrote:
>
>> I'm using 9.2.4.
>
>
> What is the content of the field 'birthday'? My guess is there's a
> null value for the field, in which case you are comparing two nulls.
Oo
0 rows, why?
>
>
> I'm using 9.2.4.
What is the content of the field 'birthday'? My guess is there's a
null value for the field, in which case you are comparing two nulls.
--
Mike Nolan
tes when the row was inserted or
last updated may help.
A true incremental backup would IMHO be a very useful tool for
database administrators, but there are a number of technical
challenges involved, especially dealing with deleted records.
--
Mike Nolan
On 8/1/13, haman...@t-online.de wrote:
> Hi,
> I want to store copies of our data on a remote machine as a security
> measure.
> Wolfgang
2 questions:
1. How secure is the remote site?
2. How much data are we talking about?
--
Mike Nolan
a
directory files, but you probably won't need to restart the master to
change the master configuration files since you've already got
replication working to one server and you're apparently not planning
to have the second slave server poll the master for updates.
--
Mike Nolan
stgreSQL databases for the last 10 years or so. I'd take
PostgreSQL over the other two in a heartbeat!
Data integrity/data preservation issues (backup is just one aspect of
that) are going to be your biggest problems with VERY large databases,
no matter how much money you throw at it.
--
---
1
And then there's this:
create table wkdata
(numval numeric(5,2))
CREATE TABLE
Time: 6.761 ms
nolan=> insert into wkdata
nolan-> values (123.456789);
INSERT 569625265 1
Time: 4.063 ms
nolan=> select * from wkdata;
select * from wkdata;
numval
--
123.46
So rounding a mon
specifically enabled in pg_hba.conf.
--
Mike Nolan
It is probably not the most efficient, but I often use this syntax,
which reads better.
Select . where col_type_timestamp::date between '2011-01-01' and
'2011-12-31'
This will use a timestamp index.
--
Mike Nolan
2-31'::TIMESTAMP
is the same as 2011-12-31 00:00:00.0
so records timestamped later in the day on the 31st would not get selected
SELECT ... WHERE
'2011-01-01'::TIMESTAMP <= col_of_type_timestamp
AND col_of_type_timestamp < '2012-01:0
I'm looking to spec a new production server for a small client and
have been looking at the Drobo SAN units.
Has anybody run PG on one of these yet?
It looks like only the B1200i supports Linux operating systems.
g streaming
replication data.
--
Mike Nolan
On 7/16/12, Steven Schlansker wrote:
> I think it's pretty easy to show that timestamp+size isn't good enough to do
> this 100% reliably.
That may not be a problem if the slave server synchronization code
always starts to play back WAL entries at a time before the worst case
for timestamp precisi
On 7/16/12, Sergey Konoplev wrote:
> On Mon, Jul 16, 2012 at 8:01 PM, Chris Angelico wrote:
>> On Tue, Jul 17, 2012 at 1:58 AM, Michael Nolan wrote:
>>> As I understand the docs for rsync, it will use both mod time and file
>>> size
>>> if told not to do chec
On 7/16/12, Chris Angelico wrote:
> On Tue, Jul 17, 2012 at 1:40 AM, Michael Nolan wrote:
>> I did several weeks of tests on 9.1.3 using mod time and file size
>> rather than checksumming the files, that did not appear to cause any
>> problems
>> and it sped up the rsy
several weeks of tests on 9.1.3 using mod time and file size
rather than checksumming the files, that did not appear to cause any problems
and it sped up the rsync considerably. (This was about a 40 GB database.)
--
Mike Nolan
p.
>
>
What version of postgresql are you running?
--
Mike Nolan
-- Forwarded message --
From: Michael Nolan
Date: Thu, May 31, 2012 at 2:49 PM
Subject: Re: [GENERAL] Procedural Languages
To: Darren Duncan
On Thu, May 31, 2012 at 2:23 PM, Darren Duncan wrote:
> Michael Nolan wrote:
>
>> PL/pgSQL and PL/perlu are the only on
Which ones do you use and why?
>
> Thanks,
>
> John Townsend
>
PL/pgSQL and PL/perlu are the only ones I use. I use PL/perlu primarily to
launch shell scripts from triggers, for example to update an external
website when a row in a table has been inserted, deleted or updated.
--
Mike Nolan
-- Forwarded message --
From: Michael Nolan
Date: Tue, May 29, 2012 at 1:37 PM
Subject: Re: [GENERAL] Disable Streaming Replication without restarting
either master or slave
To: Fujii Masao
On Tue, May 29, 2012 at 1:15 PM, Fujii Masao wrote:
> On Tue, May 29, 2012 at 10
ables (and
insert/delete/update transactions to them) are to be supported on a slave,
will the applications using those temporary tables expect to be able to use
'nextval' on inserts to temporary tables as well?
> As a bonus question, I guess it would be the same if using synchroneous
> replication ?
>
Yes.
--
Mike Nolan
in the lost tablespace.
Whichever method you use, you need to re-think your backup protocols. You
got lucky here, because there were only index files in the tablespace you
lost. Next time you may not be so fortunate.
--
Mike Nolan
This is due to how sequences are pre-allocated in blocks to sessions
running on the master.
Since the slave is updated via the WALs, and not via 'nextval' function
calls in queries, the sequences that are actually used will remain in sync
with the master.
--
Mike Nolan
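The pre-allocation Mike describes is the sequence's CACHE setting; a sketch with an illustrative name:

```sql
-- Each session grabs 32 values at a time; values cached by a session
-- that never uses them are skipped, never reused.
CREATE SEQUENCE order_seq CACHE 32;
SELECT nextval('order_seq');
```

This is why sequence values on a promoted standby can jump ahead of the last value the master handed out, but can never collide with one.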
e restart of the master, since you're not actually setting up
replication, so you won't be changing the postgresql.conf file on your
master.)
This uses a two-step process. First you copy all the files EXCEPT the ones
on pg_xlog, then you copy those files, so you have a complete set.
See http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial
--
Mike Nolan
se postgres can restart the database, that doesn't always mean it
should. Even a well-written startup script might not know enough to make
that decision for you.)
This might be good material for a tutorial on the wiki site, with some
system-specific sections.
--
Mike Nolan
strategies lend themselves readily to partial recoveries.
--
Mike Nolan
On Thu, Apr 19, 2012 at 1:07 PM, Michael Nolan wrote:
>
>
> On Thu, Apr 19, 2012 at 12:46 PM, Jen wrote:
>
>> I have been working on a hot backup for Postgres 9.1 for awhile and have
>> run
>> into a consistent issue.
>>
>
> The instructions in the Binar
n.
Specifically, look at the way the rsyncs are done in two stages, one while
the primary database is in backup mode, and one afterwards.
--
Mike Nolan
ming
> from or how to locate where it is coming from.
>
>
> According to the documentation, the current_timestamp family of functions
is stable, could that be the cause? Better yet, should it?
--
Mike Nolan
but there's been a
lot of research into efficient ways to store and search chess positions,
and some of it may have dealt with SQL database structures.
--
Mike Nolan
-- Forwarded message --
From: Michael Nolan
Date: Wed, 11 Apr 2012 14:48:18 -0400
Subject: Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3
streaming replication bug ?
To: Robert Haas
On Wed, Apr 11, 2012 at 2:14 PM, Robert Haas wrote:
>
>
> We've talked
e with synchronous replication? If it
fails, what's the appropriate action to take on the master? PANICing
it seems to be a bad idea, but having transactions never complete
because they never hear back from the synchronous slave (for whatever
reason) seems bad too.
--
Mike Nolan
On 4/11/12, Kevin Grittner wrote:
> Michael Nolan wrote:
>> On 4/11/12, 乔志强 wrote:
>
>>> But when a transaction larger than 1GB...
>>
>> Then you may need WAL space larger than 1GB as well. For
>> replication to work, it seems likely that you may need to
replication cannot be achieved, although
that might have negative consequences as well.
> Another question:
> Does master send WAL to standby before the transaction commit ?
That's another question for the core team, I suspect. A related
question is what happens
if there is a rollback?
--
-- Forwarded message --
From: Michael Nolan
Date: Tue, Apr 10, 2012 at 9:47 PM
Subject: Re: [GENERAL] [streaming replication] 9.1.3 streaming replication
bug ?
To: Fujii Masao
On Tue, Apr 10, 2012 at 9:09 PM, Fujii Masao wrote:
> On Wed, Apr 11, 2012 at 10:06 AM,
ckup?
It isn't clear what you want from synchronous streaming replication, or if
you understand the difference between synchronous streaming replication and
asynchronous streaming replication.
--
Mike Nolan
On Mon, Apr 2, 2012 at 6:19 PM, Michael Nolan wrote:
>
>
>
> I got similar messages the first few times I tried to start up my slave
> server, I never did figure out exactly what caused it.
>
>
One possibility is that I may not have restarted the master server after
changin
safe-links /usr/local/pgsql/data/ postgres@xxx:/usr/local/pgsql/data
rsync -av /usr/local/pgsql/data2/ postgres@xxx:/usr/local/pgsql/data2
/usr/local/pgsql/bin/psql -c "select pg_stop_backup()" postgres postgres
rsync -av /usr/local/pgsql/data/pg_xlog postgres@xxx:/usr/local/pgsql/data/
echo "ok to start standby"
--
Mike Nolan
to the standby server.
A transaction that is rolled back (such as due to an error) after the
nextval() function has been called will not roll back the sequence value,
for example.
You cannot issue a nextval() call on a standby server, because it is in
read-only mode.
--
Mike Nolan
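The non-transactional behavior is easy to demonstrate (sequence name is illustrative):

```sql
CREATE SEQUENCE demo_seq;
BEGIN;
SELECT nextval('demo_seq');   -- returns 1
ROLLBACK;
SELECT nextval('demo_seq');   -- returns 2: the rolled-back value is gone
```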
g. I consider
> this a bit messy.
Are you committing each insert separately or doing them in batches using
'begin transaction' and 'commit'?
I have a database that I do inserts in from a text file. Doing a commit
every 1000 transactions cut the time by over 90%.
--
Mike Nolan
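The batching pattern is just an explicit transaction block around groups of statements (table and values are illustrative):

```sql
BEGIN;
INSERT INTO ratings (id, score) VALUES (1, 1500);
INSERT INTO ratings (id, score) VALUES (2, 1750);
-- ... roughly 1000 rows per batch ...
COMMIT;
```

Each commit forces a WAL flush, so paying that cost once per 1000 rows instead of once per row is where the 90% savings comes from.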
e:
>
> BEGIN OPERATION
> Select field from table1;
> ...
> Select other_field from table2;
> ...
> END OPERATION
>
> How can I lock these tables to assure that the tables are not getting
> INSERTS's or UPDATE's during the operation?
>
> Best Regards,\
>
Isn't that what 'begin transaction' and 'commit' are for?
--
Mike Nolan
et a
substantive answer.
Benjamin, have you checked to see if your 'sudden death' problem is heat
related?
--
Mike Nolan
s being stored on a 500 GB external hard drive
connected via USB2 to an HP laptop running Linux Fedora 15.
I found that the encrypted database ran 15-20% slower on PostgreSQL 9.0.4 on
most queries.
--
Mike Nolan
On Thu, Jul 28, 2011 at 5:36 PM, Tom Lane wrote:
> Michael Nolan writes:
> > It also appears you cannot group on a column of type xid.
>
> You can in 8.4 and up. Previous versions only know how to GROUP BY
> sortable columns, which requires a btree opclass, which xid doesn
On Thu, Jul 28, 2011 at 5:09 PM, Tom Lane wrote:
> Michael Nolan writes:
> > It seems like we're being inconsistent here in allowing 'where xid =
> > integer' but not allowing 'where xid != integer'.
>
> Well, if you look into pg_operator you'
On Thu, Jul 28, 2011 at 2:27 PM, Tom Lane wrote:
> Michael Nolan writes:
> > Why does this query succeed:
> > select count(*) from tablename where xmin = 2
>
> > while this query fails:
>
> > select count(*) from tablename where xmin != 2
>
> It told you w
On Thu, Jul 28, 2011 at 12:23 PM, Andy Colson wrote:
> On 7/28/2011 11:40 AM, Michael Nolan wrote:
>
>> Why does this query succeed:
>>
>> select count(*) from tablename where xmin = 2
>>
>> while this query fails:
>>
>> select count(*) from tab
nteger, nor can you cast an integer to an xid.
The only way I can get this to work is:
select count(*) from tablename where not xmin = 2
That seems pretty obscure.
--
Mike Nolan
no...@tssi.com
On Tue, Jul 26, 2011 at 6:10 PM, Chris Travers wrote:
> On Tue, Jul 26, 2011 at 3:48 PM, Michael Nolan wrote:
> > I suggest adding the following parameter to pg_restore:
> >
> > --rename-table=
> >
> > When used in conjunction with the --data-only, --schema a
-table=xyz_copy
would restore a copy of table xyz into the existing (and presumably empty)
table xyz_copy, leaving table xyz untouched.
--
Mike Nolan
no...@tssi.com
!
>
> AFAIK there's no way to find out which compiler was used to build
> PostgreSQL binaries
You can do a strings on a binary file (eg, postmaster) and search for GCC in
the output.
--
Mike Nolan
2011/7/16 - -
>
> The weird thing is that before I updated my server the query was about 5
> times faster.
>
Updated it from what to what, and how?
--
Mike Nolan
no...@tssi.com
e
database, even if that is what schemas are for.
The ability to do cross-database (most likely cross-server as well) queries
would address a lot of real-world problems.
--
Mike Nolan
no...@tssi.com
n a linux kernel.
sar doesn't tell you a lot about what postgres is up to.
--
Mike Nolan
no...@tssi.com
always just built everything from the
source code.
--
Mike Nolan
no...@tssi.com
amused.
PostgreSQL reports this as an error, of course.
--
Mike Nolan
no...@tssi.com
On Wed, Jun 22, 2011 at 3:48 PM, Tom Lane wrote:
> Michael Nolan writes:
> > Has anyone successfully used encfs with postgresql recently?
>
> > PANIC: could not open file "pg_xlog/00000001000000090000000D" (log file
> 9,
> > segment 13): Invalid argume
testing it to see if performance is going to be a major concern.
--
Mike Nolan
no...@tssi.com
ate_series(0,1095)
as s(a)
where to_char('2011-01-01'::date+s.a,'dd') between '01' and '07'
and to_char('2011-01-01'::date+s.a,'dy') = 'sat'
--
Mike Nolan