Re: Binary support for pgoutput plugin

2019-06-05 Thread Dave Cramer
On Wed, 5 Jun 2019 at 07:21, Dave Cramer  wrote:

> Hi,
>
>
> On Wed, 5 Jun 2019 at 07:18, Petr Jelinek 
> wrote:
>
>> Hi,
>>
>> On 05/06/2019 00:08, Andres Freund wrote:
>> > Hi,
>> >
>> > On 2019-06-05 00:05:02 +0200, David Fetter wrote:
>> >> Would it make sense to work toward a binary format that's not
>> >> architecture-specific? I recall from COPY that our binary format is
>> >> not standardized across, for example, big- and little-endian machines.
>> >
>> > I think you recall wrongly. It's obviously possible that we have bugs
>> > around this, but output/input routines are supposed to handle an
>> > endianness-independent format. That usually means that you have to do
>> > endianness conversions, but that doesn't make it non-standardized.
>> >
>>
>> Yeah, there are really 3 formats of data we have, text protocol, binary
>> network protocol and internal on disk format. The internal on disk
>> format will not work across big/little-endian but network binary
>> protocol will.
>>
>> FWIW I don't think the code for binary format was included in original
>> logical replication patch (I really tried to keep it as minimal as
>> possible), but the code and protocol is pretty much ready for adding that.
>>
> Yes, I looked through the public history and could not find it. Thanks for
> confirming.
>
>>
>> That said, pglogical has code which handles this (I guess Andres means
>> that by predecessor of pgoutput) so if you look for example at the
>> write_tuple/read_tuple/decide_datum_transfer in
>>
>> https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_proto_native.c
>> that can help you give some ideas on how to approach this.
>>
>
> Thanks for the tip!
>

Looking at:
https://github.com/postgres/postgres/blob/8255c7a5eeba8f1a38b7a431c04909bde4f5e67d/src/backend/replication/pgoutput/pgoutput.c#L163

this seems completely ignored. What was the intent?

Dave


Re: Binary support for pgoutput plugin

2019-06-05 Thread Dave Cramer
Hi,


On Wed, 5 Jun 2019 at 07:18, Petr Jelinek 
wrote:

> Hi,
>
> On 05/06/2019 00:08, Andres Freund wrote:
> > Hi,
> >
> > On 2019-06-05 00:05:02 +0200, David Fetter wrote:
> >> Would it make sense to work toward a binary format that's not
> >> architecture-specific? I recall from COPY that our binary format is
> >> not standardized across, for example, big- and little-endian machines.
> >
> > I think you recall wrongly. It's obviously possible that we have bugs
> > around this, but output/input routines are supposed to handle an
> > endianness-independent format. That usually means that you have to do
> > endianness conversions, but that doesn't make it non-standardized.
> >
>
> Yeah, there are really 3 formats of data we have, text protocol, binary
> network protocol and internal on disk format. The internal on disk
> format will not work across big/little-endian but network binary
> protocol will.
>
> FWIW I don't think the code for binary format was included in original
> logical replication patch (I really tried to keep it as minimal as
> possible), but the code and protocol is pretty much ready for adding that.
>
Yes, I looked through the public history and could not find it. Thanks for
confirming.

>
> That said, pglogical has code which handles this (I guess Andres means
> that by predecessor of pgoutput) so if you look for example at the
> write_tuple/read_tuple/decide_datum_transfer in
>
> https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_proto_native.c
> that can help you give some ideas on how to approach this.
>

Thanks for the tip!


Dave Cramer



Re: Binary support for pgoutput plugin

2019-06-04 Thread Dave Cramer
On Tue, 4 Jun 2019 at 18:08, Andres Freund  wrote:

> Hi,
>
> On 2019-06-05 00:05:02 +0200, David Fetter wrote:
> > Would it make sense to work toward a binary format that's not
> > architecture-specific? I recall from COPY that our binary format is
> > not standardized across, for example, big- and little-endian machines.
>
> I think you recall wrongly. It's obviously possible that we have bugs
> around this, but output/input routines are supposed to handle an
> endianness-independent format. That usually means that you have to do
> endianness conversions, but that doesn't make it non-standardized.
>

Additionally, there are a number of drivers that already know how to handle
our binary types. I don't really think there's a win here. I also want to
keep the changes small.

Dave
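A hedged illustration (Python, not PostgreSQL source) of the endianness point above: binary send/recv routines emit network byte order, so the wire bytes come out identical on big- and little-endian hosts.

```python
import struct

# A sketch of why the binary network protocol is endianness-independent:
# values are converted to network byte order (big-endian) on send and
# back on receive, so the wire bytes match on any architecture.

def send_int4(value: int) -> bytes:
    """Encode a 32-bit integer as it would appear on the wire."""
    return struct.pack("!i", value)

def recv_int4(buf: bytes) -> int:
    """Decode a wire-format 32-bit integer back to a host value."""
    return struct.unpack("!i", buf)[0]

wire = send_int4(42)
assert wire == b"\x00\x00\x00\x2a"  # same bytes regardless of host CPU
assert recv_int4(wire) == 42
```

The conversion cost Andres mentions is exactly these pack/unpack steps; they make the format portable, not architecture-specific.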


Re: Binary support for pgoutput plugin

2019-06-04 Thread Dave Cramer
On Tue, 4 Jun 2019 at 16:46, Andres Freund  wrote:

> Hi,
>
> On 2019-06-04 16:39:32 -0400, Dave Cramer wrote:
> > On Tue, 4 Jun 2019 at 16:30, Andres Freund <
> andres.fre...@enterprisedb.com>
> > wrote:
> > > > There's also no reason that I am aware that binary outputs can't be
> > > > supported.
> > >
> > > Well, it *does* increase version dependencies, and does make replication
> > > more complicated, because type oids etc. cannot be relied upon to be the
> > > same on source and target side.
> > >
> > I was about to agree with this but if the type oids change from source
> > to target you still can't decode the text version properly. Unless I
> > misunderstand something here?
>
> The text format doesn't care about oids. I don't see how it'd be a
> problem?  Note that some people *intentionally* use different types from
> source to target system when logically replicating. So you can't rely on
> the target table's types under any circumstance.
>
> I think you really have to use the textual type which we already write
> out (cf logicalrep_write_typ()) to call the binary input functions. And
> you can send only data as binary that's from builtin types - otherwise
> there's no guarantee at all that the target system has something
> compatible. And even if you just assumed that all extensions etc are
> present, you can't transport arrays / composite types in binary: For
> hard to discern reasons we a) embed type oids in them b) verify them. b)
> won't ever work for non-builtin types, because oids are assigned
> dynamically.
>

I figured arrays and UDT's would be problematic.
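For reference, the array problem Andres describes can be sketched as follows — a hedged reconstruction of the binary wire layout for arrays (dimension count, a has-nulls flag, the embedded element type OID, then per-dimension bounds and the elements), which is where the baked-in OID issue comes from:

```python
import struct

def encode_int4_array(values):
    """Sketch of the binary format for a one-dimensional int4 array:
    ndims, has-null flag, element type OID (23 = int4), then
    (length, lower bound) for the dimension, then each element as
    (byte length, big-endian payload). Hedged illustration only."""
    INT4OID = 23  # built-in OID, stable across clusters
    out = struct.pack("!iiI", 1, 0, INT4OID)   # ndims, hasnull, element OID
    out += struct.pack("!ii", len(values), 1)  # dimension length, lower bound
    for v in values:
        out += struct.pack("!i", 4) + struct.pack("!i", v)
    return out

wire = encode_int4_array([1, 2, 3])
# The element OID sits at bytes 8..12 — for non-builtin types that OID is
# assigned dynamically, so the receiving side can't verify it reliably.
assert struct.unpack("!I", wire[8:12])[0] == 23
```

Built-in types keep fixed OIDs, which is why restricting binary transfer to them sidesteps the verification problem.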

>
>
> > > I think if we were to add binary output - and I think we should - we
> > > ought to only accept a patch if it's also used in core.
> > >
> >
> > Certainly; as not doing so would make my work completely irrelevant for
> > my purpose.
>
> What I mean is that the builtin logical replication would have to use
> this on the receiving side too.
>
Got it, thanks for validating that the idea isn't nuts. Now I *have* to
produce a POC.

Thanks,
Dave



Re: Binary support for pgoutput plugin

2019-06-04 Thread Dave Cramer
Dave Cramer


On Tue, 4 Jun 2019 at 16:30, Andres Freund 
wrote:

> Hi,
>
> On 2019-06-04 15:47:04 -0400, Dave Cramer wrote:
> > On Mon, 3 Jun 2019 at 20:54, David Fetter  wrote:
> >
> > > On Mon, Jun 03, 2019 at 10:49:54AM -0400, Dave Cramer wrote:
> > > > Is there a reason why pgoutput sends data in text format? Seems to
> > > > me that sending data in binary would provide a considerable
> > > > performance improvement.
> > >
> > > Are you seeing something that suggests that the text output is taking
> > > a lot of time or other resources?
> > >
> > Actually it's on the other end that there is improvement. Parsing text
> > takes much longer for almost everything except, ironically, text.
>
> It's on both sides, I'd say. E.g. float (until v12), timestamp, bytea
> are all much more expensive to convert from binary to text.
>
>
> > To be more transparent there is some desire to use pgoutput for something
> > other than logical replication. Change Data Capture clients such as
> > Debezium have a requirement for a stable plugin which is shipped with
> > core, as this is always available in cloud providers' offerings. There's
> > no reason that I am aware of that they cannot use pgoutput for this.
>
> Except that that's not pgoutput's purpose, and we shouldn't make it
> meaningfully more complicated or slower to achieve this. Don't think
> there's a conflict in this case though.
>

agreed, my intent was to slightly bend it to my will :)

>
>
> > There's also no reason that I am aware that binary outputs can't be
> > supported.
>
> Well, it *does* increase version dependencies, and does make replication
> more complicated, because type oids etc cannot be relied to be the same
> on source and target side.
>
I was about to agree with this but if the type oids change from source to
target you still can't decode the text version properly. Unless I
misunderstand something here?

>
>
> > The protocol would have to change slightly and I am working
> > on a POC patch.
>
> Hm, what would have to be changed protocol wise? IIRC that'd just be a
> different datum type? Or is that what you mean?
> pq_sendbyte(out, 't');  /* 'text' data follows */
>
I haven't really thought this through completely, but one place JDBC has
problems with binary is with timestamps with timezone, as we don't know
which timezone to use. Is it safe to assume everything is in UTC, since the
server stores in UTC? Then there are UDFs. My original thought was to use
options to send in the types that I wanted in binary; everything else could
be sent as text.

> IIRC there was code for the binary protocol in a predecessor of
> pgoutput.
>

Hmmm, that might be a good place to start. I will do some digging through
the git history.

>
> I think if we were to add binary output - and I think we should - we
> ought to only accept a patch if it's also used in core.
>

Certainly; as not doing so would make my work completely irrelevant for my
purpose.

Thanks,

Dave
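The per-type opt-in Dave sketches above could look roughly like this (Python for brevity; `BINARY_OK` and the option mechanism are assumptions, while the `'t'` marker byte and the type OID values are real):

```python
# Hedged sketch: the subscriber passes a set of type OIDs it can parse
# in binary (via plugin options, hypothetically), and the output plugin
# falls back to text for everything else.
BINARY_OK = {16, 20, 21, 23, 700, 701}  # bool, int8, int2, int4, float4, float8

def datum_format(type_oid: int, requested_binary: set) -> str:
    """Return 'b' if this column may be sent in binary, else 't'
    (mirroring the 't' marker byte the protocol already writes)."""
    if type_oid in requested_binary and type_oid < 16384:
        # OIDs below FirstNormalObjectId are system-assigned and stable,
        # so only built-in types qualify for binary transfer
        return 'b'
    return 't'

assert datum_format(23, BINARY_OK) == 'b'    # int4: safe in binary
assert datum_format(3802, BINARY_OK) == 't'  # jsonb not requested: text
```

This mirrors the decision pglogical's decide_datum_transfer makes, without claiming to reproduce its exact logic.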


Re: Binary support for pgoutput plugin

2019-06-04 Thread Dave Cramer
Dave Cramer


On Mon, 3 Jun 2019 at 20:54, David Fetter  wrote:

> On Mon, Jun 03, 2019 at 10:49:54AM -0400, Dave Cramer wrote:
> > Is there a reason why pgoutput sends data in text format? Seems to
> > me that sending data in binary would provide a considerable
> > performance improvement.
>
> Are you seeing something that suggests that the text output is taking
> a lot of time or other resources?
>
Actually it's on the other end that there is improvement. Parsing text
takes much longer for almost everything except, ironically, text.

To be more transparent, there is some desire to use pgoutput for something
other than logical replication. Change Data Capture clients such as
Debezium have a requirement for a stable plugin which is shipped with core,
as this is always available in cloud providers' offerings. There's no
reason that I am aware of that they cannot use pgoutput for this. There's
also no reason that I am aware of that binary outputs can't be supported.
The protocol would have to change slightly and I am working on a POC patch.

Thing is they aren't all written in C so using binary does provide a pretty
substantial win on the decoding end.
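The decoding-side difference can be sketched in a few lines (Python; a hedged illustration, with int8 standing in for the general case):

```python
import struct

# A binary datum is a fixed-width unpack, while the text form must be
# parsed character by character; for types like timestamp, bytea, or
# (pre-v12) float, the text path is far more involved than this.
value = 1234567890123
binary_wire = struct.pack("!q", value)   # 8 bytes, network byte order
text_wire = str(value).encode()          # 13 bytes of ASCII digits

assert struct.unpack("!q", binary_wire)[0] == value  # one fixed-width read
assert int(text_wire) == value                       # per-character parse
```

Non-C clients still pay the parse cost on every row, which is where the win Dave describes comes from.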

Dave


Binary support for pgoutput plugin

2019-06-03 Thread Dave Cramer
Is there a reason why pgoutput sends data in text format? Seems to me that
sending data in binary would provide a considerable performance improvement.


Dave Cramer


Re: [Elecraft] Problem with K3 driving old tube amp

2019-05-31 Thread Kurt Cramer
Do you have the ALC hooked up?

Kurt

> On May 31, 2019, at 4:21 PM, William Stewart  wrote:
> 
> All,
> 
> I am trying to bring a Drake L-4B to life. I have done the power supply 
> rebuild, the soft-start and other mods. I have also directly grounded the 
> grids of the 3-500Zs (Eimac brand). In driving it with my K3 (SN 51XX, 
> software up to date) I find that the RF power stair-steps to the set power in 
> a hesitating way over a few seconds. This happens sending Morse code or using 
> the "Tune" button.
> 
> The supply voltage, as monitored on the K3 display, is at about 13.4 volts 
> when transmitting.
> 
> I have a power/SWR meter between the K3 and the amp and it shows a low SWR at 
> all times, as does the K3 SWR meter.
> 
> The power meter between the K3 and the amp shows the power gradually 
> increasing. As the K3 increases power out, the amp power out increases in a 
> similar fashion.
> 
> I see no indication of oscillation. The power meters on the output side of 
> the amp show no power when the amp is keyed but no RF is supplied. I placed a 
> low-pass filter (41 MHz cutoff) between the K3 and the amp to no effect.
> 
> I notice that if I send a series of Morse code characters such that the K3 
> transmit delay timer never times out, once the power ramps up to the set 
> value, it remains solid until the timer expires - when I start transmitting 
> again, the power step process repeats.
> 
> When I put the amp in Standby, the K3 functions normally.
> 
> This smells like an oscillation problem, but I see no evidence of one.
> 
> Any advice would be appreciated.
> 
> Thanks,
> 
> Bill K5EMI
> 
> 
> __
> Elecraft mailing list
> Home: http://mailman.qth.net/mailman/listinfo/elecraft
> Help: http://mailman.qth.net/mmfaq.htm
> Post: mailto:Elecraft@mailman.qth.net
> 
> This list hosted by: http://www.qsl.net
> Please help support this email list: http://www.qsl.net/donate.html



Re: [HACKERS] Built-in plugin for logical decoding output

2019-05-29 Thread Dave Cramer
Reviving this thread.



On Tue, 26 Sep 2017 at 13:57, Henry  wrote:

> It seems test_decoding.c could be easily changed to support JSON by using
> the built-in PostgreSQL functions (json.c composite_to_json) to convert a
> Datum into JSON. Its use of OidOutputFunctionCall could be modified to emit
> arrays and composite types as JSON. This might be enough to enable
> downstream frameworks to parse (without having to code to the terse and
> positional composite structure format).
>
> It could be a minimal change to have in core using the built in JSON
> support with no additional libraries. I have not made changes to this code
> but it seems like it should work.
>
> Thank you,
> Henry
>
> On Tue, Sep 26, 2017 at 9:37 AM Alvaro Hernandez  wrote:
>
>>
>>
>> On 26/09/17 17:50, Craig Ringer wrote:
>>
>> On 26 September 2017 at 22:14, Magnus Hagander 
>> wrote:
>>
>>>
>>>
>>> On Tue, Sep 26, 2017 at 2:16 PM, Alvaro Hernandez 
>>> wrote:
>>>
>>>>
>>>>
>>>>
>>>> But what about earlier versions? Any chance it could be backported
>>>> down to 9.4? If that would be acceptable, I could probably help/do that...
>>>
>>>
>>> The likelihood is zero if you mean backported into core of earlier
>>> versions.
>>>
>>
>> Right. We don't add features to back branches.
>>
>>
>> Yeah, I know the policy. But asking is free ;) and in my opinion this
>> would be a very good reason to have an exception, if there would be a clear
>> desire to have a single, unified, production quality output plugin across
>> all PostgreSQL versions. At least I tried ;)
>>
>>
>>
>>
>>
>>>
>>> If you mean backported as a standalone extension that could be installed
>>> on a previous version, probably. I'm not sure if it relies on any internals
>>> not present before that would make it harder, but it would probably at
>>> least be possible.
>>>
>>>
>> All the pub/sub stuff is new and hooked into syscache etc. So you'd be
>> doing a bunch of emulation/shims using user catalogs. Not impossible, but
>> probably irritating and verbose. And you'd have none of the DDL required to
>> manage it, so you'd need SQL-function equivalents.
>>
>> I suspect you'd be better off tweaking pglogical to speak the same
>> protocol as pg10, since the pgoutput protocol is an evolution of
>> pglogical's protocol. Then using pglogical on older versions.
>>
>>
>>
>> Given all this, if I would be doing an app based on logical decoding,
>> I think I will stick to test_decoding for <10
>>
>>
>> Thanks,
>>
>>
>> Álvaro
>>
>>
>> --
>>
>> Alvaro Hernandez
>>
>>
>> ---
>> OnGres
>>
>>

I believe there is a valid reason for providing a reasonably
feature-complete plugin in core. Specifically, in instances such as cloud
providers where the user does not control what is installed on the server,
it would be useful to have a decent output plugin.

Having had a cursory look at pgoutput, I see no reason why it could not be
used as a general-purpose output plugin.

One thing that would be nice is to remove the requirement for a
publication, as creating a publication on all tables requires a superuser.
I'm also curious why pgoutput does not send attributes in binary. This
seems like a rather small change that should provide some significant
performance benefits.


Dave Cramer

da...@postgresintl.com
www.postgresintl.com


Re: This seems like very unfriendly behaviour

2019-05-26 Thread Dave Cramer
On Sun, 26 May 2019 at 01:40, Jaime Casanova 
wrote:

> On Sat, 25 May 2019 at 08:35, Dave Cramer  wrote:
> >
> > How do I get rid of this slot ?
> >
> > select pg_drop_replication_slot('mysub');
> > ERROR:  replication slot "mysub" is active for PID 13065
> > test_database=# select * from pg_subscription;
> >  subdbid | subname | subowner | subenabled | subconninfo | subslotname |
> subsynccommit | subpublications
> >
> -+-+--++-+-+---+-
> > (0 rows)
> >
> > test_database=# select * from pg_publication;
> >  pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete |
> pubtruncate
> >
> -+--+--+---+---+---+-
> > (0 rows)
> >
>
> Can you check "select * from pg_stat_replication"?
>
> also, what pid is being reported in pg_replication_slot for this slot?
> do you see a process in pg_stat_activity for that pid? in the os?
>

Well, it turned out it was on the receiver. I did get rid of it, but it's
still not a friendly message.

Thanks

Dave Cramer


This seems like very unfriendly behaviour

2019-05-25 Thread Dave Cramer
How do I get rid of this slot ?

select pg_drop_replication_slot('mysub');
ERROR:  replication slot "mysub" is active for PID 13065
test_database=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled | subconninfo | subslotname |
subsynccommit | subpublications
-+-+--++-+-+---+-
(0 rows)

test_database=# select * from pg_publication;
 pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete |
pubtruncate
-+--+--+---+---+---+-
(0 rows)
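For reference, a sequence that usually resolves this (a sketch; the PID and slot name come from the error above, and the blocking process may be a walsender or something on the receiving side):

```sql
-- Find which backend holds the slot open.
SELECT slot_name, active, active_pid FROM pg_replication_slots;

-- Terminate that backend (the PID from the error message), then drop.
SELECT pg_terminate_backend(13065);
SELECT pg_drop_replication_slot('mysub');
```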

Dave Cramer


Re: initdb recommendations

2019-05-24 Thread Dave Cramer
On Fri, 24 May 2019 at 07:48, Joe Conway  wrote:

> On 5/23/19 10:30 PM, Stephen Frost wrote:
> > Greetings,
> >
> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
> >> "Jonathan S. Katz"  writes:
> >> > For now I have left in the password based method to be scram-sha-256
> as
> >> > I am optimistic about the support across client drivers[1] (and FWIW I
> >> > have an implementation for crystal-pg ~60% done).
> >>
> >> > However, this probably means we would need to set the default password
> >> > encryption guc to "scram-sha-256" which we're not ready to do yet, so
> it
> >> > may be moot to leave it in.
> >>
> >> > So, thinking out loud about that, we should probably use "md5" and
> once
> >> > we decide to make the encryption method "scram-sha-256" by default,
> then
> >> > we update the recommendation?
> >>
> >> Meh.  If we're going to break things, let's break them.  Set it to
> >> scram by default and let people who need to cope with old clients
> >> change the default.  I'm tired of explaining that MD5 isn't actually
> >> insecure in our usage ...
> >
> > +many.
>
> many++
>
> Are we doing this for pg12? In any case, I would think we better loudly
> point out this change somewhere.
>
>
+many as well, given the presumption that we are going to break existing
behaviour.

Dave


Re: Hibernate ERROR (Could not synchronize database state with session)

2019-05-21 Thread Dave Cramer
A couple of things here.

PgAdmin3 is no longer supported; secondly, PgAdmin3 is not written in
Java, nor does it use Hibernate. Not sure we can help you on this list.

Regards,

Dave Cramer


On Tue, 21 May 2019 at 05:48, LOPES Filipe 
wrote:

> Good Morning,
>
>
>
> I’m a support technician working for a company called SES-IMAGOTAG,
>
>
>
> I’ve an issue with PgAdmin3:
>
>
>
> I’ve a lot of hibernate errors (you can find the logs attached)
>
>
>
> Can you help me figure out what the problem is, and if there’s a solution
> I can use to solve it?
>
>
>
> Thank you,
>
>
>
> BR
>
>
>
>
>
>
> *Filipe LOPES*
>
> Équipe Support / Suporte Técnico
>
> *support.inte...@ses-imagotag.com *
>
>
>
> Store Electronic Systems
> 55 place Nelson Mandela
>
> CS 60106
> 92024 Nanterre Cedex
>
> *www.ses-imagotag.com <http://www.ses-imagotag.com/>*


Book or tutorials?

2019-05-15 Thread Tom Cramer
Good morning,

I wonder if someone could direct me to a good book or tutorials
regarding the following.  I recently got a new laptop/tablet and
wanted to install iTunes and sync certain things from my phone, such
as contacts and other things.  I haven't used iTunes for a very long
time and remember it had its issues in the past.  Has iTunes gotten
easier to use with JAWS, and is it going to be easy to sync contacts
and phone information?  I don't mind doing the work but just want to
know a good place to get started.
Thank you very much.

-- 
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups 
"VIPhone" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To post to this group, send email to viphone@googlegroups.com.
Visit this group at https://groups.google.com/group/viphone.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/viphone/CAPr-VTr-6b8hxaAUO2UQX%3DfaRaG5XMdsg%3D4tBbSgw7KRGw3Uww%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: ZFS...

2019-05-08 Thread Walter Cramer

On Wed, 8 May 2019, Paul Mather wrote:


On May 8, 2019, at 9:59 AM, Michelle Sullivan  wrote:


Paul Mather wrote:
due to lack of space.  Interestingly have had another drive die in the 
array - and it doesn't just have one or two sectors down it has a *lot* - 
which was not noticed by the original machine - I moved the drive to a 
byte copier which is where it's reporting 100's of sectors damaged... 
could this be compounded by zfs/mfi driver/hba not picking up errors like 
it should?



Did you have regular pool scrubs enabled?  It would have picked up silent 
data corruption like this.  It does for me.
Yes, every month (once a month because, (1) the data doesn't change much 
(new data is added, old it not touched), and (2) because to complete it 
took 2 weeks.)



Do you also run sysutils/smartmontools to monitor S.M.A.R.T. attributes? 
Although imperfect, it can sometimes signal trouble brewing with a drive 
(e.g., increasing Reallocated_Sector_Ct and Current_Pending_Sector counts) 
that can lead to proactive remediation before catastrophe strikes.


Unless you have been gathering periodic drive metrics, you have no way of 
knowing whether these hundreds of bad sectors have happened suddenly or 
slowly over a period of time.




+1

Use `smartctl` from a cron script to do regular (say, weekly) *long* 
self-tests of hard drives, and also log (say, daily) all the SMART 
information from each drive.  Then if a drive fails, you can at least 
check the logs for whether SMART noticed symptoms, and (if so) for other 
drives with symptoms.  Or enhance this with a slightly longer script, 
which watches the logs for symptoms, and alerts you.


(My experience is that SMART's *long* self-test checks the entire disk for 
read errors, with neither downside of `zpool scrub` - it does a fast, 
sequential read of the HD, including free space.  That makes it a nice 
test for failing disk hardware; not a replacement for `zpool scrub`.)
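The cron setup Walter describes might look like this (a sketch only; the device list, schedule, and log paths are assumptions to adapt per system — run the first loop weekly and the second daily from cron):

```
#!/bin/sh
# Weekly job: kick off a long (full-surface) self-test on each drive.
# Daily job: append all SMART attributes to a per-drive log.
DRIVES="/dev/ada0 /dev/ada1 /dev/ada2"
LOGDIR="/var/log/smart"

for d in $DRIVES; do
    smartctl -t long "$d"        # start a long self-test (runs in background)
done

for d in $DRIVES; do
    smartctl -a "$d" >> "$LOGDIR/$(basename "$d").log"   # snapshot SMART info
done
```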



Cheers,

Paul.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"



Re: ZFS...

2019-05-06 Thread Walter Cramer

On Mon, 6 May 2019, Patrick M. Hausen wrote:


Hi!


Am 30.04.2019 um 18:07 schrieb Walter Cramer :


With even a 1Gbit ethernet connection to your main system, savvy use of 
(say) rsync (net/rsync in Ports), and the sort of "know your data / 
divide & conquer" tactics that Karl mentions, you should be able to 
complete initial backups (on both backup servers) in <1 month.  After 
that - rsync can generally do incremental backups far, far faster.


ZFS can do incremental snapshots and send/receive much faster than rsync 
on the file level. And e.g. FreeNAS comes with all the bells and 
whistles already in place - just a matter of point and click to 
replicate one set of datasets on one server to another one …
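A minimal sketch of that snapshot-based replication (dataset and host names are assumptions):

```
# Initial full send of a snapshot, then periodic incrementals that ship
# only the blocks changed between the two snapshots.
zfs snapshot tank/data@2019-05-06
zfs send tank/data@2019-05-06 | ssh backup1 zfs receive -F backup/data

zfs snapshot tank/data@2019-05-07
zfs send -i @2019-05-06 tank/data@2019-05-07 | ssh backup1 zfs receive backup/data
```

Because the incremental stream is computed at the block level, it avoids the per-file scan that makes rsync slower on large, mostly-static datasets.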


True.  But I was making a brief suggestion to Michelle - who does not seem 
to be a trusting fan of ZFS - hoping that she might actually implement it, 
or something similar.  Or at least an already-tediously-long mailing list 
thread would end.  Rsync is good enough for her situation, and would let 
her use UFS on her off-site backup servers, if she preferred that.


*Local* replication is a piece of cake today, if you have the hardware.

Kind regards,
Patrick
--
punkt.de GmbH   Internet - Dienstleistungen - Beratung
Kaiserallee 13a Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe i...@punkt.de   http://punkt.de
AG Mannheim 108285  Gf: Juergen Egeling


Re: ZFS...

2019-04-30 Thread Walter Cramer

Brief "Old Man" summary/perspective here...

Computers and hard drives are complex, sensitive physical things.  They, 
or the data on them, can be lost to fire, flood, lightning strikes, theft, 
transportation screw-ups, and more.  Mass data corruption by faulty 
hardware or software is mostly rare, but does happen.  Then there's the 
users - authorized or not - who are inept or malicious.


You can spend a fortune to make loss of the "live" data in your home 
server / server room / data center very unlikely.  Is that worth the time 
and money?  Depends on the business case.  At any scale, it's best to have 
a manager - who understands both computers and the bottom line - keep a 
close eye on this.


"Real" protection from data loss means multiple off-site and generally 
off-line backups.  You could spend a fortune on that, too...but for your 
use case (~21TB in an array that could hold ~39TB, and what sounds like a 
"home power user" budget), I'd say to put together two "backup servers" - 
cheap little (aka transportable) FreeBSD systems with, say, 7x6TB HDs, 
raidz1.  With even a 1Gbit ethernet connection to your main system, savvy 
use of (say) rsync (net/rsync in Ports), and the sort of "know your data / 
divide & conquer" tactics that Karl mentions, you should be able to 
complete initial backups (on both backup servers) in <1 month.  After that 
- rsync can generally do incremental backups far, far faster.  How often 
you gently haul the backup servers to/from your off-site location(s) 
depends on a bunch of factors - backup frequency, cost of bandwidth, etc.


Never skimp on power supplies.

-Walter

[Credits:  Nothing above is original.  Others have already made most of my 
points in this thread.  It's pretty much all decades-old computer wisdom 
in any case.]



On Tue, 30 Apr 2019, Michelle Sullivan wrote:


Karl Denninger wrote:

On 4/30/2019 05:14, Michelle Sullivan wrote:

On 30 Apr 2019, at 19:50, Xin LI  wrote:
On Tue, Apr 30, 2019 at 5:08 PM Michelle Sullivan  wrote:
but in my recent experience 2 issues colliding at the same time results
in disaster.
Do we know exactly what kind of corruption happen to your pool?  If you 
see it twice in a row, it might suggest a software bug that should be 
investigated.


All I know is it's a checksum error on a meta slab (122) and from what 
I can gather it's the spacemap that is corrupt... but I am no expert.  I 
don't believe it's a software fault as such, because this was caused by a 
hard outage (damaged UPSes) whilst resilvering a single (but completely 
failed) drive.  ...and after the first outage a second occurred (same as the 
first but more damaging to the power hardware)... the host itself was not 
damaged nor were the drives or controller.

.
Note that ZFS stores multiple copies of its essential metadata, and in my 
experience with my old, consumer grade crappy hardware (non-ECC RAM, with 
several faulty, single hard drive pool: bad enough to crash almost monthly 
and damages my data from time to time),
This was a top-end consumer-grade mb with non-ECC ram that had been 
running for 8+ years without fault (except for hard drive platter failures). 
Uptime would have been years if it wasn't for patching.

Yuck.

I'm sorry, but that may well be what nailed you.

ECC is not just about the random cosmic ray.  It also saves your bacon
when there are power glitches.


No. Sorry, no.  If the data is only half written to disk, ECC isn't going 
to save you at all... it's all about power on the drives to complete the 
write.


Unfortunately however there is also cache memory on most modern hard
drives, most of the time (unless you explicitly shut it off) it's on for
write caching, and it'll nail you too.  Oh, and it's never, in my
experience, ECC.


No comment on that - you're right in the first part, I can't comment if 
there are drives with ECC.




In addition, however, and this is something I learned a LONG time ago
(think Z-80 processors!) is that as in so many very important things
"two is one and one is none."

In other words without a backup you WILL lose data eventually, and it
WILL be important.

Raidz2 is very nice, but as the name implies it you have two
redundancies.  If you take three errors, or if, God forbid, you *write*
a block that has a bad checksum in it because it got scrambled while in
RAM, you're dead if that happens in the wrong place.


Or in my case you write part data therefore invalidating the checksum...


Yeah... unlike UFS, which has to get really, really hosed before you must 
restore from backup with nothing recoverable, it seems ZFS can get hosed 
where issues occur in just the wrong bit... but mostly it is recoverable 
(and my experience has been some nasty shit that always ended up being 
recoverable.)


Michelle

Oh that is definitely NOT true again, from hard experience,
including (but not limited to) on FreeBSD.

My experience is that ZFS is materially more-resilient but there is no
such thing as "can never be corrupted 

'Making Matters' conference in The Hague, Netherlands

2019-04-29 Thread Florian Cramer
Bridging Art, Design and Technology through Critical Making

Tickets € 7,50-20: https://tinyurl.com/MakingMattersSymposium

The partners and researchers of the project Bridging Art, Design and
Technology through Critical Making are excited to announce the two-day
symposium Making Matters. The symposium takes place from 9-10th of May at
West Den Haag, Lange Voorhout 102 in The Hague.

Making Matters invites makers, artists, students, activists, theorists,
designers, humans and non-humans to think about making practices and their
critical potential. By offering opportunity for exchange across
disciplines, the symposium attempts to shift the discourse of making from
maker culture to a wider set of creative practices, thereby proposing
alternatives to the solutionism of contemporary techno-creative industries.

The project ‘Bridging Art, Design and Technology through Critical Making’
investigates how Critical Making — a notion originally developed in the
context of social research, design and technology — can be adopted and
developed in relation to artistic research and (post)critical theory.

Confirmed speakers include: Ramon Amaro / Liesbeth Bik / Loes Bogers /
Letizia Chiappini / ginger coons / Florian Cramer / Dyne.org
/ Anja Groten / Frans-Willem Korsten / Pia Louwerens / Ulrike Möntmann /
Shailoh Phillips / Dani Ploeger / Constant (Femke Snelting) / Janneke
Wesseling

Program:
THURSDAY 9 MAY 2019
9.30 Welcome (coffee & tea)
10.00 Introduction Critical Making Consortium:
Klaas Kuitenbrouwer (Het Nieuwe Instituut) / Janneke Wesseling / Lucas
Evers (Waag)
10.30 Presentation Liesbeth Bik followed by dialogue with Florian Cramer
11.15 Coffee break
11.30 Presentations: Dyne.org
/ Constant (Femke Snelting) + dialogue
12.45 Lunch break
13.45 Presentations: Shailoh Phillips & Pia Louwerens
14.45 Presentation: Dani Ploeger
15.45 Coffee break
16.00 Public discussion: Challenges and Consequences of Critical Making Now
17.00 Drinks

FRIDAY 10 MAY 2019
9.30 Welcome (coffee & tea)
10.00 Introduction on Critical Making:
Lucas Evers (Waag) / Klaas Kuitenbrouwer (Het Nieuwe Instituut) /
Marie-José Sondeijker (West Den Haag)
10.30 Presentations: Frans-Willem Korsten / ginger coons
11.30 Coffee break
11.45 Talk: Ramon Amaro
12.15 Presentation: Anja Groten
12.45 Lunch break
13.45 Book presentation INC: Letizia Chiappini / Loes Bogers
14.00 Workshop: Hackers & Designers
14.00 Workshop: Ramon Amaro
14.00 Workshop: Thalia Hoffman
14.00 Workshop: Pia Louwerens
16.00 Public discussion + wrap up
17.00 Drinks

More information: www.criticalmaking.nl

Partners: Academy of Creative and Performing Arts - Leiden University
<https://www.facebook.com/acpaleiden/>, Willem de Kooning Academy
<https://www.facebook.com/WillemdeKooningAcademy/>, Het Nieuwe Instituut
<https://www.facebook.com/HetNieuweInstituut/>, Waag Society
<https://www.facebook.com/waagsociety/>, West
<https://www.facebook.com/westdenhaag/>

The research project ‘Bridging Art, Design and Technology through Critical
Making’ is part of the Smart Culture - Arts and Culture programme, funded
by the Netherlands Organisation for Scientific Research (NWO) in
collaboration with the Taskforce for Applied Research (NRPO SIA).

This notice reflects only the authors’ views. NWO is not liable for any use
that may be made of the information contained therein.


-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl
#  distributed via : no commercial use without permission
#is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nett...@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:

Anyone use Instagram?

2019-04-13 Thread Tom Cramer
Hello,
Does anyone on this list use Instagram with VoiceOver?  I know I am
totally blind, but I've learned there are a few instances where I
might benefit from using Instagram, and several people I know are now
using it rather than FaceBook and such.  Is Instagram more than
pictures, or is it only pictures?  How easy would it be with
VoiceOver?

-- 
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups 
"VIPhone" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To post to this group, send email to viphone@googlegroups.com.
Visit this group at https://groups.google.com/group/viphone.
For more options, visit https://groups.google.com/d/optout.


RE: Crontab Question

2019-04-10 Thread Walter Cramer

On Wed, 10 Apr 2019, Software Info wrote:

OK. So although the script is located in my home directory, it doesn't 
start there?  Sorry but I don't quite understand. Could you explain a 
little further please?


Both 'cp' and 'ls' are located in /bin.  But if I run the 'ls' command in 
/root, 'ls' can't find 'cp' (unless I tell it where to look) - even though 
/bin *is* in my PATH -


server7:/root # ls cp
ls: cp: No such file or directory
server7:/root # ls /bin/cp
/bin/cp

Where the system looks for *commands*, to execute, is different from where 
it looks for other files, which those commands use.  The latter is 
generally only the current directory (unless you tell it otherwise). 
When cron runs a script as root, "current directory" will be /root.


BUT - for security and other reasons, it would be better to have cron run 
your script as you (not root), and as '/home/me/myscript' (instead of 
adding your home directory to PATH in /etc/crontab).
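
A minimal sketch of what that looks like in practice (the script path is the
one from above; the schedule is arbitrary):

```shell
# Edit the *user* crontab with `crontab -e` (no user field, unlike /etc/crontab).
# Use an absolute path to the script; inside the script, also use absolute
# paths for any files it reads or writes, since cron's working directory
# and PATH are minimal.
# min hour dom mon dow  command
15    3    *    *   *   /home/me/myscript
```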


-Walter
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: change password_encryption default to scram-sha-256?

2019-04-08 Thread Dave Cramer
On Mon, 8 Apr 2019 at 16:38, Tom Lane  wrote:

> Dave Cramer  writes:
> >> If someone installs a postgres RPM/DEB from postgresql.org, they could
> >> also install postgresql-jdbc, right ?
>
> > I would guess there might be some distro specific java apps that might
> > actually use what is on the machine but as mentioned any reasonably
> complex
> > Java app is going to ensure it has the correct versions for their app
> using
> > Maven.
>
> I'm not really sure if that makes things better or worse.  If some app
> thinks that it needs version N of the driver, but SCRAM support was
> added in version N-plus-something, how tough is it going to be to get
> it updated?  And are you going to have to go through that dance for
> each app separately?
>
>

I see the problem you are contemplating, but even installing a newer
version of the driver has its perils (we have been known to break some
expectations in the name of the spec).
So I could see a situation where there is a legacy app that wants to use
SCRAM. They update the JDBC jar on the system and due to the "new and
improved" version their app breaks.
Honestly I don't have a solution to this.

That said 42.2.0 was released in January 2018, so by PG13 it's going to be
4 years old.

Dave


Re: change password_encryption default to scram-sha-256?

2019-04-08 Thread Dave Cramer
>
>
>
> > The scenario that worries me here is somebody using a bleeding-edge PGDG
> > server package in an environment where the rest of the Postgres ecosystem
> > is much less bleeding-edge.
>
> If someone installs a postgres RPM/DEB from postgresql.org, they could
> also
> install postgresql-jdbc, right ?
>
>
No, this is not how the majority of people use Java at all. They would use
Maven to pull down the JDBC driver of choice.

I would guess there might be some distro specific java apps that might
actually use what is on the machine but as mentioned any reasonably complex
Java app is going to ensure it has the correct versions for their app using
Maven.

Dave Cramer

da...@postgresintl.com
www.postgresintl.com




Re: change password_encryption default to scram-sha-256?

2019-04-08 Thread Dave Cramer
On Mon, 8 Apr 2019 at 16:07, Alvaro Herrera 
wrote:

> On 2019-Apr-08, Dave Cramer wrote:
>
> > > IIUC the vast majority of clients already support SCRAM auth.  So the
> > > vast majority of PG users can take advantage of the additional
> security.
> > > I think the only massive-adoption exception is JDBC, and apparently
> they
> > > already have working patches for SCRAM.
> >
> > We have more than patches this is already in the driver.
> >
> > What do you mean by "massive-adoption exception"
>
> I meant an exception to the common situation that SCRAM-SHA-256 is
> supported and shipped in stable releases of each driver.  The wiki here
> still says it's unsupported on JDBC:
> https://wiki.postgresql.org/wiki/List_of_drivers
> For once I'm happy to learn that the wiki is outdated :-)
>


Way too many places to update :)


Dave Cramer

da...@postgresintl.com
www.postgresintl.com




Re: change password_encryption default to scram-sha-256?

2019-04-08 Thread Dave Cramer
On Mon, 8 Apr 2019 at 15:18, Jonathan S. Katz  wrote:

> On 4/8/19 2:28 PM, Tom Lane wrote:
> > Andres Freund  writes:
> >> On 2019-04-08 13:34:12 -0400, Alvaro Herrera wrote:
> >>> I'm not sure I understand all this talk about deferring changing the
> >>> default to pg13.  AFAICS only a few fringe drivers are missing support;
> >>> not changing in pg12 means we're going to leave *all* users, even those
> >>> whose clients have support, without the additional security for 18 more
> >>> months.
> >
> >> Imo making such changes after feature freeze is somewhat poor
> >> form.
> >
> > Yeah.
>
> Yeah, that's fair.
>
> >
> >> If jdbc didn't support scram, it'd be an absolutely clear no-go imo. A
> >> pretty large fraction of users use jdbc to access postgres. But it seems
> >> to me that support has been merged for a while:
> >> https://github.com/pgjdbc/pgjdbc/pull/1014
> >
> > "Merged to upstream" is a whole lot different from "readily available in
> > the field".  What's the actual status in common Linux distros, for
> > example?
>
> Did some limited research just to get a sense.
>
> Well, if it's RHEL7, it's PostgreSQL 9.2 so, unless they're using our
> RPM, that definitely does not have it :)
>
> (While researching this, I noticed on the main RHEL8 beta page[1] that
> PostgreSQL is actually featured, which is kind of neat. I could not
> quickly find which version of the JDBC driver it is shipping with, though)
>
> On Ubuntu, 18.04 LTS ships PG10, but the version of JDBC does not
> include SCRAM support. 18.10 ships JDBC w/SCRAM support.
>
> On Debian, stretch is on 9.4. buster has 11 packaged, and JDBC is
> shipping with SCRAM support.
>
>

Honestly, what JDBC driver distro XYZ ships with is a red herring. Any
reasonably complex Java program is going to use Maven and pull its
dependencies.

That said, as a driver developer, I support pushing this decision off to
PG13.

Dave Cramer

da...@postgresintl.com
www.postgresintl.com




Re: change password_encryption default to scram-sha-256?

2019-04-08 Thread Dave Cramer
Alvaro,

On Mon, 8 Apr 2019 at 13:34, Alvaro Herrera 
wrote:

> I'm not sure I understand all this talk about deferring changing the
> default to pg13.  AFAICS only a few fringe drivers are missing support;
> not changing in pg12 means we're going to leave *all* users, even those
> whose clients have support, without the additional security for 18 more
> months.
>
> IIUC the vast majority of clients already support SCRAM auth.  So the
> vast majority of PG users can take advantage of the additional security.
> I think the only massive-adoption exception is JDBC, and apparently they
> already have working patches for SCRAM.
>


We have more than patches this is already in the driver.

What do you mean by "massive-adoption exception"

Dave Cramer

da...@postgresintl.com
www.postgresintl.com





Re: Forks of pgadmin3?

2019-03-25 Thread Dave Cramer
Thomas,

Any chance it would run under GraalVM, getting rid of the need for the JVM?

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Mon, 25 Mar 2019 at 07:06, Thomas Kellerer  wrote:

> kpi6...@gmail.com schrieb am 22.03.2019 um 17:25:
> > 95% of my time I use pgadminIII just to type select and update
> > statements and review the output rows.
> >
> > I know that I can do this in psql but it’s not handy with many
> > columns.
>
> An alternative you might want to try is SQL Workbench/J:
> https://www.sql-workbench.eu/
>
> Full disclosure: I am the author of that tool.
>
> It's a cross DBMS tool, but my primary focus is Postgres.
>
> It focuses on running SQL queries rather than being a DBA tool.
>
> Regards
> Thomas


[Bug 1821642] [NEW] stalls during install or reports an error that the CD might be currupt

2019-03-25 Thread Tommy TBones Cramer
Public bug reported:

my next step is to use the live disc as the OS, download the current
version, then burn a new disc before I reboot. This is my home server
that is used mainly for live broadcasting to my streaming server
radio1.440music.com. radio2.440music.com is the name of the server
I'm trying to install Ubuntu Bionic Beaver 18.04.2 on. Ubuntu seems
to be the only stable OS that will run Mixxx; I sure wish SAMSBroadcast
would work on Linux OSs.

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: ubiquity 18.04.14.12
ProcVersionSignature: Ubuntu 4.18.0-15.16~18.04.1-generic 4.18.20
Uname: Linux 4.18.0-15-generic x86_64
ApportVersion: 2.20.9-0ubuntu7.5
Architecture: amd64
CasperVersion: 1.394
Date: Mon Mar 25 16:09:05 2019
InstallCmdLine: file=/cdrom/preseed/ubuntu.seed boot=casper 
initrd=/casper/initrd quiet splash --- maybe-ubiquity
LiveMediaBuild: Ubuntu 18.04.2 LTS "Bionic Beaver" - Release amd64 (20190210)
SourcePackage: ubiquity
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: ubiquity (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug bionic ubiquity-18.04.14.12 ubuntu

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1821642

Title:
  stalls during install or reports an error that the CD might be currupt

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1821642/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: rage against the machine

2019-03-24 Thread Florian Cramer
Having zero knowledge of airplane technology, I do not know whether the
following writeup/opinion piece on the 737 Max is a trustworthy source or
not.
It was written by a software developer (that I could verify) named Gregory
Travis who claims to have been a "pilot and aircraft owner for over thirty
years"
and who blogged on airplane engineering in the past:
https://drive.google.com/file/d/1249KS8xtIDKb5SxgpeFI6AD-PSC6nFA5/view

Travis suggests that the 737 MAX fiasco resulted from a combination of
market economics/cost-optimization management and software
being used to correct hardware design flaws.

Here's an extensive, selective quote from this document:

> "Over the years, market and technological forces pushed the 737 into
larger versions with more electronic
> and mechanical complexity. This is not, by any means, unique to the 737.
All
> airliners, enormous capital investments both for the industries that make
them as well as
> the customers who buy them, go through a similar growth process.
> The majority of those market and technical forces allied on the side of
economics, not safety.
> They were allied to relentlessly drive down what the industry calls
'seat-mile costs' – the cost of flying a seat from one point to another."
>
> To improve capacity and efficiency (I'm still paraphrasing the document),
engines had to become physically larger:
> "problem: the original 737 had (by today’s standards) tiny little engines
that easily cleared the ground beneath the wings. As the 737 grew and was
fitted with bigger engines, the
> clearance between the engines and the ground started to get a little,
umm, 'tight.' [...]
>
> With the 737 MAX the situation became critical. [...] The solution was to
extend the engine up and well in front of the wing. However,
> doing so also meant that the centerline of the engine’s thrust changed.
Now, when the pilots applied power to the
> engine, the aircraft would have a significant propensity to 'pitch up' –
raise its nose. [...]
>
> Apparently the 737 MAX pitched up a bit too much for comfort on power
application as well as
> at already-high-angles-of-attack. It violated that most ancient of
aviation canons and probably
> violated the FAA’s certification criteria. But, instead of going back to
the drawing board and
> getting the airframe hardware right (more on that below), Boeing’s
solution was something
> called the 'Maneuvering Characteristics Augmentation System,' or MCAS.
> Boeing’s solution to their hardware problem was software."

Software that didn't work as expected.

- By itself, this story doesn't sound new, but (particularly to European
readers) reads like a flashback from more than twenty years ago
when Mercedes botched the aerodynamic design of its "A series" car (its
first entry into the compact car segment) and corrected it with
computerized Electronic Stability Control (ESC/ESP), a textbook example of
a cybernetic feedback-and-control system based on sensors and software.

Here is an article that explains the basics of Boeing's MCAS system, which
sounds similar to ESC/ESP indeed:
https://theaircurrent.com/aviation-safety/what-is-the-boeing-737-max-maneuvering-characteristics-augmentation-system-mcas-jt610/
(For lay people like me, the surprising bit was that MCAS "activates
automatically when [...] autopilot is off".)

-F

-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl

Re: Christchurch and the Dark Social Web by Luke Munn

2019-03-19 Thread Florian Cramer
>
> although I agree with  a large part of Lukes analyse let me put in
> question the point of machinic agency.


All the more since 8chan, where the Christchurch killer was culturally
and literally at home and posted his announcement, is (along with the other
chans) the least algorithmically regulated social medium of them all -
which is exactly the reason for its attraction to counter- and fringe
culture, including racist murderers.

-F

Re: Observations from a ZFS reorganization on 12-STABLE

2019-03-18 Thread Walter Cramer
I suggest caution in raising vm.v_free_min, at least on 11.2-RELEASE 
systems with less RAM.  I tried "65536" (256MB) on a 4GB mini-server, with 
vfs.zfs.arc_max of 2.5GB.  Bad things happened when the cron daemon merely 
tried to run `periodic daily`.


A few more details - ARC was mostly full, and "bad things" was 1: 
`pagedaemon` seemed to be thrashing memory - using 100% of CPU, with 
little disk activity, and 2: many normal processes seemed unable to run. 
The latter is probably explained by `man 3 sysctl` (see entry for 
"VM_V_FREE_MIN").



On Mon, 18 Mar 2019, Pete French wrote:


On 17/03/2019 21:57, Eugene Grosbein wrote:

I agree. Recently I've found kind-of-workaround for this problem:
increase vm.v_free_min so when "FREE" memory goes low,
page daemon wakes earlier and shrinks UMA (and ZFS ARC too) moving some 
memory
from WIRED to FREE quick enough so it can be re-used before bad things 
happen.


But avoid increasing vm.v_free_min too much (e.g. over 1/4 of total RAM)
because kernel may start behaving strange. For 16Gb system it should be 
enough

to raise vm.v_free_min upto 262144 (1GB) or 131072 (512M).

This is not permanent solution in any way but it really helps.


Ah, that's very interesting, thank you for that! I've been bitten by this issue 
too in the past, and it is (as mentioned) much improved on 12, but the fact it 
could still cause issues worries me.


-pete.


Re: [GTALUG] Gender discrimination

2019-03-13 Thread Dave Cramer via talk
On Wed, 13 Mar 2019 at 16:58, Malgosia Askanas via talk 
wrote:

> So let me understand this.  If a woman (me, for example) would bemoan the
> fact that 80% of participants in a certain field of endeavor are male,
> that's not gender discrimination, right?  (I assume it isn't, since it goes
> on all the time in all kinds of media, without any visible censorship.)
> But if a man bemoans the fact that 80% of participants in (another) field
> of endeavor are female, that's gender discrimination?  To my mind, labeling
> the latter as "gender discrimination seems like... gender discrimination.
> Or is it that, by definition, censoring a man cannot possibly be gender
> discrimination?
>

Isn't 2019 so much fun...

Same thing happens with race. Caucasians can't bemoan anything...

Dave Cramer
---
Talk Mailing List
talk@gtalug.org
https://gtalug.org/mailman/listinfo/talk


[docbook-apps] Google "Season of Docs"

2019-03-12 Thread David Cramer
Back when we mentored those Summer of Code projects, I always thought
they should do something similar for docs. Now they are:

https://opensource.googleblog.com/2019/03/introducing-season-of-docs.html

Regards,

David

-
To unsubscribe, e-mail: docbook-apps-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: docbook-apps-h...@lists.oasis-open.org



Re: Leaving a conversation?

2019-03-02 Thread Tom Cramer
Hello,
I forgot to say that I'm getting these threads with the basic Message
app.  In other words, they're text conversations. I don't know if that
helps.

On 3/2/19, Tom Cramer  wrote:
> Hello,
>
> Is it possible for me to leave a conversation made up of texts?
> Lately, I've been added to conversations for a group of people, but
> many of the messages are simply emoticons or LOL or things that really
> add no substance to the conversation.  I just didn't know if I could
> leave a thread or hide the messages.  If I do something like this,
> would the people know?
>



Leaving a conversation?

2019-03-02 Thread Tom Cramer
Hello,

Is it possible for me to leave a conversation made up of texts?
Lately, I've been added to conversations for a group of people, but
many of the messages are simply emoticons or LOL or things that really
add no substance to the conversation.  I just didn't know if I could
leave a thread or hide the messages.  If I do something like this,
would the people know?



Re: Libpq support to connect to standby server as priority

2019-02-28 Thread Dave Cramer
> Now I will add the another parameter target_server_type to choose the
> primary, standby or prefer-standby
> as discussed in the upthreads with a new GUC variable.
>


So, just to further confuse things, here is a use case for "preferPrimary":

This is from the pgjdbc list.

"if the master instance fails, we would like the driver to communicate with
the secondary instance for read-only operations before the failover process
is commenced. The second use-case is when the master instance is
deliberately shut down for maintenance reasons and we do not want to fail
over to the secondary instance, but instead allow it to process user
queries throughout the maintenance."


see this for the thread.
https://www.postgresql.org/message-id/VI1PR05MB5295AE43EF9525EACC9E57ECBC750%40VI1PR05MB5295.eurprd05.prod.outlook.com
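
For illustration only, here is roughly what such a connection string could
look like. `target_server_type=prefer-standby` is the parameter name proposed
in this thread, not a released libpq option; the released (and narrower) knob
since PostgreSQL 10 is `target_session_attrs`:

```shell
# Hypothetical, per the proposed parameter: hosts are tried in order and a
# standby is preferred, falling back to the primary if none is available.
psql "host=db1.example.com,db2.example.com dbname=app target_server_type=prefer-standby"

# What released libpq (PostgreSQL 10+) already supports:
psql "host=db1.example.com,db2.example.com dbname=app target_session_attrs=any"
```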

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


Re: Reviving the "Stopping logical replication protocol" patch from Vladimir Gordichuk

2019-02-16 Thread Dave Cramer
Andres,

Thanks for looking at this. FYI, I did not originally write this; the
original author has not replied to requests.
JDBC could use this, I assume others could as well.

That said I'm certainly open to suggestions on how to do this.

Craig, do you have any other ideas?

Dave Cramer


On Fri, 15 Feb 2019 at 22:01, Andres Freund  wrote:

> Hi,
>
> On 2018-12-03 06:38:43 -0500, Dave Cramer wrote:
> > From 4d023cfc1fed0b5852b4da1aad6a32549b03ce26 Mon Sep 17 00:00:00 2001
> > From: Dave Cramer 
> > Date: Fri, 30 Nov 2018 18:23:49 -0500
> > Subject: [PATCH 1/5] Respect client initiated CopyDone in walsender
> >
> > ---
> >  src/backend/replication/walsender.c | 36
> ++--
> >  1 file changed, 30 insertions(+), 6 deletions(-)
> >
> > diff --git a/src/backend/replication/walsender.c
> b/src/backend/replication/walsender.c
> > index 46edb52..93f2648 100644
> > --- a/src/backend/replication/walsender.c
> > +++ b/src/backend/replication/walsender.c
> > @@ -770,6 +770,14 @@ logical_read_xlog_page(XLogReaderState *state,
> XLogRecPtr targetPagePtr, int req
> >   sendTimeLineValidUpto = state->currTLIValidUntil;
> >   sendTimeLineNextTLI = state->nextTLI;
> >
> > + /*
> > + * If the client sent CopyDone while we were waiting,
> > + * bail out so we can wind up the decoding session.
> > + */
> > + if (streamingDoneSending)
> > + return -1;
> > +
> > +  /* more than one block available */
> >   /* make sure we have enough WAL available */
> >   flushptr = WalSndWaitForWal(targetPagePtr + reqLen);
> >
> > @@ -1341,8 +1349,12 @@ WalSndWaitForWal(XLogRecPtr loc)
> >* It's important to do this check after the recomputation
> of
> >* RecentFlushPtr, so we can send all remaining data
> before shutting
> >* down.
> > -  */
> > - if (got_STOPPING)
> > +  *
> > +  * We'll also exit here if the client sent CopyDone
> because it wants
> > +  * to return to command mode.
> > + */
> > +
> > + if (got_STOPPING || streamingDoneReceiving)
> >   break;
> >
> >   /*
> > @@ -2095,7 +2107,14 @@ WalSndCheckTimeOut(void)
> >   }
> >  }
> >
> > -/* Main loop of walsender process that streams the WAL over Copy
> messages. */
> > +/*
> > + * Main loop of walsender process that streams the WAL over Copy
> messages.
> > + *
> > + * The send_data callback must enqueue complete CopyData messages to
> libpq
> > + * using pq_putmessage_noblock or similar, since the walsender loop may
> send
> > + * CopyDone then exit and return to command mode in response to a client
> > + * CopyDone between calls to send_data.
> > + */
>
> Wait, how is it ok to end CopyDone before all the pending data has been
> sent out?
>
>
>
> > diff --git a/src/backend/replication/logical/reorderbuffer.c
> b/src/backend/replication/logical/reorderbuffer.c
> > index 23466ba..66b6e90 100644
> > --- a/src/backend/replication/logical/reorderbuffer.c
> > +++ b/src/backend/replication/logical/reorderbuffer.c
> > @@ -1497,7 +1497,9 @@ ReorderBufferCommit(ReorderBuffer *rb,
> TransactionId xid,
> >   rb->begin(rb, txn);
> >
> >   iterstate = ReorderBufferIterTXNInit(rb, txn);
> > - while ((change = ReorderBufferIterTXNNext(rb, iterstate))
> != NULL)
> > + while ((change = ReorderBufferIterTXNNext(rb, iterstate))
> != NULL &&
> > +(rb->continue_decoding_cb == NULL ||
> > + rb->continue_decoding_cb()))
> >   {
> >   Relationrelation = NULL;
> >   Oid reloid;
>
> > @@ -1774,8 +1776,11 @@ ReorderBufferCommit(ReorderBuffer *rb,
> TransactionId xid,
> >   ReorderBufferIterTXNFinish(rb, iterstate);
> >   iterstate = NULL;
> >
> > - /* call commit callback */
> > - rb->commit(rb, txn, commit_lsn);
> > + if (rb->continue_decoding_cb == NULL ||
> rb->continue_decoding_cb())
> > + {
> > + /* call commit callback */
> > + rb->commit(rb, txn, commit_lsn);
> > + }
>
>
> I'm doubtful it's ok to simply stop in the middle of a transaction.
>

[ccp4bb] unable to run Jligand

2019-02-15 Thread Johannes Cramer
Dear ccp4bb,

I am trying to start JLigand from a ccp4 installation on a Windows7
VirtualBox host running a kubuntu 18.04 client, but I get the following
error:

Exception in thread "main" java.lang.ExceptionInInitializerError
> at JLigand.main(JLigand.java:35)
> Caused by: java.lang.NullPointerException
> at java.base/java.io.Reader.<init>(Reader.java:82)
> at
> java.base/java.io.InputStreamReader.<init>(InputStreamReader.java:72)
> at CharArray.append(CifFile.java:614)
> at CharArray.<init>(CifFile.java:610)
> at Env.<clinit>(T.java:312)
> ... 1 more


java --version outputs:

openjdk 10.0.2 2018-07-17
> OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.4)
> OpenJDK 64-Bit Server VM (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.4, mixed
> mode)


The same happens when I manually start the .jar with Java:
java -jar ${CBIN}/JLigand.jar
Has anyone experienced anything like this and is able to share a workaround?


Cheers,
Johannes



To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB&A=1


Re: James Bridle: Review of The Age of Surveillance Capitalism by Shoshana Zuboff (Guardian)

2019-02-10 Thread Florian Cramer
Postscript to my last posting:

I had forgotten to mention Constanze Kurz's and Frank Rieger's 2011
German-language book "Die Datenfresser: Wie Internetfirmen und Staat sich
unsere persönlichen Daten einverleiben und wie wir die Kontrolle darüber
zurückerlangen" ("The Data-vores: How Internet companies and the state
swallow up our personal data and how we regain control over it"). Both were
Chaos Computer Club speakers and know their subject inside out, in practice
as well as in theory.

What I find exceptional about this book is how it doesn't simply put the
blame on bad corporate ethics and governance, but reconstructs - with a
small startup company as its scenario - the Internet industry's systemic
necessity of collecting, mining and selling out customer data (under
real-life profitability pressures).

-F


-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl


On Sun, Feb 10, 2019 at 6:54 PM Florian Cramer  wrote:

> While Zuboff popularized the term "surveillance capitalism" in 2015, she
> wasn't the first person who wrote about it. The underlying issues had
> already been analyzed in Wendy Chun's "Control and Freedom" from 2008.
> Regarding the specific surveillance capitalism of the big social media
> companies, Christian Fuchs' 2011 paper "An Alternative View of Privacy on
> Facebook" strikes me as notable:
>
> "Abstract: The predominant analysis of privacy on Facebook focuses on
> personal information revelation. This paper is critical of this kind of
> research and introduces an alternative analytical framework for studying
> privacy on Facebook, social networking sites and web 2.0. This framework is
> connecting the phenomenon of online privacy to the political economy of
> capitalism—a focus that has thus far been rather neglected in research
> literature about Internet and web 2.0 privacy. Liberal privacy philosophy
> tends to ignore the political economy of privacy in capitalism that can
> mask socio-economic inequality and protect capital and the rich from public
> accountability. Facebook is in this paper analyzed with the help of an
> approach, in which privacy for dominant groups, in regard to the ability of
> keeping wealth and power secret from the public, is seen as problematic,
> whereas privacy at the bottom of the power pyramid for consumers and normal
> citizens is seen as a protection from dominant interests. Facebook's
> understanding of privacy is based on an understanding that stresses
> self-regulation and on an individualistic understanding of privacy. The
> theoretical analysis of the political economy of privacy on Facebook in
> this paper is based on the political theories of Karl Marx, Hannah Arendt
> and Jürgen Habermas. Based on the political economist Dallas Smythe's
> concept of audience commodification, the process of prosumer
> commodification on Facebook is analyzed. The political economy of privacy
> on Facebook is analyzed with the help of a theory of drives that is
> grounded in Herbert Marcuse's interpretation of Sigmund Freud, which allows
> to analyze Facebook based on the concept of play labor (= the convergence
> of play and labor).
> Keywords: Facebook; social networking sites; political economy; privacy;
> surveillance; capitalism"
>
> https://www.mdpi.com/2078-2489/2/1/140/htm
>
>
> --
> blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
> bio:  http://floriancramer.nl
>
>
> On Sun, Feb 10, 2019 at 1:38 PM Francis Hunger <
> francis.hun...@irmielin.org> wrote:
>
>> Indirectly related to Morozov's insightful discussion of Zuboff's
>> "surveillance capitalism" is my own short blurb on "surveillancism" at
>> http://databasecultures.irmielin.org/surveillancism (which I wrote
>> without having read Zuboff)
>>
>> This tries to provide a kind of self-critique of how often discussions
>> that might become interesting turn to "surveillance" instead. Comments
>> would be welcome.
>>
>> best
>>
>> Francis
>>
>>
>>
>>
>> On Tue, Feb 5, 2019, 9:00 AM Felix Stalder >
>>>
>>> I found Mozorov's massive review more interesting.
>>>
>>> https://thebaffler.com/latest/capitalisms-new-clothes-morozov
>>
>>
>> Yes I totally agree. Morozov presents the most important Marxist analyses
>> that Zuboff doesn't bother to reference - exactly the ones that have been
>> nettime mainstays for 20 years. He also shows the narrowness of an account
>> centered only 

Re: James Bridle: Review of The Age of Surveillance Capitalism by Shoshana Zuboff (Guardian)

2019-02-10 Thread Florian Cramer
While Zuboff popularized the term "surveillance capitalism" in 2015, she
wasn't the first person who wrote about it. The underlying issues had
already been analyzed in Wendy Chun's "Control and Freedom" from 2008.
Regarding the specific surveillance capitalism of the big social media
companies, Christian Fuchs' 2011 paper "An Alternative View of Privacy on
Facebook" strikes me as notable:

"Abstract: The predominant analysis of privacy on Facebook focuses on
personal information revelation. This paper is critical of this kind of
research and introduces an alternative analytical framework for studying
privacy on Facebook, social networking sites and web 2.0. This framework is
connecting the phenomenon of online privacy to the political economy of
capitalism—a focus that has thus far been rather neglected in research
literature about Internet and web 2.0 privacy. Liberal privacy philosophy
tends to ignore the political economy of privacy in capitalism that can
mask socio-economic inequality and protect capital and the rich from public
accountability. Facebook is in this paper analyzed with the help of an
approach, in which privacy for dominant groups, in regard to the ability of
keeping wealth and power secret from the public, is seen as problematic,
whereas privacy at the bottom of the power pyramid for consumers and normal
citizens is seen as a protection from dominant interests. Facebook's
understanding of privacy is based on an understanding that stresses
self-regulation and on an individualistic understanding of privacy. The
theoretical analysis of the political economy of privacy on Facebook in
this paper is based on the political theories of Karl Marx, Hannah Arendt
and Jürgen Habermas. Based on the political economist Dallas Smythe's
concept of audience commodification, the process of prosumer
commodification on Facebook is analyzed. The political economy of privacy
on Facebook is analyzed with the help of a theory of drives that is
grounded in Herbert Marcuse's interpretation of Sigmund Freud, which allows
to analyze Facebook based on the concept of play labor (= the convergence
of play and labor).
Keywords: Facebook; social networking sites; political economy; privacy;
surveillance; capitalism"

https://www.mdpi.com/2078-2489/2/1/140/htm


-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl


On Sun, Feb 10, 2019 at 1:38 PM Francis Hunger 
wrote:

> Indirectly related to Morozov's insightful discussion of Zuboff's
> "surveillance capitalism" is my own short blurb on "surveillancism" at
> http://databasecultures.irmielin.org/surveillancism (which I wrote
> without having read Zuboff)
>
> This tries to provide a kind of self-critique of how often discussions
> that might become interesting turn to "surveillance" instead. Comments
> would be welcome.
>
> best
>
> Francis
>
>
>
>
> On Tue, Feb 5, 2019, 9:00 AM Felix Stalder 
>>
>> I found Mozorov's massive review more interesting.
>>
>> https://thebaffler.com/latest/capitalisms-new-clothes-morozov
>
>
> Yes I totally agree. Morozov presents the most important Marxist analyses
> that Zuboff doesn't bother to reference - exactly the ones that have been
> nettime mainstays for 20 years. He also shows the narrowness of an account
> centered only on corporate consumerism, remarking that the resistance and
> transformation Zuboff calls for
>
> " will not win before both managerial capitalism and surveillance
> capitalism are theorized as “capitalism”—a complex set of historical and
> social relationships between capital and labor, the state and the monetary
> system, the metropole and the periphery—and not just as an aggregate of
> individual firms responding to imperatives of technological and social
> change. "
>
> That said, to judge by chapter 1, Surveillance Capitalism is worth
> reading. It provokes and infuriates me by what it leaves out, but it's
> fascinating at points and hopefully gets better as you go. Morozov has
> written the perfect intro for a critical read of what might become a
> landmark book - if the situation it describes does not again suddenly change
> beyond recognition, as it easily could.
>
> Best, Brian
>
>>
>>
> #  distributed via : no commercial use without permission
> #is a moderated mailing list for net criticism,
> #  collaborative text filtering and cultural politics of the nets
> #  more info: http://mx.kein.org/mailman/listinfo/nettime-l
> #  archive: http://www.nettime.org contact: nett...@kein.org
> #  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:
>
> --
> http://www.irmielin.org
> http://nothere.irmielin.org
> http://databasecultures.irmielin.org
>
> #  distributed via : no commercial use without permission
> #is a moderated mailing list for net criticism,
> #  collaborative text filtering and cultural politics of the nets
> #  more info: http://mx.kein.org/mailman/listinfo/nettime-l

Re: [Pdns-users] reverse zone /27 subnet - migrating from bind

2019-01-25 Thread Matthias Cramer
Hi Martin

On 25/01/2019 09:33, Martin Kellermann via Pdns-users wrote:
> hi Andy,
> 
>> By way of example, I (in the ISP role) delegate 85.119.82.118/32 to
>> an end user by putting the equivalent of:
>>
>> 118-32  NS  ns1.abominable.org.uk.
>> 118-32  NS  ns2.abominable.org.uk.
>> 118 CNAME   118.118-32.82.119.85.in-addr.arpa.
>>
>> into the zone 82.119.85.in-addr.arpa. So they have been delegated
>> the zone "118-32.82.119.85.in-addr.arpa". In their zone they
>> (apparently) have put the equivalent of:
>>
>> 118 PTR diablo.404.cx.
> 
> but that doesn't work with PowerDNS on the client side, at least for me.
> Taking your example with /31 instead of /32, the client zone would be named
> "118-31.82.119.85.in-addr.arpa" and contains
>  118 PTR diablo.404.cx.
>  119 PTR xyz.404.cx.
> Unfortunately, this does not work with PowerDNS. Setting up such a zone and
> doing a
> dig 85.119.82.119 [client powerdns IP] gives a "Host
> 85.119.82.119.in-addr.arpa not found: 5(REFUSED)".
> When renaming the zone to "82.119.85.in-addr.arpa" it works - obviously.
> But this can't be correct, since the client server will give wrong answers
> for 85.119.82.0-117 and 85.119.82.120-255.
> Sorry, I really can't see what I am missing.

You can't ask your DNS directly; it only knows about the zone
118-31.82.119.85.in-addr.arpa.
So you would need to do:

dig 118.118-31.82.119.85.in-addr.arpa ptr @dnsip

To get a correct result you have to ask the DNS at the provider, where you
get back a CNAME pointing to your entry.
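The scheme described here is the classless in-addr.arpa delegation from RFC 2317: the provider's /24 reverse zone holds CNAMEs pointing into a customer-named subzone, and the customer zone holds the real PTRs. Using the names from Andy's example earlier in the thread, the two zones would look roughly like this (a sketch for illustration, not anyone's actual configuration):

```
; provider zone 82.119.85.in-addr.arpa - delegates the /31 to the customer
118-31      NS      ns1.abominable.org.uk.
118-31      NS      ns2.abominable.org.uk.
118         CNAME   118.118-31.82.119.85.in-addr.arpa.
119         CNAME   119.118-31.82.119.85.in-addr.arpa.

; customer zone 118-31.82.119.85.in-addr.arpa - holds the actual PTRs
118         PTR     diablo.404.cx.
119         PTR     xyz.404.cx.
```

A resolver following the CNAME from the provider zone ends up at the customer's PTR. This is also why querying the customer server directly for 119.82.119.85.in-addr.arpa is refused: that name exists only in the provider zone.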

Regards

  Matthias

-- 
Matthias Cramer / mc322-ripe   Senior Network & Security Engineer
iway AG                    Phone +41 43 500
Badenerstrasse 569         Fax   +41 44 271 3535
CH-8048 Zürich http://www.iway.ch/
GnuPG 1024D/2D208250 = DBC6 65B6 7083 1029 781E  3959 B62F DF1C 2D20 8250
___
Pdns-users mailing list
Pdns-users@mailman.powerdns.com
https://mailman.powerdns.com/mailman/listinfo/pdns-users


Re: Reviving the "Stopping logical replication protocol" patch from Vladimir Gordichuk

2019-01-20 Thread Dave Cramer
Dave Cramer


On Tue, 15 Jan 2019 at 07:53, Dave Cramer  wrote:

>
> Dave Cramer
>
>
> On Sun, 13 Jan 2019 at 23:19, Craig Ringer  wrote:
>
>> On Mon, 3 Dec 2018 at 19:38, Dave Cramer  wrote:
>>
>>> Dmitry,
>>>
>>> Please see attached rebased patches
>>>
>>
>> I'm fine with patch 0001, though I find this comment a bit hard to follow:
>>
>> + * The send_data callback must enqueue complete CopyData messages to
>> libpq
>> + * using pq_putmessage_noblock or similar, since the walsender loop may
>> send
>> + * CopyDone then exit and return to command mode in response to a client
>> + * CopyDone between calls to send_data.
>>
>>
> I think it needs splitting up into a couple of sentences.
>>
>> Fair point; remember it was originally written by a non-English speaker
>

Thoughts on the below?

+/*
+ * Main loop of walsender process that streams the WAL over Copy messages.
+ *
+ * The send_data callback must enqueue complete CopyData messages to the
+ * client using pq_putmessage_noblock or similar.
+ * In order to preserve the protocol it is necessary to send all of the
+ * existing messages still in the buffer, as the WalSender loop may send
+ * CopyDone then exit and return to command mode in response to a client
+ * CopyDone between calls to send_data.
+ */


>
>> In patch 0002, stopping during a txn. It's important that once we skip
>> any action, we continue skipping. In patch 0002 I'd like it to be clearer
>> that we will *always* skip the rb->commit callback
>> if rb->continue_decoding_cb() returned false and aborted the while loop. I
>> suggest storing the result of (rb->continue_decoding_cb == NULL ||
>> rb->continue_decoding_cb())  in an assignment within the while loop, and
>> testing that result later.
>>
>> e.g.
>>
>> (continue_decoding = (rb->continue_decoding_cb == NULL ||
>> rb->continue_decoding_cb()))
>>
>> and later
>>
>> if (continue_decoding) {
>> rb->commit(rb, txn, commit_lsn);
>> }
>>
>> Will do
>

Hmmm... I don't actually see how this is any different from what we have in
the patch now. Where below would the test occur?
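For illustration only, here is a Python simulation (not the reorderbuffer.c code itself) of the capture-and-reuse pattern Craig suggests: evaluate continue_decoding_cb once per iteration, remember the result, and let that same result decide whether the commit callback fires. The callback names mirror the C code; everything else is invented for the sketch.

```python
# Sketch of the proposed ReorderBufferCommit control flow: once any change
# is skipped because the client asked us to stop, the commit callback must
# also be skipped, so the client never mistakes a truncated stream for a
# complete transaction.

def replay_transaction(changes, callbacks, continue_decoding_cb=None):
    """Apply `changes` via callbacks; stop early if the client asked us to."""
    callbacks["begin"]()
    continue_decoding = True
    for change in changes:
        # evaluate the callback once per iteration and remember the result
        continue_decoding = (continue_decoding_cb is None
                             or continue_decoding_cb())
        if not continue_decoding:
            break  # client sent CopyDone; abandon the rest of the txn
        callbacks["change"](change)
    if continue_decoding:
        callbacks["commit"]()  # only a fully streamed txn gets a commit
    return continue_decoding


# toy demonstration: the client interrupts after two changes
sent = []
stop_after = iter([True, True, False])
ok = replay_transaction(
    ["row1", "row2", "row3"],
    {"begin": lambda: sent.append("BEGIN"),
     "change": sent.append,
     "commit": lambda: sent.append("COMMIT")},
    continue_decoding_cb=lambda: next(stop_after),
)
print(sent)  # ['BEGIN', 'row1', 'row2'] - no COMMIT sent
print(ok)    # False
```

The point of storing the flag in a local, rather than calling the callback again before the commit, is that the decision is made exactly once per loop pass and the commit test cannot disagree with the loop exit.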


> I don't actually think it's necessary to re-test the continue_decoding
>> callback and skip commit here. If we've sent all the of the txn
>> except the commit, we might as well send the commit, it's tiny and might
>> save some hassle later.
>>
>>
>> I think a comment on the skipped commit would be good too:
>>
>> /*
>>  * If we skipped any changes due to a client CopyDone we must not send a
>> commit
>>  * otherwise the client would incorrectly think it received a complete
>> transaction.
>>  */
>>
>> I notice that the fast-path logic in WalSndWriteData is removed by this
>> patch. It isn't moved; there's no comparison of last_reply_timestamp
>> and wal_sender_timeout now. What's the rationale there? You want to ensure
>> that we reach ProcessRepliesIfAny() ? Can we cheaply test for a readable
>> client socket then still take the fast-path if it's not readable?
>>
>
> This may have been a mistake as I am taking this over from a very old
> patch that I did not originally write. I'll have a look at this
>

OK, I'm trying to decipher the original intent of this patch as well, since
I didn't write it.

There are some hints here
https://www.postgresql.org/message-id/CAFgjRd1LgVbtH%3D9O9_xvKQjvUP7aRF-edxqwKfaNs9hMFW_4gw%40mail.gmail.com

As to why the fast path logic was removed. Does it make sense to you?

Dave

>
>>


Re: Libpq support to connect to standby server as priority

2019-01-17 Thread Dave Cramer
On Thu, 17 Jan 2019 at 19:56, Tatsuo Ishii  wrote:

> >> > I'm curious; under what circumstances would the above occur?
> >>
> >> Former primary goes down and one of standbys is promoting but it is
> >> not promoted to new primary yet.
> >>
> >
> > seems like JDBC might have some work to do...Thanks
> >
> > I'm going to wait to implement until we resolve this discussion
>
> If you need some input from me regarding finding a primary node,
> please say so.  While working on Pgpool-II project, I learned the
> necessity in a hard way.
>
>
I would really like to have a consistent way of doing this, and consistent
terms for the connection parameters.

that said yes, I would like input from you.

Thanks,

Dave


Re: Libpq support to connect to standby server as priority

2019-01-17 Thread Dave Cramer
On Thu, 17 Jan 2019 at 19:38, Tatsuo Ishii  wrote:

> >> >> >> > From: Tatsuo Ishii [mailto:is...@sraoss.co.jp]
> >> >> >> >> But pg_is_in_recovery() returns true even for a promoting
> >> standby. So
> >> >> >> >> you have to wait and retry to send pg_is_in_recovery() until it
> >> >> >> >> finishes the promotion to find out it is now a primary. I am
> not
> >> sure
> >> >> >> >> if backend out to be responsible for this process. If not,
> libpq
> >> >> would
> >> >> >> >> need to handle it but I doubt it would be possible.
> >> >> >> >
> >> >> >> > Yes, the application needs to retry connection attempts until
> >> success.
> >> >> >> That's not different from PgJDBC and other DBMSs.
> >> >> >>
> >> >> >> I don't know what PgJDBC is doing, however I think libpq needs to
> do
> >> >> >> more than just retrying.
> >> >> >>
> >> >> >> 1) Try to find a node on which pg_is_in_recovery() returns false.
> If
> >> >> >>found, then we assume that is the primary. We also assume that
> >> >> >>other nodes are standbys. done.
> >> >> >>
> >> >> >> 2) If there's no node on which pg_is_in_recovery() returns false,
> >> then
> >> >> >>we need to retry until we find it. To not retry forever, there
> >> >> >>should be a timeout counter parameter.
> >> >> >>
> >> >> >>
> >> >> > IIRC this is essentially what pgJDBC does.
> >> >>
> >> >> Thanks for clarifying that. Pgpool-II also does that too. Seems like
> a
> >> >> common technique to find out a primary node.
> >> >>
> >> >>
> >> > Checking the code I see we actually use show transaction_read_only.
> >> >
> >> > Sorry for the confusion
> >>
> >> So if all PostgreSQL servers returns transaction_read_only = on, how
> >> does pgJDBC find the primary node?
> >>
> >> well preferSecondary would return a connection.
>
> This is not my message :-)
>
> > I'm curious; under what circumstances would the above occur?
>
> Former primary goes down and one of standbys is promoting but it is
> not promoted to new primary yet.
>

seems like JDBC might have some work to do...Thanks

I'm going to wait to implement until we resolve this discussion

Dave

>
>


Re: Libpq support to connect to standby server as priority

2019-01-17 Thread Dave Cramer
On Thu, 17 Jan 2019 at 19:09, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:

> From: Dave Cramer [mailto:p...@fastcrypt.com]
> >   >> 2) If there's no node on which pg_is_in_recovery() returns
> false,
> > then
> >   >>we need to retry until we find it. To not retry forever,
> there
> >   >>should be a timeout counter parameter.
>
> > Checking the code I see we actually use show transaction_read_only.
>
> Also, does PgJDBC really repeat connection attempts for a user-specified
> duration?  Having a quick look at the code, it seemed to try each host once
> in a while loop.
>

You are correct looking at the code again. On the initial connection
attempt we only try once.


Dave Cramer

da...@postgresintl.com
www.postgresintl.com

>
>


Re: Libpq support to connect to standby server as priority

2019-01-17 Thread Dave Cramer
On Thu, 17 Jan 2019 at 19:15, Tatsuo Ishii  wrote:

> > On Thu, 17 Jan 2019 at 18:03, Tatsuo Ishii  wrote:
> >
> >> > On Wed, 16 Jan 2019 at 01:02, Tatsuo Ishii 
> wrote:
> >> >
> >> >> > From: Tatsuo Ishii [mailto:is...@sraoss.co.jp]
> >> >> >> But pg_is_in_recovery() returns true even for a promoting
> standby. So
> >> >> >> you have to wait and retry to send pg_is_in_recovery() until it
> >> >> >> finishes the promotion to find out it is now a primary. I am not
> sure
> >> >> >> if backend out to be responsible for this process. If not, libpq
> >> would
> >> >> >> need to handle it but I doubt it would be possible.
> >> >> >
> >> >> > Yes, the application needs to retry connection attempts until
> success.
> >> >> That's not different from PgJDBC and other DBMSs.
> >> >>
> >> >> I don't know what PgJDBC is doing, however I think libpq needs to do
> >> >> more than just retrying.
> >> >>
> >> >> 1) Try to find a node on which pg_is_in_recovery() returns false. If
> >> >>found, then we assume that is the primary. We also assume that
> >> >>other nodes are standbys. done.
> >> >>
> >> >> 2) If there's no node on which pg_is_in_recovery() returns false,
> then
> >> >>we need to retry until we find it. To not retry forever, there
> >> >>should be a timeout counter parameter.
> >> >>
> >> >>
> >> > IIRC this is essentially what pgJDBC does.
> >>
> >> Thanks for clarifying that. Pgpool-II also does that too. Seems like a
> >> common technique to find out a primary node.
> >>
> >>
> > Checking the code I see we actually use show transaction_read_only.
> >
> > Sorry for the confusion
>
> So if all PostgreSQL servers returns transaction_read_only = on, how
> does pgJDBC find the primary node?
>
> well preferSecondary would return a connection.

I'm curious; under what circumstances would the above occur?

Regards,
Dave


Re: Libpq support to connect to standby server as priority

2019-01-17 Thread Dave Cramer
On Thu, 17 Jan 2019 at 18:03, Tatsuo Ishii  wrote:

> > On Wed, 16 Jan 2019 at 01:02, Tatsuo Ishii  wrote:
> >
> >> > From: Tatsuo Ishii [mailto:is...@sraoss.co.jp]
> >> >> But pg_is_in_recovery() returns true even for a promoting standby. So
> >> >> you have to wait and retry to send pg_is_in_recovery() until it
> >> >> finishes the promotion to find out it is now a primary. I am not sure
> >> >> if backend out to be responsible for this process. If not, libpq
> would
> >> >> need to handle it but I doubt it would be possible.
> >> >
> >> > Yes, the application needs to retry connection attempts until success.
> >> That's not different from PgJDBC and other DBMSs.
> >>
> >> I don't know what PgJDBC is doing, however I think libpq needs to do
> >> more than just retrying.
> >>
> >> 1) Try to find a node on which pg_is_in_recovery() returns false. If
> >>found, then we assume that is the primary. We also assume that
> >>other nodes are standbys. done.
> >>
> >> 2) If there's no node on which pg_is_in_recovery() returns false, then
> >>we need to retry until we find it. To not retry forever, there
> >>should be a timeout counter parameter.
> >>
> >>
> > IIRC this is essentially what pgJDBC does.
>
> Thanks for clarifying that. Pgpool-II also does that too. Seems like a
> common technique to find out a primary node.
>
>
Checking the code I see we actually use show transaction_read_only.

Sorry for the confusion

Dave Cramer

da...@postgresintl.com
www.postgresintl.com

>
>


Re: Libpq support to connect to standby server as priority

2019-01-17 Thread Dave Cramer
On Thu, 17 Jan 2019 at 05:59, Laurenz Albe  wrote:

> Tsunakawa, Takayuki wrote:
> > From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> > > The problem here of course is that whoever invented
> target_session_attrs
> > > was unconcerned with following that precedent, so what we have is
> > > "target_session_attrs=(any | read-write)".
> > > Are we prepared to add some aliases in service of unifying these names?
> >
> > I think "yes".
> >
> > > 2. Whether or not you want to follow pgJDBC's naming, it seems like we
> > > ought to have both "require read only" and "prefer read only" behaviors
> > > in this patch, and maybe likewise "require read write" versus "prefer
> > > read write".
>

I just had a look at the JDBC code; there is no "prefer read write". There
is a "preferSecondary".
The logic behind this is that the connection would presumably be doing only
reads, so ideally it would like a secondary,
but if it can't find one it will connect to a primary.

To be clear there are 4 target server types in pgJDBC, "any",
"master","secondary", and "preferSecondary" (looking at this I need to
alias master to primary, but that's another discussion)

I have no idea where "I want to write, but I'm OK if I cannot" came from.

Dave


Re: Libpq support to connect to standby server as priority

2019-01-17 Thread Dave Cramer
On Wed, 16 Jan 2019 at 01:02, Tatsuo Ishii  wrote:

> > From: Tatsuo Ishii [mailto:is...@sraoss.co.jp]
> >> But pg_is_in_recovery() returns true even for a promoting standby. So
> >> you have to wait and retry to send pg_is_in_recovery() until it
> >> finishes the promotion to find out it is now a primary. I am not sure
> >> if backend out to be responsible for this process. If not, libpq would
> >> need to handle it but I doubt it would be possible.
> >
> > Yes, the application needs to retry connection attempts until success.
> That's not different from PgJDBC and other DBMSs.
>
> I don't know what PgJDBC is doing, however I think libpq needs to do
> more than just retrying.
>
> 1) Try to find a node on which pg_is_in_recovery() returns false. If
>found, then we assume that is the primary. We also assume that
>other nodes are standbys. done.
>
> 2) If there's no node on which pg_is_in_recovery() returns false, then
>we need to retry until we find it. To not retry forever, there
>should be a timeout counter parameter.
>
>
IIRC this is essentially what pgJDBC does.
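As a rough simulation of the two-step approach quoted above: probe every node, and if none reports itself as primary (e.g. a standby is mid-promotion), retry until a timeout expires rather than forever. The is_in_recovery argument is a stand-in for issuing "SELECT pg_is_in_recovery()" over a real connection; the function itself is illustrative, not libpq or pgJDBC code.

```python
import time

def find_primary(nodes, is_in_recovery, timeout=10.0, interval=0.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Return the first node where pg_is_in_recovery() reports false.

    Probes every node in order; if all are still in recovery, retries
    until `timeout` seconds have elapsed, then gives up with None.
    """
    deadline = clock() + timeout
    while True:
        for node in nodes:
            if not is_in_recovery(node):
                return node        # found the primary
        if clock() >= deadline:
            return None            # no primary appeared in time
        sleep(interval)


# toy demonstration: "b" finishes promoting on the second sweep
state = {"a": [True, True], "b": [True, False]}
probe = lambda n: state[n].pop(0)
print(find_primary(["a", "b"], probe, timeout=5.0))  # b
```

The injectable clock and sleep are only there to make the retry loop easy to exercise without waiting; a real client would simply reconnect and re-issue the probe query each pass.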


Dave Cramer

da...@postgresintl.com
www.postgresintl.com

>
>


Re: Libpq support to connect to standby server as priority

2019-01-17 Thread Dave Cramer
On Tue, 15 Jan 2019 at 23:21, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:

> From: Dave Cramer [mailto:p...@fastcrypt.com]
> >   The original desire should have been the ability to connect to a
> > primary or a standby.  So, I think we should go back to the original
> thinking
> > (and not complicate the feature), and create a read only GUC_REPORT
> variable,
> > say, server_role, that identifies whether the server is a primary or a
> > standby.
> >
> >
> >
> > I'm confused as to how this would work. Who or what determines if the
> server
> > is a primary or standby?
>
> Overall, the server determines the server role (primary or standby) using
> the same mechanism as pg_is_in_recovery(), and set the server_role GUC
> parameter.  As the parameter is GUC_REPORT, the change is reported to the
> clients using the ParameterStatus ('S') message.  The clients also get the
> value at connection.
>

Thanks, that clarifies it.


Dave Cramer

da...@postgresintl.com
www.postgresintl.com

>
>


Re: Libpq support to connect to standby server as priority

2019-01-15 Thread Dave Cramer
On Mon, 14 Jan 2019 at 21:19, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:

> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> > The problem here of course is that whoever invented target_session_attrs
> > was unconcerned with following that precedent, so what we have is
> > "target_session_attrs=(any | read-write)".
> > Are we prepared to add some aliases in service of unifying these names?
>
> I think "yes".
>
Agreed. There's no downside to aliasing and I'd really like to see
consistency.

>
> > 2. Whether or not you want to follow pgJDBC's naming, it seems like we
> > ought to have both "require read only" and "prefer read only" behaviors
> > in this patch, and maybe likewise "require read write" versus "prefer
> > read write".
>
> Agreed, although I don't see a use case for "prefer read write".  I don't
> think there's an app like "I want to write, but I'm OK if I cannot."
>

> > 3. We ought to sync this up with whatever's going to happen in
> > https://commitfest.postgresql.org/21/1090/
> > at least to the extent of agreeing on what GUCs we'd like to see
> > the server start reporting.
>
> Yes.
>
> > 4. Given that other discussion, it's not quite clear what we should
> > even be checking.  The existing logic devolves to checking that
> > transaction_read_only is true, but that's not really the same thing as
> > "is a master server", eg you might have connected to a master server
> > under a role that has SET ROLE default_transaction_read_only = false.
> > (I wonder what pgJDBC is really checking, under the hood.)
> > Do we want to have modes that are checking hot-standby state in some
> > fashion, rather than the transaction_read_only state?
>
> PgJDBC uses transaction_read_only like this:
>
> [core/v3/ConnectionFactoryImpl.java]
>   private boolean isMaster(QueryExecutor queryExecutor) throws
> SQLException, IOException {
> byte[][] results = SetupQueryRunner.run(queryExecutor, "show
> transaction_read_only", true);
> String value = queryExecutor.getEncoding().decode(results[0]);
> return value.equalsIgnoreCase("off");
>   }
>
> But as some people said, I don't think this is the right way.  I suppose
> what's leading to the current somewhat complicated situation is that there
> was no easy way for the client to know whether the server is the master.
> That ended up in using "SHOW transaction_read_only" instead, and people
> supported that compromise by saying "read only status is more useful than
> whether the server is standby or not," I'm afraid.
>
> The original desire should have been the ability to connect to a primary
> or a standby.  So, I think we should go back to the original thinking (and
> not complicate the feature), and create a read only GUC_REPORT variable,
> say, server_role, that identifies whether the server is a primary or a
> standby.
>
> I'm confused as to how this would work. Who or what determines if the
server is a primary or standby?
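To make the proposal concrete, here is a purely hypothetical sketch: "server_role" is the GUC_REPORT parameter proposed in this thread (it does not exist in PostgreSQL), and the params dicts stand in for the values a client would see via ParameterStatus messages (PQparameterStatus() in libpq). The target names loosely follow pgJDBC's.

```python
def choose_host(candidates, target):
    """Pick a host by its reported role.

    candidates: list of (host, params) pairs, where params holds the
                hypothetical server-reported GUC values
    target: "primary", "standby", or "prefer-standby"
    """
    if target in ("primary", "standby"):
        for host, params in candidates:
            if params.get("server_role") == target:
                return host
        return None
    if target == "prefer-standby":
        # like pgJDBC's preferSecondary: take a standby if one exists,
        # otherwise fall back to the primary
        return (choose_host(candidates, "standby")
                or choose_host(candidates, "primary"))
    raise ValueError(target)


hosts = [("db1", {"server_role": "primary"}),
         ("db2", {"server_role": "standby"})]
print(choose_host(hosts, "prefer-standby"))  # db2
```

Because the parameter would be GUC_REPORT, a promotion would be pushed to connected clients as a ParameterStatus change, so no "SHOW transaction_read_only" polling would be needed.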

Dave Cramer

da...@postgresintl.com
www.postgresintl.com

>
>


Re: Reviving the "Stopping logical replication protocol" patch from Vladimir Gordichuk

2019-01-15 Thread Dave Cramer
Dave Cramer


On Sun, 13 Jan 2019 at 23:19, Craig Ringer  wrote:

> On Mon, 3 Dec 2018 at 19:38, Dave Cramer  wrote:
>
>> Dmitry,
>>
>> Please see attached rebased patches
>>
>
> I'm fine with patch 0001, though I find this comment a bit hard to follow:
>
> + * The send_data callback must enqueue complete CopyData messages to libpq
> + * using pq_putmessage_noblock or similar, since the walsender loop may
> send
> + * CopyDone then exit and return to command mode in response to a client
> + * CopyDone between calls to send_data.
>
>
I think it needs splitting up into a couple of sentences.
>
> Fair point; remember it was originally written by a non-English speaker

>
> In patch 0002, stopping during a txn. It's important that once we skip any
> action, we continue skipping. In patch 0002 I'd like it to be clearer that
> we will *always* skip the rb->commit callback if rb->continue_decoding_cb()
> returned false and aborted the while loop. I suggest storing the result of
> (rb->continue_decoding_cb == NULL || rb->continue_decoding_cb())  in an
> assignment within the while loop, and testing that result later.
>
> e.g.
>
> (continue_decoding = (rb->continue_decoding_cb == NULL ||
> rb->continue_decoding_cb()))
>
> and later
>
> if (continue_decoding) {
> rb->commit(rb, txn, commit_lsn);
> }
>
> Will do

> I don't actually think it's necessary to re-test the continue_decoding
> callback and skip commit here. If we've sent all the of the txn
> except the commit, we might as well send the commit, it's tiny and might
> save some hassle later.
>
>
> I think a comment on the skipped commit would be good too:
>
> /*
>  * If we skipped any changes due to a client CopyDone we must not send a
> commit
>  * otherwise the client would incorrectly think it received a complete
> transaction.
>  */
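Craig's suggestion can be sketched as a toy model. This is plain Java rather than the walsender's C code, and all names here (`decodeTransaction`, `continueDecoding`) are illustrative, not PostgreSQL source; it only demonstrates the invariant that once the continue-decoding callback aborts the loop, the commit must be skipped too:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Toy model of the decoding loop: store the callback result in a variable
// (as suggested) and reuse it to decide whether the commit may be sent.
public class DecodeLoop {
    public static List<String> decodeTransaction(List<String> changes,
                                                 BooleanSupplier continueDecoding) {
        List<String> sent = new ArrayList<>();
        boolean keepGoing = true;
        for (String change : changes) {
            // Assignment within the loop, mirroring the suggested
            // (continue_decoding = ...) pattern.
            keepGoing = (continueDecoding == null || continueDecoding.getAsBoolean());
            if (!keepGoing)
                break;          // client sent CopyDone: stop emitting changes
            sent.add(change);
        }
        // If we skipped any changes we must not send a commit, otherwise the
        // client would incorrectly think it received a complete transaction.
        if (keepGoing)
            sent.add("COMMIT");
        return sent;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        BooleanSupplier stopAfterTwo = () -> ++calls[0] <= 2;
        System.out.println(decodeTransaction(List.of("c1", "c2", "c3"), stopAfterTwo));
        // prints [c1, c2] -- the third change and the commit are both skipped
    }
}
```

The single stored flag makes the "once we skip, keep skipping" property hold by construction, which is the point of the review comment.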
>
> I notice that the fast-path logic in WalSndWriteData is removed by this
> patch. It isn't moved; there's no comparison of last_reply_timestamp
> and wal_sender_timeout now. What's the rationale there? You want to ensure
> that we reach ProcessRepliesIfAny() ? Can we cheaply test for a readable
> client socket then still take the fast-path if it's not readable?
>

This may have been a mistake, as I am taking this over from a very old patch
that I did not originally write. I'll have a look at this.

>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  2ndQuadrant - PostgreSQL Solutions for the Enterprise
>


Re: jdbc PGCopyOutputStream close() v. endCopy()

2019-01-10 Thread Dave Cramer
Hi Rob,

Interesting. I've not looked too much into the copy implementation.
The JDBC list or the JDBC GitHub repo https://github.com/pgjdbc/pgjdbc
might be a better place to report this. I know Lukas Eder monitors it as
well.

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Tue, 8 Jan 2019 at 16:29, Rob Sargent  wrote:

> As is often the case, I'm unsure of which of these methods to use, or if
> I'm using them correctly.
>
> PG10.5, jooq-3.10.8, postgresql-42.1.4, linux (redhat 6.9) and logback to
> a file.
>
> I have been using close() for a while but thought I would make use of
> either the returned long from endCopy() or perhaps getHandledRowCount().
>
> Both work perfectly, but when I use endCopy() I always get the exception
> shown near the bottom of this log excerpt. The COPY is on its own thread
> from the same connection as the direct jooq writes also listed.  Again, the
> data is all saved but I am worried that I'm not closing properly even if I
> use close(). The data here doesn't warrant bulk copy but it's just a quick
> example to repeat.
>
> 13:32:55.785 [pool-1-thread-1] DEBUG edu.utah.camplab.jx.PayloadFromMux -
> STAGING TABLE CREATED: bulk."flk_g16-forcing very long name to trigger
> truncation_22_8045c0"
> 13:32:55.786 [pool-1-thread-1] INFO  edu.utah.camplab.jx.PayloadFromMux -
> 8045c057-99ec-490b-90c1-85875269afee: started COPY work at 1546979575786
> 13:32:55.789 [pool-1-thread-1] INFO  edu.utah.camplab.jx.PayloadFromMux -
> 8045c057-99ec-490b-90c1-85875269afee: Total segment save took 22 ms
> 13:32:55.790 [pool-1-thread-1] INFO  edu.utah.camplab.jx.AbstractPayload -
> 8045c057-99ec-490b-90c1-85875269afee: closing process
> 8045c057-99ec-490b-90c1-85875269afee
> 13:32:55.790 [8045c057-99ec-490b-90c1-85875269afee] INFO
> e.u.camplab.jx.PayloadWriterThread - bulk."flk_g16-forcing very long name
> to trigger truncation_22_8045c0": Begin bulk copy segment
> 13:32:55.793 [8045c057-99ec-490b-90c1-85875269afee] INFO
> e.u.camplab.jx.PayloadWriterThread - bulked up to 89, maybe?
> 13:32:55.793 [pool-1-thread-1] DEBUG org.jooq.tools.LoggerListener -
> Executing batch query: insert into "process_input" ("id", "process_id",
> "input_type", "input_ref") values (?, ?, ?, ?)
> 13:32:55.795 [8045c057-99ec-490b-90c1-85875269afee] INFO
> e.u.camplab.jx.PayloadWriterThread - bulked up to 197, maybe?
> 13:32:55.797 [8045c057-99ec-490b-90c1-85875269afee] INFO
> e.u.camplab.jx.PayloadWriterThread - bulked up to 318, maybe?
> 13:32:55.798 [8045c057-99ec-490b-90c1-85875269afee] INFO
> e.u.camplab.jx.PayloadWriterThread - bulked up to 393, maybe?
> 13:32:55.799 [8045c057-99ec-490b-90c1-85875269afee] INFO
> e.u.camplab.jx.PayloadWriterThread - 393/393 segments delivered in 9 ms
> 13:32:55.799 [8045c057-99ec-490b-90c1-85875269afee] DEBUG
> e.u.camplab.jx.PayloadWriterThread - staged in 9 ms
> 13:32:55.800 [pool-1-thread-1] DEBUG org.jooq.tools.LoggerListener -
> Executing batch query: insert into "process_output" ("id",
> "process_id", "output_type", "output_ref") values (?, ?, ?, ?)
> 13:32:55.805 [8045c057-99ec-490b-90c1-85875269afee] ERROR
> e.u.camplab.jx.PayloadWriterThread - bulk."flk_g16-forcing very long name
> to trigger truncation_22_8045c0": i/o trouble
> java.io.IOException: Ending write to copy failed.
> at
> org.postgresql.copy.PGCopyOutputStream.close(PGCopyOutputStream.java:107)
> ~[postgresql-42.1.4.jar:42.1.4]
> at
> edu.utah.camplab.jx.PayloadWriterThread.run(PayloadWriterThread.java:75)
> ~[transport/:na]
> Caused by: org.postgresql.util.PSQLException: Tried to write to an
> inactive copy operation
> at
> org.postgresql.core.v3.QueryExecutorImpl.writeToCopy(QueryExecutorImpl.java:978)
> ~[postgresql-42.1.4.jar:42.1.4]
> at org.postgresql.core.v3.CopyInImpl.writeToCopy(CopyInImpl.java:35)
> ~[postgresql-42.1.4.jar:42.1.4]
> at
> org.postgresql.copy.PGCopyOutputStream.endCopy(PGCopyOutputStream.java:166)
> ~[postgresql-42.1.4.jar:42.1.4]
> at
> org.postgresql.copy.PGCopyOutputStream.close(PGCopyOutputStream.java:105)
> ~[postgresql-42.1.4.jar:42.1.4]
> ... 1 common frames omitted
> 13:32:55.810 [pool-1-thread-1] DEBUG org.jooq.tools.LoggerListener -
> Executing batch query: insert into "process_arg" ("id", "process_id",
> "argname", "argvalue_int", "argvalue_float", "argvalue_text") values (?, ?,
> ?, ?, ?, ?)
>
> The class doing the bulk work, PayloadWriterThread extends Thread, the
> thread name is set from the caller and the critical parts are as follows:
>
> @Override
> public void 
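The exception in the log above can be reproduced with a much smaller state machine. This is a simplified, hypothetical model — not the pgjdbc source (the real classes are `org.postgresql.copy.PGCopyOutputStream` and `CopyInImpl`) — of what the stack trace suggests: `close()` itself tries to end the copy, so an explicit `endCopy()` followed by `close()` ends the operation twice:

```java
// Simplified, illustrative model of the double-end problem.
public class CopyModel {
    static class InactiveCopyException extends RuntimeException {
        InactiveCopyException() {
            super("Tried to write to an inactive copy operation");
        }
    }

    static class CopyStream {
        private boolean active = true;
        private long handledRows = 0;

        long endCopy() {
            if (!active)
                throw new InactiveCopyException();
            active = false;
            return handledRows;   // stand-in for getHandledRowCount()
        }

        void close() {
            // Mirrors a close() that finishes the copy itself; after an
            // explicit endCopy() the operation is no longer active.
            endCopy();
        }
    }

    public static void main(String[] args) {
        CopyStream s = new CopyStream();
        s.endCopy();              // explicit end: returns the row count
        try {
            s.close();            // second end -> the exception from the log
        } catch (InactiveCopyException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If this model is right, the data is safe either way (the copy did complete); the practical workaround is to call exactly one of `endCopy()` or `close()`, not both.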

Re: [LegacyUG] Facebook

2019-01-09 Thread Pauline B. Cramer
I have been using Legacy Family Tree since version 4 and I have been a 
subscriber to the legacyusergroup for a long time.  I enjoy reading the 
Legacyusergroup emails, which I sometimes find useful.


I looked at the LUG FB a little, but did not find it useful and have not 
looked at it recently.  I have over 300 Facebook Friends, people with 
whom I share other interests.


I am thankful that the Legacy User Group does not have ads.  On google, 
etc., I got so tired of seeing the same ads pop up, photos of clothes or 
shoes I would never want to purchase, etc., so I finally opted out of 
ads.  I still get a lot of ads, but fewer and of better quality.


Pauline Cramer


On 1/8/2019 8:37 AM, Carolyn White via LegacyUserGroup wrote:
I have "followed" the FB page for a while but even though I'm signed 
up for notifications, I never get one (and because of that I never 
remember to go to the page).  I second the usefulness of the email list.


Carolyn White

On Tuesday, January 8, 2019, 11:35:11 AM EST, Sherry H 
 wrote:



I tried to follow the LUG FB page and it's totally insane. I can't
keep up with it. People hijack threads on FB just as often as they do
in mailing lists.

It can't take that long to copy a post you just made to FB and paste
it into an email. No need to retype the entire post. The people on the
mailing list won't get the follow-ups by the group that are on the FB
page but there's no reason they can't get announcements or other
pertinent information about Legacy.

*Many email programs can thread a conversation for you.
*It's very easy to search emails - I think much easier than searching
in a FB group, esp if the emails have a good subject line.
*Sticky announcements are annoying - once I've read it, it's still
there in my face and I can't get rid of it. I can delete an email.
*If a mailing list is moderated, either members or posts, then you can
cut down on spam. In fact, I can't remember the last time I've seen
spam on any of the mailing lists I'm on. On FB, you have to wade
through all the ads that FB puts in a group. A moderated list can also
put a stop to a thread by rejecting posts.
*Polls can be sent on mailing lists as well. I just responded to one
on a book list.
*I can choose to save emails that are of particular interest to me or
delete ones I have no interest in. They're all lumped together on FB.
*All the graphics on a FB page are annoying.
*Mailing lists *can* allow pictures. Up to the list owner.

I've been using FB for a few Christian groups and still not a fan - I
so wish they were mailing lists instead!

Sherry

On Tue, Jan 8, 2019 at 5:58 AM Michele Lewis via LegacyUserGroup
<legacyusergroup@legacyusers.com> wrote:

>
> We have some complaints about the Facebook LUG getting more information
> than the email LUG. I wanted to explain why that is. The Facebook LUG is
> in a much easier format to post information. I can do ten times as much
> stuff on the Facebook LUG than I can do here, and a lot faster.
>
> *Conversations are threaded so it is much easier to follow the
> conversations
> *You don't have to worry about "trimming" emails to make them more
> readable (and most people don't do this anyways)
> *It is very easy to search the group for past posts.
> *There is a section where we can upload files that are accessible to
> everyone
> *We have "sticky" announcements
> *Upcoming webinars are announced as well as links to the Legacy News posts
> *We can delete spam
> *We can close comments on a thread that has run its course
> *You can post screenshots which makes understanding problems (and
> explaining problems) so much easier
> *You can create polls (fun!)
>
> There are 1579 members of the email LUG.  There are 18,963 members of the
> Facebook LUG.
>
> I know that there are some of you that have sworn you will never join
> Facebook. Unfortunately you will be missing out on some things. I can't
> make this email list do all of the things that Facebook can do.
>
> You can easily join Facebook and change your privacy settings so that you
> are pretty much incognito. You can make it so that no one can send you
> friend requests. You can format your name in such a way that it isn't
> obvious who you are (you can't post a fake name per Facebook rules but
> there is some flexibility). You can pretty much block everything. You need
> to go into the Privacy settings (all of the settings really) and go line
> by line to get it set up like you want. We have users that have Facebook
> for no other reason than the LUG group. I also HIGHLY recommend the
> browser extension F.B. Purity. It works with Facebook to give a TON of
> options on what you see and what you don't see. It gets rid of all the
> junk stuff (like ads).
>
> Totally up to you. I just wanted to explain why there is a di

Re: [Archivesspace_Users_Group] ArchLight Documentation

2019-01-07 Thread Tom Cramer
Thanks, Steve, for this extensive write-up on ArcLight.

Two other things to note:

1.) ArcLight is based on Blacklight<http://projectblacklight.org/>, and as such 
does and should continue to benefit from cross-pollination with Blacklight’s 
extensive community and ongoing code developments.

2.) ArcLight is still a bit rough around the edges; I’m not aware that anyone 
has put it into production yet, though there are multiple “evaluation” sites live. 
Several of the institutions that have been active in its development up to now 
are tentatively planning for another work cycle on it in mid-2019. At Stanford 
(at least) that is when we would plan to finish up development and move it into 
a live, production service for all our archival collections.

Of course, as open source software, you could become the first to go into 
production!

For anyone curious about ArcLight development, here are the communication 
channels (from the DuraSpace wiki 
page<https://wiki.duraspace.org/display/samvera/ArcLight>):

Email list: To get updates or communicate with us about ArcLight, please join 
the ArcLight Google Group<https://groups.google.com/d/forum/arclight-community> 
(arclight-commun...@googlegroups.com<mailto:arclight-commun...@googlegroups.com>).

Slack channel: To communicate with us more informally, please join the 
#arclight channel on the Code4lib Slack 
team<https://code4lib.slack.com/messages/arclight/>. If you're not on the 
Code4lib Slack team, you can request an invitation using this 
form<http://goo.gl/forms/p9Ayz93DgG> or by contacting us via the email list.

- Tom

 | Tom Cramer
 | Associate University Librarian
 | Director, Digital Library Systems & Services
 | Chief Technology Strategist
 | Stanford University Libraries
 | tcra...@stanford.edu<mailto:tcra...@stanford.edu>




On Jan 7, 2019, at 9:20 AM, Robin Fay <r...@athenslibrary.org> wrote:

I am very interested in learning more about Archlight.
Although I am new to ArchivesSpace dev, I have considerable CMS and IR/digital 
lib dev experience (and crosswalking/harvesting metadata even with ASpace).
The ArchivesSpace interface is on my list to address as part of my larger site 
redesign goals.
Thanks,
Robin Fay
@georgiawebgurl
Digital Media Librarian/Web (re)Designer
Athens Regional Library System

On Mon, Jan 7, 2019 at 5:38 AM Harm Jager <harmja...@ruerddevries.nl> wrote:

Hello fellow Archivesspace users,

We are sending this mail out to you because we are currently in the process of 
doing research into the best way to use the user interface of ArchivesSpace. 
We would like to use the user interface not only to present our finding aids 
but also those of other institutions in the Netherlands that are complementary 
to our finding aids, and, if possible, provide extra information surrounding 
these finding aids.

This brings me to our point: namely, the fact that we recently heard of 
ArcLight, and someone said it would be a good fit for us and our objectives 
with the staff interface. We are looking to expand our knowledge when it comes 
to ArcLight. Using Google to find ArcLight brings us to a couple of DuraSpace 
pages, but that’s it.

Therefore I share the following question/request: does anyone of you have any 
experience with ArcLight, and if so, could you share with us your documentation 
surrounding ArcLight? We would like to know as much as possible about 
ArcLight before we start to fiddle around with it. Any links or documents 
would be greatly appreciated.

Thank you very much for helping us along.

Greetings,

Harm Jager

___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org<mailto:Archivesspace_Users_Group@lyralists.lyrasis.org>
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group


Re: Foundations for "Anthropocene Socialist" Movement

2019-01-05 Thread Florian Cramer
On Sat, Jan 5, 2019 at 7:57 PM Brian Holmes 
wrote:

> What we need, first of all, is a vision so carefully articulated that it
can become a strategy and a calculable plan.
> Exactly that is now emergent. The point is to make it actual. That means,
to make it into the really existing state.

Your wording is interesting, because it connects "emergence" with the
"state". Since the classical concept of emergence evolved around
self-organization, it was decentralist. The state is a (more or less)
centralist concept. The way you put it, it sounds as if you didn't have one
particular state in mind, but a global concept of statehood that can enact
global policies.

Here seems to lie the dilemma, if I'm not mistaken: No decentralist
politics can solve the climate change problem, since decentralization will
always produce incentives to a race to the bottom (of lowering
environmental standards/energy costs to attract capital and/or maintain
current living standards). What is realistically needed is world
governance, in order for it to be effective (more effective than the U.N.,
for example), a world government with direct authority over anything that
concerns the planet's ecology. Neither the anarchist principle of free
association, nor the liberal principle of self-interests balancing out each
other in equilibrium will work, since in the case of the planet, ecological
equilibrium cannot be gained through having opposite interests neutralize
each other, but only through common cause and action.

Such a global teleology is a scary thing. It's prone to result in
eco-fascism or eco-stalinism, with an authoritarian dictate over daily
lives. Even if one ignores the moral issues, it would be prone to power
abuse, misinformed (and therefore even ecologically counterproductive)
top-down decisions, and all the pitfalls and horrors of "wise men's states"
since Plato's Republic.

The concept of socialism creates additional complications, on top of the
above, since socialism is about social and economic justice involving
redistribution of resources. According to everything that I've read as an
amateur on the subject, including alternative economists like Niko Paech, a
global, climate-neutral lifestyle would have to (a) radically localize
production of goods, (b) radically reduce transport/distribution, (c) give
up 24/7 electricity - which, btw., would lead to the end of Internet as we
know it. (A glimpse into such a future is "Low-Tech Magazine", a website
whose editorial content covers all the issues we're discussing here on a
practical level, likely having more to say about them than Nettime:
https://www.lowtechmagazine.com . Its server runs on solar power with
regular outages, and its pages have been designed by Roel Roscam Abbing to
use only a minimum amount of kilobytes.) - Combined, factors (a), (b)
and (c) would make socialist redistribution tricky. A world that seriously
minimizes climate change could easily produce blatant inequality (in regard
to access to resources).

Alternatively, one can draw the conclusion that Hans-Ulrich Gumbrecht just
drew in an interview with the German "tageszeitung" (
https://www.taz.de/Archiv-Suche/!5559348/): "As far as evolutionary history
and cosmology are concerned, the end of our species is guaranteed anyway,
despite all the rhetorical fuss regarding the long-term survival of
mankind".

-F

On Sat, Jan 5, 2019 at 7:57 PM Brian Holmes 
wrote:

> On Sat, Jan 5, 2019 at 9:15 AM Vincent Gaulin 
> wrote:
>
> Where is the actual site of the surplus that intellectuals, protestors,
>> activists, caretakers and laborers draw upon while renovating the new
>> socialist state?
>>
>
>  Are you kidding? The problem of surplus is that there is too much of it.
> The actual site of surplus production is in largely automated mining sites,
> factories and farms around the world. An immense amount of this surplus
> goes either to the global oligarchy or to the military (US in particular).
> And the degree of automation is now rising as AI is rolled out, threatening
> a new unemployment crisis. As for the number of miners/scavengers,
> engineers, and electricians needed to create the solar field that will
> power a future society, they're all needed. Either we convert the energy
> grid to zero carbon over the next two decades or the future turns quite
> ugly indeed.
>
> The two key dangers on the horizon are mass unemployment and climate
> chaos. It's obvious to career bureaucrats and corporate planners that these
> things have to be faced. What's missing is the politics to do so.
>
> I don't think society can be remade in a utopian way where everyone
> behaves morally at a small scale of autonomous rural production. So I
> admire your cult of frugality, Vince, but I don't support it as a
> universal. Far as I can see, very few people want to give up either cities
> or the vast benefits of a global division of labor orchestrated by
> corporations and super-states. However the current configuration of 

Re: loading jdbc Driver in servlet

2018-12-17 Thread Dave Cramer
On Mon, 17 Dec 2018 at 02:28, Thomas Kellerer  wrote:

> Rob Sargent wrote on 14.12.2018 at 19:28:
> > Using java 1.8, postgresql-42.1.4.jar, embedded tomcat 9
> >
> > It appears to me that I need to make the call
> > "Class.forName("org.postgresql.Driver)" when the entry is in a
> > servlet.  Is this expected, within a servlet, or is this just /post
> > hoc ergo propter hoc /at its finest and I changed something else
> > (wittingly or not)? Same code outside of servlet does not need the
> > forced loading of the class, and the manual claims it's not needed after
> > Java 1.6.
>
> Class.forName() is definitely not needed if the driver's JAR file is
> included in the classloader of the class requesting a connection.
>
> Where exactly did you put the JDBC driver's jar file?
> And what exactly is your main() method doing?
>
> If you look at Tomcat's startup script (catalina.sh or catalina.bat), it's
> obvious that setting up the classpath isn't that straightforward.
> My guess is that your main() method does something different
> and does not properly include the driver's jar in the classpath.
>


Servlet classpath issues are legendary. As Thomas points out setting up the
classpath for a servlet engine is not trivial.
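The mechanism at play can be shown without a database. JDBC drivers register themselves in a static initializer, so Class.forName() forces that registration; since JDBC 4.0, DriverManager normally triggers it via META-INF/services, but only for jars visible to the classloader it scans, which is exactly what a servlet container's layered classloaders can defeat. A toy, self-contained illustration (ToyDriver and REGISTRY are stand-ins, not real JDBC classes):

```java
import java.util.ArrayList;
import java.util.List;

public class DriverLoading {
    // Stand-in for java.sql.DriverManager's internal driver list.
    static final List<String> REGISTRY = new ArrayList<>();

    static class ToyDriver {
        static {
            // Real drivers call DriverManager.registerDriver(new Driver())
            // in a static block just like this one.
            REGISTRY.add("toy-driver");
        }
    }

    public static void main(String[] args) throws Exception {
        // The .class literal loads but does NOT initialize the class,
        // so the static block has not run yet.
        String name = ToyDriver.class.getName();
        System.out.println("before forName: " + REGISTRY);  // prints []
        Class.forName(name);  // initialization triggers the registration
        System.out.println("after forName:  " + REGISTRY);  // prints [toy-driver]
    }
}
```

So a "No suitable driver found" error usually means the registration never happened in the classloader that DriverManager consulted, and an explicit Class.forName() papers over that.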



Dave Cramer

da...@postgresintl.com
www.postgresintl.com


>
>


Re: loading jdbc Driver in servlet

2018-12-16 Thread Dave Cramer
So you are starting up Tomcat yourself? Perhaps that is the difference?
I have no idea what the Tomcat wrapper does, but I'd be curious if the same
thing happens when started normally.

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Sun, 16 Dec 2018 at 12:20, Rob Sargent  wrote:

> Tomcat version 9. Embedded in my main()
>
> On Dec 16, 2018, at 9:30 AM, Dave Cramer  wrote:
>
> My guess is it has something to do with your servlet classpath loader.
> Which servlet engine are you using ?
> Dave Cramer
>
> da...@postgresintl.com
> www.postgresintl.com
>
>
> On Fri, 14 Dec 2018 at 16:04, Rob Sargent  wrote:
>
>>
>>
>> On Dec 14, 2018, at 2:02 PM, Rob Sargent  wrote:
>>
>>
>>
>> On Dec 14, 2018, at 1:30 PM, Dave Cramer  wrote:
>>
>> Strange, I wouldn't think so, but then I haven't used a raw servlet for
>> so long I have no idea.
>>
>>
>> Dave Cramer
>>
>> da...@postgresintl.com
>> www.postgresintl.com
>>
>>
>> On Fri, 14 Dec 2018 at 13:29, Rob Sargent  wrote:
>>
>>> Using java 1.8, postgresql-42.1.4.jar, embedded tomcat 9
>>>
>>> It appears to me that I need to make the call
>>> "Class.forName("org.postgresql.Driver)" when the entry is in a servlet.  Is
>>> this expected, within a servlet, or is this just *post hoc ergo propter
>>> hoc *at its finest and I changed something else (wittingly or not)?
>>> Same code outside of servlet does not need the forced loading of the class
>>> and the manual claims it's not needed after Java 1.6.
>>>
>> I too am suspicious.  But the vagaries of javax are daunting.
>> But if I comment it out, as just now, “No suitable driver found...”.  I
>> spent days playing with configuration and such, thinking that this very
>> specific error message was telling me my CLASSPATH was wrong.
>>
>> I wonder if I have an old javax installation (which I put in place just
>> recently).
>>
>>
>> I’m using javax.servlet-api-3.1.0.jar
>>
>>
>>
>>
>>


Re: loading jdbc Driver in servlet

2018-12-16 Thread Dave Cramer
My guess is it has something to do with your servlet classpath loader.
Which servlet engine are you using ?
Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Fri, 14 Dec 2018 at 16:04, Rob Sargent  wrote:

>
>
> On Dec 14, 2018, at 2:02 PM, Rob Sargent  wrote:
>
>
>
> On Dec 14, 2018, at 1:30 PM, Dave Cramer  wrote:
>
> Strange, I wouldn't think so, but then I haven't used a raw servlet for so
> long I have no idea.
>
>
> Dave Cramer
>
> da...@postgresintl.com
> www.postgresintl.com
>
>
> On Fri, 14 Dec 2018 at 13:29, Rob Sargent  wrote:
>
>> Using java 1.8, postgresql-42.1.4.jar, embedded tomcat 9
>>
>> It appears to me that I need to make the call
>> "Class.forName("org.postgresql.Driver)" when the entry is in a servlet.  Is
>> this expected, within a servlet, or is this just *post hoc ergo propter
>> hoc *at its finest and I changed something else (wittingly or not)?  Same
>> code outside of servlet does not need the forced loading of the class and
>> the manual claims it's not needed after Java 1.6.
>>
> I too am suspicious.  But the vagaries of javax are daunting.
> But if I comment it out, as just now, “No suitable driver found...”.  I
> spent days playing with configuration and such, thinking that this very
> specific error message was telling me my CLASSPATH was wrong.
>
> I wonder if I have an old javax installation (which I put in place just
> recently).
>
>
> I’m using javax.servlet-api-3.1.0.jar
>
>
>
>
>


Re: loading jdbc Driver in servlet

2018-12-14 Thread Dave Cramer
Strange, I wouldn't think so, but then I haven't used a raw servlet for so
long I have no idea.


Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Fri, 14 Dec 2018 at 13:29, Rob Sargent  wrote:

> Using java 1.8, postgresql-42.1.4.jar, embedded tomcat 9
>
> It appears to me that I need to make the call
> "Class.forName("org.postgresql.Driver")" when the entry is in a servlet.  Is
> this expected, within a servlet, or is this just *post hoc ergo propter
> hoc *at its finest and I changed something else (wittingly or not)?  Same
> code outside of servlet does not need the forced loading of the class and
> the manual claims it's not needed after Java 1.6.
>


My Braille Settings are weird!

2018-12-10 Thread Tom Cramer
Hello,

Earlier this year, I was told about being able to use the Braille
keyboard on my iPhone.  Up until a few days ago, it was working fine.
Now, it is doing some weird things.  I can no longer write in Grade 2
Braille.  Some of the signs come up more as accents or other computer
Braille signs.  I have absolutely no idea what could have happened.
What might I have done?
I don't even remember how to go back and set things back to where they
need to be, but I do miss being able to type in Grade 2 Braille.
Any thoughts or help?

-- 
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups 
"VIPhone" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To post to this group, send email to viphone@googlegroups.com.
Visit this group at https://groups.google.com/group/viphone.
For more options, visit https://groups.google.com/d/optout.


Re: [Elecraft] Elecraft CW Net Announcement

2018-12-09 Thread Kurt Cramer
Yes, the Elecraft SSB net is 14.3035 at 1800 Z on Sunday. NCS is in Chicago. If 
you don’t copy him, stay around, because there are relay stations in Calif., 
Oregon, Colorado, Fla., the East Coast, N. Texas, and I forgot some. 

Kurt W7QHD 

 

> On Dec 8, 2018, at 9:47 PM, kevinr  wrote:
> 
> Good Evening,
> 
> Only a few more weeks before the sun turns the corner and starts moving 
> again.  That means I think I have found the proper times for the nets until 
> the next time change.  Both twenty and forty meter nets seemed to have OK 
> propagation.  Nothing fantastic but quiet and reasonably strong.
> 
> We are through our first cold snap.  Any remaining leaves or green fern 
> fronds are now some shade of brown and a bit crispy around the edges.  My 
> begonias had been flowering up until last weekend.  Now they are brown and 
> shriveled.  It was a good year for them.  Now to figure out how to handle the 
> bulbs; it has been too many years since I tended mom's glads.  Hints?
> 
> 
> Please join us tomorrow on:
> 
> 
> 14050 kHz at 2200z Sunday  (2 PM PST Sunday)
> 
>   7047 kHz at 0000z Monday (4 PM PST Sunday)
> 
> 
> 73,
> 
>Kevin. KD5ONS
> 
> -
> 
> __
> Elecraft mailing list
> Home: http://mailman.qth.net/mailman/listinfo/elecraft
> Help: http://mailman.qth.net/mmfaq.htm
> Post: mailto:Elecraft@mailman.qth.net
> 
> This list hosted by: http://www.qsl.net
> Please help support this email list: http://www.qsl.net/donate.html
> Message delivered to vwrace...@gmail.com

__
Elecraft mailing list
Home: http://mailman.qth.net/mailman/listinfo/elecraft
Help: http://mailman.qth.net/mmfaq.htm
Post: mailto:Elecraft@mailman.qth.net

This list hosted by: http://www.qsl.net
Please help support this email list: http://www.qsl.net/donate.html
Message delivered to arch...@mail-archive.com

Re: extended query protcol violation?

2018-12-08 Thread Dave Cramer
On Sat, 8 Dec 2018 at 08:18, Tatsuo Ishii  wrote:

> >> > Curious what client is this that is violating the protocol.
> >>
> >> I heard it was a Java program.
> >>
> >
> > This is not surprising; there is a proliferation of non-blocking
> > implementations, probably approaching 10 different implementations now.
>
> Do you think some of the implementations may not be appropriate if
> they behave like that discussed in this thread? If so, maybe it is
> worth to add a warning to the backend.
>

Given that Java code is notorious for not reading warnings, I'd say no.
That said, I'd probably be in favour of a DEBUG mode that did warn.

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


Re: extended query protcol violation?

2018-12-08 Thread Dave Cramer
On Sat, 8 Dec 2018 at 07:50, Tatsuo Ishii  wrote:

> > Curious what client is this that is violating the protocol.
>
> I heard it was a Java program.
>

This is not surprising; there is a proliferation of non-blocking
implementations, probably approaching 10 different implementations now.


Dave Cramer

da...@postgresintl.com
www.postgresintl.com


Re: extended query protcol violation?

2018-12-08 Thread Dave Cramer
On Sat, 8 Dec 2018 at 05:16, Tatsuo Ishii  wrote:

> > Tatsuo>responses of a simple query do not include CloseComplete
> >
> > Tatsuo, where do you get the logs from?
>
> As I said, pgproto.
>
> https://github.com/tatsuo-ishii/pgproto
>
> > I guess you are just confused by the PRINTED order of the messages in the
> > log.
> > Note: wire order do not have to be exactly the same as the order in the
> log
> > since messages are buffered, then might be read in batches.
>
> pgproto directly reads from socket using read system call. There's no
> buffer here.
>
> > In other words, an application might just batch (send all three)
> close(s2),
> > close(s1), query(begin) messages, then read the responses.
> > How does it break protocol?
>
> Again, as I said before, the doc says in the extended query protocol a
> sequence of extended messages (parse, bind, describe, execute, close)
> should be followed by a sync message, i.e.:
>
> close
> close
> sync
> query(begin)
>
> Maybe
>
> close
> close
> query(begin)
>
> is not a violation of protocol, but still I would say this is buggy
> because of the reason Tom said, and I agree with him.
>

Curious what client is this that is violating the protocol.
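The sequencing rule Tatsuo quotes can be expressed as a small check. This is an illustrative validator, not driver or server code; it only encodes the rule that a batch of extended-protocol messages should be terminated by Sync before the client falls back to a simple Query:

```java
import java.util.List;
import java.util.Set;

public class ProtocolCheck {
    // Front-end messages that belong to the extended query protocol.
    static final Set<String> EXTENDED =
            Set.of("Parse", "Bind", "Describe", "Execute", "Close");

    // Returns false if a simple Query arrives while extended-protocol
    // messages are still pending (i.e. not yet terminated by Sync).
    public static boolean wellFormed(List<String> messages) {
        boolean pendingExtended = false;
        for (String m : messages) {
            if (EXTENDED.contains(m))
                pendingExtended = true;
            else if (m.equals("Sync"))
                pendingExtended = false;
            else if (m.equals("Query") && pendingExtended)
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(wellFormed(List.of("Close", "Close", "Sync", "Query"))); // true
        System.out.println(wellFormed(List.of("Close", "Close", "Query")));         // false
    }
}
```

The second sequence is exactly the `close / close / query(begin)` pattern discussed in the thread.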


Dave Cramer

da...@postgresintl.com
www.postgresintl.com

>
>


Re: Reviving the "Stopping logical replication protocol" patch from Vladimir Gordichuk

2018-12-03 Thread Dave Cramer
Dmitry,

Please see attached rebased patches

Dave Cramer


On Fri, 30 Nov 2018 at 18:52, Dmitry Dolgov <9erthali...@gmail.com> wrote:

> >On Sat, Dec 1, 2018 at 12:49 AM Dave Cramer  wrote:
> >
> > Thanks, I have done a preliminary check and it seems pretty
> > straightforward.
> >
> > I will clean it up for Monday
>
> Great, thank you!
>


0004-Add-test-for-pg_recvlogical-to-stop-replication.patch
Description: Binary data


0003-Add-ability-for-pg_recvlogical-to-stop-replication-f.patch
Description: Binary data


0001-Respect-client-initiated-CopyDone-in-walsender.patch
Description: Binary data


0002-Client-initiated-CopyDone-during-transaction-decodin.patch
Description: Binary data


Re: Reviving the "Stopping logical replication protocol" patch from Vladimir Gordichuk

2018-11-30 Thread Dave Cramer
Dmitry,

Thanks, I have done a preliminary check and it seems pretty straightforward.

I will clean it up for Monday

Thanks for your patience!

Dave Cramer


On Fri, 30 Nov 2018 at 18:22, Dmitry Dolgov <9erthali...@gmail.com> wrote:

> On Sat, Dec 1, 2018 at 12:17 AM Dave Cramer  wrote:
> >
> > Why is this being closed? I did not see the first email looking for
> clarification.
>
> Well, mostly due to the total absence of a response and a broken
> mind-reading crystal ball.
>
> > I can certainly rebase it.
>
> Yes, please do. I'll change the CF item status back.
>


Re: Reviving the "Stopping logical replication protocol" patch from Vladimir Gordichuk

2018-11-30 Thread Dave Cramer
Why is this being closed? I did not see the first email looking for
clarification.

The history is that the original author dropped off the planet (no idea
where he is).

I can certainly rebase it.

Dave Cramer


On Fri, 30 Nov 2018 at 18:00, Dmitry Dolgov <9erthali...@gmail.com> wrote:

> > On Mon, Nov 19, 2018 at 4:58 PM Dmitry Dolgov <9erthali...@gmail.com>
> wrote:
> >
> > > On Tue, Jul 24, 2018 at 5:52 PM Dave Cramer 
> wrote:
> > >
> > > Back in 2016 a patch was proposed that seems to have died on the vine.
> See
> https://www.postgresql.org/message-id/flat/cafgjrd3hdyoa33m69tbeofnner2bzbwa8ffjt2v5vfztbvu...@mail.gmail.com
> > > for the history and https://commitfest.postgresql.org/10/621/ for the
> commitfest entry.
> > > I've rebased the patches and attached them for consideration.
> > > JDBC tests here
> https://github.com/pgjdbc/pgjdbc/blob/master/pgjdbc/src/test/java/org/postgresql/replication/LogicalReplicationTest.java
> all pass
> >
> > Unfortunately the second patch from the series can't be applied anymore;
> > could you please rebase it one more time? Other than that, it looks
> > strange to me that the corresponding discussion stopped when it was quite
> > close to being in good shape, bouncing from "rejected with feedback" to
> > "needs review". Can someone from the people involved clarify the current
> > status of this patch?
>
> Looks like I'm a failure as a neuromancer, since revival didn't happen. I'm
> marking it as returned with feedback.
>


Re: Against Andrea Nagle's rightwing-masquerading-as-left tract on "open borders"

2018-11-28 Thread Florian Cramer
> Although not directly related to technology per se I found it related to
our current discussions on the polis and inclusion, as well as a
> continuing commentary on how the online right operates deftly in
ostensibly leftist spaces.

This is completely related to our previous discussion of 'identity
politics'.

What we're currently witnessing is a rift between a neo-traditionalist
socialist political left and an intersectional political left. The former
wants to re-focus all political struggle on (traditional) class struggle
and the restoration of the welfare state, arguing that the latter can only
work as a national state with a restrictive border and immigration regime.
This camp dismisses intersectional positions as "liberal". The German
"Aufstehen" movement of Sarah Wagenknecht and theater maker Bernd Stegemann
belongs into this category, the Dutch political thinker Ewald Engelen and
the Dutch Socialist Party. (I'm sure there are more examples, these are
only the ones I'm most familiar with.)

Movements like Bernie Sanders' and Jeremy Corbyn's seemingly attempt to
reconcile both positions, but clearly focus their agenda on traditionalist
class struggle (with Corbyn taking an unclear position towards Brexit). I
see Angela in the traditionalist-socialist camp, too. That doesn't make her
part of the "online right".

-F
#  distributed via : no commercial use without permission
#is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nett...@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:

Re: Libpq support to connect to standby server as priority

2018-11-21 Thread Dave Cramer
On Wed, 21 Nov 2018 at 09:05, Robert Haas  wrote:

> On Fri, Nov 16, 2018 at 11:35 AM Tom Lane  wrote:
> > Oh!  The reason I assumed it wasn't doing that is that such a behavior
> > seems completely insane.  If the point is to keep down the load on your
> > master server, then connecting only to immediately disconnect is not
> > a friendly way to do that --- even without counting the fact that you
> > might later come back and connect again.
>
> That seems like a really weak argument.  Opening a connection to the
> master surely isn't free, but it must be vastly cheaper than the cost
> of the queries you intend to run.  I mean, no reasonable production
> user of PostgreSQL opens a connection, runs one or two short queries,
> and then closes the connection.  You open a connection and keep it
> open for minutes, hours, days, or longer, running hundreds, thousands,
> or millions of queries.  The cost of checking whether you've got a
> master or a standby is a drop in the bucket.
>
> And, I mean, if there's some scenario where what I just said isn't
> true, well then don't use this feature in that particular case.
>
>
And to reinforce Robert's argument even further: almost every pool
implementation I am aware of has a keep-alive query. So why not use that
opportunity to check whether it is a primary or a standby at the same time?
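A keep-alive that doubles as a role check could look like the sketch below. It assumes any DB-API-style cursor and uses `pg_is_in_recovery()`, which returns true on a hot standby and false on a primary; the fake cursor is only there so the sketch runs without a live server:

```python
def keepalive_role_check(cur):
    """Run the pool's keep-alive probe and report the server's role.

    pg_is_in_recovery() is true on a standby and false on a primary,
    so the same round trip both validates the connection and
    classifies the server.
    """
    cur.execute("SELECT pg_is_in_recovery()")
    (in_recovery,) = cur.fetchone()
    return "standby" if in_recovery else "primary"

class FakeCursor:
    # Stand-in cursor so the sketch is runnable without a server.
    def __init__(self, in_recovery):
        self._row = (in_recovery,)
    def execute(self, sql):
        self.last_sql = sql
    def fetchone(self):
        return self._row
```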


Dave Cramer

da...@postgresintl.com
www.postgresintl.com

>
>


Re: [Origami] Albrecht Durer polyhedron in Melencolia I

2018-11-20 Thread Scott Cramer



> > According to http://mathworld.wolfram.com/DuerersSolid.html , the model
> > is a truncated triangular trapezohedron
> 
> I'm not convinced. I think it is a partially truncated cube but that the 
> perspective of the drawing is imperfect. 
> 
> Melencolia I was after all engraved in 1514.

True, but Dürer's engraving skills are not otherwise lacking, and Archimedes 
was truncating cubes seventeen centuries prior to Dürer. There is evidence that 
Dürer had explored this shape in other drawings, as well. Interesting reading 
here:

https://www.theguardian.com/science/alexs-adventures-in-numberland/2014/dec/03/durers-polyhedron-5-theories-that-explain-melencolias-crazy-cube

Scott


Re: Libpq support to connect to standby server as priority

2018-11-20 Thread Dave Cramer
On Tue, 20 Nov 2018 at 06:23, Vladimir Sitnikov 
wrote:

> Tom> Yes, we need either the session-open or the reconnect approach to
> Tom> find out whether the server is read-write or read-only.
>
> Just in case, pgjdbc has that feature for quite a while, and the behavior
> there is to keep the connection until it fails or application decides to
> close it.
>
> pgjdbc uses three parameters (since 2014):
> 1) targetServerType=(any | master | secondary | preferSecondary). Default
> is "any". When set to "master" it will look for a "read-write" server. If
> set to "preferSecondary" it will search for a "read-only" server first,
> then fall back to the master, and so on.
> 2) loadBalanceHosts=(true | false). pgjdbc can load-balance across the
> servers provided in the connection URL. When set to "false", pgjdbc tries
> the connections in order; otherwise it shuffles them.
> 3) hostRecheckSeconds=int. pgjdbc caches the "read/write" status of a
> host:port combination, so it doesn't re-check the status when multiple
> connections are created within the hostRecheckSeconds timeframe.
>
> It is sad that essentially the same feature is re-implemented in core with
> different name/semantics.
> Does it make sense to align parameter names/semantics?
>
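For reference, the three pgjdbc parameters quoted above combine in a multi-host connection URL along these lines (the host names and database name are placeholders):

```
jdbc:postgresql://host1:5432,host2:5432/mydb?targetServerType=preferSecondary&loadBalanceHosts=true&hostRecheckSeconds=10
```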

Looking at

https://www.postgresql.org/message-id/flat/1700970.cRWpxnom9y%40hammer.magicstack.net

which Tom points out as being relevant to this discussion, ISTM that this is
becoming a half-baked "feature" that is being cobbled together instead of
being designed. Admittedly I am biased, but I agree with Vladimir that libpq
did not implement the above feature with the same names and semantics. This
just serves to confuse users.

Just my 2c worth


Dave Cramer


Re: [Origami] Albrecht Durer polyhedron in Melencolia I

2018-11-18 Thread Scott Cramer



On Sun, Nov 11, 2018, at 4:57 AM, David Mitchell wrote:
> Laura R  asks:
> 
> >I'd like to fold the polyhedron that appears in Melencolia I, the famous 
> >engraving by Albrecht Durer.
> >What would be the best match and where can I find the diagrams? 
> 

According to http://mathworld.wolfram.com/DuerersSolid.html , the model is a
truncated triangular trapezohedron. The mathematical details of its geometry
are on the above page, for any modular designer who'd like to have a go.

Scott 


Re: Was cultural Marxism the leading force behind the new world order

2018-11-17 Thread Florian Cramer
The extreme right is just not educated enough to properly spot its enemies.
Parts of Adorno's theory, for example, could be easily hijacked for
conservative and right-wing ends: his resistance against mass culture,
early writings against jazz music, fondness of Spengler and cultural
pessimism, even his larger issue of commodification resistance (which is a
left-wing as well as a right-wing topic), to name only a few. Much of
Adorno's philosophy was in line with the reservations and resentments of
the German "Bildungsbürgertum" (the English translation "educated middle
class" doesn't really cut it, because the German word describes a
particular social milieu that grew out of Lutheran-protestant values,
anti-materialism, academic education and fondness of canonic high culture).
Those resentments found their way into both left-wing and right-wing
thinking, including thinkers who crossed those lines (such as Peter
Sloterdijk).

If the political right and its protagonists were better educated and did
not argue on the level of college freshmen when it comes to cultural
theory, they would know that they shouldn't blame Adorno, Foucault and
Derrida, with the latter being even more easily interpretable as
revisionists and anti-progressives than Adorno. (Which is what parts of
so-called "German media theory" actually did in the 1980s and 1990s.) The
right-wingers should better refocus their attention to British Cultural
Studies which actually happen to be "Cultural Marxism" with no strings
attached. I'm almost afraid to drop the names of Stuart Hall and the
Birmingham School here (not to even mention Marxist post-colonialists such
as Gayatri Spivak), since they could be the "Alt-Right"'s perfect enemy and
scapegoat; much more so than Adorno and the Frankfurt School...

-F


-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl


On Fri, Nov 16, 2018 at 10:47 PM Flick Harrison 
wrote:

> I always thought Cultural Marxism was a fine term, and it doesn't strike
> me as a right-biased word in itself, though it has been used that way since
> its origin.  I mean the first law teacher I had in University was a
> Marxist-Feminist, who completely believed this radical notion that righties
> hate: there's a superstructure that constructs the social narrative, and
> the social narrative is the source for all concepts of right, wrong, law,
> etc, which are not absolutes but socially determined; and that as we live
> in a patriarchy, the narrative is all about what men need, want, love and
> desire.  Thus the patriarchal power structure and the narrative reinforce
> and reproduce one another.
>
> The objective of the Marxist-Feminist is seizing the means of production
> of this narrative (culturally, in the workplace, in the control of capital
> whether for industry or communications, in politics, in the home etc).
>
> Now if you extrapolate to include intersectional politics, you get
> Cultural Marxism, or maybe Identity Marxism.  What's not to like about this
> term?  What's incompatible with our ideals?
>
> By using the word Marxist you're already implying socialism and
> internationalism, it would be hard to be a Marxist-Feminist who isn't a
> radical socialist too.
>
> So for all the awfulness of Anders Breivik, this nomenclature dispute
> isn't the angle from which I'd critique him.  The problem is his (perhaps
> mental-health or socially-conditioned) fear of the other, leading to
> violent outburst.  The problem is his fear of dialogue and engagement.  The
> problem is the amplifying echo-chamber of violent, unhinged narcissists
> with nothing but contempt for any difference of opinion, where bad-faith
> actors team up with honest ignoramuses and budding lunatics.
>
> Now a term I really suspect is the bogeyman term "Anarcho-Capitalism:"
> this seems to be almost an alt-right Trojan Horse, meant to lure beginner
> Left thinkers of the "Bernie-or-Trump" variety. To me Libertarian
> Capitalism seems like a term that more readily describes people like Trump,
> Paul Ryan, Margaret Thatcher, Andrew Scheer, Sarkozy etc.  "Capital is born
> free, yet everywhere it is in chains!" Oh crap, there's Rousseau again, but
> I swear I know nothing about him.
>
> By using "Anarcho-" that way, it sounds to me like an attempt to muddy our
> image of the villains:  "Anarchism" evokes the left, whereas the most
> radical white supremacist kleptocrats are more likely Libertarian.  Why try
> to make Anarchism sound bad by tying it to Capitalism??  Because the
> alt-right talking points assert that "globalism" and "identity politics"
> and "socialism" are something that "elites" do, i.e. the big bad
> "oligarchs."  Lump these elites (Hillary! Oprah! Michelle Obama!) together
> with capitalists ((Soros!)) and you get "Anarcho-Capitalists??"
>
> Libertarian Capitalism, on the other hand, gets away scot-free because
> Crypto nerds think 

Re: libpq to JDBC adapter

2018-11-14 Thread Dave Cramer
Looks very interesting,

Cheers,

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Wed, 14 Nov 2018 at 14:57, Konstantin Knizhnik 
wrote:

> If somebody is interested in connection to various JDBC-compatible
> databases through postgres_fdw,
> please look at my pq2jdbc project: https://github.com/postgrespro/pq2jdbc
> Details of the project are in README file.
>
> If somebody can find some other use cases for libpq to JDBC adapter,
> please let me know!
>
> --
> Konstantin Knizhnik
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
>


Re: Inhabit: Instructions for Autonomy

2018-11-09 Thread Florian Cramer
That pamphlet is another piece of male fantasies and cyberlibertarian porn
that might very well come from the Alt-Right. (Note the invocations of
"fight clubs", "indigenous families" etc.)

-F


-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl


On Fri, Nov 9, 2018 at 5:10 PM Ian Alan Paul  wrote:

> I thought some on the list interested in infrastructural / ecological
> politics might find this of interest:
>
> Inhabit: Instructions for Autonomy
> (online: https://inhabit.global/ , español: https://es.inhabit.global/ ,
> français: https://fr.inhabit.global/ )
>
> There are two paths: The end of the world or the beginning of the next.
>
> The End of The World:
>
> It’s over.
>
> Bow your head and phone scroll through the apocalypse.
>
> Watch as Silicon Valley replaces everything with robots. New
> fundamentalist deathcults make ISIS look like child’s play. The authorities
> release a geolocation app to real-time snitch on immigrants and political
> dissent while metafascists crowdfund the next concentration camps.
> Government services fail. Politicians turn to more draconian measures and
> the left continues to bark without teeth. Meanwhile glaciers melt,
> wildfires rage, Hurricane Whatever drowns another city. Ancient plagues
> reemerge from thawing permafrost. Endless work as the rich benefit from
> ruin. Finally, knowing we did nothing, we perish, sharing our tomb with all
> life on the planet.
>
> The Beginning of The Next:
>
> Take a breath, and get ready for a new world.
>
> A multiplicity of people, spaces, and infrastructures lay the ground where
> powerful, autonomous territories take shape. Everything for everyone. Land
> is given over to common use. Technology is cracked open–everything a tool,
> anything a weapon. Autonomous supply lines break the economic strangle
> hold. Mesh networks provide real-time communication connecting those who
> sense that a different life must be built. While governments fail, the
> autonomous territories thrive with a new sense that to be free, we must be
> bound to this earth and life on it. Enclaves of techno-feudalism are
> plundered for their resources. We confront the dwindling forces of
> counter-revolution with the option: to hell or utopia?–either answer
> satisfies us. Finally, we reach the edge–we feel the danger of freedom, the
> embrace of living together, the miraculous and the unknown–and know: this
> is life.
>
> Our time is tumultuous and potent.
>
> Upheaval, polarization, politics as bankrupt as the financial markets–yet
> under crisis lies possibility. This epoch forces us to consider how each of
> us forms a kernel of potential, how individuals can follow their wildest
> inclinations to gather with others who feel the call. People learn lost
> skills and warriors return fire to the world. Farmers and gardeners
> experiment with organic agriculture while makers and hackers reconfigure
> machines. Models escape the vacant limelight and break bread with Kurdish
> radicals and military veterans taking a stand for communal life. Those with
> no use for politics find each other at a dinner table in Zuccotti park,
> Oscar Grant Plaza, or Tahrir Square, and the barista who can barely feed
> himself alone learns to cook for a thousand together. A retired welder and
> a web designer learn they are neighbors at an airport occupation and commit
> to read The Art of War together. An Instagram star whose anxiety usually
> confines them to their apartment meets a battle-scarred elder in Ferguson,
> where they are baptized in tear gas and collective strength, and begin to
> feel the weight lifted from their soul. People everywhere, living through
> the greatest isolation, rise together and find new modes of life. But when
> these kernels grow to the surface, they are stomped out in a frenzy of
> banality and fear. Openings are forcefully shuttered by riot police,
> private security forces, and public relation firms. Or worse, by the lonely
> ones–politically right or left–who have nothing to gain but another like on
> their crappy Twitter. All this while smug politicians and CEOs hover. The
> revolutionary character of our epoch cannot be denied, but we’ve yet to
> overcome the hurdle between us and freedom.
>
> We come from somewhere broken, yet we stand.
>
> Our epoch’s nihilism is topological. Everywhere is without foundation. We
> search for the organizational power to repair the world, and find only
> institutions full of weakness and cynicism. Well-meaning activists get
> digested through the spineless body of conventional politics, leaving
> depressed militants or mini-politicians. Those who speak out against abuse
> end up bearing witness to sad games of power playing out on social media.
> Movements erupt and then implode, devoured internally by parasites. Cities
> become unlivable as waters rise and governments scramble to 

[CODE4LIB] Fwd: [ocfl-community] Ready for Review: OCFL 0.1 (Alpha)

2018-11-07 Thread Tom Cramer
At the risk of overwhelming the OCFL editors’ inboxes (a risk I’m willing to 
take, not being one of them), please see the request for comments, below. 
What’s OCFL?

a. Orange County, Florida
b. Oxford Common File Layout
c. an emerging best practice for LTDP file storage
d. Something "sorely needed and I would be surprised if it didn't become just 
as critical a tool as BagIt is”, per digital preservation expert and community 
thought leader, Sibyl Schaefer
e. all of the above (except A)

[The correct answer is E.]

- Tom


Begin forwarded message:

From: Andrew Woods 
Subject: [ocfl-community] Ready for Review: OCFL 0.1 (Alpha)
Date: October 18, 2018 at 1:30:36 PM PDT
To: pasig-disc...@mail.asis.org, ocfl-commun...@googlegroups.com
Cc: ocfl-editors 

Hello All,

Think of it as an opportunity to participate in shaping the definition of the 
persistence layout for digital information, focused on long-term preservation. 
Many of us have a deep concern and responsibility for the preservation of 
slices of our shared digital, cultural heritage.

The Oxford Common File Layout (OCFL) specification describes an 
application-independent approach to the storage of digital information in a 
structured, transparent, and predictable manner. It is designed to promote 
standardized long-term object management practices within digital repositories.

Through a series of community conversations [1] starting in December of 2017, 
the OCFL 0.1 alpha specification release is now ready for your detailed review 
and feedback!
https://ocfl.io/0.1/spec/

Please review the (short) specification and provide your feedback either as 
discussion on the ocfl-community [2] mailing list or as GitHub issues [3].

We will be discussing/incorporating the feedback during next month's community 
call [4] (November 14th @11am ET).

In addition to feedback on the content of the specification, you are encouraged 
to join the November community call to share interest in implementing OCFL 
locally.

More detail and implementation notes can be found at https://ocfl.io .

Best regards,
Andrew Woods, on behalf of the OCFL editorial group
[1] https://github.com/OCFL/spec/wiki/Community-Meetings
[2] ocfl-commun...@googlegroups.com
[3] https://github.com/ocfl/spec/issues
[4] https://github.com/OCFL/spec/wiki/2018.11.14-Community-Meeting

--
You received this message because you are subscribed to the Google Groups 
"Oxford Common File Layout Community" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 
ocfl-community+unsubscr...@googlegroups.com.
To post to this group, send email to 
ocfl-commun...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ocfl-community/CADz%3D0UmuTyTSx3OG0JXVJceXc1AtLqmYeBd%3D1TMxy41-FBG1eQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.



Re: apropos of nothing

2018-11-04 Thread Florian Cramer
Alexander Bard is a typical example of a "Querfront" activist, and a member
of this right-wing party:
https://en.wikipedia.org/wiki/Citizens%27_Coalition

-F

-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl


On Sun, Nov 4, 2018 at 12:30 AM Willem van Weelden 
wrote:

> dear angela,
> relax dear.
> it is ok.
> noone is recruiting anyone here.
> chill.
> best,
> w
>
>
> > On 03 Nov 2018, at 23:04, Angela Mitropoulos <
> angela.mitropou...@gmail.com> wrote:
> >
> >
> > What is Nettime's policy on whether or not it should give fascists a
> platform from which to recruit?
> >
> > Angela
> >
> >
> >

Looking for clarification

2018-10-31 Thread Tom Cramer
Good morning,

I have two questions.
First, is there a way to check my iWatch's battery status without
always having to ask Siri?  I thought there was a gesture or an area,
but I can't find one.  And, while I'm on that subject, is there a way
of having it not always speak how much I've moved, walked, exercised,
etc, every time I press the crown so it can speak the time?  I wish I
could just get it to speak the time when I raise my arm, but that
doesn't seem to work for me.

My other question is more complicated, and it is the one I need clarified.
I was under the impression that there was a setting that devices like
iPads and iWatches could mirror my phone when it came to things like
text messages.  I had it set, but when I erase my text messages from
my iPhone, they still seem to appear on my iPad and iWatch.  Does
anyone know why that would be and what I'd need to do?  I really
thought that turning mirroring on would be the trick.
Thank you very much.

-- 
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups 
"VIPhone" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To post to this group, send email to viphone@googlegroups.com.
Visit this group at https://groups.google.com/group/viphone.
For more options, visit https://groups.google.com/d/optout.


Re: Interview with Richard Stallman in New Left Review (September-October 2018)

2018-10-30 Thread Florian Cramer
Hello Carsten,

You wrote:

> Most Free/Open Source Software is in fact not created by unpaid
> volunteers or even by underpaid workers, but by professional developers
> at the companies or organizations who sponsor the projects.

Define "most". What you describe is true for the Linux kernel and other
pieces of software that make up a typical Linux distribution such as
RedHat, but even those are not 100% developed by paid developers. On top of
that, crucial components such as OpenSSH (developed by OpenBSD) and popular
applications such The Gimp are developed by volunteers. Free Software as a
whole is an ecology that is made up by volunteer and paid developer
contributions.

And I would argue that all these developers are underpaid in light of
the IBM/RedHat transaction, from which they will not profit. (Quite the
opposite: with IBM's management taking over and making Red Hat part of its
'cloud' division, the question is how many free software developers on the
RedHat payroll will keep their jobs.)

It's one thing to sell your labor as alienated labor to a company, knowing
full well that you get exploited. It's another thing to contribute to free
software as a volunteer and (at least partially) idealist cause and see
others make $30 billions with it.

I don't buy the argument that RedHat has a $30 billion company value just
because of its services.

-F





>
>
>
>
> And Red Hat's value is not as much the free software it has used as its
> knowledge and infrastructure - which has arguably not been built by
> unpaid volunteers.
>
> In general, I'd say, top-professional FOSS tools are not built by
> amateurs or volunteers - though maybe by people who like to make them
> and who also can get paid by consulting or doing other works related to
> that.
>
> But as I said, it's not the licensing regime, but the exploitative
> nature of capitalist companies, that's the problem. Going proprietary
> wouldn't help a bit. Creating cooperatives that shared the income
> without any need for "bosses" or "owners" would be a safer bet.
>
> Best
>
> Carsten
>

Re: (no subject)

2018-10-29 Thread Florian Cramer
The problem with all debates of "identity politics" is that there is no
clear definition of it, not even by Mark Lilla who popularized the term in
2016. (Lilla, by the way, doesn't even speak of or for the "left", but of
two types of  "liberalism", one that he supports and one that he rejects.)
"Identity politics" is a textbook strawman argument which any decent
analytic philosopher should be able to tear into pieces with propositional
logic. What's more, the term has become a reactionary meme now that
political movements, such as "Aufstehen" in Germany, are being founded on
the premise of reinvigorating the left by ridding it from "identity
politics". This is where the strawman becomes a red herring.

All this is mostly based on the fiction that the working class defected to
the extreme right after established left-wing politics no longer
represented it. It's a fiction because, at least in Europe, research has
clearly shown that most voters for the extreme right come from the middle
class and vote for these parties because of shared core values (in short,
an understanding of the rule of law as law and order, and an understanding
of democracy as the execution of the will of the people who represent the
majority population), not policies.

If Lilla and others were more consequential, they would have to
historically denounce the political left as "identity politics" as such.
One could call the French Revolution "identity politics" of the bourgeois
(versus the aristocracy), the 19th century workers' movement "identity
politics" of the working class (which an old-school Jacobin might have
rejected precisely on the grounds that the republic had declared everyone
to be equal), the feminist movement "identity politics" of women, the black
civil rights movement "identity politics" of African Americans, the gay
pride movement "identity politics" of queers etc.etc.. In the end, those
who deplore "identity politics" express a nostalgia for a simple, binary
past that never existed. Worse, they patronize groups of people to which
they neither belong, nor are in touch with.

Maybe there could be a more precise notion of "identity politics" in the
sense of political choices purely made on the basis of one's group identity
instead of one's political interests. Examples could include trade union
members who voted for Clinton, Blair and Schröder in the 1990s out of token
loyalty to "their" party, or the blind support of openly destructive and
malicious politics on the basis of ethnic loyalty in areas with ethnic
conflicts. In my hometown of Rotterdam, for example, a right-wing populist
party has been the strongest political force for a decade and a half,
simply on the basis of white ethnic voter loyalty (in a city whose majority
population is now non-white), never mind the fact that this party is
chasing its own voters out of the city by aggressively gentrifying
traditional neighborhoods. Did Lilla and his epigones ever call this
"identity politics"?

-F






-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl


#  distributed via : no commercial use without permission
#is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nett...@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:

Re: Interview with Richard Stallman in New Left Review (September-October 2018)

2018-10-28 Thread Florian Cramer
Today, IBM announced that it will buy up Red Hat for $34 billion. That
value was mostly created by the labor of volunteer, un- or underpaid
developers of the Free/Libre/Open Source software that makes up Red Hat's
products. These people will not see a dime of IBM’s money. There need to be
discussions of economic flaws and exploitation in the FLOSS
development/distribution model.

-F


-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561

bio:  http://floriancramer.nl

Re: [docbook-apps] DocBook content review process/tools?

2018-10-26 Thread David Cramer
Hi Peter,

Good question. Here are a few ideas:

* Add a "Log a bug" link to each page that links to your bug tracking
system. It's usually easy to add a few query parameters that prepopulate
the new bug with contextual information (url, version, component, build
date, etc.). Then the user only has to fill in a description and title.
With some more effort, you could add some code that queries the bug
tracking system and displays a list of open bugs against a given
chapter/section at the bottom of the page.

* Add some kind of commenting mechanism to the output. That
annotatorjs.org thing that Camille pointed out looks cool. I'm going to
have to check that out. Oxygen has a Webhelp with feedback chunked html
format. If security isn't a concern, then there are solutions like
Disqus that will add commenting to a page with just a little JavaScript.
Or there are open source and commercial commenting systems, usually
php-based, you can host.

* Oxygen XML has a "Content Fusion" product. I've only ever read the web
page and played with their online demo, but it uses a web-based version
of Oxygen to let users enter changes with track changes on. This looks
really cool, but if some of your content is generated programmatically
from more primitive sources, then you'd have to adapt the workflow to
merge changes to that content back into its original source.

* If pdf is your primary output format, Adobe has a feedback collection
system. I believe Acrobat Pro has to be involved. Perhaps this could be
automated if you produce PostScript from your FO renderer and generate a
pdf from that using Acrobat Pro?

Regards,
David

On 10/23/18 11:53 AM, Peter Desjardins wrote:
> Hi! Does anyone have a document content review process you can
> recommend? One that allows non-DocBook users to easily provide input
> and see each other's comments?
> 
> My team uses Google docs for content review because reviewers can
> comment easily and see each other's input (this is at a company that
> uses Google email/docs). It's error-prone and tedious to transform
> DocBook content to Google documents though. I would love to find a
> better solution.
> 
> We keep content in GitHub and I love the pull request content review
> tools there. Most of our subject matter experts do not use GitHub
> though, and it's not practical to review all DocBook content by
> reading the source XML.
> 
> The conversion to Google doc format that works the best for us (so
> far) is DocBook > HTML > LibreOffice OpenDocument > Upload to Google
> drive and convert to Google doc format. We lose important formatting:
> bold for guilabel elements disappears, and bullet characters for
> itemizedlists vanish. Preparing a document for review is painful.
> 
> Do you have a great DocBook-based review process?
> 
> Thanks!
> 
> Peter
> 

-
To unsubscribe, e-mail: docbook-apps-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: docbook-apps-h...@lists.oasis-open.org



v11 transaction semantics inside procedures

2018-09-20 Thread Dave Cramer
Is there somewhere that the transaction semantics inside a procedure are
documented ? From what I can tell transactions start from the first DML
statement and end implicitly when the procedure exits. Commit or Rollback
can be called anytime inside the transaction and this implicitly starts
another transaction.

Is there anything else I am missing ? Does DDL get applied after the
transaction ends ?

I do find this somewhat surprising as Postgres typically requires a BEGIN
statement to start a transaction block.
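
For what it's worth, here is a minimal sketch of the behavior described
above as I understand it (v11 semantics; the table name is made up):

```sql
-- Sketch only: each COMMIT/ROLLBACK ends the current implicit
-- transaction, and a new one begins automatically afterwards.
CREATE PROCEDURE txn_demo() LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO demo_log VALUES ('step 1');  -- first transaction already open
    COMMIT;                                  -- 'step 1' becomes durable here
    INSERT INTO demo_log VALUES ('step 2');
    ROLLBACK;                                -- discards 'step 2' only
END;
$$;

CALL txn_demo();  -- must run outside an explicit transaction block
```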

Thanks
Dave Cramer


Re: ssl tests README and certs

2018-09-16 Thread Dave Cramer
On Sun, 16 Sep 2018 at 14:41, Heikki Linnakangas  wrote:

> On 14/09/18 18:49, Dave Cramer wrote:
> > In src/test/ssl the README suggests that the Makefile can be used to
> > recreate the ssl directory; however, there are no rules to create
> > *_ca.crt|key. Am I missing something?
>
> The README says:
>
> > For convenience, all of these keypairs and certificates are included in
> the
> > ssl/ subdirectory. The Makefile also contains a rule, "make sslfiles", to
> > recreate them if you need to make changes.
>
> I just tested it again, and sure enough "make sslfiles" recreates them.
> You'll need to remove the old ones first, though, with "make
> sslfiles-clean" or just "rm ssl/*". As usual with "make", it won't
> recreate and overwrite existing files, if they already exist.
>

Ya, I jumped the gun on this one. Once again, sorry for the noise.

Dave Cramer


Advice for the experts

2018-09-15 Thread Tom Cramer
Good morning,

I've been reading about the new phones coming out and would like to
ask something from the experts on the list who have good and necessary
knowledge.
I've had an iPhone 7 for a couple of years and I got notified that I'm
eligible to upgrade.
First, what's going to happen with the iPhone 7?  Is it still a good
phone and will Apple continue to make it?
Second, should I upgrade from a 7 to an 8?  I know the XR is also the
least expensive phone, but I know it won't have Touch ID or a home
button any more.
Also, how much faster or better is the 8 from the 7?  I'd like to know
the differences so that I know whether it's worth it to upgrade or
stay with the 7.
Any thoughts?
Tom

-- 
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups 
"VIPhone" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To post to this group, send email to viphone@googlegroups.com.
Visit this group at https://groups.google.com/group/viphone.
For more options, visit https://groups.google.com/d/optout.


ssl tests README and certs

2018-09-14 Thread Dave Cramer
In src/test/ssl the README suggests that the Makefile can be used to
recreate the ssl directory; however, there are no rules to create
*_ca.crt|key. Am I missing something?


Dave Cramer


[CODE4LIB] Florence

2018-09-11 Thread Tom Cramer
Corresponding with a colleague in the Carolinas today, I was reminded that 
Florence is coming in fast and strong. For anyone in its path, whether you’re 
evacuating or not, stay safe. Those of us in drier climes will be sending good 
vibes towards you, your libraries, and your communities.

- Tom





[Scidb-users] Revision 1519

2018-09-11 Thread Gregor Cramer
I've just released revision 1519:

- Layout now includes toolbar layout.
- Layout now supports fullscreen mode (try F11).
- Some board themes changed a bit; in particular, the contours of some piece
   sets have been removed.
- Board theme "Elegant" added.

Start Scidb with "--update-themes" if you want to use the changed/new themes. 
For a quick overview of all themes see http://scidb.sf.net/de/themes.html .


___
Scidb-users mailing list
Scidb-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scidb-users


Re: Quick Review..

2018-09-10 Thread Florian Cramer
Thanks, David - as I said in the discussion in Berlin, Stewart and I ended
up
in a weird place where we practically taught the "Alt-Right" its own
history.
One shouldn't read too much into its grasp of Gramsci though. This is what
Milo
Yiannopoulos wrote about him in the original manuscript of his book
'Dangerous' (that Simon & Schuster ended up not publishing):

And so, in the 1920s, the Italian Marxist Antonio Gramsci decided that the
time had come for a new form of revolution -- one based on culture, not
class. According to Gramsci, the reason why the proletariat had failed to
rise up was because old, conservative ideas like loyalty to one's country,
family values, and religion held too much sway in working-class communities.
If that sounds familiar to Obama's comment about guns and religion, that's
because it should. His line of thinking, as we shall see, is directly
descended from the ideological tradition of Gramsci. Gramsci argued that as
a
precursor to revolution, the old traditions of the west -- or the 'cultural
hegemony,' as he called it -- would have to be systematically broken down.
To
do so, Gramsci argued that "proletarian" intellectuals should seek to
challenge the dominance of traditionalism in education and the media, and
create a new revolutionary culture. Gramsci's ideas would prove phenomenally
influential. If you've ever wondered why you're forced to take diversity or
gender studies courses at university, or why your professors all seem to hate
western civilization ... well, now you know who to blame: Gramsci.

(Because of the lawsuit, the manuscript is publicly available here:
https://www.dropbox.com/s/bjc0n5dll244o2w/Milo%20Y%20book%20with%20edits.pdf?dl=0
)
-F
--
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio: http://floriancramer.nl

Re: very slow largeobject transfers through JDBC

2018-09-06 Thread Dave Cramer
Hi Mate,

Thanks for the detailed response. This will help others in the same
situation

Dave Cramer

da...@postgresintl.com
www.postgresintl.com
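
For anyone else in this situation, the OID-to-bytea migration Mate
describes below might look roughly like this (table and column names are
assumptions; lo_get() needs PostgreSQL 9.4+):

```sql
-- Sketch: inline the large objects into a bytea column, then drop the OIDs.
ALTER TABLE documents ADD COLUMN content bytea;

-- Copy each large object's bytes into the new column.
UPDATE documents SET content = lo_get(content_oid);

-- Free the large objects and remove the old OID column.
SELECT lo_unlink(content_oid) FROM documents WHERE content_oid IS NOT NULL;
ALTER TABLE documents DROP COLUMN content_oid;
```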


On Thu, 6 Sep 2018 at 05:03, Mate Varga  wrote:

> Hi,
>
> summarizing:
> we had a table that had an OID column, referencing an object in
> pg_largeobject. This was mapped to a (Java) entity with a byte array field,
> annotated with @Lob. The problem was that we were fetching thousands of
> these entities in one go, and LOB fetching is not batched by Hibernate/JDBC
> (so each row is fetched separately). Because we were abusing LOBs (they
> were small, often less than 10 kB), we have chosen to move the binary blobs
> from the LO table to a simple bytea column. So the entity that had a byte
> array field mapped to an OID column now has a byte array field mapped to a
> bytea column, and we have manually moved data from the LO table to the
> bytea column. Now Hibernate/JDBC fetches all the content we need in
> batches. Random benchmark: fetching 20k rows used to take 7 seconds (250
> msec query execution time, 6.7 sec for transfer) and now it takes 1.5
> seconds (250 msec query + 1.3 sec transfer).
>
> Regards,
> Mate
>
> On Thu, Sep 6, 2018 at 10:56 AM Dave Cramer  wrote:
>
>> Hi
>>
>> Can you be more explicit how you fixed the problem ?
>>
>> Thanks
>> Dave Cramer
>>
>> da...@postgresintl.com
>> www.postgresintl.com
>>
>>
>> On Thu, 6 Sep 2018 at 03:46, Mate Varga  wrote:
>>
>>> After inlining the data, performance issues have been solved. Thanks for
>>> the help.
>>>
>>> On Mon, Sep 3, 2018 at 9:57 PM Mate Varga  wrote:
>>>
>>>> Thanks,
>>>> 1) we'll try to move stuff out from LOBs
>>>> 2) we might raise a PR for the JDBC driver
>>>>
>>>> Mate
>>>>
>>>> On Mon, 3 Sep 2018, 19:35 Dave Cramer,  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, 3 Sep 2018 at 13:00, Mate Varga  wrote:
>>>>>
>>>>>> More precisely: when fetching 10k rows, JDBC driver just does a large
>>>>>> bunch of socket reads. With lobs, it's ping-pong: one read, one write per
>>>>>> lob...
>>>>>>
>>>>>>
>>>>> Ok, this is making more sense. In theory we could fetch them all but
>>>>> since they are LOB's we could run out of memory.
>>>>>
>>>>> Not sure what to tell you at this point. I'd entertain a PR if you
>>>>> were motivated.
>>>>>
>>>>> Dave Cramer
>>>>>
>>>>> da...@postgresintl.com
>>>>> www.postgresintl.com
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> On Mon, Sep 3, 2018 at 6:54 PM Mate Varga  wrote:
>>>>>>
>>>>>>> So I have detailed profiling results now. Basically it takes very
>>>>>>> long that for each blob, the JDBC driver reads from the socket then it
>>>>>>> creates the byte array on the Java side. Then it reads the next blob, 
>>>>>>> etc.
>>>>>>> I guess this takes many network roundtrips.
>>>>>>>
>>>>>>> On Mon, Sep 3, 2018 at 5:58 PM Dave Cramer  wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, 3 Sep 2018 at 10:48, Mate Varga  wrote:
>>>>>>>>
>>>>>>>>> That's 1690 msec (1.69 seconds, and that is how long it takes to
>>>>>>>>> fetch 20k (small-ish) rows without LOBs (LOBs are a few lines below 
>>>>>>>>> on the
>>>>>>>>> screenshot)
>>>>>>>>>
>>>>>>>>
>>>>>>>> that sounds high as well!
>>>>>>>>
>>>>>>>> Something isn't adding up..
>>>>>>>>
>>>>>>>>
>>>>>>>> Dave Cramer
>>>>>>>>
>>>>>>>> da...@postgresintl.com
>>>>>>>> www.postgresintl.com
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Sep 3, 2018 at 4:40 PM Dave Cramer 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> the one you have highlighted ~1.69ms
>>>>>>

Re: very slow largeobject transfers through JDBC

2018-09-06 Thread Dave Cramer
Hi

Can you be more explicit how you fixed the problem ?

Thanks
Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Thu, 6 Sep 2018 at 03:46, Mate Varga  wrote:

> After inlining the data, performance issues have been solved. Thanks for
> the help.
>
> On Mon, Sep 3, 2018 at 9:57 PM Mate Varga  wrote:
>
>> Thanks,
>> 1) we'll try to move stuff out from LOBs
>> 2) we might raise a PR for the JDBC driver
>>
>> Mate
>>
>> On Mon, 3 Sep 2018, 19:35 Dave Cramer,  wrote:
>>
>>>
>>>
>>> On Mon, 3 Sep 2018 at 13:00, Mate Varga  wrote:
>>>
>>>> More precisely: when fetching 10k rows, JDBC driver just does a large
>>>> bunch of socket reads. With lobs, it's ping-pong: one read, one write per
>>>> lob...
>>>>
>>>>
>>> Ok, this is making more sense. In theory we could fetch them all but
>>> since they are LOB's we could run out of memory.
>>>
>>> Not sure what to tell you at this point. I'd entertain a PR if you were
>>> motivated.
>>>
>>> Dave Cramer
>>>
>>> da...@postgresintl.com
>>> www.postgresintl.com
>>>
>>>
>>>
>>>>
>>>> On Mon, Sep 3, 2018 at 6:54 PM Mate Varga  wrote:
>>>>
>>>>> So I have detailed profiling results now. Basically it takes very long
>>>>> that for each blob, the JDBC driver reads from the socket then it creates
>>>>> the byte array on the Java side. Then it reads the next blob, etc. I guess
>>>>> this takes many network roundtrips.
>>>>>
>>>>> On Mon, Sep 3, 2018 at 5:58 PM Dave Cramer  wrote:
>>>>>
>>>>>>
>>>>>> On Mon, 3 Sep 2018 at 10:48, Mate Varga  wrote:
>>>>>>
>>>>>>> That's 1690 msec (1.69 seconds, and that is how long it takes to
>>>>>>> fetch 20k (small-ish) rows without LOBs (LOBs are a few lines below on 
>>>>>>> the
>>>>>>> screenshot)
>>>>>>>
>>>>>>
>>>>>> that sounds high as well!
>>>>>>
>>>>>> Something isn't adding up..
>>>>>>
>>>>>>
>>>>>> Dave Cramer
>>>>>>
>>>>>> da...@postgresintl.com
>>>>>> www.postgresintl.com
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> On Mon, Sep 3, 2018 at 4:40 PM Dave Cramer  wrote:
>>>>>>>
>>>>>>>> the one you have highlighted ~1.69ms
>>>>>>>>
>>>>>>>> Dave Cramer
>>>>>>>>
>>>>>>>> da...@postgresintl.com
>>>>>>>> www.postgresintl.com
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, 3 Sep 2018 at 10:38, Mate Varga  wrote:
>>>>>>>>
>>>>>>>>> Which frame do you refer to?
>>>>>>>>>
>>>>>>>>> On Mon, Sep 3, 2018 at 3:57 PM Dave Cramer 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Not sure why reading from a socket is taking 1ms ?
>>>>>>>>>>
>>>>>>>>>> Dave Cramer
>>>>>>>>>>
>>>>>>>>>> da...@postgresintl.com
>>>>>>>>>> www.postgresintl.com
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, 3 Sep 2018 at 09:39, Mate Varga  wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> https://imgur.com/a/ovsJPRv -- I've uploaded the profiling info
>>>>>>>>>>> (as an image, sorry). It seems this is a JDBC-level problem. I 
>>>>>>>>>>> understand
>>>>>>>>>>> that the absolute timing is not meaningful at all because you don't 
>>>>>>>>>>> know
>>>>>>>>>>> how large the resultset is, but I can tell that this is only a few
>>>>>>>>>>> thousands rows + few thousand largeobjects, each largeobject is 
>>>>>>>>>

Re: very slow largeobject transfers through JDBC

2018-09-03 Thread Dave Cramer
On Mon, 3 Sep 2018 at 13:00, Mate Varga  wrote:

> More precisely: when fetching 10k rows, JDBC driver just does a large
> bunch of socket reads. With lobs, it's ping-pong: one read, one write per
> lob...
>
>
Ok, this is making more sense. In theory we could fetch them all but since
they are LOB's we could run out of memory.

Not sure what to tell you at this point. I'd entertain a PR if you were
motivated.

Dave Cramer

da...@postgresintl.com
www.postgresintl.com
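
One possible workaround -- an idea on my part, not something the driver
does today: have the server inline the LOB bytes into the result set with
lo_get(), so everything arrives in one round trip instead of one read per
object (table and column names are hypothetical; lo_get() needs
PostgreSQL 9.4+):

```sql
-- Inline each large object's bytes into the result set itself,
-- trading client memory for network round trips.
SELECT t.id, lo_get(t.blob_oid) AS blob_bytes
FROM my_table t;
```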



>
> On Mon, Sep 3, 2018 at 6:54 PM Mate Varga  wrote:
>
>> So I have detailed profiling results now. Basically it takes very long
>> that for each blob, the JDBC driver reads from the socket then it creates
>> the byte array on the Java side. Then it reads the next blob, etc. I guess
>> this takes many network roundtrips.
>>
>> On Mon, Sep 3, 2018 at 5:58 PM Dave Cramer  wrote:
>>
>>>
>>> On Mon, 3 Sep 2018 at 10:48, Mate Varga  wrote:
>>>
>>>> That's 1690 msec (1.69 seconds, and that is how long it takes to fetch
>>>> 20k (small-ish) rows without LOBs (LOBs are a few lines below on the
>>>> screenshot)
>>>>
>>>
>>> that sounds high as well!
>>>
>>> Something isn't adding up..
>>>
>>>
>>> Dave Cramer
>>>
>>> da...@postgresintl.com
>>> www.postgresintl.com
>>>
>>>
>>>
>>>>
>>>> On Mon, Sep 3, 2018 at 4:40 PM Dave Cramer  wrote:
>>>>
>>>>> the one you have highlighted ~1.69ms
>>>>>
>>>>> Dave Cramer
>>>>>
>>>>> da...@postgresintl.com
>>>>> www.postgresintl.com
>>>>>
>>>>>
>>>>> On Mon, 3 Sep 2018 at 10:38, Mate Varga  wrote:
>>>>>
>>>>>> Which frame do you refer to?
>>>>>>
>>>>>> On Mon, Sep 3, 2018 at 3:57 PM Dave Cramer  wrote:
>>>>>>
>>>>>>> Not sure why reading from a socket is taking 1ms ?
>>>>>>>
>>>>>>> Dave Cramer
>>>>>>>
>>>>>>> da...@postgresintl.com
>>>>>>> www.postgresintl.com
>>>>>>>
>>>>>>>
>>>>>>> On Mon, 3 Sep 2018 at 09:39, Mate Varga  wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> https://imgur.com/a/ovsJPRv -- I've uploaded the profiling info
>>>>>>>> (as an image, sorry). It seems this is a JDBC-level problem. I 
>>>>>>>> understand
>>>>>>>> that the absolute timing is not meaningful at all because you don't 
>>>>>>>> know
>>>>>>>> how large the resultset is, but I can tell that this is only a few
>>>>>>>> thousands rows + few thousand largeobjects, each largeobject is around 
>>>>>>>> 1
>>>>>>>> kByte. (Yes I know this is not a proper use of LOBs -- it's a legacy db
>>>>>>>> structure that's hard to change.)
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>> Mate
>>>>>>>>
>>>>>>>> On Mon, Sep 3, 2018 at 11:52 AM Mate Varga  wrote:
>>>>>>>>
>>>>>>>>> Hey,
>>>>>>>>>
>>>>>>>>> we'll try to test this with pure JDBC versus hibernate. Thanks!
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Sep 3, 2018 at 11:48 AM Dave Cramer 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, 3 Sep 2018 at 03:55, Mate Varga  wrote:
>>>>>>>>>>
>>>>>>>>>>> Basically there's a class with a byte[] field, the class is
>>>>>>>>>>> mapped to table T and the byte field is annotated with @Lob so it 
>>>>>>>>>>> goes to
>>>>>>>>>>> the pg_largeobject table.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Ah, so hibernate is in the mix. I wonder if that is causing some
>>>>>>>>>> challenges ?
>>>>>>>>>>

Re: very slow largeobject transfers through JDBC

2018-09-03 Thread Dave Cramer
On Mon, 3 Sep 2018 at 10:48, Mate Varga  wrote:

> That's 1690 msec (1.69 seconds, and that is how long it takes to fetch 20k
> (small-ish) rows without LOBs (LOBs are a few lines below on the screenshot)
>

that sounds high as well!

Something isn't adding up..


Dave Cramer

da...@postgresintl.com
www.postgresintl.com



>
> On Mon, Sep 3, 2018 at 4:40 PM Dave Cramer  wrote:
>
>> the one you have highlighted ~1.69ms
>>
>> Dave Cramer
>>
>> da...@postgresintl.com
>> www.postgresintl.com
>>
>>
>> On Mon, 3 Sep 2018 at 10:38, Mate Varga  wrote:
>>
>>> Which frame do you refer to?
>>>
>>> On Mon, Sep 3, 2018 at 3:57 PM Dave Cramer  wrote:
>>>
>>>> Not sure why reading from a socket is taking 1ms ?
>>>>
>>>> Dave Cramer
>>>>
>>>> da...@postgresintl.com
>>>> www.postgresintl.com
>>>>
>>>>
>>>> On Mon, 3 Sep 2018 at 09:39, Mate Varga  wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> https://imgur.com/a/ovsJPRv -- I've uploaded the profiling info (as
>>>>> an image, sorry). It seems this is a JDBC-level problem. I understand that
>>>>> the absolute timing is not meaningful at all because you don't know how
>>>>> large the resultset is, but I can tell that this is only a few thousands
>>>>> rows + few thousand largeobjects, each largeobject is around 1 kByte. (Yes
>>>>> I know this is not a proper use of LOBs -- it's a legacy db structure
>>>>> that's hard to change.)
>>>>>
>>>>> Thanks.
>>>>> Mate
>>>>>
>>>>> On Mon, Sep 3, 2018 at 11:52 AM Mate Varga  wrote:
>>>>>
>>>>>> Hey,
>>>>>>
>>>>>> we'll try to test this with pure JDBC versus hibernate. Thanks!
>>>>>>
>>>>>>
>>>>>> On Mon, Sep 3, 2018 at 11:48 AM Dave Cramer  wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, 3 Sep 2018 at 03:55, Mate Varga  wrote:
>>>>>>>
>>>>>>>> Basically there's a class with a byte[] field, the class is mapped
>>>>>>>> to table T and the byte field is annotated with @Lob so it goes to the
>>>>>>>> pg_largeobject table.
>>>>>>>>
>>>>>>>
>>>>>>> Ah, so hibernate is in the mix. I wonder if that is causing some
>>>>>>> challenges ?
>>>>>>>
>>>>>>>
>>>>>>>> The DB is on separate host but relatively close to the app, and I
>>>>>>>> can reproduce the problem locally as well. One interesting bit is that
>>>>>>>> turning off SSL between the app and PSQL speeds up things by at least 
>>>>>>>> 50%.
>>>>>>>>
>>>>>>>> Ah, one addition -- the binary objects are encrypted, so their
>>>>>>>> entropy is very high.
>>>>>>>>
>>>>>>>> Any chance you could write a simple non-hibernate test code to time
>>>>>>> the code ?
>>>>>>>
>>>>>>> Dave Cramer
>>>>>>>
>>>>>>> dave.cra...@crunchydata.ca
>>>>>>> www.crunchydata.ca
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> Mate
>>>>>>>>
>>>>>>>> On Sun, Sep 2, 2018 at 12:55 AM Dave Cramer 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, 31 Aug 2018 at 10:15, Mate Varga  wrote:
>>>>>>>>>
>>>>>>>>>> I see -- we could try that, though we're mostly using an ORM
>>>>>>>>>> (Hibernate) to do this. Thanks!
>>>>>>>>>>
>>>>>>>>>> On Fri, Aug 31, 2018 at 3:57 PM Dmitry Igrishin <
>>>>>>>>>> dmit...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> пт, 31 авг. 2018 г. в 16:35, Mate Varga :
>>>>>>>>>>> >
>>>>>>>>>>> > Hi,
>>>>>>>>>>> >
>>>>>>>>>>> > we're fetching binary data from pg_largeobject table. The data
>>>>>>>>>>> is not very large, but we ended up storing it there. If I'm copying 
>>>>>>>>>>> the
>>>>>>>>>>> data to a file from the psql console, then it takes X time (e.g. a 
>>>>>>>>>>> second),
>>>>>>>>>>> fetching it through the JDBC driver takes at least 10x more. We 
>>>>>>>>>>> don't see
>>>>>>>>>>> this difference between JDBC and 'native' performance for anything 
>>>>>>>>>>> except
>>>>>>>>>>> largeobjects (and bytea columns, for the record).
>>>>>>>>>>> >
>>>>>>>>>>> > Does anyone have any advice about whether this can be tuned or
>>>>>>>>>>> what the cause is?
>>>>>>>>>>> I don't know what a reason of that, but I think it's reasonable
>>>>>>>>>>> and
>>>>>>>>>>> quite simple to call lo_import()/lo_export() via JNI.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> Can't imagine that's any faster. The driver simply implements the
>>>>>>>>> protocol
>>>>>>>>>
>>>>>>>>> Do you have any code to share ? Any other information ?
>>>>>>>>>
>>>>>>>>> Is the JDBC connection significantly further away network wise ?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Dave Cramer
>>>>>>>>>
>>>>>>>>> da...@postgresintl.com
>>>>>>>>> www.postgresintl.com
>>>>>>>>>
>>>>>>>>


Re: very slow largeobject transfers through JDBC

2018-09-03 Thread Dave Cramer
the one you have highlighted ~1.69ms

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Mon, 3 Sep 2018 at 10:38, Mate Varga  wrote:

> Which frame do you refer to?
>
> On Mon, Sep 3, 2018 at 3:57 PM Dave Cramer  wrote:
>
>> Not sure why reading from a socket is taking 1ms ?
>>
>> Dave Cramer
>>
>> da...@postgresintl.com
>> www.postgresintl.com
>>
>>
>> On Mon, 3 Sep 2018 at 09:39, Mate Varga  wrote:
>>
>>> Hi,
>>>
>>> https://imgur.com/a/ovsJPRv -- I've uploaded the profiling info (as an
>>> image, sorry). It seems this is a JDBC-level problem. I understand that the
>>> absolute timing is not meaningful at all because you don't know how large
>>> the resultset is, but I can tell that this is only a few thousands rows +
>>> few thousand largeobjects, each largeobject is around 1 kByte. (Yes I know
>>> this is not a proper use of LOBs -- it's a legacy db structure that's hard
>>> to change.)
>>>
>>> Thanks.
>>> Mate
>>>
>>> On Mon, Sep 3, 2018 at 11:52 AM Mate Varga  wrote:
>>>
>>>> Hey,
>>>>
>>>> we'll try to test this with pure JDBC versus hibernate. Thanks!
>>>>
>>>>
>>>> On Mon, Sep 3, 2018 at 11:48 AM Dave Cramer  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, 3 Sep 2018 at 03:55, Mate Varga  wrote:
>>>>>
>>>>>> Basically there's a class with a byte[] field, the class is mapped to
>>>>>> table T and the byte field is annotated with @Lob so it goes to the
>>>>>> pg_largeobject table.
>>>>>>
>>>>>
>>>>> Ah, so hibernate is in the mix. I wonder if that is causing some
>>>>> challenges ?
>>>>>
>>>>>
>>>>>> The DB is on separate host but relatively close to the app, and I can
>>>>>> reproduce the problem locally as well. One interesting bit is that 
>>>>>> turning
>>>>>> of SSL between the app and PSQL speeds up things by at least 50%.
>>>>>>
>>>>>> Ah, one addition -- the binary objects are encrypted, so their
>>>>>> entropy is very high.
>>>>>>
>>>>>> Any chance you could write a simple non-hibernate test code to time
>>>>> the code ?
>>>>>
>>>>> Dave Cramer
>>>>>
>>>>> dave.cra...@crunchydata.ca
>>>>> www.crunchydata.ca
>>>>>
>>>>>
>>>>>
>>>>>> Mate
>>>>>>
>>>>>> On Sun, Sep 2, 2018 at 12:55 AM Dave Cramer  wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, 31 Aug 2018 at 10:15, Mate Varga  wrote:
>>>>>>>
>>>>>>>> I see -- we could try that, though we're mostly using an ORM
>>>>>>>> (Hibernate) to do this. Thanks!
>>>>>>>>
>>>>>>>> On Fri, Aug 31, 2018 at 3:57 PM Dmitry Igrishin 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> пт, 31 авг. 2018 г. в 16:35, Mate Varga :
>>>>>>>>> >
>>>>>>>>> > Hi,
>>>>>>>>> >
>>>>>>>>> > we're fetching binary data from pg_largeobject table. The data
>>>>>>>>> is not very large, but we ended up storing it there. If I'm copying 
>>>>>>>>> the
>>>>>>>>> data to a file from the psql console, then it takes X time (e.g. a 
>>>>>>>>> second),
>>>>>>>>> fetching it through the JDBC driver takes at least 10x more. We don't 
>>>>>>>>> see
>>>>>>>>> this difference between JDBC and 'native' performance for anything 
>>>>>>>>> except
>>>>>>>>> largeobjects (and bytea columns, for the record).
>>>>>>>>> >
>>>>>>>>> > Does anyone have any advice about whether this can be tuned or
>>>>>>>>> what the cause is?
>>>>>>>>> I don't know what a reason of that, but I think it's reasonable and
>>>>>>>>> quite simple to call lo_import()/lo_export() via JNI.
>>>>>>>>>
>>>>>>>>
>>>>>>> Can't imagine that's any faster. The driver simply implements the
>>>>>>> protocol
>>>>>>>
>>>>>>> Do you have any code to share ? Any other information ?
>>>>>>>
>>>>>>> Is the JDBC connection significantly further away network wise ?
>>>>>>>
>>>>>>>
>>>>>>> Dave Cramer
>>>>>>>
>>>>>>> da...@postgresintl.com
>>>>>>> www.postgresintl.com
>>>>>>>
>>>>>>


Re: very slow largeobject transfers through JDBC

2018-09-03 Thread Dave Cramer
Not sure why reading from a socket is taking 1ms ?

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Mon, 3 Sep 2018 at 09:39, Mate Varga  wrote:

> Hi,
>
> https://imgur.com/a/ovsJPRv -- I've uploaded the profiling info (as an
> image, sorry). It seems this is a JDBC-level problem. I understand that the
> absolute timing is not meaningful at all because you don't know how large
> the resultset is, but I can tell that this is only a few thousands rows +
> few thousand largeobjects, each largeobject is around 1 kByte. (Yes I know
> this is not a proper use of LOBs -- it's a legacy db structure that's hard
> to change.)
>
> Thanks.
> Mate
>
> On Mon, Sep 3, 2018 at 11:52 AM Mate Varga  wrote:
>
>> Hey,
>>
>> we'll try to test this with pure JDBC versus hibernate. Thanks!
>>
>>
>> On Mon, Sep 3, 2018 at 11:48 AM Dave Cramer  wrote:
>>
>>>
>>>
>>> On Mon, 3 Sep 2018 at 03:55, Mate Varga  wrote:
>>>
>>>> Basically there's a class with a byte[] field, the class is mapped to
>>>> table T and the byte field is annotated with @Lob so it goes to the
>>>> pg_largeobject table.
>>>>
>>>
>>> Ah, so hibernate is in the mix. I wonder if that is causing some
>>> challenges ?
>>>
>>>
>>>> The DB is on separate host but relatively close to the app, and I can
>>>> reproduce the problem locally as well. One interesting bit is that turning
>>>> off SSL between the app and PSQL speeds up things by at least 50%.
>>>>
>>>> Ah, one addition -- the binary objects are encrypted, so their entropy
>>>> is very high.
>>>>
>>>> Any chance you could write a simple non-hibernate test code to time the
>>> code ?
>>>
>>> Dave Cramer
>>>
>>> dave.cra...@crunchydata.ca
>>> www.crunchydata.ca
>>>
>>>
>>>
>>>> Mate
>>>>
>>>> On Sun, Sep 2, 2018 at 12:55 AM Dave Cramer  wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Fri, 31 Aug 2018 at 10:15, Mate Varga  wrote:
>>>>>
>>>>>> I see -- we could try that, though we're mostly using an ORM
>>>>>> (Hibernate) to do this. Thanks!
>>>>>>
>>>>>> On Fri, Aug 31, 2018 at 3:57 PM Dmitry Igrishin 
>>>>>> wrote:
>>>>>>
>>>>>>> пт, 31 авг. 2018 г. в 16:35, Mate Varga :
>>>>>>> >
>>>>>>> > Hi,
>>>>>>> >
>>>>>>> > we're fetching binary data from pg_largeobject table. The data is
>>>>>>> not very large, but we ended up storing it there. If I'm copying the 
>>>>>>> data
>>>>>>> to a file from the psql console, then it takes X time (e.g. a second),
>>>>>>> while fetching it through the JDBC driver takes at least 10x more. We don't
>>>>>>> see
>>>>>>> this difference between JDBC and 'native' performance for anything 
>>>>>>> except
>>>>>>> largeobjects (and bytea columns, for the record).
>>>>>>> >
>>>>>>> > Does anyone have any advice about whether this can be tuned or
>>>>>>> what the cause is?
>>>>>>> I don't know the reason for that, but I think it would be reasonable and
>>>>>>> quite simple to call lo_import()/lo_export() via JNI.
>>>>>>>
>>>>>>
>>>>> Can't imagine that's any faster. The driver simply implements the
>>>>> protocol.
>>>>>
>>>>> Do you have any code to share? Any other information?
>>>>>
>>>>> Is the JDBC connection significantly further away, network-wise?
>>>>>
>>>>>
>>>>> Dave Cramer
>>>>>
>>>>> da...@postgresintl.com
>>>>> www.postgresintl.com
>>>>>
>>>>


Re: very slow largeobject transfers through JDBC

2018-09-03 Thread Dave Cramer
On Mon, 3 Sep 2018 at 03:55, Mate Varga  wrote:

> Basically there's a class with a byte[] field, the class is mapped to
> table T and the byte field is annotated with @Lob so it goes to the
> pg_largeobject table.
>

Ah, so Hibernate is in the mix. I wonder if that is causing some challenges?


> The DB is on a separate host but relatively close to the app, and I can
> reproduce the problem locally as well. One interesting bit is that turning
> off SSL between the app and PSQL speeds things up by at least 50%.
>
> Ah, one addition -- the binary objects are encrypted, so their entropy is
> very high.
>
Any chance you could write a simple non-Hibernate test to time the code?

Dave Cramer

dave.cra...@crunchydata.ca
www.crunchydata.ca



> Mate
>
> On Sun, Sep 2, 2018 at 12:55 AM Dave Cramer  wrote:
>
>>
>>
>>
>> On Fri, 31 Aug 2018 at 10:15, Mate Varga  wrote:
>>
>>> I see -- we could try that, though we're mostly using an ORM (Hibernate)
>>> to do this. Thanks!
>>>
>>> On Fri, Aug 31, 2018 at 3:57 PM Dmitry Igrishin 
>>> wrote:
>>>
>>>> Fri, 31 Aug 2018 at 16:35, Mate Varga :
>>>> >
>>>> > Hi,
>>>> >
>>>> > we're fetching binary data from pg_largeobject table. The data is not
>>>> very large, but we ended up storing it there. If I'm copying the data to a
>>>> file from the psql console, then it takes X time (e.g. a second), while fetching
>>>> it through the JDBC driver takes at least 10x more. We don't see this
>>>> difference between JDBC and 'native' performance for anything except
>>>> largeobjects (and bytea columns, for the record).
>>>> >
>>>> > Does anyone have any advice about whether this can be tuned or what
>>>> the cause is?
>>>> I don't know the reason for that, but I think it would be reasonable and
>>>> quite simple to call lo_import()/lo_export() via JNI.
>>>>
>>>
>> Can't imagine that's any faster. The driver simply implements the protocol.
>>
>> Do you have any code to share? Any other information?
>>
>> Is the JDBC connection significantly further away, network-wise?
>>
>>
>> Dave Cramer
>>
>> da...@postgresintl.com
>> www.postgresintl.com
>>
>


Re: very slow largeobject transfers through JDBC

2018-09-01 Thread Dave Cramer
On Fri, 31 Aug 2018 at 10:15, Mate Varga  wrote:

> I see -- we could try that, though we're mostly using an ORM (Hibernate)
> to do this. Thanks!
>
> On Fri, Aug 31, 2018 at 3:57 PM Dmitry Igrishin  wrote:
>
>> Fri, 31 Aug 2018 at 16:35, Mate Varga :
>> >
>> > Hi,
>> >
>> > we're fetching binary data from pg_largeobject table. The data is not
>> very large, but we ended up storing it there. If I'm copying the data to a
>> file from the psql console, then it takes X time (e.g. a second), while fetching
>> it through the JDBC driver takes at least 10x more. We don't see this
>> difference between JDBC and 'native' performance for anything except
>> largeobjects (and bytea columns, for the record).
>> >
>> > Does anyone have any advice about whether this can be tuned or what the
>> cause is?
>> I don't know the reason for that, but I think it would be reasonable and
>> quite simple to call lo_import()/lo_export() via JNI.
>>
>
Can't imagine that's any faster. The driver simply implements the protocol.

Do you have any code to share? Any other information?

Is the JDBC connection significantly further away, network-wise?


Dave Cramer

da...@postgresintl.com
www.postgresintl.com
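
Since a plain-JDBC (non-Hibernate) timing test is what is being asked for in this thread, a minimal sketch might look as follows. It streams one large object through pgjdbc's LargeObjectManager API; the connection URL, credentials, and the OID 16400 are placeholders invented for illustration, and the PostgreSQL JDBC driver is assumed to be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoFetchTiming {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "secret")) {
            // The large object API only works inside a transaction.
            conn.setAutoCommit(false);

            LargeObjectManager lom =
                    conn.unwrap(PGConnection.class).getLargeObjectAPI();

            long oid = 16400L;  // placeholder OID of an existing large object
            long start = System.nanoTime();
            long total = 0;

            // Open read-only and stream the object in 8 KB chunks.
            LargeObject lo = lom.open(oid, LargeObjectManager.READ);
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = lo.read(buf, 0, buf.length)) > 0) {
                    total += n;
                }
            } finally {
                lo.close();
            }
            conn.commit();

            System.out.printf("read %d bytes in %.1f ms%n",
                    total, (System.nanoTime() - start) / 1e6);
        }
    }
}
```

Comparing the wall-clock time of this against `\lo_export` in psql, and running it with and without SSL enabled on the connection, would help separate driver overhead from Hibernate overhead and TLS overhead.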


Re: Stored procedures and out parameters

2018-08-30 Thread Dave Cramer
>
>
> In other words, being more like the SQL standard is probably good, but
> breaking compatibility is bad.  You've technically avoided a
> *backward* compatibility break by deciding that functions and
> procedures can work differently from each other, but that just moves
> the problem around.  Now instead of being unhappy that existing code
> is broken, people are unhappy that the new thing doesn't work like the
> existing thing.  That may be the lesser of evils, but it's still
> pretty evil.  People are not being unreasonable to want to call some
> code stored on the server without having to worry about whether that
> code is in a box labelled PROCEDURE or a box labelled FUNCTION.
>
>
Reading this from the (JDBC) driver's perspective, which is probably a
fairly popular one: we now have a standard that we can't really support.
Either the driver supports the new PROCEDURE through the {call} mechanism,
or it stays with the existing FUNCTIONS.
This puts the drivers in a no-win situation.

This probably should have been discussed in more detail before this
> got committed, but I guess that's water under the bridge at this
> point.  Nevertheless, I predict that this is going to be an ongoing
> source of pain for a long time to come.
>
Undoubtedly. But surely the opportunity to do something about this has not
passed, as this has not been officially released?

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


Assigning a sound alert to a specific email account

2018-08-24 Thread Tom Cramer
Hello,

I have three different email accounts on my iPhone 7's email app.  Two
of them are important enough to make me want to keep up with the
incoming email.  However, I'd like to know how I can assign a
different alert sound to each account so that I know which one is
getting the email.
At this point, all of them have the same sound.
I've gone to the settings menu and have gone to the sounds area where
all of the various tones are, but I have no idea how to assign a sound
to each separate account within that email app.
Any ideas?
Tom



Re: Stored procedures and out parameters

2018-08-22 Thread Dave Cramer
On Wed, 22 Aug 2018 at 12:58, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 22/08/2018 18:49, David G. Johnston wrote:
> > What others have done doesn't change the situation that has arisen for
> > PostgreSQL due to its implementation history.
>
> What others have done seems relevant, because the whole reason these
> questionable interfaces exist is to achieve compatibility across SQL
> implementations.  Otherwise you can just make a native SQL call directly.
>

It seems to me that if we don't make it possible to call a function or a
procedure using the same mechanism, the drivers will have to choose which
one to implement. That said, the path of least resistance (and least
regression) for the drivers would be not to implement calling procedures
through each driver's respective mechanism. Given the importance of this
work, it would be a shame not to make it easy to use.

I also agree with David that driver writers made the best of the situation
with functions, and that we are now asking the server to dual-purpose the
CALL command.

Is there a technical reason why this is not possible?


Dave Cramer

da...@postgresintl.com
www.postgresintl.com
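
To make the drivers' dilemma concrete, here is a hypothetical sketch of the two invocation styles a driver has to choose between. The connection details and the routine names my_func/my_proc are invented for illustration; the premise, per this thread, is that pgjdbc has traditionally rewritten the JDBC {call ...} escape into a SELECT, which reaches functions, while a true PROCEDURE needs the server's new CALL statement:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.sql.Types;

public class CallDilemma {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "secret")) {

            // JDBC escape syntax: historically rewritten by the driver into
            // a SELECT, so it reaches a FUNCTION but not a new PROCEDURE.
            try (CallableStatement cs =
                    conn.prepareCall("{? = call my_func(?)}")) {
                cs.registerOutParameter(1, Types.INTEGER);
                cs.setInt(2, 42);
                cs.execute();
                System.out.println("function returned " + cs.getInt(1));
            }

            // A PROCEDURE must instead be invoked with the server's CALL
            // statement, bypassing the escape syntax entirely.
            try (Statement st = conn.createStatement()) {
                st.execute("CALL my_proc(42)");
            }
        }
    }
}
```

As far as I know, pgjdbc later added an escapeSyntaxCallMode connection property so that {call ...} can be routed to CALL when appropriate, which is roughly the accommodation being asked for here.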


