Re: AuthorizedKeyCommand ldap

2017-12-11 Thread Paulm
On Mon, Dec 11, 2017 at 03:49:24PM -0700, Dan Becker wrote:
> I am reading a blog proposing to use the AuthorizedKeysCommand to hook into
> another authentication mechanism by calling a shell script:
> 
> https://blog.heckel.xyz/2015/05/04/openssh-authorizedkeyscommand-with-fingerprint/
> 
> Do I have a valid concern in thinking this might not be a prudent method of
> authentication?
> 

I don't know why he uses the term 'dynamic authorized_keys file'.  I
know what he means, but it's not a file.  (When people misuse basic
terms I immediately question their depth of understanding.)

As for your question - these are some thoughts, not intended to be
comprehensive:

As I see it, the key will be somewhere - in the authorized_keys file
in the user's home directory, in an LDAP directory, or perhaps
elsewhere.  Regardless of where it's kept, it needs to be secured
against tampering.  Is the local host more secure in that regard than
an LDAP dir?  That depends on the quality of the sysadmins who set up
the server and how the network infrastructure is designed.  The same
applies to any other mechanism for remotely storing public keys.

sshd(8) will complain if the permissions on the user's authorized_keys
file aren't correct, so it offers a safeguard against misconfiguration.
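
With StrictModes (the default), sshd refuses to use the file if it, or
the directories leading to it, are writable by group or others; the
usual fix is something like:

    $ chmod go-w ~ ~/.ssh ~/.ssh/authorized_keys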

The mechanism for retrieving the key from a remote server should use
SSL/TLS to validate the server's identity and protect the contents.

The utility invoked by sshd to fetch the key needs to be secured,
requiring special privileges to modify it.
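
For reference, the relevant sshd_config(5) directives look roughly like
this (the helper path and the dedicated unprivileged user are made up
for illustration; the helper itself must be owned by root and not be
writable by group or others):

    AuthorizedKeysCommand /usr/local/libexec/fetch-ssh-keys %u
    AuthorizedKeysCommandUser _sshkeyfetch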

Locally, the points of attack would be the tool itself, the user's
authorized_keys file, or the server's public key.  They're all files,
so file-permission restrictions would have to be circumvented.  If the
tool is not written in a type-safe language, it could introduce
additional vulnerabilities as well.

In larger environments, keeping track of authorized_keys files for
users and hosts, making sure they're (only) on the hosts they need to
be on, and keeping them accurate and up-to-date can be tedious and
error-prone, even with a config management system.  One could argue
that this approach allows for vulnerabilities that would not exist if
the keys were managed centrally.  Again, it depends on the quality of
the sysadmins' work.

The security requirements in an infrastructure are probably not the
same for all hosts, so you could use a hybrid strategy, using a local
authorized_keys file for hosts that need greater protection (e.g.,
database servers, firewalls, DMZ hosts) if that makes you more
comfortable.  (Generally speaking, I think too much uniformity can
sometimes be a weakness.)





Re: AuthorizedKeyCommand ldap

2017-12-11 Thread Alexander Hall

On 12/11/17 23:49, Dan Becker wrote:

I am reading a blog proposing to use the AuthorizedKeysCommand to hook into
another authentication mechanism by calling a shell script:

https://blog.heckel.xyz/2015/05/04/openssh-authorizedkeyscommand-with-fingerprint/

Do I have a valid concern in thinking this might not be a prudent method of
authentication?


AFAICT, he is using AuthorizedKeysCommand exactly as intended, generating
authorized_keys entries on demand.
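
As a minimal sketch of what such a command might look like (the LDAP
server, base DN, and the sshPublicKey attribute are only illustrative,
and a real script would also have to cope with ldapsearch's line
wrapping and base64-encoded values):

    #!/bin/sh
    # print authorized_keys lines for the user named in $1;
    # sshd reads them from stdout just like ~/.ssh/authorized_keys
    ldapsearch -x -LLL -H ldaps://ldap.example.com \
        -b "ou=people,dc=example,dc=com" "(uid=$1)" sshPublicKey |
        sed -n 's/^sshPublicKey: //p'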


What are you concerned about?

/Alexander



AuthorizedKeyCommand ldap

2017-12-11 Thread Dan Becker
I am reading a blog proposing to use the AuthorizedKeysCommand to hook into
another authentication mechanism by calling a shell script:

https://blog.heckel.xyz/2015/05/04/openssh-authorizedkeyscommand-with-fingerprint/

Do I have a valid concern in thinking this might not be a prudent method of
authentication?

-- 
--Dan


Re: Hellos from the Lands of Norway.

2017-12-11 Thread Üwe Cærlyn
Again, no Gnu-zealots. Others who see the drooltard behaviour, more 
reason to check out what I am doing.


Peaceful Salutations.



Re: New default setup for touchpads in X

2017-12-11 Thread Ulf Brosziewski
On 12/10/2017 09:10 PM, Lari Rasku wrote:
> Ulf Brosziewski wrote on 12/06/17 at 00:59:
>> please consider giving ws a try, and help
>> us by reporting problems if it doesn't work for you.
> 
> ws(4) seems to have much higher limiting friction for me when two-finger 
> scrolling.  In synaptics(4), it was enough to just tilt my fingers to get the 
> page moving, whereas ws(4) requires me to perceptibly move them.  When 
> tilting just a single finger on the touchpad, the limiting friction feels the
> same - but ws(4) moves the pointer much fewer pixels.  From your reply to 
> Christoph ("I hope you can observer a higher precision when navigating at low 
> speeds"), I gather this is intentional?  I guess I've just gotten too used to 
> the synaptics scaling, the ws behavior feels too sluggish to me.
> 

Hi, thanks for the comments.  The acceleration schemes and coordinate
filters are different in the ws+wsmouse setup, so it's inevitable that
it feels different.  Even if I could reproduce the synaptics
behaviour, I wouldn't want to.  By and large, that behaviour is usable
and acceptable, but I think it has flaws that lead to a lack of
precision, especially in short movements.

However, if it is only the base speed of the pointer that doesn't suit
you, there is a simple way to adjust it: change the value of
wsmouse.tp.scaling with wsconsctl(8).
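
For example (the exact variable name is whatever "wsconsctl -a" reports
on your snapshot; pick a value to taste):

    # wsconsctl mouse.tp.scaling=0.3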

Scrolling is a different thing.  The new driver actually has a
comparatively high threshold before it starts scrolling, and the scroll
speed is moderate.  Maybe I'll lower the threshold; that's not settled
yet.

> My machine is a Thinkpad E530.  Here's how the touchpad appears in dmesg:
> 
> pms0 at pckbc0 (aux slot)
> wsmouse0 at pms0 mux 0
> wsmouse1 at pms0 mux 0
> pms0: Synaptics clickpad, firmware 8.1, 0x1e2b1 0x940300
> 
> 



Re: FAQ's duplicating file systems, both methods fail to reproduce correctly

2017-12-11 Thread Steve Williams



On 11/12/2017 12:27 PM, Philip Guenther wrote:
> On Mon, Dec 11, 2017 at 9:16 AM, Otto Moerbeek wrote:
> 
> > On Mon, Dec 11, 2017 at 08:30:54AM -0700, Steve Williams wrote:
> > > cpio has always been my "go to" for file system duplication because it will
> > > re-create device nodes.
> > 
> > Both pax and tar do that as well.
> 
> Come on, you still remember using tar back in the 90's when it didn't
> support devices, paths were 100 bytes _total_, and they didn't include
> user/group names (only UID/GID), right?  Good times!
> 
> Philip

Yes, my habits were born of SCO Xenix, IBM's AIX for the RT PC, etc.
Old habits die hard!!!  lol


Cheers,
Steve W.


Re: FAQ's duplicating file systems, both methods fail to reproduce correctly

2017-12-11 Thread Philip Guenther
On Mon, Dec 11, 2017 at 9:16 AM, Otto Moerbeek  wrote:

> On Mon, Dec 11, 2017 at 08:30:54AM -0700, Steve Williams wrote:
> > cpio has always been my "go to" for file system duplication because it
> will
> > re-create device nodes.
>
> Both pax and tar do that as well.
>

Come on, you still remember using tar back in the 90's when it didn't
support devices, paths were 100 bytes _total_, and they didn't include
user/group names (only UID/GID), right?  Good times!

Philip


Re: scipy and gfortran in current

2017-12-11 Thread Stuart Henderson
On 2017-12-08, Pau  wrote:
> Hello:
>
> This is -current on a thinkpad x270 amd64. dmesg attached to the bottom.
>
> I am trying to get scipy to work with other modules on python2.7
>
> The problem is that since gfortran is missing, scipy seems to be using
> g77, and then:
>
> # pkg_add py-scipy
> quirks-2.396 signed on 2017-12-06T16:43:24Z
> py-scipy-0.16.1p1: ok
> # exit

That works here. libgfortran is in gcc-libs, which is a dependency of
lapack.

First I'd check all packages are up-to-date (pkg_add -u).

If it still fails, use "LD_DEBUG=1 python2.7" and try "import scipy" and
paste in the output. ports@ would be preferred over misc@.
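
For example (the log file is just a convenient place to capture the
output for pasting):

    # pkg_add -u
    $ LD_DEBUG=1 python2.7 -c 'import scipy' 2>&1 | tee /tmp/scipy-ld.txt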




Re: FAQ's duplicating file systems, both methods fail to reproduce correctly

2017-12-11 Thread Otto Moerbeek
On Mon, Dec 11, 2017 at 08:30:54AM -0700, Steve Williams wrote:

> Hi,
> 
> cpio has always been my "go to" for file system duplication because it will
> re-create device nodes.

Both pax and tar do that as well.
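
For example, run as root so ownership and device nodes come across
(/src and /dst stand for the source and destination mount points):

    # cd /src && pax -rw -pe . /dst

or the classic tar pipe:

    # (cd /src && tar cf - .) | (cd /dst && tar xpf -)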

-Otto

> 
> Cheers,
> Steve Williams
> 
> 
> On 10/12/2017 11:03 AM, webmas...@bennettconstruction.us wrote:
> > Forgive problems with this email.
> > I saw how my emails showed up on marc.info
> > Scary. This is just temporary.
> > 
> > OK. I've tried to use both methods and just don't
> > get true duplication.
> > 
> > tar
> > It can't work with file and directory names
> > that are OK in the filesystem, but too long for itself.
> > Quite a while back I lost a lot of unimportant files
> > and directories that had absolute paths too long.
> > Why is this happening with tar? Can this be fixed?
> > If not, I'd like to add a note about that to the FAQ.
> > 
> > dump
> > I had to move /usr/local to a bigger partition. growfs,
> > etc. I kept the /usr/local untouched and then dumped it
> > to the new partition, expecting a true duplication.
> > Nope.
> > It changed all of the program symlinks' permissions.
> > Why is dump doing this? Can this be fixed?
> > Otherwise, a note about this should be added to the FAQ
> > also.
> > 
> > Question:
> > Can dd be used to do what I did with dump or tar?
> > Smaller partition copied to a bigger partition.
> > 
> > I'm willing to try and help out, but I'm going through
> > both laptop and server hell at the moment.
> > 
> > Thanks,
> > Chris Bennett



Re: FAQ's duplicating file systems, both methods fail to reproduce correctly

2017-12-11 Thread Steve Williams

Hi,

cpio has always been my "go to" for file system duplication because it 
will re-create device nodes.
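
For example, something along these lines (run as root; /src and /dst
are placeholders, and -xdev keeps find from crossing mount points):

    # cd /src && find . -xdev | cpio -pdum /dst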


Cheers,
Steve Williams


On 10/12/2017 11:03 AM, webmas...@bennettconstruction.us wrote:

Forgive problems with this email.
I saw how my emails showed up on marc.info
Scary. This is just temporary.

OK. I've tried to use both methods and just don't
get true duplication.

tar
It can't work with file and directory names
that are OK in the filesystem, but too long for itself.
Quite a while back I lost a lot of unimportant files
and directories that had absolute paths too long.
Why is this happening with tar? Can this be fixed?
If not, I'd like to add a note about that to the FAQ.

dump
I had to move /usr/local to a bigger partition. growfs,
etc. I kept the /usr/local untouched and then dumped it
to the new partition, expecting a true duplication.
Nope.
It changed all of the program symlinks' permissions.
Why is dump doing this? Can this be fixed?
Otherwise, a note about this should be added to the FAQ
also.

Question:
Can dd be used to do what I did with dump or tar?
Smaller partition copied to a bigger partition.

I'm willing to try and help out, but I'm going through
both laptop and server hell at the moment.

Thanks,
Chris Bennett




Re: Hellos from the Lands of Norway.

2017-12-11 Thread Üwe Cærlyn

Den 12/9/2017 11:25, skrev Ywe Cærlyn:

Den 12/9/2017 05:21, skrev gwes:

On 12/07/17 07:31, Ywe Cærlyn wrote:
I saw AMDs "semi-custom" CPU email form and told them that I wanted 
a CPU, that is clockspeed oriented, not cores (might aswell be 
singlecore with high HZ), that could be using several instruction 
macros (combining two or three), for max virtual clockspeed, and an 
optimizing compiler for this. And wondered if an additional poweroff 
mode could be added to the binary stream of 1 0, so that bitwise i/o 
and cpu scheduling could be done.


If one could get the virtual clock speed up to 12 GHz, I think no
regular user would ever use more than a single core. And it'd be a
megahit.


Fixing all inefficiency hardware-wise. Philosophically as well.

Peaceful Salutations.


CPU clock speed != performance.

Factor in:
    main memory: latency, bus width, and access/cycle time.
    caches: levels, speeds, sizes, widths
    CPU access patterns interacting with the above
    clocks per instruction: average, best case, worst case
    cost or even feasibility of super high CPU clocks
    propagation time of signals across chips

A very fast CPU clock on a CPU with very low clocks-per-instruction,
a small die, and a huge memory matching its speed == the RISC ideal.

Even RISC with floating point hardware, for instance, often takes
many cycles.

Adding cores is often seen as the best way of increasing
>system< performance significantly at the lowest cost.

geoff steckel



RISC = reduced instruction set. This would be the other way, I guess...
:) And really keeping a whole bunch of compatibility. Not to speak of
even Windows running near hardware realtime.


Peaceful Salutations.

So what you could do is contact AMD as well through the semi-custom CPU
page at http://www.amd.com/en-us/solutions/semi-custom
and say you want this CPU. I have called it a "Grand" CPU, and have also
updated the banner on my YouTube page with it.


Check it out for a taste of potentially the next internet - Ultranet. 
https://www.youtube.com/channel/UCR3gmLVjHS5A702wo4bol_Q


Peaceful Salutations.



Re: FAQ's duplicating file systems, both methods fail to reproduce correctly

2017-12-11 Thread x9p

On Mon, December 11, 2017 4:28 am, Robert Paschedag wrote:
>>
>
> Is "rsync" not an option?
>

+1 for rsync. never had a problem.
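
A typical invocation for duplicating a filesystem with it (rsync from
packages; the trailing slashes matter, and --numeric-ids avoids uid/gid
remapping on the destination):

    # rsync -aH --numeric-ids --delete /src/ /dst/

-a preserves permissions, owners, times, symlinks and device nodes;
-H additionally preserves hard links.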

cheers.

--

x9p | PGP : 0x03B50AF5EA4C8D80 / 5135 92C1 AD36 5293 2BDF  DDCC 0DFA 74AE
1524 E7EE



Re: FAQ's duplicating file systems, both methods fail to reproduce correctly

2017-12-11 Thread vincent delft
On Sun, Dec 10, 2017 at 10:39 PM, Philip Guenther wrote:


> 'pax' and 'tar' are actually the same binary so they have the same
> limitation from the file formats that are supported, as well as any purely
> internal limitations.  "pax -rw" actually has file format limitations by
> design, so it doesn't automagically free you from those limitations.
>
>
Looking at the tar man page, I do not find comments regarding "length"
constraints :(.
In the pax man page, it is stated that with the default archive format
"ustar", filenames must be 100 characters max and path names must be
256 characters max.

But it's not clear which archive format the tar command is using. Is it
using the "old tar" format or the new "ustar" format as described in the
pax man page?
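
One quick way to check empirically is to write a throwaway archive and
look at it with file(1), which reports "POSIX tar archive" when the
ustar magic is present:

    $ tar cf /tmp/fmt-test.tar /etc/hosts
    $ file /tmp/fmt-test.tar

You can also force a format explicitly with pax, e.g. "pax -w -x ustar".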


Amazing for me to see that both are the same binary:

t420:~$ ls -ali /bin/tar
32778 -r-xr-xr-x  3 root  bin  433488 Oct  4 05:13 /bin/tar
t420:~$ ls -ali /bin/pax
32778 -r-xr-xr-x  3 root  bin  433488 Oct  4 05:13 /bin/pax


Re: FAQ's duplicating file systems, both methods fail to reproduce correctly

2017-12-11 Thread Robert Paschedag
On 10 December 2017 at 22:17:11 CET, vincent delft wrote:
>Hello,
>
>Did you try pax?
>something like: pax -rw -pe  
>
>I don't know if this is the best tool, but I'm using it to duplicate a 1TB
>drive (having lots of hard links) onto another one.
>I've done it a couple of times, and I have not seen issues.
>
>rgds
>

Is "rsync" not an option?

>
>On Sun, Dec 10, 2017 at 7:03 PM, 
>wrote:
>
>> Forgive problems with this email.
>> I saw how my emails showed up on marc.info
>> Scary. This is just temporary.
>>
>> OK. I've tried to use both methods and just don't
>> get true duplication.
>>
>> tar
>> It can't work with file and directory names
>> that are OK in the filesystem, but too long for itself.
>> Quite a while back I lost a lot of unimportant files
>> and directories that had absolute paths too long.
>> Why is this happening with tar? Can this be fixed?
>> If not, I'd like to add a note about that to the FAQ.
>>
>> dump
>> I had to move /usr/local to a bigger partition. growfs,
>> etc. I kept the /usr/local untouched and then dumped it
>> to the new partition, expecting a true duplication.
>> Nope.
>> It changed all of the program symlinks' permissions.
>> Why is dump doing this? Can this be fixed?
>> Otherwise, a note about this should be added to the FAQ
>> also.
>>
>> Question:
>> Can dd be used to do what I did with dump or tar?
>> Smaller partition copied to a bigger partition.
>>
>> I'm willing to try and help out, but I'm going through
>> both laptop and server hell at the moment.
>>
>> Thanks,
>> Chris Bennett
>>
>>