Re: Static Routes for VPN

2016-10-06 Thread Xen

Thomas Haller wrote on 06-10-2016 14:28:


In such a setup the actual address doesn't matter. 0.0.0.0 should work
just fine.


That's very interesting, thank you.
___
networkmanager-list mailing list
networkmanager-list@gnome.org
https://mail.gnome.org/mailman/listinfo/networkmanager-list


Re: Static Routes for VPN

2016-10-05 Thread Xen

Greg Oliver wrote on 05-10-2016 22:55:


The easiest is if your server pushes them with push.


You are absolutely correct - and I do that for our company.  Other
companies that we are partners with where our engineers log into their
networks do not like it when they push default routes on us though.
It is much more convenient to say 10.0.0.0/8 is all I want to
reach on this connection (leave my default route alone).  This in the
past was achievable with NM-gui - at some point, it has been stripped
out.  I did not go to F24 until recently and I honestly do not
remember the version I was on prior, but it worked using tun0/ppp0,
etc as gateways - and therefore applied routing the way I wanted.

Just looking for the proper way to do that today.


Just saying that a pushed route doesn't have to be a default route. This 
is a simple static route pushed to a client:


push "route 192.168.20.0 255.255.255.0"
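The client-side equivalent, for when you control the client's own config rather than the server, is a plain route directive (a sketch, using the 10.0.0.0/8 example from this thread):

```
route 10.0.0.0 255.0.0.0
```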

I must say I really have no clue about NM at present; but in the past I 
achieved it with a script.


Eventually I migrated to NM and at some point I must have deleted the 
scripts so I do not have them handy.


In the version I am using (the KDE version) I can simply edit static 
routes. Oh yes, I believe I just ran... wait, you want a static route to 
the gateway? So you don't have a VPN route yet? Either you can reach the 
gateway directly using your ordinary routes, or you are doing something 
strange. You need a route to the internal subnet, right? Ah, so you 
don't know the VPN you are connecting to in advance.


You can always ...baah, I may have thrown away my scripts. You can always 
put a script in /etc/NetworkManager/dispatcher.d that checks whether 
the interface ($1) is tun0 and the action ($2) is up (or vpn-up, I don't 
know if that works) and then feeds the address to ip route:


[ "$1" == "tun0" ] && { [ "$2" == "up" ] || [ "$2" == "vpn-up" ]; } && {
    ip=$(ip addr show "$1" | grep "inet " | awk '{print $2}')  # e.g. 10.8.0.6/24
    subnet=${ip%.*/*}.0/24   # assumes a /24 internal subnet
    ip route add "$subnet" dev "$1"
}
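The subnet derivation there is just shell parameter expansion; a quick sketch with a hardcoded sample address (10.8.0.6/24 is made up for illustration):

```shell
# As printed by `ip addr show` for a tun interface (sample value):
ip="10.8.0.6/24"
# Strip the shortest suffix matching ".*/*" (here ".6/24"), append ".0/24":
subnet=${ip%.*/*}.0/24
echo "$subnet"   # 10.8.0.0/24
```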

Or your other, better way of doing things ;-).

But I'm not sure what else. Regards.


Re: Static Routes for VPN

2016-10-05 Thread Xen

Greg Oliver wrote on 05-10-2016 22:10:

I cannot remember the last networkManager this worked in (GUI), but I
am trying to add static routes for VPN connections and it no longer
takes interfaces as gateways.  Not knowing the IP of the gateway until
connection, what is the proper procedure for adding static routes to
VPN interfaces so "Only resources on these networks" works?

Anyone know?

 It is a pain to manually add them with IP after connecting each time.
 I must be missing something - I cannot imagine the developers taking
out the ability to use tun0 or ppp0 as a gateway (although it seems
that way)  :-/


The easiest is if your server pushes them with push.


Re: [PATCH] Replace 'Ternary' with 'Tertiary'

2016-07-25 Thread Xen

Mathieu Trudel-Lapierre wrote on 26-07-2016 7:04:

Ternary tends to mean "having three parts", and has other significance 
in IT. Using "Tertiary" seems to be a better choice, even though they may
both be used interchangeably (afaict from dictionary.com anyway).


In the Dutch language apparently the equivalent of tertiary is used.

(If you google search for "primair secundair ter.." then it 
automatically fills into "tertiair" ;-)).


How great my knowledge!

Personally I would probably just use "Third", unless you want the idiom 
of "Tertiary" to become better known (it is so rarely used).


Regards.


Re: fallback DNS server

2016-07-21 Thread Xen

Beniamino Galvani wrote on 21-07-2016 10:10:

On Thu, Jul 21, 2016 at 09:54:03AM +0200, Nicolas Bock wrote:

How do I check that dnsmasq is using the server? NetworkManager started 
dnsmasq with

/usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts
--bind-interfaces --pid-file=/run/NetworkManager/dnsmasq.pid
--listen-address=127.0.0.1 --cache-size=400 --conf-file=/dev/null
--proxy-dnssec --enable-dbus=org.freedesktop.NetworkManager.dnsmasq
--conf-dir=/etc/NetworkManager/dnsmasq.d

and in /etc/NetworkManager/dnsmasq.d is only the file I dropped in there 
with the fallback DNS servers.


NetworkManager logs should contain messages from dnsmasq and there
should be lines as:

  dnsmasq[7295]: using nameserver 192.168.1.1#53
  dnsmasq[7295]: using nameserver 1.2.3.4#53

telling you which servers are in use.

If you want to be paranoid, you can temporarily add the "log-queries"
option in a configuration snippet in /etc/NetworkManager/dnsmasq.d and
restart NM. After that, NM logs will show all the queries sent by
dnsmasq to each server.

Beniamino



Is there a way to find out about the running configuration of DNSmasq?

The trouble I have is that with /etc/resolv.conf you can see clearly 
what the nameservers are, but when it is getting used by NM it is a 
mystery. Apart from seeing some logs.


dnsmasq really doesn't have options for retrieving the current 
configuration. Stuff like the OpenVPN "learn" script uses an external 
file followed by a -HUP to feed more entries into dnsmasq. That in itself 
is more usable for a user who doesn't know what is going on than some 
D-Bus command. I really don't know how to use D-Bus; its syntax is too 
complex.


But dnsmasq was not designed around D-Bus in that sense, nor around 
reporting state on the command line. It puts leases in a file and 
updates that; it doesn't tell you when you ask for it.


I wish it could just read the stuff from a file instead of using DBus, 
but yeah.


(Actually, of course, it can, just fine).

This is the OpenVPN learn script, or part of it:

case "$1" in

    add|update)
        /usr/bin/awk '
        # update/uncomment address|FQDN with new record, drop any duplicates
        $1 == "'"$IP"'" || $1 == "#'"$IP"'" || $2 == "'"$FQDN"'" \
            { if (!m) print "'"$IP"'\t'"$FQDN"'"; m=1; next }
        { print }
        END { if (!m) print "'"$IP"'\t'"$FQDN"'" }   # add new address to end
        ' "$HOSTS" > "$t" && cat "$t" > "$HOSTS"
    ;;

    delete)
        /usr/bin/awk '
        # no FQDN, comment out all matching addresses (should only be one)
        $1 == "'"$IP"'" { print "#" $0; next }
        { print }
        ' "$HOSTS" > "$t" && cat "$t" > "$HOSTS"
    ;;

esac

After which it just does:

# signal dnsmasq to reread hosts file
/bin/kill -HUP $(cat /var/run/dnsmasq.pid)
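To see what the add|update branch actually does, here it is run on a two-line sample hosts file; the IP, FQDN and the sample entries are made up for illustration:

```shell
IP="10.8.0.6"; FQDN="box.vpn"   # made-up sample values
result=$(printf '10.8.0.5\tother.vpn\n#10.8.0.6\tstale.vpn\n' | awk '
  # update/uncomment address|FQDN with the new record, drop duplicates
  $1 == "'"$IP"'" || $1 == "#'"$IP"'" || $2 == "'"$FQDN"'" \
    { if (!m) print "'"$IP"'\t'"$FQDN"'"; m=1; next }
  { print }
  END { if (!m) print "'"$IP"'\t'"$FQDN"'" }')
printf '%s\n' "$result"
```

The commented-out stale entry for 10.8.0.6 gets replaced with the fresh record, while the unrelated line passes through untouched.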


It's of course pleasant that you can use dnsmasq locally to have access 
to multiple sources (such as VPN and regular internet).



But I just don't know how to troubleshoot when something goes wrong 
because resolv.conf only shows 127.0.1.1 and nothing else.


Is there a command to get the list of nameservers actually getting used 
by the current system?
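Short of a real interface for that, one stopgap is scraping the "using nameserver" lines mentioned above out of the logs. A sketch, with sample log text hardcoded in place of real journal output:

```shell
# In practice the input would come from e.g. the NetworkManager journal;
# sample log text stands in for it here.
log='dnsmasq[7295]: using nameserver 192.168.1.1#53
dnsmasq[7295]: using nameserver 1.2.3.4#53'
servers=$(printf '%s\n' "$log" | sed -n 's/.*using nameserver \([^#]*\)#.*/\1/p')
printf '%s\n' "$servers"
```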



two dhcp-option (openvpn)

2016-07-19 Thread Xen
A user reported having two dhcp-option entries in his config, either 
pushed by the server or set locally, I don't know yet.


One of the dhcp-option servers was faulty: it was 10.8.0.1, but 
apparently there was no response from that server.

The order given was:

public internet DNS
private VPN DNS
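In OpenVPN server-config terms, the two options would look something like this (the addresses are placeholders, not the user's actual values):

```
push "dhcp-option DNS 203.0.113.53"   # public internet DNS (placeholder)
push "dhcp-option DNS 10.8.0.1"       # private VPN DNS
```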

In the log from NetworkManager only the second one shows up as being 
added to DNSmasq via dbus. As a consequence, since the local resolv.conf 
points to 127.0.1.1, his names do not resolve.


Using OpenVPN directly caused the connection to succeed as normal with 
two elements written to /etc/resolv.conf apparently. Using OpenVPN 
through NetworkManager caused the above described behaviour.


Is this correct behaviour, a bug, or a missing feature? I'm trying to 
have him change his VPN config, but I cannot influence what 
NetworkManager is going to do myself.


Version of NM is probably going to be around 1.2.0.

Regards.


Re: on nmcli improvements or needs of improvement

2016-05-19 Thread Xen

Francesco Giudici wrote on 19-05-2016 10:16:


Hi Xen,
  thanks for sharing your thoughts, we are always glad to receive advice
on how to improve NetworkManager and make it more user friendly.

There is already an ncurses interface to NM: nmtui (please, give it a 
try!).

We found from the survey that only a few know this tool but many of them
love it: this caught our attention and we are going to pay much more
attention to nmtui.

At the same time, we realized that nmcli should become easier to use 
and more intuitive.
We want to preserve backwards compatibility to avoid breaking scripts
which make use of it, but this would not stop us from adding new
ways to do configurations in a quicker and more intuitive manner.

Summing up, our commitment is to pay more attention to nmtui and make
nmcli more user-friendly.


Thank you man, that is very friendly.

At present I have little need for configuration, because I am using 
DHCP with static addresses configured at the DHCP server. That is 
partly because manual configuration at the client device has been too 
troublesome for me, and partly because this makes it easier to have 
local DNS (dnsmasq).


But if and when I ever have a laptop again that needs more advanced 
configuration (obviously wifi and roaming, but also VPN and some 
scripting for that), I will give everything a try again. I am glad NM 
is improving and I like the direction it is going. My feeling is that 
it has really been getting better, or at least that people have been 
"sobering up" or "wisening up" since version 1, and that things are 
indeed going in a very nice direction, if that is not too belligerent 
for me to say (hostile, obnoxious, self-serving).


But of course I will give nmtui a try. And I will say that the menu 
interface is very action oriented but does not provide much 
information or feedback. The program does not even provide what "nmcli 
c" (no parameters) provides.


So currently, for me, that means there is no "background awareness" in 
the program. You start it and have three options, but no information 
prior to selecting an option, and that is not the way it should be. 
Because of that, it seems a collection of screens that each provide a 
single action also provided (likely) by nmcli; you just don't need to 
guess the parameters now. A real program, however, would first show the 
NM version, current IP configuration, current list of devices, etc. It 
would show devices to begin with (or interfaces, or active connections) 
with the ability to see stats on them, to deactivate them, and so on. 
In essence, it would need to show something similar to the output of 
"ifconfig" without parameters.


So any such system would need an "awareness mode" where you see:

- current hostname
- current connections (active, not active)
- possibly current routes
- currently assigned IP addresses

And from there, you would be able to select a connection and perform 
actions on them. You could call this "context-based operations". The 
same way GIMP doesn't have that :p. If you right-click an object in 
GIMP, you get the global menu, completely defeating the purpose of a 
right click (context menu).


So there are basically two modes: first select action, then object to 
perform it on (procedural, in a sense), and first select object, then 
action to perform on it (object oriented, in a sense).


And object oriented is always more natural unless you are selecting 
"tools" first. But in real life if you want to cut some paper, you first 
select the paper, position it, then select the scissors, and then work 
on the paper. You do not first select the scissors, then seek the paper 
you need to cut. So object manipulation is still the core of what people 
do.


So what you'd get is:

-------------------------------------------------
|                                               |
| hostname.localdomain                 NM 1.0.6 |
|                                               |
| Currently active connections:                 |
|                                               |
|   Wired Connection 1                          |
|     192.168.1.5                    enp3s0     |
|     fe80::cc9a:c827:9ff5:26d8      enp3s0     |
|                                               |
|   Wireless Connection 1                       |
|     192.168.103.144                wlp0s1     |
|                                               |
| Currently inactive connections:               |
|                                               |
|   VPN 1                                       |
|     ---.---.---.---                tun0       |
-------------------------------------------------

on nmcli improvements or needs of improvement

2016-05-17 Thread Xen

Francesco Giudici wrote on 17-05-2016 23:39:

Hi,
  a couple of months ago we launched the survey on NetworkManager 
usage.

We want to share a short summary of the main outcomes:
https://people.freedesktop.org/~fgiudici/NMsurvey/

It is also available in pdf format if you prefer:
https://people.freedesktop.org/~fgiudici/NMsurvey/NM_survey_summary.pdf

We received 1318 responses, many with comments and advice that we'll
consider for future NetworkManager versions.


Amazing that you have done such a detailed survey. I'm sorry I wasn't 
there to contribute :p.




I feel the nmcli improvements that you reference in the document do not 
amount to much.


I'm not sure if you are looking to hear an opinion, but.

You state that your goal is to retain backwards compatibility.

But just as with Git (...), this primarily means you cannot improve or 
revamp the command (verb) structure, and I think the verbs are just 
very hard to use.


I feel, if anything, if you wanted to improve the current command based 
interface, you would have to do the following:


1. Create a menu based (nCurses) interface
2. It doesn't have to be graphical, it is just about the structure
3. Perfect the menu structure to the point where it actually works 
really well

4. Use the new menu to inform or inspire the current command structure.

Since your current command structure is hard to change without good 
reason, you would have to get some form of proof that something else can 
be better.


Now since the menu interface would at once be easy to make (without 
graphics) and no one is actually using it yet, it is a new thing, you 
are free to do with it whatever you want. A menu is something that 
displays common and possible options you can use, at all times.


That means you can use it to complement nmcli while you are developing 
it; in fact, you can execute its commands through the existing nmcli 
(which you are familiar with).


In this sense you can create something that will
a. never be used by scripts
b. be allowed to morph into perfection
c. reveal the perfect interface you really want nmcli to have.

If you find that your menu interface starts to differ from what nmcli 
offers (even though they might need to be the same), you will realize 
that the new thing you chose is better or more intuitive, and hence you 
might eventually find a solid reason to change the same thing in nmcli 
as well.


If you still need to.

But this time you will have proof of concept and you are not just 
'randomly' changing things based on your current ideas.


Just my thoughts.

Regards.


Re: on manual configuration / respecting user made changes

2016-04-06 Thread Xen
Dan Williams wrote on 05-04-16 19:47:
> On Tue, 2016-04-05 at 10:53 +0200, Xen wrote:
>> Question:
>>
>> Currently when NM manages a link/device (say eth0) any attempt to
>> manually configure it (using e.g. ifconfig) will quickly result in
>> this
>> action being undone by NM.
> 
> That should not be the case with NM 1.0 and later.  These versions make
> huge efforts to preserve additional IP configuration you add to the
> interface, both addresses and routes.

Right. It is in (K)ubuntu 16.04, with NM 1.0.4, the interface/device had
default configuration which is DHCP, I had been using DHCP for a while
but needed to test something.

With the remote DHCP server at that point not working, I needed manual
configuration quickly, so I did "ifconfig enp0s25 192.168.1.5" followed
by "ifup enp0s25".

I opened SSH to the host, but my connection got lost within about 20
seconds and my device ended up being deconfigured (or reconfigured for
dhcp, which didn't work).


> Anything you add over-and-above what NM does (even if NM does nothing)
> should be preserved.  I just tested with NM 1.0.10 and an ethernet
> interface that NM showed as "disconnected" (because NM had not
> activated a connection on it).  Simply running "ip addr add 1.2.3.4/24
> dev enp0s25" caused NM to move the device to "connected" state and
> report the added IP address.  NM reads the added configuration and
> won't change it.

That's good to hear. I have always been bugged by this. For example,
even on Debian 7 the behaviour was the same, and I had to take NM off of
the interface to be able to do any manual configuration.


>> Should there be any sense of not interfering with manual
>> configuration?
> 
> Yes, NM spends a lot of effort to do this.

Right.


> 
>> I mean in a general sense what you see happening is this:
>>
>> 1. If some device is configured over DHCP in NM, but DHCP might be
>> failing and there is nothing configured
>> 2. Manual configuration of the interface to a certain IP will work
>> for
>> about 20 seconds, after which NM will deconfigure the device.
> 
> What NM version are you using?  This works for me on NM 1.0 as
> described above.

Currently 1.0.4-0ubuntu10 (package name).

Nmcli version is 1.0.4.


on manual configuration / respecting user made changes

2016-04-05 Thread Xen
Question:

Currently when NM manages a link/device (say eth0) any attempt to
manually configure it (using e.g. ifconfig) will quickly result in this
action being undone by NM.

Should there be any sense of not interfering with manual configuration?

I mean in a general sense what you see happening is this:

1. If some device is configured over DHCP in NM, but DHCP might be
failing and there is nothing configured
2. Manual configuration of the interface to a certain IP will work for
about 20 seconds, after which NM will deconfigure the device.

Should there not be a model where:

- NM recognises manual configuration
- NM turns itself off until called for (via GUI, nmcli)
- NM reads new configuration data from the manual configuration when
required, e.g. displaying this information in its GUI

Or in a different light:

- simply being able to read config from /etc/network/interfaces as well
and use this as its source of configuration (I recognise OpenSUSE does
not use this file).
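For reference, a minimal static stanza in ifupdown's /etc/network/interfaces syntax, the file referred to above (addresses are examples):

```
auto enp0s25
iface enp0s25 inet static
    address 192.168.1.5
    netmask 255.255.255.0
    gateway 192.168.1.1
```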

Regards,

Bart.


Re: Homenet

2016-03-23 Thread Xen
Using a ULA (as long as it is stable) is really the same as ignoring 
the first 64 bits. Any number that never changes can be ignored in a 
computation, or ceases to be a variable in that sense (quite literally).


On 22-3-2016 at 16:17, Xen wrote:


Meaning, he wants his router to generate an ULA and use that for all 
hostname resolution within the network. That also seems to imply that 
any addressing from the outside (the mobile device moving across 
borders) is not going to work when it uses a hostname.




That would imply that addressing in the home should not even USE the 
first 64 bit of the address field. That in turn would imply that the 
network should only have one (external) prefix, or that addressing 
from the outside using that prefix should be uniquely resolvable at 
all times, which means that if different prefixes ARE used, the 
internal host part should still be unique.






Re: Homenet

2016-03-22 Thread Xen
The issue is getting a bit cloudy here because you are responding to 
such dispersed things in isolation that I don't really know what it's 
about at this point. But thank you for the discussion regardless. I'm 
intending to install Linux just so I can use Calligra, just so I can 
have a reasonably nice environment to write something in ;-).


I have found that the art of writing depends mostly on the environment 
you have. At least for me it does. Calligra is very crashy but maybe it 
will work. I tried PageMaker (7.0) on Windows, but what a hideously old 
and ugly program. And every other application is definitely illegal 
unless you pay a sum every month (I mean the Adobe applications). Anyone 
who is not a professional can never use these programs.


Writing is so hard these days. There are not many beautiful programs to 
write in. Sometimes I wonder if everyone should just write on the web 
since these environments are the most beautiful these days. I feel I've 
lost my time already since I was not willing to do what I needed to do, 
which was... nevermind.


On 22-3-2016 at 20:48, Tore Anderson wrote:

Correct. When looking up «somedevice.home», I'd want IPv6 ULAs to be
returned (assuming ULAs are enabled in the first place) as well as IPv4
RFC1918 addresses (again, assuming IPv4 is enabled).

Note that fixing this issue is just an implementation tweak in the
OpenWrt Homenet code. It's defintively not a fundamental flaw in any
protocol like HNCP or IPv6 itself.


Well of course this makes sense. But the broader issue was a type of 
addressing that crosses boundaries. If you use .home addresses 
(hostnames) in this sense there is no issue, because you are not 
intending to cross boundaries with that.

I'm sorry if I failed to understand that it was just about OpenWRT.


Huh? This does not reflect reality, or I misunderstand you completely.
 From my laptop, sitting in an Homenet topology using NM-1.2, I see:

$ ip address list dev wlo1 scope global
4: wlo1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group 
default qlen 1000
 link/ether b4:b6:76:17:2e:83 brd ff:ff:ff:ff:ff:ff
 inet 10.0.72.155/24 brd 10.0.72.255 scope global dynamic wlo1
valid_lft 26681sec preferred_lft 26681sec
 inet6 2a02:fe0:c420:57e1::c68/128 scope global dynamic
valid_lft 1005367sec preferred_lft 400567sec
 inet6 fd65:557c:6f31:2d:483f:37b7:98ea:1036/64 scope global noprefixroute 
dynamic
valid_lft 485sec preferred_lft 185sec
 inet6 2a02:fe0:c420:57e1:30f:919e:64d9:138f/64 scope global noprefixroute 
dynamic
valid_lft 7179sec preferred_lft 1779sec

There are two sets of stable internal addresses, IPv4 RFC1918 (from
10.0.0.0/8) and IPv6 ULA (fc00::/7).

In addition there are the ISP-assigned addresses from 2a02:fe0::/32,
which change from time to time.

These addresses do not «sit in the same 128 bit field», they are
completely independent from each other.


I didn't mean that. Thank you for your consideration.

There are people here that want every device (let's say every device on 
a certain network) to be globally addressable. The issue is that if such 
a device has the same (IP) address both inside and outside of some 
boundary -- I mean, reachable at that address both from within the 
subnet and from without it -- then that means that wherever the OTHER 
device is located, it will be able to find that node.


It will be able to find your device.

Because the address is global, and in a sense even, universal (that is 
its intent right).


I know they probably didn't think of exploring space, but some of them 
must have.


Now there are really two basic problems as I alluded to before.

1. The reason you want a global address for this kind of service, is 
that you have devices that can be anywhere, both inside and out of your 
own prefix or network.


The issue is OTHER devices being able to reach YOU.

YOU at this point are just sitting in your network, stationary.

Now first that implies that your service will stop working (let's forget 
about DNS now) if ever your global address fell away (ie. a disconnect 
from the internet). One of two things can happen, either your prefix 
remains stable and known so that your internal devices can continue to 
operate on it, or it falls away.


And really I'm incredulous that people cannot see the madness of IPv6. 
If you had designed it, would you have designed it the same way? If 
not, why do you support it so?


But let's get back, that was just a thought in between. It's a bit hard 
to do this with pictures though. I wonder if I have UMLdesigner 
installed here.


And I wonder if I can even make something like that with that. :(.



The picture should make it immediately clear that the internal addresses 
cannot or should not depend on either "out 1" or "out 2". If your 
internal addresses depend on your connection with out1, and you have 
your devices configured to use that address, at some point you'll get in 
trouble when yo



Re: Homenet

2016-03-22 Thread Xen


On 22-3-2016 at 11:10, Tim Coote wrote:

There are further complications arising from ISP disconnection or prefix 
renumbering. Homenet rfcs discuss the use of ULAs (similar in concept to rfc 
1918 addresses) to handle the startup situation of building a house before its 
connected to an ISP, but providing multiple /48 subnets that can be routed 
between so that the installed hosts can communicate.  I’d not expected prefixes 
to change often, but discussion with ISPs that are rolling out IPv6 show that 
this will be standard practice. Homenet covers this too, including automated 
dns updates.

An open issue to me is how the OS APIs would need to be changed to work with 
varying source routeing (each host will have several IPv6 addresses, with 
varying latency, bandwidth and monetary costs). I think that the use of per host 
certificates will also need some work to avoid spoofing in the face of multiple 
IP addresses, while not making it too hard for a consumer to replace a host 
(e.g. a room thermometer, or the mote monitoring a tyre on her car).

The current state of homenet has no security model, and the general experience 
of the development of security models in the computer industry has not been 
good.


The overriding impression is that they've come up with designs that 
surprised them (themselves) two miles down the road, when many of 
these issues should have been a concern in the first place.


If you get surprised by your own creations, you're not doing it fully 
consciously, you know.


It seems like essentials are just left as a worry or exercise for those 
who care after the main architecture has already been completed. "Oh, 
we'll solve that later".


Here is a guy with experience with ULAs: 
https://github.com/sbyx/ohybridproxy/issues/4


The person who responds says it's just a bug and it will get fixed. But 
then the guy says: I do not want any dependence on my ISP whatsoever for 
my homenet routing.


Meaning, he wants his router to generate an ULA and use that for all 
hostname resolution within the network. That also seems to imply that 
any addressing from the outside (the mobile device moving across 
borders) is not going to work when it uses a hostname.


If the designers had no overview of what they were doing and creating 
when they were creating it, it means nothing is well defined and a 
consistent security model will also not be possible. Of course, Homenet 
is just a best-practice way to deal with IPv6 in the home, right?


It's not the fault of Homenet, it is the fault of IPv6.

Homenet tries to solve these issues. If you start out with a certain 
concept and you can't change it and you have to build on that, you're 
just going to do the best you can and from the looks of it Homenet is 
not even doing such a bad job either.


It's just that local independence from an ISP prefix should be 
MANDATORY. Your prefix should give access to your HOME but not to the 
devices within it.


This is the flawed method of addressing I was discussing:

- the external address of your network, and
- the internal address of your sub-network

should be different and independent numbers.

But both are expected to sit in the same 128 bit field, which is clearly 
impossible unless you forget about the 64-bit prefix and use your own 64 
bits to create your own subnet prefixes as required.
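The 64/64 split being discussed can be made concrete with shell parameter expansion, using the ISP-assigned address quoted earlier in the thread; this naive split assumes the address is written with all eight groups and no :: compression:

```shell
addr="2a02:fe0:c420:57e1:30f:919e:64d9:138f"   # ISP-assigned address from above
prefix=${addr%:*:*:*:*}   # first four groups: the ISP-dependent routing prefix
iid=${addr#*:*:*:*:}      # last four groups: the interface identifier
echo "$prefix::/64"       # 2a02:fe0:c420:57e1::/64
echo "$iid"               # 30f:919e:64d9:138f
```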


That would imply that addressing in the home should not even USE the 
first 64 bit of the address field. That in turn would imply that the 
network should only have one (external) prefix, or that addressing from 
the outside using that prefix should be uniquely resolvable at all 
times, which means that if different prefixes ARE used, the internal 
host part should still be unique.


All complications.

Bart.



Re: Homenet

2016-03-21 Thread Xen

On 21-3-2016 at 13:43, Stuart D. Gathman wrote:

On Mon, 21 Mar 2016, Xen wrote:


"Addressable" is NOT the same thing as "exposed".  Any sane IPv6


There is a fundamental issue with this and that is that this is a rather
arbitrary "sanest method of configuration" rather than a topology 
feature.


So is NAT.


If you consider the network topology the same because it is behind the 
same router, fine. But you do realize that the new system has an 
entirely different model of encapsulation, right?


You still have subnets, but now a host can apparently be a member of a 
remote network -- I do not know how routing works when a host is part of 
multiple networks as a network is typically that thing that gets routed 
to, not a host. So you would think that being part of multiple networks 
is impossible unless you are speaking of virtual networks.


If my host is part of one subnet, it cannot be part of another one, 
unless for instance the link-layer network is the same and no routing is 
needed. You can have multiple subnets in the same house of course, on 
the same physical network. And I have used multiple IPs on a single 
interface repeatedly to do VPN related stuff.


And being able to be part of many possible VPNs at the same time is an 
interesting proposition. But they wouldn't (shouldn't) use public IPs.


So just assuming you have answers (or there are answers to that) that 
takes away the nerves of that, let me think more about that topology thing.


Of course if you have multiple routes (physical links) then there is no 
issue with being part of multiple networks.


But suppose that networks are still hierarchical. The network is still 
the same, but the router now passes public addressing through to 
internal IPs that are the same, and the host part is simply the "room" 
number of a building that can be addressed per room.


The network is the building address, the host is the room number.

Nothing special about that you might say.

In a general sense I am trying to enlighten you so that I won't have to 
do the work for it ;-).


For this :p.

You can also call this having a phone number for every room directly 
accessible from the outside without some interim login system or 
telephone operator in between.


I'll leave the rest to you.



There is no longer a "port to different port" mapping, now it is simply
"open or closed".


You can still use NAT.  Ip6 NAT works just fine and dandy - it's just no
longer *needed*.


I didn't know that. I had read or heard that there was no NAT in IPv6. 
Sorry.





When you cross boundaries, meanings can change. For example, I have a 
device internally running on port 22, but externally port 80. This is 
because I was located on a premises that blocked outgoing port 22 
connections. And basically all other connections except 80 and 443. 
There are also other ports open on that router but they are all 
accessible through the same IP and domain.


IP6 NAT still works for that.  But I just use IP6 darknet.


Isn't a darknet just intended to catch and sample unwanted traffic into 
a network?


I meant that I was not at home, and where I was located there was a 
corporate firewall disallowing me the use of 22 etc. So I just used port 
80 for everything. Eventually I just started using a commercial VPN 
though that had a tunnel open on 80 as well to connect to the VPN.



I can directly address all the hundreds of boxes I have to monitor.
Configuration is so much simpler.  Protocols like SIP that are broken
behind NAT Just Work with IP6, and without an external 3rd party. I can
have a separate IP for each logical web page.  (Yes, https is finally
being upgraded so name virtual host works - but IP6 is still better
deployed, and that's not saying much.)  I can actually talk to people
behind the same firewall via SIP.


Well, I guess. I would just solve it in a different way, I guess. There 
is in essence nothing wrong with directly addressable hosts (network 
plus internal host address). We have been using that for telephone 
numbers and basic physical (postal) addresses since like forever.


We have also had systems (businesses) where you couldn't, often quite by 
design.


The question is how much do you want to expose your stuff. In this case, 
of course, if you put those hundreds of boxes you need to monitor on a 
subnet that is accessible through a VPN, and the gateway (VPN gateway) 
of that subnet / network translates those addresses for you (would need 
cooperation of the hosts, so won't work) or simply makes that subnet 
routable to you, the issue is also solved, right?


Which of course gives trouble if that subnet you get routed to has the 
same configuration as some other subnet you need to access, and that is 
a valid concern. What is needed is a unique form of addressing, given 
several networks that might use the same, and I gues

Re: Homenet

2016-03-21 Thread Xen

Op 21-3-2016 om 11:23 schreef Tim Coote:
If Bart’s view is typical, I guess that the answer to my original 
question is ‘No’.  That’s a shame as further separation of the various 
Linux models (embedded, networking, phone, tablet, desktop, server, 
cloud), is, imo, unhelpful.  But at least it gives me a steer.


I was not saying at all that my views are typical, just to get that 
straight.


But I see no end to the complexity that will arise. No end to it. At 
all. I see no elegance in it either.


I mean, if you can keep this as simple as possible, and as elegant as 
possible, kudos to you. You are working with an inelegant system (IPv6 
in its entirety) but you are maintaining sense and sanity in spite of 
it, or in the face of it. That would be outstanding, you know.


Typically though, my experience with Linux especially in later days 
(( ( and my experience with the basics of NM as well ) )) is that many 
systems are flawed even from a basic user experience design perspective. 
The simple truth, from MY perspective, is that *most* commercial 
applications designed for end users are typically rather user friendly. 
Take your average computer game: if it's not easy to use, people will 
not enjoy using it, and enjoyment is like the essence of a game. Your 
average website (that services millions of people) has a better designed 
interface (in many cases) than your typical desktop client or computer 
program application these days. Web design, in that sense, is where the 
money is. I see rich text editors in websites that are better designed 
(although more buggy) than any other text editor I know of (on the desktop).


That doesn't mean any application on Linux needs to be user-unfriendly. 
That depends entirely on the developer or designer, you might say. But 
at the same time many "computer technical" websites like the most 
important ones (on the ietf.org drafts) are still stuck in the dark ages.


You can go on talking about the next evolution, but look at that web 
page. It is hideous. That is one of the most ugly and in some ways, 
because of that, non-functional, web pages still in existence for any 
large or important body.


https://tools.ietf.org/wg/homenet/

Even someone doing nothing but change the margins could improve that 
page a thousandfold.


Now if the same attitude is used to design even command line interfaces 
to programs (and I guess you can bet it is) you get command line 
interfaces to programs that are hideous as well. Any lack of an interest 
in providing good interfaces or designing something beautiful will 
always lead to the designing of interfaces that are hideous.


And they will be hard to use, you will forget how to use it, you use a 
manual to remember the commands, etc. etc. etc. You can see with IPv6 
that there has been no interest to make this thing easy to understand.


Concerns constantly appear to have come up that were not even 
ANTICIPATED. That means that the system is not elegant and a certain 
form of patch work is constantly needed to make it function well again 
or to fill up the holes.


The system is not elegant. Period.

(( ( I consider NM itself (nmcli) hard to use and there was a bug where 
my wifi device regularly stopped working for no reason and it was said 
to be rather well known. Until I integrated some setup scripts into NM 
and migrated my openvpn configuration into it, I didn't even want to use 
it. I only needed it for the roaming features. After that it was still 
less elegant than using openvpn directly. Eventually connecting the VPN 
was easier than ever before because of the KDE icon for it and the 
feedback I got from it. The lock icon on the wifi connection - that made 
a great deal of difference for me. Feedback. I also took pains to 
upgrade NM to a later version (this was on openSUSE 13.2) because of the 
advances I needed, such as using cipher: none, as an option. I enjoy 
seeing the advances in the version number, in a manner of speaking, 
because I think they are doing good work) )).


If you can create an interface to the whole system that is very elegant, 
I'm all for it, you know.


I am speaking out of my own interests as well, but I do not consider it 
selfish.


Limiting the scope of what users need to do and thinking of a setup that 
is logical, sensible, and some good default, not needing any other 
choices because it is just a best practice kind of thing, even in the 
context of having a lot of freedom to do it any way you want --- a 
reasonable default setup that users do not need to worry about. I mean, 
that's how you can bring some calm into the thing I guess.


My personal interest is not getting headaches in doing the things I need 
to do. My personal interest is in having the power to shape my personal 
life. If everything is beyond me to know anything about, I lose a lot of 
power.


I am fighting to maintain sanity and dignity in the face of it all. I am 
fighting for this thing (everything I suppose) to be kept 
und


Re: Homenet

2016-03-21 Thread Xen


Op 21-3-2016 om 14:05 schreef Thomas Haller:
Note that there are also private stable addresses: 
https://tools.ietf.org/html/rfc7217 
https://blogs.gnome.org/lkundrak/2015/12/03/networkmanager-and-privacy-in-the-ipv6-internet/ 
Thomas 


Hi thanks.

That part seems to actually be a good feature, thanks. What it comes 
down to in my mind from what it seems is that every host gets a 
non-random yet unpredictable network-based address that makes the host 
"unfindable" across networks (and also unidentifiable) (unless it 
identifies to different hosts on different networks as the same 
'person') (using a different means, more application level).


The host then uses that to communicate to all other hosts but only on 
the same network.


The other hosts (or applicatons) then use it to communicate with it 
until the address is lost because the host is getting renewed, 
reinstalled, the key is lost, or whatever, which is not really any 
biggy, but the next time it will use a different stable address. You 
would say (if the device is not an appliance but something more unstable 
like a computer) that the address is stable as long as the current OS is 
running.


Well that seems fine provided everything else is already accepted. Thanks.
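A simplified sketch of the RFC 7217 idea described above (this is not the exact algorithm from the RFC, and the key/interface names are made up): the interface identifier is a hash over the network prefix, the interface name, and a host-local secret. That makes the address stable per (host, network), different per network, and unpredictable to anyone without the secret.

```python
import hashlib
import ipaddress

def stable_iid(prefix: str, ifname: str, secret: bytes) -> int:
    """Hash (prefix, interface, secret) into a 64-bit interface identifier."""
    net = ipaddress.IPv6Network(prefix)
    digest = hashlib.sha256(
        net.network_address.packed[:8] + ifname.encode() + secret
    ).digest()
    return int.from_bytes(digest[:8], "big")

def stable_addr(prefix: str, ifname: str, secret: bytes) -> ipaddress.IPv6Address:
    """Combine the /64 prefix with the stable identifier."""
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | stable_iid(prefix, ifname, secret))

key = b"host-local secret"  # hypothetical secret, generated once per host
a = stable_addr("2001:db8:aaaa::/64", "eth0", key)
b = stable_addr("2001:db8:bbbb::/64", "eth0", key)
print(a, b)  # same host, two networks: two different but stable addresses
```

Regenerating the address with the same inputs always yields the same result, which is the "stable as long as the key survives" property the mail describes.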


Re: Homenet

2016-03-21 Thread Xen

Op 21-3-2016 om 00:29 schreef Stuart Gathman:

On 03/20/2016 11:36 AM, Xen wrote:
By the way, if UPnP was ever a problem in terms of NAT security, 
obviously the problem is much worse in IPv6, since there is not even 
any NAT and all devices are always exposed.


"Addressable" is NOT the same thing as "exposed".  Any sane IPv6 
router for the home (every one I have have seen so far) blocks all 
incoming connections by default - just like NAT effectively does. 
There is no operational difference for the clueless home owner. With a 
consumer firewall, selected ports can be "forwarded" through IP4 NAT 
to a selected internal IP.  Similarly, selected ports can be unblocked 
for selected internal objects with an IP6 firewall.


There is a fundamental issue with this and that is that this is a rather 
arbitrary "sanest method of configuration" rather than a topology 
feature. What you get is like a multi-to-multi mapping (one on one, so 
to speak); there is just a filter in between that will block incoming 
connections. That means that the filter will record and maintain 
outgoing connections like a current NAT firewall does. There is no 
advantage to this over NAT other than the fact that you can use the same 
port if you wanted on multiple devices.


There is no longer a "port to different port" mapping, now it is simply 
"open or closed".


The port to port mapping is not really a fundamental feature, typically 
the ports for internal devices are not meaningful. There is a bit of an 
advantage in not configuring anything but you also lose the feature of 
being able to map anything in the first place. If those ports on the 
internal network are meaningful, they don't have to be meaningful on the 
outside. When you cross boundaries, meanings can change. For example, I 
have a device internally running on port 22, but externally port 80. 
This is because I was located on a premises that blocked outgoing port 
22 connections. And basically all other connections except 80 and 443. 
There are also other ports open on that router but they are all 
accessible through the same IP and domain.
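The port-to-port mapping described above can be sketched as a toy translation table (this is purely illustrative, not any router's actual API): a NAT forward maps one external port to an internal (host, port) pair, so the meaning of a port can change at the boundary.

```python
# Hypothetical forwarding table: external port 80 reaches internal SSH on 22.
forwards = {80: ("192.168.1.10", 22)}

def translate(ext_port: int):
    """Return the internal (host, port) destination, or None if closed."""
    return forwards.get(ext_port)

print(translate(80))   # ('192.168.1.10', 22)
print(translate(443))  # None: no mapping, so the port is effectively closed
```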


Now tell me, what is the advantage of IPv6? I don't see any.

I'm sure the mapping is a feature that is on IPv6 routers as well. But 
are you telling me that I am going to need a different domain to access 
every local device (because each uses a different public IP address)?


Sure, the router will have this feature. So what is the advantage, 
then? I'm still using one IP to access all services.


I'm sure certain people have experienced conflicts because for instance 
certain games required certain incoming ports (doesn't really happen, 
but okay. Think a file-sharing program, that may require some fixed 
ports). Current torrent clients are able to choose any port they want. 
Maybe it's a bit of a configuration hassle if you want fixed ports.


Nothing insurmountable and actually something that helps you understand 
your network.


What advantage do I have if I have addressable (but, per the 
configuration of the firewall, unexposed) IP addresses for each internal 
device, including possibly the router?


Can you tell me that?

The only semi-valid criticism is that with IP4 NAT, the effective 48 
bit (IP+ random 16 bit port) public address is periodically recycled 
to point to different internal objects.  With IP6 sans NAT, the 
128-bit (Subnet + random 64 bit host ip) public address, while random 
and periodically changing like IP4 NAT, is not recycled.  A given IP 
only ever points to a single internal object.  This could potentially 
reveal more information to someone logging IP+port on the outside.  
But it is not yet clear what exactly it would gain them.


You know, sometimes people say "why do you want it?" and often times 
when people say these things, it's just because people want it and there 
is no other reason.


An example was a computer game that does not allow direct trade between 
players in an online world. Most of these online worlds do allow direct 
trade between individuals in a way of exchanging items in a relatively 
safe way. That particular game did not have it. When some people started 
arguing for inclusion of this feature, the wannabe employees of this 
company started defending the status quo by saying "why do you need 
it?". "Why do you want it?". And it was completely obvious and is 
completely obvious to any sane normal person out there, even in the real 
world, that being able to give stuff to another player, is something 
that is meaningful and helpful. To anyone not affiliated with the status 
quo of that game, this would normally not be a question. Of course you 
want to trade. Of course you want to be able to hand someone something. 
You can do so in real life, why would you not want to do something like 
that in a virtual world.


So in that case the questi

Re: Homenet

2016-03-20 Thread Xen
By the way, if UPnP was ever a problem in terms of NAT security, 
obviously the problem is much worse in IPv6, since there is not even any 
NAT and all devices are always exposed.


Now even though you are living together in a "house" all "residents" now 
need to solve these issues on their own.


This makes it almost impossible to run any kind of home server, because 
the default setup is going to be: access internet services, don't worry 
about local network access or even if you do have it, accept that you 
will be at risk of getting hacked or exposed constantly.


The exposure might be dealt with by the protocols (if they work) but 
there is a high chance they won't, because the model at the beginning is 
that everything is exposed.


If your premise is complete exposure, if that is what you intend and 
want, then you won't be able to achieve meaningful protection in any way.


If you banish all clothes and then try to find a way for people to not 
see you naked, that won't work.


"So we have no clothes anymore, how can we find a way for people to not 
be cold and to not be seen naked? Hmmm difficult".


You know, maybe don't banish the clothes.

Maybe don't banish NAT.

Maybe don't banish localized, small, understandable networks.

Maybe don't banish the boundary between the local and the remote. Maybe 
not do away with membranes.


Nature has designed life around membranes, all cells have membranes. 
"Cell membranes protect and organize cells. All cells have an outer 
plasma membrane that regulates not only what enters the cell, but also 
how much of any given substance comes in."


The basic topology for IPv6 is so deeply misunderstood and misdesigned 
from my perspective


That it tries to create a membrane based purely on subnet masking.

And that's not a safe thing because a misconfigured system automatically 
gives access. You want all internal addresses to be in the same pool (as 
the router accepts a list or segment of addresses from the ISP). The 
router is supposed to distribute those addresses across clients while 
allowing them to know and find each other, ie. by giving them the 
information on the subnet masks they are supposed to use. The subnet 
mask is everyone's potential and right to not care about any fixed 
boundaries between the local and the remote (wide) network.
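The "membrane based purely on subnet masking" point can be made concrete (a hedged illustration using Python's standard `ipaddress` module, with example addresses): in IPv6, what separates local from remote is essentially the scope of the address itself, so a misclassified address is the difference between "never leaves the link" and "globally routable".

```python
import ipaddress

def scope(addr: str) -> str:
    """Classify an IPv6 address by the boundary it implies."""
    a = ipaddress.IPv6Address(addr)
    if a.is_link_local:
        return "link-local (never routed off the link)"
    if a.is_private:
        return "ULA/private (local only by convention)"
    if a.is_global:
        return "global (reachability depends only on the firewall)"
    return "other"

print(scope("fe80::1"))               # link-local
print(scope("fd00::1"))               # unique local address
print(scope("2001:4860:4860::8888"))  # global
```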


Maybe you can call it empowerment (everyone has a public address). But 
it is also a huge loss of control. It's a loss of power. Networks that 
can't be configured by any individual person. Inability to shield 
anything from anyone in a real sense.


Local clients (ie. Linux and Windows computers and Mac computers and 
Android phones) now requiring the intelligence to safely distinguish 
between local and remote services, a problem that was never even solved 
in IPv4, let alone that IPv6 even stands the slightest chance of 
meaningfully solving it.


All of these devices needing to perfectly cooperate in order to find and 
know the local network. Particularly if there is a segmentation between 
"secure" and "insecure" or between "guest" and "resident". And what if 
you want two subnet masks for a different purpose? A managed switch has 
means to physically separate (in a way) two different nets on the same 
cables. You may be wanting to run a network of servers in your home that 
is separate from your local home network. You lose pretty much all 
control in being able to do this effectively.


Even if IPv6 gives some freedom or liberation, it is mostly due to the 
router allowing this. Everyone his own IP address. Everyone his own 
front door. People love that, in a way. But it also means you no longer 
have a family.







Op 20-3-2016 om 16:05 schreef Xen:

Op 20-3-2016 om 11:56 schreef Tim Coote:
Is it intended that NetworkManager will conform to /support / exploit 
the Homenet network name and address assignment and routeing 
protocols (http://bit.ly/1LyAE7H), either to provide end to end 
connectivity or to provide a monitoring layer so that the actual 
state of the network topologies can be understood?


Home or small networks seem to be getting quite complex quickly and 
way beyond what consumers can be expected to 
understand/configure/troubleshoot/optimise, with one or two ISP’s per 
person (via mobile phone + wifi connectivity) + an ISP for the 
premises; and wildly differing network segment performance 
characteristics and requirements (e.g. media streaming vs home 
automation).


I can't answer your question but in my mind there are only 2 or 3 
issues mostly:


- a mobile phone that connects through home wifi, external wifi, or 
mobile/3G/4G connectivity, may access predominantly internet (cloud) 
services and not have access to anything in the home network by 
default, and there is not really any good platform to enable this.


(all that it needs is 

Re: Homenet

2016-03-20 Thread Xen
And you can also not even configure a local network anymore without 
access to the internet.




It's like those internet streaming devices that can also access the 
local network that people hate. There are internet radio's that will 
only work if they can contact the internet. There are alarm clocks that 
stop working if your internet access goes down.


This system is not good, is all I'm saying.

I hope this informs some of the decisions being made. Good luck.

Bye. Bart.


Op 20-3-2016 om 16:36 schreef Xen:
By the way, if UPnP was ever a problem in terms of NAT security, 
obviously the problem is much worse in IPv6, since there is not even 
any NAT and all devices are always exposed.


Now even though you are living together in a "house" all "residents" 
now need to solve these issues on their own.


This makes it almost impossible to run any kind of home server, 
because the default setup is going to be: access internet services, 
don't worry about local network access or even if you do have it, 
accept that you will be at risk of getting hacked or exposed constantly.


The exposure might be dealt with by the protocols (if they work) but 
there is a high chance they won't, because the model at the beginning 
is that everything is exposed.


If your premise is complete exposure, if that is what you intend and 
want, then you won't be able to achieve meaningful protection in any way.


If you banish all clothes and then try to find a way for people to not 
see you naked, that won't work.


"So we have no clothes anymore, how can we find a way for people to 
not be cold and to not be seen naked? Hmmm difficult".


You know, maybe don't banish the clothes.

Maybe don't banish NAT.

Maybe don't banish localized, small, understandable networks.

Maybe don't banish the boundary between the local and the remote. 
Maybe not do away with membranes.


Nature has designed life around membranes, all cells have membranes. 
"Cell membranes protect and organize cells. All cells have an outer 
plasma membrane that regulates not only what enters the cell, but also 
how much of any given substance comes in."


The basic topology for IPv6 is so deeply misunderstood and misdesigned 
from my perspective


That it tries to create a membrane based purely on subnet masking.

And that's not a safe thing because a misconfigured system 
automatically gives access. You want all internal addresses to be in 
the same pool (as the router accepts a list or segment of addresses 
from the ISP). The router is supposed to distribute those addresses 
across clients while allowing them to know and find each other, ie. by 
giving them the information on the subnet masks they are supposed to 
use. The subnet mask is everyone's potential and right to not care 
about any fixed boundaries between the local and the remote (wide) 
network.


Maybe you can call it empowerment (everyone has a public address). But 
it is also a huge loss of control. It's a loss of power. Networks that 
can't be configured by any individual person. Inability to shield 
anything from anyone in a real sense.


Local clients (ie. Linux and Windows computers and Mac computers and 
Android phones) now requiring the intelligence to safely distinguish 
between local and remote services, a problem that was never even 
solved in IPv4, let alone that IPv6 even stands the slightest chance 
of meaningfully solving it.


All of these devices needing to perfectly cooperate in order to find 
and know the local network. Particularly if there is a segmentation 
between "secure" and "insecure" or between "guest" and "resident". And 
what if you want two subnet masks for a different purpose? A managed 
switch has means to physically separate (in a way) two different nets 
on the same cables. You may be wanting to run a network of servers in 
your home that is separate from your local home network. You lose 
pretty much all control in being able to do this effectively.


Even if IPv6 gives some freedom or liberation, it is mostly due to the 
router allowing this. Everyone his own IP address. Everyone his own 
front door. People love that, in a way. But it also means you no 
longer have a family.








Op 20-3-2016 om 16:05 schreef Xen:

Op 20-3-2016 om 11:56 schreef Tim Coote:
Is it intended that NetworkManager will conform to /support / 
exploit the Homenet network name and address assignment and routeing 
protocols (http://bit.ly/1LyAE7H), either to provide end to end 
connectivity or to provide a monitoring layer so that the actual 
state of the network topologies can be understood?


Home or small networks seem to be getting quite complex quickly and 
way beyond what consumers can be expected to 
understand/configure/troubleshoot/optimise, with one or two ISP’s 
per person (via mobile phone + wifi connectivity) + an ISP f

Re: Homenet

2016-03-20 Thread Xen
By the way, if UPnP was ever a problem in terms of NAT security, 
obviously the problem is much worse in IPv6, since there is not even any 
NAT and all devices are always exposed.


Now even though you are living together in a "house" all "residents" now 
need to solve these issues on their own.


This makes it almost impossible to run any kind of home server, because 
the default setup is going to be: access internet services, don't worry 
about local network access or even if you do have it, accept that you 
will be at risk of getting hacked or exposed constantly.


The exposure might be dealt with by the protocols (if they work) but 
there is a high chance they won't, because the model at the beginning is 
that everything is exposed.


If your premise is complete exposure, if that is what you intend and 
want, then you won't be able to achieve meaningful protection in any way.


If you banish all clothes and then try to find a way for people to not 
see you naked, that won't work.


"So we have no clothes anymore, how can we find a way for people to not 
be cold and to not be seen naked? Hmmm difficult".


You know, maybe don't banish the clothes.

Maybe don't banish NAT.

Maybe don't banish localized, small, understandable networks.

Maybe don't banish the boundary between the local and the remote. Maybe 
not do away with membranes.


Nature has designed life around membranes, all cells have membranes. 
"Cell membranes protect and organize cells. All cells have an outer 
plasma membrane that regulates not only what enters the cell, but also 
how much of any given substance comes in."


The basic topology for IPv6 is so deeply misunderstood and misdesigned 
from my perspective


That it tries to create a membrane based purely on subnet masking.

And that's not a safe thing because a misconfigured system automatically 
gives access. You want all internal addresses to be in the same pool (as 
the router accepts a list or segment of adddress from the ISP). The 
router is supposed to distribute those addresses across clients while 
allowing them to know and find each other, ie. by giving them the 
information on the subnet masks they are supposed to use. The subnet 
mask is everyone's potential and right to not care about any fixed 
boundaries between the local and the remote (wide) network.


Maybe you can call it empowerment (everyone has a public address). But 
it is also a huge loss of control. It's a loss of power. Networks that 
can't be configured by any individual person. Inability to shield 
anything from anyone in a real sense.


Local clients (i.e. Linux, Windows and Mac computers and Android 
phones) now require the intelligence to safely distinguish between 
local and remote services, a problem that was never even solved in 
IPv4, let alone that IPv6 stands the slightest chance of meaningfully 
solving it.


All of these devices need to cooperate perfectly in order to find and 
know the local network. Particularly if there is a segmentation between 
"secure" and "insecure" or between "guest" and "resident". And what if 
you want two subnet masks for a different purpose? A managed switch has 
means to physically separate (in a way) two different nets on the same 
cables. You may be wanting to run a network of servers in your home that 
is separate from your local home network. You lose pretty much all 
control in being able to do this effectively.


Even if IPv6 gives some freedom or liberation, it is mostly due to the 
router allowing this. Everyone his own IP address. Everyone his own 
front door. People love that, in a way. But it also means you no longer 
have a family.








Re: Homenet

2016-03-20 Thread Xen

Op 20-3-2016 om 11:56 schreef Tim Coote:
Is it intended that NetworkManager will conform to /support / exploit 
the Homenet network name and address assignment and routeing protocols 
(http://bit.ly/1LyAE7H), either to provide end to end connectivity or 
to provide a monitoring layer so that the actual state of the network 
topologies can be understood?


Home or small networks seem to be getting quite complex quickly and 
way beyond what consumers can be expected to 
understand/configure/troubleshoot/optimise, with one or two ISP’s per 
person (via mobile phone + wifi connectivity) + an ISP for the 
premises; and wildly differing network segment performance 
characteristics and requirements (e.g. media streaming vs home 
automation).


I can't answer your question, but in my mind there are mostly only 2 
or 3 issues:


- a mobile phone that connects through home wifi, external wifi, or 
mobile/3G/4G connectivity, may access predominantly internet (cloud) 
services and not have access to anything in the home network by default, 
and there is not really any good platform to enable this.


(all that it needs is a router really that provides loopback, an 
internet domain, and a way to access LAN services both from the outside 
and inside) (but most people don't run services on their network anyway 
except when it is some appliance-like NAS)


(but If you're talking about media streaming and home automation, this 
outside/inside access thing becomes important)


Other than that there is no issue for most people. If your mobile app is 
configured with an internal IP address, you get in trouble when you are 
outside the network; if you configure it with an external address, not 
all routers will allow you to access it from the inside.


For example the popular (or once popular) D-Link dir-655 router doesn't 
allow it, while all TP-Link routers are said to support it (a support 
rep from China or something demonstrated it to me with screenshots).


- I don't think IPv6 is a feasible model for home networking and 
configuring it is said to be a nightmare even for those who understand 
it. I don't think it solves any problems, or at least doesn't solve it 
the right way that makes it easier to use home networking. I think IPv6 
is a completely flawed thing to begin with, but as long as it stays on 
the outside, I don't care very much. NAT shielding from the outside is a 
perfect model. Anyone requiring network access from the outside should 
be in a situation where they are able to configure it (port forwarding). 
Even where you could (as an advanced user) require 2 or more IP 
addresses at home, you still don't need 100, let alone 65,535. IPv6 in the home 
solves problems practically no one has, and opens up every device to 
internet access without any firewall in between. If home networking is 
only defined by subnet mask, it becomes a pain to understand how you can 
shield anything from anyone. You have to define your home network in 
public internet IP address terms. No more easy 192.168.1.5 that even 
non-technical users recognise. If there's no NAT, you're lost, and only 
wannabe "I can do everything" enthusiasts really understand it.



When I read the Charter of that homenet thing, it is all about IPv6: 
https://datatracker.ietf.org/wg/homenet/charter/


Their statements are wildly conflicting:

"While IPv6 resembles IPv4 in many ways, it changes address allocation 
principles and allows
direct IP addressability and routing to devices in the home from the 
Internet. ***This is a promising area in IPv6 that has proved 
challenging in IPv4 with the proliferation of NAT.***" (emphasis mine)


"End-to-end communication is both an opportunity and a concern as it 
enables new applications but also exposes nodes in the internal networks 
to receipt of unwanted traffic from the Internet. Firewalls that 
restrict incoming connections may be used to prevent exposure, however, 
this reduces the efficacy of end-to-end connectivity that IPv6 has the 
potential to restore."

The reality is of course that people (and games/chat applications) have 
always found a way around the problems. UPnP port forwarding, while not 
perfect, has been a make-do solution that basically allowed any 
application end-to-end connectivity as long as you don't require fixed 
ports (few people do).


The internet has been designed around two basic topologies: 
client-server and peer-to-peer. Client-server has never been challenging 
with NAT, only peer-to-peer has. For example, online games always use a 
client-server model in which NAT is a complete non-issue. The only times 
when NAT becomes an issue is with direction connections between two 
clients (peers) which is a requirement mostly for communication 
applications and/or applications that require high bandwidth (voice, video).


Ever since UPnP these devices can simply go to any router on the network 
and say "open me up" and they'd have access. I would have preferred the 
thing to be a little

Re: NM and IETF MIF working group

2015-09-28 Thread Xen
Just want to say that I have been trying (in OpenSUSE) to get a rather 
simple scenario working, but failed, probably due to kernel mechanics:


- main connection receives all traffic destined for port 80, 443.
- VPN receives all else.

I just consider it a more special case of directing VPN traffic to only 
the VPN network (no forwarding/routing at the end node).


It required a few simple steps:
- tag (SYN) packets destined for ports 80 and 443 with a mark
- use the fwmark in an iproute rule
- the rule sends the traffic to a different routing table

Unfortunately, although the routing seems to work, the return traffic 
arrives but is apparently dropped by the kernel due to some blocking or 
safety measure (possibly reverse-path filtering). I could not get around 
it, though I tried everything I could find on the web.


A fourth step that may be required is:
- SNAT the outgoing packets to match the interface they are now sent out 
on (i.e. to match its IP address), so that the reverse route coincides 
with the outgoing route that the kernel/routing system has chosen for 
the outgoing packets.


I thought it was going to be a simple thing to set up, and though I 
spent easily 4-5 hours on it, I could not get it to work.


Perhaps if this seems an interesting or important use case, someone who is 
more knowledgeable than me could look into it? It seems rather... that it 
would look really bad on Linux if this common use case is a near 
impossibility due to kernel mechanics or security measures, or whatever 
else is causing it. Not sure how else to phrase it. I mean that it would 
not be a selling point, that sort of stuff.


You could even integrate it into NM if it did work. "Route only selected 
ports over this VPN" or "Route everything except selected ports over this 
VPN". Would really be awesome.


Just wanted to say that.

Regards, Bart.


On Mon, 28 Sep 2015, David Woodhouse wrote:


On Mon, 2015-09-07 at 12:05 +0200, Stjepan Groš wrote:


Two colleagues of mine and I started to work on MIF implementation on
Fedora. In case someone doesn't know, IETF MIF working group (
https://datatracker.ietf.org/wg/mif/charter/) tries to solve the
problems of a single node having multiple parallel connections to
different destinations (Internet, VPN, some private networks, etc.).
___
networkmanager-list mailing list
networkmanager-list@gnome.org
https://mail.gnome.org/mailman/listinfo/networkmanager-list


Re: vpn and stuff

2015-09-19 Thread Xen

On Tue, 15 Sep 2015, Simon Geard wrote:


On Mon, 2015-09-14 at 17:18 +0200, Xen wrote:

In a command line shell you will just never learn or remember to
write Ne instead of ne..


Why would I need to? I just type 'ne' and tab, and let (case
insensitive) command-line completion fill in the rest for me...


That is more of a question for you than for me. So, why would you need to?

I don't have case-insensitive command-line completion. I pretty much 
think it is a given that you understood that I don't. So why not be 
helpful instead and point to the possibility of having this, if it is 
necessary or convenient, and you think it will help me?


I just activated it. I could not find it in man bash, but they had it on 
the web. It feels like bliss, slightly. But I will have to do it on 
every host I work on; another configuration step.
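For reference, the setting belongs to readline rather than bash itself, 
which is presumably why it is not in man bash (it is documented for 
readline). It goes in ~/.inputrc:

```
# ~/.inputrc
set completion-ignore-case on

# or, for the current bash session only:
# bind 'set completion-ignore-case on'
```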


But if you're not gonna help me, you're just bragging.

Not sure if it solves every problem though. Bragging, I mean.


Re: vpn and stuff

2015-09-14 Thread Xen

On 09/14/2015 01:35 PM, Thomas Haller wrote:

On Sat, 2015-09-12 at 19:56 +0200, Xen wrote:


==

Seriously I would suggest to get rid of the CamelCase name. It
breaks compatibility or congruency with a lot of other things and
as a user you are constantly wondering what the name is going to
be. NetworkManager? networkmanager? network-manager? It changes
from situation to situation.




well... I don't like it either, but changing it now is painful too.

Thomas



It's probably quite easy. I take it that neither your binary nor your 
configuration directories are depended upon by external tools. I haven't 
seen anything thus far that was different e.g. between my Kubuntu and 
OpenSUSE systems, except that the dispatcher.d/ in Kubuntu contained a 
script to run /etc/network/ifup.d/ things.


I rather doubt there are any external tools, or at least not a lot of 
them, that would need to change /etc/NetworkManager to /etc/network-manager.


Even if you keep the binary name intact, you could still change the 
config dir /etc/ ... but I don't see for what purpose; that is to say, 
what reason is there for the process name to be a pretty name? It seems 
to want to be very important, but that is not in a user's interest.


Soon you'll have all sorts of programs vying for attention: no, look at 
mee! I don't see where the pain would be.


Just use a semi-major release like 1.2.

It's a shame no one in Gnome and KDE ever thought of a way to give the 
user better process info in a user-friendly way. You could say, and 
might very well say, that it is nicer for a user to see NetworkManager 
and ModemManager in the process list when you hit ctrl-esc.


But Gnome has all these pretty names that are unrelated to the real 
process name. "File Browser" is actually called Nautilus, so any user is 
readily confused and made powerless.


But as a basic service that should not be as important as you are making 
it out to be (or that generally just shouldn't be of any more 
outstanding importance than all the rest of the system's processes or 
services) there is no point for it to be standing out.


So question: why /should/ the binary be user-pretty?

It's supposed to be a transparent, invisible system, right? Not vying 
for attention and recognition.


If it considered itself less important, my life would be easier too ;-)!

On 09/14/2015 04:44 PM, Dan Williams wrote:

I happen to disagree, but everyone is entitled to their opinion. As
it stands, the official name is "NetworkManager", but distributions
apply their own packaging guidelines and some distributions disallow
CamelCase names, but certainly not all Linux distributions. So
distributions that choose to allow only lower-case names will then
obviously create confusion between the package name and the project
name.

Dan



But I don't see why the Project Name could not be simply different from 
the config-dir-name and the binary-name.


I mean, just because e.g. the name for Kubuntu is capital Kubuntu, 
doesn't mean all packages with Kubuntu in it should also be init cap.


It would pretty much create a visual nightmare. You're just apparently 
trying to stand out, but if everyone (and everything) did that you'd 
just get a race for attention. Where everyone is trying to top all the 
other projects.


The main reason is simply that many systems are case sensitive. You get 
usability nightmares: how do you remember which package, process name or 
directory tree is capitalized and which is not?


And typing capitals is tiresome anyway. There is a reason they invented 
caps lock ;-).


openSUSE does allow and invite caps in packages. The current result is 
that when a package list is sorted alphabetically, the ones that start 
with an initial cap end up in front.


The only reason it works well for "openSUSE" is because it does not 
start with an initial cap.


Also for the package names it is just ugly, but all the same, as soon as 
case sensitivity doesn't matter it is not so much an issue anymore.


In a command line shell you will just never learn or remember to write 
Ne instead of ne..


So you could easily keep packages and even the Binary as NetworkManager

but change the /etc/... to network-manager. No matter how incongruent 
that would be.


Personally?

I would change both binary and config dir to lowercase.
I would keep all end-user representation as NetworkManager (but it is 
nowhere to be seen, being "invisible").


If your thing is invisible, why should it stand out?

I would keep your internet name and project name as NetworkManager.

I would invite packagers to keep using NetworkManager if they wish to 
(doesn't happen in Debian). I would keep your config script/file as 
NetworkManager.conf.


If I had a say in my system I would never allow it to use capital 

Re: vpn and stuff

2015-09-14 Thread Xen

Hi, thanks for your responses.

On 09/14/2015 02:10 PM, Thomas Haller wrote:

You bring up so many different points, that it's hard to keep track of
them. It would be better to discuss them individually or open Bugs for
it.


I know, just imagine having to file bug reports for all of them ;-).


KDE/plasma-nm design decision. Please open a bug.


Useless. It even seems to be a theme default. I don't know; it's 
system-wide. And I just don't know how much use it still is to 
contribute to KDE 4. Anyway, I just wanted to mention it. I just 
mentioned everything.




 From your wrapper script, do you invoke the openvpn binary with "exec",
contrary to "call"? That seems important.


I tried changing it to exec, but that didn't seem to make a difference.



/usr/lib/nm-openvpn-service-openvpn-helper --tun -- tun0 1500 1528
10.8.0.6 10.8.0.5 init

Could not send configuration information: The name
org.freedesktop.NetworkManager.openvpn was not provided by any
.service
files".


your installation seems broken.


This happened when OpenVPN connected after the link had been lost for a 
while due to the tunnel disappearing, remember. Killing and reopening 
openvpn restores it perfectly, and then the error does not arise. I just 
don't know enough about NM to evaluate or do something meaningful with 
your statement here; I just don't know what it means. All I know is that 
the error is related to this 'openvpn going ghost' thing.




Then you have the problem that NM doesn't know about OpenVPN's
"cipher
none" mode. You cannot get it (I cannot get it) to pass that
parameter
to OpenVPN.


It's a UI bug only (https://git.gnome.org/browse/network-manager-openvpn/commit/?id=be63c404a146704e3e4840f050d5bdd63bc94826)
You can still use the none cipher by configuring it either with nmcli
or by editing the connection file under
/etc/NetworkManager/system-connections/.


Is it an older version problem? I had already tried what you suggest, in 
that I edited /etc/NetworkManager/system-connections/MyVPNThing by 
adding "cipher=none" to the [vpn] section. See, I wasn't entirely 
unprepared before sending this email.


The point was that by inspecting the resulting command line of the 
OpenVPN process, I could see no --cipher option being added.


/usr/sbin/openvpn --remote localhost --comp-lzo --nobind --dev tun 
--proto tcp-client --port 1193 --auth-nocache --syslog nm-openvpn 
--script-security 2 --up /usr/lib/nm-openvpn-service-openvpn-helper 
--tun -- --up-restart --persist-key --persist-tun --management 127.0.0.1 
1194 --management-query-passwords --route-noexec --ifconfig-noexec 
--client --auth-user-pass --ca /etc/openvpn/cert.crt


And indeed my OpenVPN is defunct. This is why I added the wrapper 
script. Now I can turn off cipherless mode but it's a drag on my OpenVPN 
server since it's just a little machine.


My NM version is 0.9.10.0. Wah, you committed that today? :P.




IP4_GATEWAY environment variable. See `man NetworkManager`
Or `nmcli -t -f IP4.GATEWAY connection show uuid $CONNECTION_UUID`


Again, an older version. I was looking at upgrading to 1.0, but the 
library thing confused me and I just wanted to compile it myself. And I 
wasn't sure how to get it right with plasma-nm.


So I thought I'd just ask first. Replacing distribution-supplied 
packages with files you compile yourself is not always the easiest thing


Currently installing a prepackaged 1.0.6.

It now has --cipher none. My apologies, I was just still on the "stable" 
version supplied by distros. :(.


IP4_GATEWAY is now also there (in the manual, and it works. Integrated).



When NM has a connection as managed, manual interference with IP
address
and such becomes impossible. I consider this a big problem. The
problem
does not arise with adding new IP addresses to any device.


What is your basis to claim "impossible". It is possible. What issues
did you encounter?


Maybe it can be done by /CONFIGURING/ NM to keep its hands off it. But 
that's the same as first making it managed and then unmanaging it. It is 
not possible by default. (How should anyone know about it? It's just a 
hidden mystery.)



The fallacy is to think or consider that NM is always fully
configured.


You can configure default-routes externally. NM should not interfere if
you set "ipv4.never-default=yes".


But that means NM will NEVER set the default route for that interface. 
Look, with OpenVPN you create like an inner block in which some local 
variables are changed, so to speak. When OpenVPN enters, it wants to 
change the default route for the existing interface (say wlan0) by 
removing that route (in a default config) and then adding a new route 
(default route) to another interface (call it tun0).


Then, when OpenVPN disconnects, this situation is reversed: tun0 0.0.0.0 
is removed, and the original is reinstated.
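In routing-table terms, that inner block amounts to something like the 
following (all addresses are illustrative; OpenVPN also normally adds a 
host route to the VPN server via the old gateway so the tunnel traffic 
itself doesn't loop):

```shell
# on VPN connect (203.0.113.10 stands in for the VPN server):
ip route add 203.0.113.10/32 via 192.168.1.1 dev wlan0  # keep reaching the server
ip route del default via 192.168.1.1 dev wlan0          # drop the old default
ip route add default via 10.8.0.5 dev tun0              # route everything via VPN

# on VPN disconnect, the reverse:
ip route del default via 10.8.0.5 dev tun0
ip route add default via 192.168.1.1 dev wlan0
ip route del 203.0.113.10/32 via 192.168.1.1 dev wlan0
```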


But without using NM's openvpn shit, NM is just going to be oblivious to 
any of that. It will 

vpn and stuff

2015-09-12 Thread Xen

Hi,

It's not that I ever really liked NM, but.

After I set up my VPN with dispatcher.d scripts (it seems my SuSE 
install doesn't automatically call any ifup.d scripts but then it 
doesn't have /etc/network/ either. ;-)).


I managed to also integrate it with the plasma applet thing for KDE 4, 
which is really nice in user interface terms for the largest part (after 
you realise the non-button-like tool icon is not decoration but a vital 
part of its configuration).


Which is not an NM issue but a KDE one; it is one of the least intuitive 
ways to present a button, while pretending to hope that the user will 
understand to click on it. The argument against the argument against was 
probably "well, he just has to hover over it, doesn't he?". Anyway.


VPN seems to work but it is very fallible.

It can really break at any junction.

I have noticed these things:

- it may happen that the existing openvpn instance is not closed, but 
since it occupies localhost:1194, any new start will fail
- it may happen (and maybe this is the result of some 'tweak' I made, as 
was the previous item) that openvpn takes a longer time to connect and 
NM loses its knowledge of that process
- it may happen that cute girls arrive and then your wifi stops working, 
but that's another story.


I happened to create a kind of forwarding wrapper script that added the 
option --cipher none to NM's openvpn invocation. This may be the cause 
of my current problems, in that NM constantly loses track of an existing 
openvpn connection/process.


Symptoms I've seen were:

* OpenVPN takes longer to connect due to an issue. When finally it 
connects (as it keeps running in the background) this happens:


/usr/lib/nm-openvpn-service-openvpn-helper --tun -- tun0 1500 1528 
10.8.0.6 10.8.0.5 init


Could not send configuration information: The name 
org.freedesktop.NetworkManager.openvpn was not provided by any .service 
files".


I have to kill my OpenVPN process to enable NM to use it again.

After that this happens instead:

 (tun0): new Tun device (driver: 'unknown' ifindex: 6)
 (tun0): exported as /org/freedesktop/NetworkManager/Devices/5

And it connects successfully.


* My tunnel unexpectedly closes, causing OpenVPN to disconnect. The 
process keeps running and retrying; apparently NM gets notified that the 
connection is lost, but once OpenVPN reconnects, NM doesn't update the 
routing table and I have an ineffective or inoperative VPN.


I have to kill OpenVPN and restart it using NM.

All this is extremely arduous as it happens all the time.

NM doesn't really live up to its ideal of (check website).

It's not fuss-free at all.

Whenever my link fails I know it is 80% certain to be NM instead of a 
problem with my actual link.





So basically what I see is that if OpenVPN disconnects, it notifies NM, 
but once it reconnects, NM doesn't know and the process becomes like a 
ghost process.


And I have to manually shut it down each and every time.

At least if I run my VPN manually I have NO ISSUES except for the one 
issue that NM will not allow me to remove the default route for its 
managed connection.


So whatever way you frame it, NM is really my only issue ;-). OpenVPN 
itself works without a hitch.



==

Then you have the problem that NM doesn't know about OpenVPN's "cipher 
none" mode. You cannot get it (I cannot get it) to pass that parameter 
to OpenVPN.


Another problem, and an inconfigurability.

==

The only benefit for NM for me at this point is its gui. Without the 
lock icon in the system tray, it is hard for me to know whether I am 
running VPN or not. And because of its interface it's easier to start 
and stop it. Using the console to do that is not fun.


==

Sometimes my tunnel fails, and since it is a simple SSH tunnel using 
/root/.ssh/config but with a custom startup script, I have to check on 
its status using the console. That is tiresome by itself, but OpenVPN is 
capable of just picking up where it left off; it's just that NM mostly 
is not.


==

I have a custom dispatcher.d script that sets another route on vpn-up. I 
need this for my tunnel host (which is also the VPN host). I think I can 
also do this using VPN options (extra routes), but my problem at this 
point is this:


Is there a way to obtain the equivalent of the OpenVPN variable 
"net_gateway"? net_gateway is a variable that indicates the OLD gateway 
address from before the VPN is activated. I know there are IP4_ROUTE_N 
and IP4_NUM_ROUTES, but at best this is a list of all routes. Do I have 
to manually search it for the route to 0.0.0.0? Same for the VPN_ 
variables; I don't know if they contain the new routes or the old 
routes. Maybe both, even.


In OpenVPN the program gives me the required route target, so I don't 
have to hard-code it in any script. With NM I have to write a custom 
script, or add a route to the config that seems to have to be fixed.


==