Re: [qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-31 Thread Marek Marczykowski-Górecki

On Wed, Aug 31, 2016 at 03:47:31PM -0700, nekroze.law...@gmail.com wrote:
> Does anyone have any thoughts on a way to template in the IP address of an 
> appVM so it can be used to define a file.managed state with the IP in the 
> filename such as tinyproxy requires?

Take a look at grains - there is a standard `ipv4` available.
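
Something along these lines should work (an untested sketch; the target path,
the state id, and the loopback filtering are just examples):

  {# the `ipv4` grain is a list; filter out the loopback address #}
  {% set vm_ip = grains['ipv4'] | reject('equalto', '127.0.0.1') | list | first %}

  tinyproxy-ip-file:
    file.managed:
      - name: /etc/tinyproxy/filter-{{ vm_ip }}
      - makedirs: True
      - contents: |
          # generated for {{ vm_ip }}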

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?



Re: [qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-31 Thread Marek Marczykowski-Górecki

On Tue, Aug 30, 2016 at 11:00:30PM +0200, Marek Marczykowski-Górecki wrote:
> On Mon, Aug 29, 2016 at 11:07:33PM -0700, nekroze.law...@gmail.com wrote:
> > Also, I am not sure when, but the pkg.uptodate state does nothing in 
> > templates now. It used to work on this Qubes install, and it still succeeds 
> > (without changes) each run, but if I use qubes-manager to do the update 
> > there is stuff to be done.
> 
> Which template? I use it regularly and it works...

Have you included the "refresh: true" option? Otherwise it may simply not
refresh repository metadata.
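
For reference, the state I have in mind is just (a minimal sketch):

  update-all-packages:
    pkg.uptodate:
      - refresh: True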

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?



Re: [qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-30 Thread nekroze . lawson
On Wednesday, August 31, 2016 at 7:00:38 AM UTC+10, Marek Marczykowski-Górecki 
wrote:
> 
> On Mon, Aug 29, 2016 at 11:07:33PM -0700, nekroze.law...@gmail.com wrote:
> > On Tuesday, August 30, 2016 at 12:20:32 PM UTC+10, Marek 
> > Marczykowski-Górecki wrote:
> > > > > > fedora-23-minimal templates are unmanageable via salt, all of the 
> > > > > > internal VM salt configuration just doesn't work on them from 
> > > > > > my experiments.
> > > > > 
> > > > > It may be that salt requires some additional packages to perform its 
> > > > > actions. Minimal templates have a really minimal package set installed. 
> > > > > But you probably can install additional stuff using pkg.installed. 
> > > > > Yes, it may require calling `qubesctl --all state.highstate` twice. 
> > > > 
> > > > I believe it says in the docs that the only requirement in the target 
> > > > VM for salt inter-VM management to work is scp, because ssh looks for it 
> > > > or something. Turns out scp is not installed in the fedora-23-minimal 
> > > > template by default; however, even after installing it, the installation 
> > > > of a package does not work for the minimal template. Using the 
> > > > revelation that is the --show-output switch I can see this happening. 
> > > > 
> > > > It's quite long, so here is a paste of the section of output pertaining 
> > > > to the fedora-23-minimal template http://pastebin.com/kCe29p9L but the tail 
> > > > of it is:
> > > > 
> > > >   stderr:
> > > >   ln: failed to create symbolic link 
> > > > ‘/tmp/salt-shim-sandbox/scp’: File exists
> > > >   WARNING: Unable to locate current thin  version: 
> > > > /tmp/.root_d510cd__salt/version.
> > > >   stdout:
> > > >   ERROR: Failure deploying thin: /usr/bin/scp
> > > >   
> > > > _edbc7885e4f9aac9b83b35999b68d015148caf467b78fa39c05f669c0ff89878
> > > >   deploy
> > > >   
> > > >   ln: failed to create symbolic link 
> > > > ‘/tmp/salt-shim-sandbox/scp’: File exists
> > > >   WARNING: Unable to locate current thin  version: 
> > > > /tmp/.root_d510cd__salt/version.
> > > 
> > > It is already fixed:
> > > https://github.com/QubesOS/qubes-issues/issues/2207
> > 
> > Does this update have to be done in dom0 and in the minimal template? I 
> > have updated dom0 but still have the issue, no reboot yet though as I am 
> > working. 
> 
> It's about qubes-mgmt-salt-vm-connector in the default template.
> 
> > I will do more testing tonight as I also need to work on an HTTP 
> > proxy setup from the docs, but with salt. As I am new to salt, I figured I 
> > would learn by implementing everything in the Qubes OS docs with salt 
> > instead of imperative commands.
> > 
> > On Tuesday, August 30, 2016 at 12:57:54 PM UTC+10, Jeremy Rand wrote:
> > > Seems to me that an attack could be constructed where the Tor exit used
> > > for update downloads feeds sys-whonix an exploit, and from there is able
> > > to either break out of Tor, or compromise Tor in some way that may
> > > affect other VMs' anonymity.
> > 
> > Forgive me if I am misunderstanding the scenario you proposed, but the 
> > setup in question is "sys-net>sys-firewall>sys-whonix>sys-update". If dom0 uses 
> > sys-update to pull updates we should be OK. The default when Qubes is 
> > told to use whonix/tor for updates, however, is 
> > "sys-net>sys-firewall>sys-whonix", with sys-whonix being the update VM, if I 
> > remember correctly. In that case dnf/yum is in fact running in a Whonix VM 
> > (which, as you mention, might be a security issue) and the previously 
> > discussed method should prevent that; however, as Marek mentioned, it is not 
> > the default because it would require the addition of another appVM, and the 
> > base setup should be as minimal as possible. Not everyone has 16+ GB of RAM.
> 
> Yes, exactly. And I think this type of question during installation may
> be too technical for most users.
> On the other hand, it can be configured automatically depending on
> available RAM. Patches welcome.
> 
> > I have also started having other issues with salt. It seems that 
> > qubes:template: (the selector for top files allowing us to target an 
> > appVM's template without knowing its name) does not do anything: no 
> > errors, the states are just not running for the template that I am 
> > targeting.
> 
> Have you added "match: pillar"?
> Also, it isn't about targeting an AppVM's template - on the contrary - it
> can be used to target all AppVMs based on a given template.
> "qubes:template:fedora-23" means "VMs with 'template' set to
> 'fedora-23'".
> But on the other hand, you can target all the templates with
> "qubes:type:template", if you want.

I misunderstood the purpose then. It would be nice, though, to be able to target 
the template of an appVM as well as the VM itself, but that implementation makes 
sense.
 
> > Also, I am not sure when, but the pkg.uptodate state does nothing in templates now. 

Re: [qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-30 Thread Marek Marczykowski-Górecki

On Mon, Aug 29, 2016 at 11:07:33PM -0700, nekroze.law...@gmail.com wrote:
> On Tuesday, August 30, 2016 at 12:20:32 PM UTC+10, Marek Marczykowski-Górecki 
> wrote:
> > > > > fedora-23-minimal templates are unmanageable via salt, all of the 
> > > > > internal VM salt configuration just doesn't work on them from 
> > > > > my experiments.
> > > > 
> > > > It may be that salt requires some additional packages to perform its 
> > > > actions. Minimal templates have a really minimal package set installed. 
> > > > But you probably can install additional stuff using pkg.installed. 
> > > > Yes, it may require calling `qubesctl --all state.highstate` twice. 
> > > 
> > > I believe it says in the docs that the only requirement in the target VM 
> > > for salt inter-VM management to work is scp, because ssh looks for it or 
> > > something. Turns out scp is not installed in the fedora-23-minimal 
> > > template by default; however, even after installing it, the installation 
> > > of a package does not work for the minimal template. Using the revelation 
> > > that is the --show-output switch I can see this happening. 
> > > 
> > > It's quite long, so here is a paste of the section of output pertaining to 
> > > the fedora-23-minimal template http://pastebin.com/kCe29p9L but the tail of 
> > > it is:
> > > 
> > >   stderr:
> > >   ln: failed to create symbolic link 
> > > ‘/tmp/salt-shim-sandbox/scp’: File exists
> > >   WARNING: Unable to locate current thin  version: 
> > > /tmp/.root_d510cd__salt/version.
> > >   stdout:
> > >   ERROR: Failure deploying thin: /usr/bin/scp
> > >   
> > > _edbc7885e4f9aac9b83b35999b68d015148caf467b78fa39c05f669c0ff89878
> > >   deploy
> > >   
> > >   ln: failed to create symbolic link 
> > > ‘/tmp/salt-shim-sandbox/scp’: File exists
> > >   WARNING: Unable to locate current thin  version: 
> > > /tmp/.root_d510cd__salt/version.
> > 
> > It is already fixed:
> > https://github.com/QubesOS/qubes-issues/issues/2207
> 
> Does this update have to be done in dom0 and in the minimal template? I have 
> updated dom0 but still have the issue, no reboot yet though as I am working. 

It's about qubes-mgmt-salt-vm-connector in the default template.

> I will do more testing tonight as I also need to work on an HTTP proxy 
> setup from the docs, but with salt. As I am new to salt, I figured I would 
> learn by implementing everything in the Qubes OS docs with salt instead of 
> imperative commands.
> 
> On Tuesday, August 30, 2016 at 12:57:54 PM UTC+10, Jeremy Rand wrote:
> > Seems to me that an attack could be constructed where the Tor exit used
> > for update downloads feeds sys-whonix an exploit, and from there is able
> > to either break out of Tor, or compromise Tor in some way that may
> > affect other VMs' anonymity.
> 
> Forgive me if I am misunderstanding the scenario you proposed, but the setup 
> in question is "sys-net>sys-firewall>sys-whonix>sys-update". If dom0 uses 
> sys-update to pull updates we should be OK. The default when Qubes is 
> told to use whonix/tor for updates, however, is 
> "sys-net>sys-firewall>sys-whonix", with sys-whonix being the update VM, if I 
> remember correctly. In that case dnf/yum is in fact running in a Whonix VM 
> (which, as you mention, might be a security issue) and the previously discussed 
> method should prevent that; however, as Marek mentioned, it is not the default 
> because it would require the addition of another appVM, and the base setup 
> should be as minimal as possible. Not everyone has 16+ GB of RAM.

Yes, exactly. And I think this type of question during installation may
be too technical for most users.
On the other hand, it can be configured automatically depending on
available RAM. Patches welcome.

> I have also started having other issues with salt. It seems that 
> qubes:template: (the selector for top files allowing us to target an appVM's 
> template without knowing its name) does not do anything: no errors, 
> the states are just not running for the template that I am targeting.

Have you added "match: pillar"?
Also, it isn't about targeting an AppVM's template - on the contrary - it
can be used to target all AppVMs based on a given template.
"qubes:template:fedora-23" means "VMs with 'template' set to
'fedora-23'".
But on the other hand, you can target all the templates with
"qubes:type:template", if you want.

> Also, I am not sure when, but the pkg.uptodate state does nothing in 
> templates now. It used to work on this Qubes install, and it still succeeds 
> (without changes) each run, but if I use qubes-manager to do the update there 
> is stuff to be done.

Which template? I use it regularly and it works...

> This one is really rather minor and I will be writing these up into issues 
> when I am more sure of what they are and that it's not just me. When you set a 
> netvm to None with salt you must use the lowercase "none", which YAML accepts, 
> however qvm-prefs uses a capital. 

Re: [qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-30 Thread Jeremy Rand
nekroze.law...@gmail.com:
> On Tuesday, August 30, 2016 at 12:57:54 PM UTC+10, Jeremy Rand wrote:
>> Seems to me that an attack could be constructed where the Tor exit used
>> for update downloads feeds sys-whonix an exploit, and from there is able
>> to either break out of Tor, or compromise Tor in some way that may
>> affect other VMs' anonymity.
> 
> Forgive me if I am misunderstanding the scenario you proposed, but the setup 
> in question is "sys-net>sys-firewall>sys-whonix>sys-update". If dom0 uses 
> sys-update to pull updates we should be OK. The default when Qubes is 
> told to use whonix/tor for updates, however, is 
> "sys-net>sys-firewall>sys-whonix", with sys-whonix being the update VM, if I 
> remember correctly. In that case dnf/yum is in fact running in a Whonix VM 
> (which, as you mention, might be a security issue) and the previously discussed 
> method should prevent that; however, as Marek mentioned, it is not the default 
> because it would require the addition of another appVM, and the base setup 
> should be as minimal as possible. Not everyone has 16+ GB of RAM.

Yes, you understand the scenario I suggested correctly.  I agree with
you and Marek that, for users with less RAM, it may be an acceptable
tradeoff to run the update in sys-whonix.  However, there are some users
who either have a lot of RAM or are willing to shut down other VMs
while performing dom0 updates in order to gain some extra security, and
I think it would be reasonable for those users to use a "sys-update" VM
for dom0 updates.  I also think that this is something that might make
sense to ask the user on Qubes install, and automatically configure
"sys-update" if the user opts for the extra security.

The attack surface probably isn't massive here.  But I always like
reducing attack surface when feasible, and using a "sys-update" VM seems
like a decent way to do so.

If Marek (or perhaps Patrick) disagrees with me that there's a security
vs. RAM usage tradeoff, I'd be very interested to hear their analysis on
this.

Cheers,
-Jeremy Rand





Re: [qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-30 Thread nekroze . lawson
On Tuesday, August 30, 2016 at 12:20:32 PM UTC+10, Marek Marczykowski-Górecki 
wrote:
> > > > fedora-23-minimal templates are unmanageable via salt, all of the 
> > > > internal VM salt configuration just doesn't work on them from my 
> > > > experiments.
> > > 
> > > It may be that salt requires some additional packages to perform its 
> > > actions. Minimal templates have a really minimal package set installed. 
> > > But you probably can install additional stuff using pkg.installed. 
> > > Yes, it may require calling `qubesctl --all state.highstate` twice. 
> > 
> > I believe it says in the docs that the only requirement in the target VM 
> > for salt inter-VM management to work is scp, because ssh looks for it or 
> > something. Turns out scp is not installed in the fedora-23-minimal template 
> > by default; however, even after installing it, the installation of a package 
> > does not work for the minimal template. Using the revelation that is the 
> > --show-output switch I can see this happening. 
> > 
> > It's quite long, so here is a paste of the section of output pertaining to 
> > the fedora-23-minimal template http://pastebin.com/kCe29p9L but the tail of it 
> > is:
> > 
> >   stderr:
> >   ln: failed to create symbolic link ‘/tmp/salt-shim-sandbox/scp’: 
> > File exists
> >   WARNING: Unable to locate current thin  version: 
> > /tmp/.root_d510cd__salt/version.
> >   stdout:
> >   ERROR: Failure deploying thin: /usr/bin/scp
> >   _edbc7885e4f9aac9b83b35999b68d015148caf467b78fa39c05f669c0ff89878
> >   deploy
> >   
> >   ln: failed to create symbolic link ‘/tmp/salt-shim-sandbox/scp’: 
> > File exists
> >   WARNING: Unable to locate current thin  version: 
> > /tmp/.root_d510cd__salt/version.
> 
> It is already fixed:
> https://github.com/QubesOS/qubes-issues/issues/2207

Does this update have to be done in dom0 and in the minimal template? I have 
updated dom0 but still have the issue, no reboot yet though as I am working. 

I will do more testing tonight as I also need to work on an HTTP proxy 
setup from the docs, but with salt. As I am new to salt, I figured I would learn 
by implementing everything in the Qubes OS docs with salt instead of imperative 
commands.

On Tuesday, August 30, 2016 at 12:57:54 PM UTC+10, Jeremy Rand wrote:
> Seems to me that an attack could be constructed where the Tor exit used
> for update downloads feeds sys-whonix an exploit, and from there is able
> to either break out of Tor, or compromise Tor in some way that may
> affect other VMs' anonymity.

Forgive me if I am misunderstanding the scenario you proposed, but the setup in 
question is "sys-net>sys-firewall>sys-whonix>sys-update". If dom0 uses sys-update 
to pull updates we should be OK. The default when Qubes is told to use 
whonix/tor for updates, however, is "sys-net>sys-firewall>sys-whonix", with 
sys-whonix being the update VM, if I remember correctly. In that case dnf/yum is 
in fact running in a Whonix VM (which, as you mention, might be a security issue) 
and the previously discussed method should prevent that; however, as Marek 
mentioned, it is not the default because it would require the addition of 
another appVM, and the base setup should be as minimal as possible. Not everyone 
has 16+ GB of RAM.

I have also started having other issues with salt. It seems that qubes:template: 
(the selector for top files allowing us to target an appVM's template without 
knowing its name) does not do anything: no errors, the states are just 
not running for the template that I am targeting.

Also, I am not sure when, but the pkg.uptodate state does nothing in templates 
now. It used to work on this Qubes install, and it still succeeds (without 
changes) each run, but if I use qubes-manager to do the update there is stuff to 
be done.

This one is really rather minor, and I will be writing these up into issues when 
I am more sure of what they are and that it's not just me. When you set a netvm 
to None with salt you must use the lowercase "none", which YAML accepts; however, 
qvm-prefs uses a capital. This causes any qvm.prefs states that set a netvm to 
none to return the changed state on every single state.highstate run, because the 
YAML says it should be lowercase. Finally, the docs for the dark theme seem to 
be out of date: many things, including Firefox, are not using the dark theme when 
it is set globally in ~/.config/gtk-3.0/settings.ini as the docs describe, and, 
on Debian, gnome-terminal does not conform to that setting unless you set 
gnome-terminal's preferences to use the dark variant, while in Fedora templates 
gnome-terminal goes dark as expected.
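
For reference, the kind of state I mean looks roughly like this (the VM name is
made up, and the option layout is just how I understand the qvm.prefs state):

  my-vault:
    qvm.prefs:
      # must be lowercase 'none' here, while qvm-prefs reports 'None',
      # so this shows up as changed on every highstate run
      - netvm: none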

Thanks for your time,
Taylor Lawson


Re: [qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-29 Thread Jeremy Rand
Marek Marczykowski-Górecki:
> On Wed, Aug 17, 2016 at 01:42:36AM -0700, nekroze.law...@gmail.com wrote:
> 
>>> In any case, if you put a Fedora-based VM behind sys-whonix, and set it as 
>>> UpdateVM, it should work. 
> 
>> That does indeed seem to fix the problem. Is there a reason why the whonix 
>> setup choice that uses whonix for dom0 updates does not also build an update VM 
>> that uses sys-whonix and is based on Fedora?
> 
> Basic actions (install updates, new packages) should work in this setup
> and it saves some RAM (no need for an additional VM in addition to
> sys-whonix).

Seems to me that an attack could be constructed where the Tor exit used
for update downloads feeds sys-whonix an exploit, and from there is able
to either break out of Tor, or compromise Tor in some way that may
affect other VMs' anonymity.

Granted, this is a fairly lousy attack as attacks go, but isn't the
entire point of Whonix that nothing is supposed to run inside the Whonix
gateway except Tor?

Cheers,
-Jeremy Rand





Re: [qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-17 Thread nekroze . lawson
Hi Marek,

Thanks for the response and my apologies on the late follow up.

> Hmm, that's strange. I use salt regularly to update all the templates at 
> once and haven't noticed anything like this. 
> Do you see any not cleaned up VMs after that? Like 
>  `disp-mgmt-something`. 

There were no left-behind disp VMs, and this persisted across boots. I have 
yet to retry with this git repo fully enabled, as I have needed a stable system for 
work recently, but I will return to this soon.


> The output is logged to /var/log/qubes/mgmt-*.log. Also take a look at 
> --show-output option. 

Thanks! That is perfect. The help message led me to believe it was for showing the 
command-line runs that were executed by salt, or something other than the 
report, so admittedly I did not try it.
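
For anyone else looking for it, the combined invocation I am using now is simply
(as far as I understand the switches):

  qubesctl --show-output --all state.highstate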

> > There are also a handful of other, smaller problems I have 
> > encountered while trying to configure everything I need with salt. For 
> > example, the fedora-23-minimal templates are unmanageable via salt; all of 
> > the internal VM salt configuration just doesn't work on them from my 
> > experiments.
> 
> It may be that salt requires some additional packages to perform its 
> actions. Minimal templates have a really minimal package set installed. 
> But you probably can install additional stuff using pkg.installed. 
> Yes, it may require calling `qubesctl --all state.highstate` twice. 

I believe it says in the docs that the only requirement in the target VM for 
salt inter-VM management to work is scp, because ssh looks for it or something. 
Turns out scp is not installed in the fedora-23-minimal template by default; 
however, even after installing it, the installation of a package does not work 
for the minimal template. Using the revelation that is the --show-output switch 
I can see this happening. 

It's quite long, so here is a paste of the section of output pertaining to the 
fedora-23-minimal template http://pastebin.com/kCe29p9L but the tail of it is:

  stderr:
  ln: failed to create symbolic link ‘/tmp/salt-shim-sandbox/scp’: File 
exists
  WARNING: Unable to locate current thin  version: 
/tmp/.root_d510cd__salt/version.
  stdout:
  ERROR: Failure deploying thin: /usr/bin/scp
  _edbc7885e4f9aac9b83b35999b68d015148caf467b78fa39c05f669c0ff89878
  deploy
  
  ln: failed to create symbolic link ‘/tmp/salt-shim-sandbox/scp’: File 
exists
  WARNING: Unable to locate current thin  version: 
/tmp/.root_d510cd__salt/version.


> In any case, if you put a Fedora-based VM behind sys-whonix, and set it as 
> UpdateVM, it should work. 

That does indeed seem to fix the problem. Is there a reason why the whonix 
setup choice that uses whonix for dom0 updates does not also build an update VM 
that uses sys-whonix and is based on Fedora?

> > There are some aspects of configuring the dom0 experience in Qubes that 
> > do not seem to be possible from salt. For example, there is no way to 
> > specify which applications are available in the menu for an appVM, 
> 
> Indeed there is no module for this, but you can simply edit the 
> `whitelisted-appmenus.list` file in the VM directory with file.managed. 
> Then appmenus regeneration will be triggered at the nearest template 
> upgrade, which will probably happen a moment later anyway (as dom0 is 
> configured before all the VMs).

I have tried this and found it not to work. I have not been able to get the 
application to appear in the application menu in xfce, nor is it enabled when I 
view the VM's apps list in the qubes-manager. I can confirm the line is in the 
right place from the state and matches the .desktop file in 
/usr/share/applications which should be where it looks. I have not rebooted yet 
but I have done multiple full highstate reruns on all vms after applying this 
state. It wasn't until I booted up the template the appVM was based on and ran 
qvm-sync-appmenus that it started to appear. I am still trying to find a way to 
emulate this in a sane but simple way with salt.
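
For reference, the dom0 state I am experimenting with looks roughly like this
(the VM and template names and the app list are made up; the cmd.run part is my
attempt to mimic the manual qvm-sync-appmenus step and assumes the template is
running):

  /var/lib/qubes/appvms/my-appvm/whitelisted-appmenus.list:
    file.managed:
      - contents: |
          firefox.desktop
          xterm.desktop

  sync-my-appvm-appmenus:
    cmd.run:
      - name: qvm-sync-appmenus fedora-23
      - onchanges:
        - file: /var/lib/qubes/appvms/my-appvm/whitelisted-appmenus.list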

> Its "meminfo-writer" service (qvm.service). 

Brilliant. It was a poor assumption on my part that, because there was a tickbox, it 
wouldn't match one-to-one with a service, but I guess the tickbox is just a 
convenient redirect to the service from the memory tab.
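
So something like this should do what the tickbox does (the VM name is made up;
the option layout is how I understand the qvm.service state):

  my-appvm-services:
    qvm.service:
      - name: my-appvm
      - enable:
        # the "include in memory balancing" tickbox maps to this service
        - meminfo-writer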

> BTW do you know a salt module for editing XML files - just like 
> file.line or so? It would be really useful for configuring some desktop 
> environment settings - almost all Xfce configuration is in XML files...

The best would be the augeas.change state, which uses Augeas and can make 
modifying structured data files a one-line thing. It would be perfect for 
this, but it has some dependencies (python-augeas), and I am not sure whether 
templates would need that installed or just dom0.
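
As a sketch of what I mean (completely untested; the file path is a real Xfce
config location but the Augeas path expression is a guess that would need
checking with augtool, and python-augeas has to be present wherever it runs):

  xfwm4-dark-theme:
    augeas.change:
      - context: /files/home/user/.config/xfce4/xfconf/xfce-perchannel-xml/xfwm4.xml
      - lens: Xml.lns
      - changes:
        - set channel/property[#attribute/name='general']/property[#attribute/name='theme']/#attribute/value 'SomeDarkTheme'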

Thank you for your time,
Taylor Lawson


[qubes-users] Salt InterVM Configuration explorations and pitfalls in 3.2-rc2

2016-08-10 Thread nekroze . lawson
Hi All,

I have been experimenting with using salt to configure a full Qubes system, with 
standalone VMs running Docker for development and an automated setup of a Kali 
template (based on the Debian template) following the procedure in the docs.

In my trials (mostly wins) I have found a few issues and also just missing 
parts that make management difficult.

After I finish a run (or two) of "qubesctl --all state.highstate" using my 
qubes salt configuration (https://github.com/Nekroze/qubes-salt) I can no 
longer update dom0 or send files from one VM to another (other than from dom0 
out to a VM), as I just get the error:

  Data vchan connection failed

I have not been able to find any information on common causes for this error; 
the best I can find while searching is the source code that prints the error. 
I've tried disabling a bunch of the tops to reduce what is changing, but it just 
keeps happening. This is the 4th time I have re-installed Qubes 3.2-rc2 because 
of this error when trying to use salt. I am unsure what information is 
required for this kind of error, hence reporting here before I start an issue on 
GitHub; any advice on logs to provide or steps to try would be welcome.

There are also a handful of other, smaller problems I have encountered 
while trying to configure everything I need with salt. For example, the 
fedora-23-minimal templates are unmanageable via salt; all of the internal VM 
salt configuration just doesn't work on them from my experiments.

Additionally, it seems that package management control over dom0 fails when 
sys-whonix is the UpdateVM; this forces all salt updates over sys-firewall for 
setup, and it seems updating Whonix templates this way presents an error (that 
they are not running with a whonix-gw based netVM), as they are included in 
states that affect all templates.

Sadly due to the previously mentioned vchan issue I am unable to grab the exact 
error message at the moment but I will try and get it later today when I might 
have time to do another re-install.

It's great when everything goes well, but when there is an issue there is no 
summary of the VM's configuration changes like dom0 has. At the least it would be 
nice to see things like the versions that changed in the VMs when an update 
works, but when something goes wrong, not having this means I have to step 
through the procedure to find out what failed, which means I have to do it 
manually anyway. This seems to happen even when qubesctl says the VM was OK 
but I find that a package was not installed at all and must have errored; again, 
doing it manually I was able to see the error and resolve a trivial cache issue 
preventing the install.

When using the qvm.create state, it is clear from using Qubes for a bit that it 
maps to the similarly named CLI tool; however, preferences specified in the 
qvm.create state, being applied only when the VM is first created, require secondary 
qvm.prefs states with the same or similar preferences to ensure that those settings 
remain the same. From my understanding, part of configuration management like this is 
not just to provision but to ensure the configuration conforms to the state 
specifications; having to do this twice just feels very clumsy. Perhaps 
templating can help here, but I am just starting to make a dent in learning how 
to use templating with salt.
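
To make the duplication concrete, this is roughly the shape I end up with (the
VM name is made up, and the option layout is how I understand the qvm.create and
qvm.prefs states):

  work-vm:
    qvm.create:
      - name: work
      - template: fedora-23
      - label: blue

  work-vm-prefs:
    qvm.prefs:
      - name: work
      # repeated here, otherwise it is not enforced once the VM already exists
      - netvm: sys-firewall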

There are some aspects of configuring the dom0 experience in Qubes that do 
not seem to be possible from salt. For example, there is no way to specify which 
applications are available in the menu for an appVM, from what I can see no way 
to toggle the dynamic memory management switch from salt, nor a way to add 
firewall rules to the Qubes Manager firewall list via states. There are great 
tools for provisioning the cluster of VMs, but they don't tie into the user 
experience for Qubes, requiring more manual configuration.

I would like to formalize these into issues on GitHub, but just wanted to 
discuss whether there is more information I need, or whether some issues are 
already resolved in the next version. I am unsure how I should split these 
into issues, if at all, and would appreciate any advice.

All in all though, the salt stuff is great when it works but the missing or 
broken parts make it hard to justify at present. My apologies for the long post.

Thank you for your time,
Taylor Lawson
