[qubes-users] Issues merging salt pillars

2021-04-15 Thread 'hut7no' via qubes-users
Hi guys!
I am having some trouble merging some salt pillars, and was wondering if anyone 
here has experience with this.
I have some default settings in a yaml file, and am trying to overwrite values 
based on vm types.
I get the desired behaviour from running something like "qubesctl --show-output --targets dom0 defaults.merge '{a: {b: 1.1, c: 2.1}}' '{a: {b: 1.2}}'" from the terminal:
--
a:
  b:
    - 1.2
  c:
    - 2.1
--
(b is overwritten, but c remains untouched)
When using slsutil.renderer on the map.jinja, the whole list is replaced instead, so the result contains only the lists that were left completely untouched plus the new values:
--
dvm:
  vcpus:
    - 2
disp:
  vcpus:
    - 2
template:
  source:
    - d10m
  include_in_backups:
    - True
--
default_prefs.yaml:
--
template:
  - source: d10m
  - include_in_backups: True
dvm:
  - netvm: none
  - provides_network: False
  - klass: AppVM
  - template_for_dispvms: True
  - default_dispvm: none
  - default_user: user
  - vcpus: 1
  - memory: 300
  - maxmem: 4096
  - kernel: 5.4.107-1.fc25
  - kernelopts: nopat apparmor=1 security=apparmor
  - virt_mode: pvh
  - label: gray
  - include_in_backups: True
disp:
  - netvm: none
  - klass: DispVM
  - default_dispvm: none
  - default_user: user
  - vcpus: 1
  - memory: 300
  - maxmem: 4096
  - kernel: 5.4.107-1.fc25
  - kernelopts: nopat apparmor=1 security=apparmor
  - virt_mode: pvh
  - label: black
  - include_in_backups: True
--
map.jinja:
--
{% import_yaml '/srv/salt/user_pillar/base/vm_prefs/templates/default_prefs.yaml' as default_settings %}

{#{% if grains['id'].endswith('med') %}#}
# what makes the type different from default
{%- load_yaml as type_diff %}
dvm:
  - vcpus: 2
disp:
  - vcpus: 2
{% endload %}
{#{% endif %}#}
{% set merged = salt['defaults.merge'](default_settings,type_diff) %}
{{ merged }}
--
I noticed that some functions (defaults.update) and arguments (merge_lists, in_place) are not available in the installed Salt version; do I need those?
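For reference, the working command-line example above passes nested mappings, while my yaml files use lists of single-key dicts. A mapping-based layout, as a rough untested sketch, would look like this:
--
# default_prefs.yaml restructured as nested mappings
# instead of lists of single-key dicts
template:
  source: d10m
  include_in_backups: True
dvm:
  vcpus: 1
  memory: 300

# the override in map.jinja would then use the same structure:
{%- load_yaml as type_diff %}
dvm:
  vcpus: 2
{% endload %}
--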

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20210414183258.GA878%40mail2-dvm.


Re: [qubes-users] 4K videos on external monitor lagging like hell

2021-03-09 Thread 'hut7no' via qubes-users
Qubes doesn't use video acceleration, as it is less secure.
MPlayer is IMO the best option for video playback in VMs, as it is heavily optimized even without hardware acceleration.
You can also sacrifice some quality for speed with some of these options:
mplayer -vfm ffmpeg -sws 4 -lavdopts skiploopfilter=all:fast:gray:lowres=3:threads=

You can change the default configuration and/or create an alias in either the 
template or the dvm/appvm.
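For example, a minimal ~/.mplayer/config sketch (untested; the thread count is just an illustrative value) could look like:
vfm=ffmpeg
sws=4
lavdopts=skiploopfilter=all:fast:gray:lowres=3:threads=2
framedrop=yes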
I set frame dropping (the g key) to enabled, but change to hard frame dropping (pressing g again) if audio/video timing is especially important.
You can also check if assigning more memory and vcpus helps, if you have not already.
Launching many qubes, like the Qubes updater does, can also cause significant stutter with slow drives in my experience.
Also remember that H.264 decodes a lot faster.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20210309172952.GA821%40mail2-dvm.


Re: [qubes-users] Special template to isolate less trusted software?

2020-09-06 Thread 'hut7no' via qubes-users
I do this, but I use a squid proxy setup from rustybird to cache updates.
Starting up and shutting down VMs still takes the same amount of time though.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20200906173353.GB911%40mail2-dvm.


Re: [qubes-users] saltstack: user specific pillars in qubes

2020-09-06 Thread 'hut7no' via qubes-users
I personally have pillars in /srv/pillar/tops_d and added the top file 
/srv/pillar/_tops/base/tops_d.top.
The top file includes relative paths from /srv/pillar/ with a dot instead of a 
slash:
base:
  '*':
    - tops_d.statefile1
    - tops_d.statefile2
    - tops_d.statefile3
    - tops_d.statefile4

I do not use any command-line arguments specifying pillars, just {{ salt['pillar.get']('pillar_variable') }} from the state files in /srv/salt.
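As a sketch (the state id, path, and pillar key are just examples), a state file in /srv/salt can then use the pillar like this:

{% set myvalue = salt['pillar.get']('pillar_variable', 'fallback') %}

example-config:
  file.managed:
    - name: /etc/example.conf
    - contents: |
        value={{ myvalue }}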

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20200906172121.GA911%40mail2-dvm.


Re: [qubes-users] How to check (in BASH and dom0) whether a appVM exists?

2020-05-27 Thread 'hut7no' via qubes-users
qvm-check checks for existence by default.
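For example (a sketch; "work" is just a placeholder qube name, and qvm-check exits 0 when the qube exists):

if qvm-check --quiet work; then
    echo "work exists"
else
    echo "work does not exist"
fi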

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20200527180320.GD805%40mail2-dvm.


Re: [qubes-users] qubes-mirage-firewall 0.7

2020-05-27 Thread 'hut7no' via qubes-users
Thank you so much for your work; I have saved so much time and memory thanks to your amazing firewall!

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20200527180038.GC805%40mail2-dvm.


Re: [qubes-users] Qubes awarded MOSS Mission Partners grant!

2020-05-27 Thread 'hut7no' via qubes-users
Congratulations!

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20200527175521.GB805%40mail2-dvm.


Re: [qubes-users] Private Tor Bridge.

2020-05-27 Thread 'hut7no' via qubes-users
> I notice that Tor has a means for "Bridges."  A Bridge being an IP Address 
> that allows one to make a first hop to an IP Address that the ISP, or local 
> server is not expecting, or blocking.  
> 
> My problem being that if one was in a place like China, then the government 
> is surely trying to gather up all the Bridges which the Tor network has.   

If Tor bridges are not enough, you might want to try Psiphon.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20200527175141.GA805%40mail2-dvm.


[qubes-users] Fixing failed lvm recovery

2020-03-08 Thread 'hut7no' via qubes-users
I recently deleted three large AppVMs by accident.
I panicked and shut off my computer to avoid further writes.
The AppVMs were on a secondary USB drive, configured as shown in the Qubes documentation.
I tried to recover using the relevant /etc/lvm/archive/ files with vgcfgrestore, and ended up using --force since I'm using thin volumes.
I ran each command with --test first, and it seemed to work, then ran it without --test, for each .vg file, from the most recent to the oldest relevant file.
When I then tried to activate the volume group with vgchange -ay qubes, I got errors saying the transaction_id was wrong.
lvscan shows the whole drive as inactive.
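Concretely, the sequence I ran looked roughly like this (the archive file name is a placeholder for each of the relevant .vg files):
sudo vgcfgrestore --test --force -f /etc/lvm/archive/qubes_XXXXX.vg qubes
sudo vgcfgrestore --force -f /etc/lvm/archive/qubes_XXXXX.vg qubes
sudo vgchange -ay qubes
sudo lvscan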
Is there anything I can do to fix this, or at least recover some individual 
files?
Thank you in advance.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20200308183920.GA838%40mail2-dvm.
Activating logical volume qubes/poolhd0 exclusively.
activation/volume_list configuration setting not defined: Checking only 
host tags for qubes/poolhd0.
Creating qubes-poolhd0_tmeta
Loading qubes-poolhd0_tmeta table (253:343)
Resuming qubes-poolhd0_tmeta (253:343)
Creating qubes-poolhd0_tdata
Loading qubes-poolhd0_tdata table (253:344)
Resuming qubes-poolhd0_tdata (253:344)
Executing: /usr/sbin/thin_check -q --clear-needs-check-flag 
/dev/mapper/qubes-poolhd0_tmeta
Creating qubes-poolhd0-tpool
Loading qubes-poolhd0-tpool table (253:345)
Resuming qubes-poolhd0-tpool (253:345)
  Thin pool qubes-poolhd0-tpool (253:345) transaction_id is 206, while expected 
202.
Removing qubes-poolhd0-tpool (253:345)
Removing qubes-poolhd0_tdata (253:344)
Removing qubes-poolhd0_tmeta (253:343)
Activating logical volume qubes/vm-test-random-private exclusively.
activation/volume_list configuration setting not defined: Checking only 
host tags for qubes/vm-test-random-private.
Creating qubes-poolhd0_tmeta
Loading qubes-poolhd0_tmeta table (253:343)
Resuming qubes-poolhd0_tmeta (253:343)
Creating qubes-poolhd0_tdata
Loading qubes-poolhd0_tdata table (253:344)
Resuming qubes-poolhd0_tdata (253:344)
Executing: /usr/sbin/thin_check -q --clear-needs-check-flag 
/dev/mapper/qubes-poolhd0_tmeta
Creating qubes-poolhd0-tpool
Loading qubes-poolhd0-tpool table (253:345)
Resuming qubes-poolhd0-tpool (253:345)
  Thin pool qubes-poolhd0-tpool (253:345) transaction_id is 206, while expected 
202.
Removing qubes-poolhd0-tpool (253:345)
Removing qubes-poolhd0_tdata (253:344)
Removing qubes-poolhd0_tmeta (253:343)
Activating logical volume qubes/vm-test-random-2-private exclusively.
activation/volume_list configuration setting not defined: Checking only 
host tags for qubes/vm-test-random-2-private.
Creating qubes-poolhd0_tmeta
Loading qubes-poolhd0_tmeta table (253:343)
Resuming qubes-poolhd0_tmeta (253:343)
Creating qubes-poolhd0_tdata
Loading qubes-poolhd0_tdata table (253:344)
Resuming qubes-poolhd0_tdata (253:344)
Executing: /usr/sbin/thin_check -q --clear-needs-check-flag 
/dev/mapper/qubes-poolhd0_tmeta
Creating qubes-poolhd0-tpool
Loading qubes-poolhd0-tpool table (253:345)
Resuming qubes-poolhd0-tpool (253:345)
  Thin pool qubes-poolhd0-tpool (253:345) transaction_id is 206, while expected 
202.
Removing qubes-poolhd0-tpool (253:345)
Removing qubes-poolhd0_tdata (253:344)
Removing qubes-poolhd0_tmeta (253:343)
Activating logical volume qubes/vm-storage8-dvm-bk-private exclusively.
activation/volume_list configuration setting not defined: Checking only 
host tags for qubes/vm-storage8-dvm-bk-private.
Creating qubes-poolhd0_tmeta
Loading qubes-poolhd0_tmeta table (253:343)
Resuming qubes-poolhd0_tmeta (253:343)
Creating qubes-poolhd0_tdata
Loading qubes-poolhd0_tdata table (253:344)
Resuming qubes-poolhd0_tdata (253:344)
Executing: /usr/sbin/thin_check -q --clear-needs-check-flag 
/dev/mapper/qubes-poolhd0_tmeta
Creating qubes-poolhd0-tpool
Loading qubes-poolhd0-tpool table (253:345)
Resuming qubes-poolhd0-tpool (253:345)
  Thin pool qubes-poolhd0-tpool (253:345) transaction_id is 206, while expected 
202.
Removing qubes-poolhd0-tpool (253:345)
Removing qubes-poolhd0_tdata (253:344)
Removing qubes-poolhd0_tmeta (253:343)
Activating logical volume qubes/vm-storage8-dvm-bk-private-import 
exclusively.
activation/volume_list configuration setting not defined: Checking only 
host tags for qubes/vm-storage8-dvm-bk-private-import.
Creating qubes-poolhd0_tmeta
Loading qubes-poolhd0_tmeta table (253:343)
Resuming qubes-poolhd0_tmeta (253:343)
Creating qubes-poolhd0_tdata
Loading qubes-poolhd0_tdata table (253:344)
 

Re: [qubes-users] Re: VLC gets black when maximized

2020-02-18 Thread 'hut7no' via qubes-users
I would also suggest MPlayer for performance if mpv does not work for you.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/20200218113436.GA885%40mail2-dvm.


Re: [qubes-users] UPnP in Qubes

2019-12-26 Thread 'hut7no' via qubes-users

December 23, 2019 6:09 PM, "'hut7no' via qubes-users" wrote:


I have a setup like this:
AppVM -> FirewallVM -> NetVM -> internet
and want to be able to use UPnP for the AppVM.
Do any of you know how you would set this up in Qubes?




I don't know, but I was actually wondering about this too. You mean for 
automatic port forwarding, right? I would imagine you can get a UPnP daemon 
that allows you to set commands to be run when a client requests port 
forwarding, and have it run qvm-firewall or iptables/nftables accordingly. You 
may have to write some glue code but it should be fairly simple.
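As a very rough, untested sketch of the kind of rules such glue code would need to add in the NetVM (the port and addresses are placeholders; the Qubes firewall documentation linked below describes the full procedure):

# In sys-net: DNAT an incoming TCP port to the FirewallVM
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6881 -d <sys-net-external-IP> -j DNAT --to-destination <sys-firewall-IP>
iptables -I FORWARD 2 -i eth0 -d <sys-firewall-IP> -p tcp --dport 6881 -m conntrack --ctstate NEW -j ACCEPT
# Repeat the same DNAT/ACCEPT pair in the FirewallVM, pointing at the AppVM's IP.

A UPnP daemon's hook (or glue code watching it) would add and remove such rules per request, or call qvm-firewall for the AppVM-facing side.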

See:
https://www.qubes-os.org/doc/firewall/
https://qubes-core-admin-client.readthedocs.io/en/latest/manpages/qvm-firewall.html
https://www.qubes-os.org/doc/networking/
https://manpages.ubuntu.com/manpages/precise/man5/upnpd.conf.5.html (Just an 
example. There are many implementations out there and this one may or may not 
be the best choice.)

If you find a solution, please follow up and share what you came up with.


Thank you for the info, I'll follow up if I solve it.

--
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/fefad9af-9e66-ff7c-61a2-60af4486006b%40tt3j2x4k5ycaa5zt.onion.


[qubes-users] UPnP in Qubes

2019-12-23 Thread 'hut7no' via qubes-users

I have a setup like this:
AppVM -> FirewallVM -> NetVM -> internet
and want to be able to use UPnP for the AppVM.
Do any of you know how you would set this up in Qubes?

--
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/4800d296-0722-9b78-b16c-de83c5bfaffe%40tt3j2x4k5ycaa5zt.onion.


Re: [qubes-users] What's the logic behind many similar templates?

2019-11-29 Thread 'hut7no' via qubes-users

tetrahedra via qubes-users:

By default Qubes comes with two templates for AppVMs: a Debian template
and a Fedora one.

But many people seem to clone templates, so they also have, e.g., a
"fedora-minimal" template or a "-multimedia" one or any number of other
variations.

Why not just have "one template to rule them all" for each distribution
(Fedora and Debian)?



Smaller attack surface, and faster. If you want to do this, you might want a 
squid caching proxy to pass updates from a VM, through the proxy, to 
your previous update VM.

https://github.com/rustybird/qubes-updates-cache
It reduces network usage and seems to work well.
Read the security considerations, and the code if you are really serious.

--
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/44525d96-e13a-a0c5-db9f-627213d06ab4%40tt3j2x4k5ycaa5zt.onion.