Re: [qubes-users] /usr/bin/notify-send does not work when invoked via qubes-rpc

2019-04-22 Thread tomhet
> Can I ask why you want to do this?
> Why don't you just call notify-send in the qube?

First some introduction:
- I have a 'storage' VM. It takes over the storage devices (a few HDDs) on
startup and exposes them via NFS & Samba to other VMs and physical machines.
Each VM can access only what is exported for it, rw or read-only.
- When 'storage' is started, it has to trigger the assignment of block devices
to itself (and un-assign them on shutdown). Achieving this is discussed here:
https://groups.google.com/d/topic/qubes-users/RogG5rXG_Pw/discussion
- Assigning/un-assigning is triggered from 'storage' via the qubes-rpc actions
'storage-attach/detach'.

I just want to be notified on screen for each device that is
assigned/de-assigned. And since assigning/de-assigning happens in dom0,
'notify-send' is called there. It seems clumsy for dom0 to call back into
'storage' to run 'notify-send' there, but if that is the only working option in
Q4, I'll do it. I'll try it on my next Q4 run.
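For reference, a minimal sketch of that call-back (nothing more than qvm-run;
the qube name matches my setup and the message text is just an example):

   # In dom0, right after the attach/detach action:
   qvm-run storage "notify-send 'Attached device to storage'"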

  Tom



Re: [qubes-users] Log qubes firewall packets

2019-04-21 Thread tomhet
> >> Wondering how to log packets blocked and accepted by the Qubes firewall
> >> for a specific VM, or for all VMs if that's the only option? Couldn't find
> >> anything on the website, on Google, or in qvm-firewall.

> > Unfortunately, the Qubes firewall was not designed for such a use case.
> > 
> > If you are familiar with iptables (and nftables too), you may be able to
> > work around this limitation. But it is really not trivial to achieve.

So, logging is done via the -j LOG target, like this (with the same match rules
as the rule that performs the actual action):
   iptables -t nat -A SSH2 -j LOG --log-prefix "DNAT SSH2-tunnel: "
   iptables -t nat -A SSH2 -j DNAT -p tcp --to 10.137.2.11:22

For blocked packets, add a LOG rule just before the DROP rules. You should
review all chains and tables. Add your changes to
sys-firewall:/rw/config/qubes-firewall-user-script. Be careful when
inserting/appending rules, as Qubes dynamically changes the tables.
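A minimal, untested sketch of such an addition (the chain, prefix and rate
limit are illustrative; check the actual chain layout with 'iptables -S FORWARD'
first and place the rule just above the final DROP if there is one):

   #!/bin/sh
   # sys-firewall:/rw/config/qubes-firewall-user-script (sketch)
   # Log forwarded packets that no earlier rule accepted, rate-limited so the
   # journal is not flooded.
   iptables -A FORWARD -m limit --limit 10/min \
       -j LOG --log-prefix "qubes-fw dropped: " --log-level info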

By default, LOG writes to the kernel log (which the systemd journal picks up),
but this is configurable.
Your question is not really Qubes-specific; it is a general iptables question.



[qubes-users] /usr/bin/notify-send does not work when invoked via qubes-rpc

2019-04-21 Thread tomhet
Hi all,

I'm struggling to get something that works in Q3.2 to also work in Qubes 4:

- I need to display a message from an AppVM on the screen (via notify-send in
dom0), triggered via qubes-rpc from the AppVM "storage". The RPC procedure is
named 'storage.log'.
- The allow policy in dom0:
  > cat /etc/qubes-rpc/policy/storage.log:
  > storage dom0 allow
  > $anyvm $anyvm deny
- procedure:
  > cat /etc/qubes-rpc/storage.log
  > #!/usr/bin/bash
  > 
  > read message
  > /usr/bin/notify-send "$message"

The RPC is called this way in the AppVM 'storage':
  > echo 'message from vm'|qrexec-client-vm dom0 storage.log

What happens:
- In Q3.2 this has worked for years.
- In Q4 the action is executed, but nothing is displayed on screen. The action
really does run: in journalctl I see that 'storage.log' is allowed, and if I add
'echo $message|systemd-cat', that message is logged as well.


- If it's run directly in dom0, the message is displayed on the screen:
  > echo 'dom0 message'|/etc/qubes-rpc/storage.log


It seems like /usr/bin/notify-send does nothing when invoked via RPC.
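One possible cause (an assumption on my side, not verified): the dom0 service
may run outside the graphical session, so notify-send cannot reach the session
D-Bus and silently does nothing. A sketch of the RPC script with the environment
set explicitly (adjust the uid path if the service does not run as the desktop
user):

   #!/usr/bin/bash
   # /etc/qubes-rpc/storage.log (sketch; the env-var workaround is a guess)
   export DISPLAY=:0
   export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/$(id -u)/bus"

   read -r message
   /usr/bin/notify-send "$message"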

Any ideas?



[qubes-users] Qubes4: net,fw,appvms-all in 10.137.0.* ?

2019-04-21 Thread tomhet
Hi all,

  I did a test install of Qubes 4 to check migration from 3.2, and I was
surprised that sys-net, sys-firewall and all AppVMs share the same network,
10.137.0.*.
This is in contrast to 3.2, where AppVMs are in 10.137.2.*, firewalls in
10.137.1.*, and just sys-net is in 10.137.0.*
(as drawn in http://roscidus.com/blog/images/qubes/qubes-net.png).

Why this change? I assume it did not happen accidentally in the current v4
installer.

It seems I'll have to make a lot of IP changes in the iptables rules in
sys-firewall (and in each AppVM that accepts connections from other AppVMs). I
was hoping that AppVM IPs would be kept when restoring from a 3.2 backup (which
did not happen anyway).

If 10.137.0.* is shared between all types of VMs, there is no point in trying
to manually set IPs to match those of 3.2 (in the qubes .xml files describing VM
configs), as the networks are different anyway.
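(If pinning addresses turns out to be useful after all: R4 seems to expose an
'ip' property per VM, so something like the following might work; untested, and
the address is just an example taken from my old 3.2 setup.)

   # In dom0 (sketch): give a VM a fixed address so existing iptables rules keep matching.
   qvm-prefs storage ip 10.137.2.11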

regards,
  Tom




Re: [qubes-users] Removing Thunderbird from fedora-29 removes 68 packages (of which 11 qubes packages)

2019-04-20 Thread tomhet
Great, thank you both!

1. 'dnf mark install qubes-vm-recommended'
2. 'dnf remove thunderbird-qubes' (on which the above metapackage depends). This
also removes the recommended meta-package, but leaves its other dependencies in
place.
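Consolidated, the sequence from the two steps above looks like this (run in the
fedora-29 template):

   # Mark the meta-package as explicitly installed, then remove the Thunderbird
   # integration package; the meta-package goes with it, its other dependencies stay.
   sudo dnf mark install qubes-vm-recommended
   sudo dnf remove thunderbird-qubes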

Let's mark this as completed




[qubes-users] Removing Thunderbird from fedora-29 removes 68 packages (of which 11 qubes packages)

2019-04-19 Thread tomhet
Hi guys,

  I installed Q4.0.1 on a USB HDD to see the changes from 3.2.
As I've decided to use fedora-29 for system-related VMs, I wanted to remove 
large apps like Firefox and Thunderbird from it.
But running 'dnf remove thunderbird' in the f29 template resulted in the removal
of 67 other packages, which seem important.

Any idea what's wrong?
I used the latest Qubes ISO and updated dom0 and the fedora-29 template before
this removal.

Note: I've stripped all but the first column of the output for readability.
{code}
Removing:
 thunderbird  
Removing dependent packages:
 qubes-vm-recommended  
Removing unused dependencies:
 ethtool   
 fakeroot  
 fakeroot-libs 
 js-jquery 
 libnftnl  
 libtomcrypt   
 libtommath
 mozilla-filesystem
 nautilus-python   
 net-tools 
 nftables  
 openpgm   
 pciutils  
 pciutils-libs 
 pulseaudio-qubes  
 python-systemd-doc
 python2-babel 
 python2-backports 
 python2-backports-ssl_match_hostname  
 python2-backports_abc 
 python2-cairo 
 python2-chardet   
 python2-crypto 
 python2-futures
 python2-idna   
 python2-ipaddress
 python2-jinja2   
 python2-markupsafe
 python2-msgpack   
 python2-nose  
 python2-numpy 
 python2-olefile   
 python2-pillow
 python2-psutil
 python2-pycurl
 python2-pysocks   
 python2-pytz  
 python2-pyyaml
 python2-qubesimgconverter
 python2-requests 
 python2-singledispatch   
 python2-six  
 python2-systemd  
 python2-tornado  
 python2-urllib3  
 python2-xpyb 
 python2-zmq  
 qubes-core-agent-dom0-updates
 qubes-core-agent-nautilus
 qubes-core-agent-network-manager
 qubes-core-agent-networking 
 qubes-core-agent-passwordless-root
 qubes-gpg-split   
 qubes-img-converter   
 qubes-input-proxy-sender  
 qubes-mgmt-salt-vm-connector  
 qubes-pdf-converter   
 qubes-usb-proxy   
 salt  
 salt-ssh  
 socat 
 thunderbird-qubes 
 tinyproxy 
 usbutils  
 web-assets-filesystem 
 zeromq

Transaction Summary

Remove  68 Packages
{code}

regards,
  tom



Re: [qubes-users] Protect AppVM init startup scripts:

2017-05-05 Thread tomhet
Suggestion: instead of having "VMs that boot 'cleanly'", I'd propose adding the
following option:

- the configuration data that lives in /rw/config (usrlocal) and is cleaned out
by these scripts/services would instead be fetched from dom0 (or a dedicated VM)
based on the VM's name.

This should happen after the cleanup service and before the Qubes code that
executes /rw/config/rc.local (or sets the firewall rules).

The purpose is to keep the current (original 3.2) configuration behavior while
ensuring the configuration cannot be modified by malware, without ending up
with a completely empty 'clean boot'.
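To make it concrete, a rough sketch of how it could work, using a made-up
qrexec service name (custom.FetchConfig) and dom0 path, plus a matching allow
policy in /etc/qubes-rpc/policy/custom.FetchConfig:

dom0: /etc/qubes-rpc/custom.FetchConfig
---
#!/bin/sh
# Serve the calling VM its own config bundle. qrexec sets QREXEC_REMOTE_DOMAIN
# to the caller's name, so a VM can only ever fetch its own directory.
exec tar -cf - -C "/var/lib/qubes-vm-config/$QREXEC_REMOTE_DOMAIN" .
---

VM side, run early in boot (after the cleanup, before rc.local):
---
#!/bin/sh
qrexec-client-vm dom0 custom.FetchConfig | tar -xf - -C /rw/config
---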

What do you think?



[qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-11-23 Thread tomhet
So, after Marek's fix here, https://github.com/QubesOS/qubes-issues/issues/1659,
is it correct that I can expect the following from it:
- HVM passthrough working using a stub domain via xl
  (following your guide above, excluding 'qemu-xen-traditional')
And not:
- HVM passthrough working for a VM created with Qubes Manager and started with
it / qvm-start?

regards,
  tom





[qubes-users] Re: Unmounting USB Devices at VM shutdown

2016-08-01 Thread tomhet
> If we shutdown a VM that has a USB storage device attached, the VM still 
> indicates the device being attached to the VM. But when you restart the VM, 
> the device is not accessible and can no longer be disconnected.

As far as I remember, this should fix it: "sudo udevadm trigger --action=change"
If not, here's the entire guide for your case: https://www.qubes-os.org/doc/usb/

 
> I've just had to reboot the entire system every time I forget to unmount 
> drives before shutting down a VM.
Here's a thread that discusses auto attach/detach/mount/umount on VM startup
and shutdown (although it is not USB-specific):
https://groups.google.com/forum/#!msg/qubes-users/RogG5rXG_Pw/9UWUTzl-QgAJ


> One other thing, the gui in qubes seems to always develop 'artifacts' on the 
> screen, such as horizontal lines of 'static'. 
I've never had this issue.

regards,
  Tom



[qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-08-01 Thread tomhet
Hi Marcus,

I'm a bit confused by this:
> Edit /etc/default/grub and add following options (change the pci address if 
> needed)

Which version of Qubes is this? Isn't 3.1 EFI-only?
And for an EFI install, aren't the kernel args supposed to be passed via
/boot/efi/EFI/qubes (the kernel= line)?
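(For reference, my understanding of the EFI variant, as a sketch only; the
kernel version and existing arguments are placeholders, and the actual options
come from the tutorial above:)

   # /boot/efi/EFI/qubes/xen.cfg: append the options to the kernel= line of the
   # active boot entry instead of editing /etc/default/grub.
   kernel=vmlinuz-<version> <existing args> <options from the tutorial>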

regards,
  Tom



Re: [qubes-users] Re: Automatically mount block devices when appvm launches

2016-06-07 Thread tomhet
Thanks for the details, Marek!

> If you want to call it automatically at VM startup, you can create qrexec 
> service for that
Cool, this was my first idea, but I was unable to implement it.
Q1: If I take this route, I'll need the opposite operation (umount & detach) at
VM shutdown (to avoid having to refresh the block devices).
Where should I hook (in the appVM) the opposite shutdown script (injected by
rc.local)? (See the sketch after the P.S. below.)

Q2: If I take the opposite route (a dom0-initiated process), how/where should
the attach/mount script be called from?
- a systemd service that depends on some (which?) Qubes service?
- /etc/init.d/rc.local (not sure whether the necessary Qubes pieces will already
be available)
- something else?

thanks,
  Tomhet

P.S. An alternative could be event listeners for Qubes events like
'onVmStartComplete'/'beforeVmShutdown', if such events exist and I knew Python.
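For Q1, the kind of thing I have in mind, as a rough, untested sketch (the unit
name and the teardown script path are made up): rc.local could install a
one-shot unit whose ExecStop runs at VM shutdown, unmounting everything and
asking dom0 (via qrexec) to detach the devices.

---
# /etc/systemd/system/storage-teardown.service (sketch, installed by rc.local)
[Unit]
Description=Unmount and detach storage devices at VM shutdown

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# Runs when the unit is stopped, i.e. during shutdown; the script would umount
# the disks and call qrexec-client-vm dom0 to trigger the detach.
ExecStop=/usr/local/bin/storage-teardown.sh

[Install]
WantedBy=multi-user.target
---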



[qubes-users] Re: Automatically mount block devices when appvm launches

2016-06-07 Thread tomhet
Hi, here's my way:
- In the storage VM I've created '/rw/config/fstab' with content that will be
appended to /etc/fstab on each boot. It contains the UUIDs of the partitions
with their target mount points. Note the 'auto' option:
---
[user@storage config]$ cat /rw/config/fstab 
# This should have been appended to /etc/fstab on boot

#Disk1
UUID=3f3564db-7df4-40af-d067-33ed2c049b65  /s/disk1  reiserfs  auto,noatime,users,exec  0 2
UUID=a4e9a4fc-ef0a-962d-3cf2-a6fdfa35a00b  /s/disk2  ext3      auto,noatime,users,exec  0 2
---

The file /rw/config/rc.local creates the mount points and appends the fstab
entries:
---
#!/bin/bash
mkdir -m 770 -p /s/disk1
mkdir -m 770 -p /s/disk2

cat /rw/config/fstab >> /etc/fstab
---

Then in dom0 there is a similar script:
---
#!/bin/bash

qvm-start storage

list=`qvm-block -l | grep -v attached`

awk '
BEGIN{
  attach["HDID"]="dummy";
  attach["ST320ABC"]="dummy";
}

{
  if( ($1 ~ /sd.$/) && ($4 > 0) && attach[$2]){
print "Attaching",$1,$2;
system("qvm-block -a storage " $1);
  }
}
' <<< "$list" 

qvm-run storage 'sudo mount -a'
---

attach[] contains the names of the white-listed devices to be attached to
storage; this is the 2nd column of the table output by the qvm-block command.
After the devices are assigned to the storage VM, 'mount -a' is executed in the
VM.

I have a similar script that unmounts and detaches the devices from the
storage VM.


Currently I still have to find a suitable place and time to run that script.
Suggestions are welcome.

Tomhet

