Re: Looking for good, small, canadian version laptop suggestions

2013-10-18 Thread Jonathan Thornburg
IBM sells refurbished ThinkPads:
  http://www.ibm.com/shop/used/pref
  
http://www-304.ibm.com/shop/americas/content/home/store_IBMPublicCanada/en_CA/icpepcs.html
I have bought a couple of laptops from them in the past, with generally
good experiences.

-- 
-- "Jonathan Thornburg [remove -animal to reply]" 

   "There was of course no way of knowing whether you were being watched
at any given moment.  How often, or on what system, the Thought Police
plugged in on any individual wire was guesswork.  It was even conceivable
that they watched everybody all the time."  -- George Orwell, "1984"



Re: Experiences with OpenBSD RAID5

2013-10-18 Thread Scott McEachern
List, I'm bringing you into the middle of an off-list conversation where 
I'm setting up a RAID10 array. Well, I'm using two RAID1 arrays as the 
drives for a RAID0 array.


All relevant information follows.  Any clue as to why I'm ending up with an 
array 1/4 the size I'm expecting?



On 10/18/13 23:16, Constantine A. Murenin wrote:

No clue what you're talking about; I thought stacking works just fine
since a few releases back.  Are you sure it panic'ed with the
partitions partitioned and specified correctly?

Another question is whether you'd want to have a huge 6TB partition in
OpenBSD -- generally something that's not advised.

C.


Hmm, I stand corrected.  I must have done something wrong.  Either way, 
I'm not quite getting the result I'd hoped for.  Here are the details:


- the 3TB drives in dmesg look like this:

# dmesg|grep sd[0-9]
sd0 at scsibus0 targ 0 lun 0:  SCSI3 0/direct fixed naa.5000c500525bf426
sd0: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd1 at scsibus0 targ 1 lun 0:  SCSI3 0/direct fixed naa.5000c5005265ff15
sd1: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd2 at scsibus0 targ 2 lun 0:  SCSI3 0/direct fixed naa.5000c5004a5baa2e
sd2: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd3 at scsibus0 targ 3 lun 0:  SCSI3 0/direct fixed naa.5000c5004a6e56f1
sd3: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd4 at scsibus2 targ 0 lun 0:  SCSI3 0/direct fixed naa.5000c5004e455146
sd4: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd5 at scsibus2 targ 1 lun 0:  SCSI3 0/direct fixed naa.5000c5004e4a8141
sd5: 2861588MB, 512 bytes/sector, 5860533168 sectors

[snip]

The three RAID1 arrays are created from the above six HDDs like so:

sd9 at scsibus4 targ 1 lun 0:  SCSI2 0/direct fixed
sd9: 2861588MB, 512 bytes/sector, 5860532576 sectors
sd10 at scsibus4 targ 2 lun 0:  SCSI2 0/direct fixed
sd10: 2861588MB, 512 bytes/sector, 5860532576 sectors
sd11 at scsibus4 targ 3 lun 0:  SCSI2 0/direct fixed
sd11: 2861588MB, 512 bytes/sector, 5860532576 sectors

sd9 = sd0a + sd1a; sd10 = sd2a + sd3a; sd11 = sd4a + sd5a

Observe:

# bioctl -i sd9
Volume      Status           Size Device
softraid0 0 Online  3000592678912 sd9     RAID1
          0 Online  3000592678912 0:0.0   noencl <sd0a>
          1 Online  3000592678912 0:1.0   noencl <sd1a>
[ root@elminster:~ ]
# bioctl -i sd10
Volume      Status           Size Device
softraid0 1 Online  3000592678912 sd10    RAID1
          0 Online  3000592678912 1:0.0   noencl <sd2a>
          1 Online  3000592678912 1:1.0   noencl <sd3a>
[ root@elminster:~ ]
# bioctl -i sd11
Volume      Status           Size Device
softraid0 2 Online  3000592678912 sd11    RAID1
          0 Online  3000592678912 2:0.0   noencl <sd4a>
          1 Online  3000592678912 2:1.0   noencl <sd5a>

At this point, I have data on sd10, so I'll only use sd9 and sd11. Here 
are their (lightly snipped for brevity) disklabels:


[ root@elminster:~ ]
# disklabel -pg sd9
# /dev/rsd9c:
label: SR RAID 1
duid: a7a8a62ef8e71b99
total sectors: 5860532576 # total bytes: 2794.5G
boundstart: 0
boundend: 5860532576

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:          2794.5G                0    RAID
  c:          2794.5G                0  unused
[ root@elminster:~ ]
# disklabel -pg sd11
# /dev/rsd11c:
label: SR RAID 1
duid: 4b3e16399fbbbcf6
total sectors: 5860532576 # total bytes: 2794.5G
boundstart: 0
boundend: 5860532576

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:          2794.5G                0    RAID
  c:          2794.5G                0  unused

As you can see above, all is looking good.  sd10, which has data, was 
omitted.


Now, the moment of truth...  (I'm recreating this from memory..)

# bioctl -c 0 -l /dev/sd9a,/dev/sd11a softraid0

I forget exactly what was said (it was one reboot ago), but I ended up 
with this in dmesg:


sd13 at scsibus4 targ 5 lun 0:  SCSI2 0/direct fixed
sd13: 1528871MB, 512 bytes/sector, 3131129344 sectors

(BTW, I have a crypto volume in there as sd12, hence the jump from 11->13)

Do you see the problem with the above?  The disklabel makes it more obvious:

# disklabel -pg sd13
# /dev/rsd13c:
type: SCSI
disk: SCSI disk
label: SR RAID 0
duid: dcfed0a6c6b194e9
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 194903
total sectors: 3131129344 # total bytes: 1493.0G
boundstart: 0
boundend: 3131129344
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:          1493.0G                0  4.2BSD   8192 65536     1
  c:          1493.0G                0  unused

# bioctl -i sd13
Volume      Status           Size Device
softraid0 4 Online  1603138224128 sd13    RAID0
          0 Online  3000592408576 4:0.0   noencl <sd9a>
          1 Online  3000592408576 4:1.0   noencl <sd11a>


This should be a 3TB RAID1 (sd9) + a 3TB RAID1 (sd11) = 6TB RAID0 
(sd13), but I'm getting roughly 1.5TB instead.
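
[Editor's note: the numbers above are consistent with the combined sector
count being truncated to 32 bits somewhere. A quick check, using only the
figures quoted in this message; the shell arithmetic assumes a 64-bit ksh:]

$ expected=$((2 * 5860532576))       # two RAID1 chunks of 5860532576 sectors
$ echo $expected
11721065152
$ echo $((expected % 4294967296))    # the same count modulo 2^32
3131130560

3131130560 is within about 1200 sectors (softraid metadata) of the observed
3131129344 -- i.e. almost exactly the expected 6TB count minus 2^32 sectors.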

Re: SSDs in RAID and bio(4)

2013-10-18 Thread Chris Cappuccio
Darren Spruell [phatbuck...@gmail.com] wrote:
> I don't have a great deal of experience with SSD disks but was spec'ing
> some systems to use them. We'd be doing RAID on the hosts and I'd prefer
> to have something supported by bio(4) for volume management. Do SSDs
> have any impact on ability to do this? Or can one use the same HW RAID
> controllers for volume management and bio(4) doesn't have to deal with
> any differences? Or do SSDs typically require special RAID controllers?
> 
> Looking at Dell R420s and hoping the PERC controller + SSD combination
> will work under bio(4) (although knowing precisely the driver/controller
> would be necessary, I realize).

SSDs present the same interface as any hard disk; bioctl doesn't treat them
any differently than any other disk.

Using them in RAID isn't typically the best idea if you are worried
about write failures bringing the disks to a halt. At least, some failure
modes will affect all SSDs *at the same time* in a RAID configuration,
because the SSDs all receive a similar number of write requests and
similar data.

If softraid develops TRIM support, then you'll get better write
performance (although, if I'm reading right, some modern SSDs use tricks
to minimize the need for TRIM?).

I waited until, uhh, now to start using SSDs in server applications. And
I'm still only counting on them lasting for 2 or 3 years.



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Chris Cappuccio
i'd imagine that putting 'www.facebook.com' in your hosts file will do it,
unless the browser ignores /etc/hosts

you could always use the url filtering mechanism of relayd combined
with pf redirects, but if people really want to bypass it, they'll
do proxies (via ssh even) or remote desktop or vpn or...

why does your personal dislike of Facebook have to affect other network
users?

Stefan Wollny [stefan.wol...@web.de] wrote:
> Hi there,
> 
> having a personal dislike of Facebook (and the me-too systems alike)
> for their impertinent sniffing for private data I tried on my laptop to
> block facebook.com via the hosts file. Interestingly this failed: Calling
> "http://www.facebook.com" always resulted in a lookup for
> "httpS://www.facebook.com" and the respective site showed up in the
> browser (tried firefox and xombrero).
> 
> Well: Besides accepting the fact that those facebook engineers did a
> fine job circumventing the entries in /etc/hosts, I felt immediately
> insecure: The reports on this company's attitude towards even
> non-customers' privacy are legendary. Their respective track record
> earns them the honorable title of "NSA's fittest supporter"...
> 
> Anyway: I think I finally managed to block all their IPs via PF and on
> this laptop I now feel a little less 'observed'. [Yes, I know - this is
> just today's snapshot of IPs!]
> 
> My question is on the squid-server I have running at home: What
> would make more sense - blocking facebook.com via pf.conf likewise, or
> are there reasons to use squid's ACL instead? Performance? Being
> ultra-paranoid and implementing both (or even additionally the
> hosts-file block?)? From my understanding squid should not be able to
> block https-traffic as it is encrypted - or am I wrong here?
> 
> Curious if there is a particular (Open)BSD solution or simply how you
> 'guys and gals' would do it.
> 
> Thank you for sharing your thoughts.
> 
> Cheers,
> STEFAN

-- 
It was the Nicolatians who first coined the separation between lay and clergy.



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Mike.
On 10/19/2013 at 12:27 AM Stefan Wollny wrote:

|Hi there,
|[snip]
|
|My question is on the squid-server I have running at home: What
|would make more sense - blocking facebook.com via pf.conf likewise,
|or are there reasons to use squid's ACL instead? Performance? Being
|ultra-paranoid and implementing both (or even additionally the
|hosts-file block?)? From my understanding squid should not be
|able to block https-traffic as it is encrypted - or am I wrong here?
|
|Curious if there is a particular (Open)BSD solution or simply
|how you 'guys and gals' would do it.


I put privoxy between the browser and squid on my home network.
The privoxy mailing list has discussion about blocking facebook.

Additionally, if you're running firefox, look to see if the
ghostery plug-in would work for you.



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Clint Pachl

mia wrote, On 10/18/13 16:33:
If you're handling DHCP for all of the traffic for your site, why not 
just set up a dns server, point your dhcp clients to this DNS server 
and create an authoritative zone for facebook.com that points to 
somewhere other than facebook?


Running your own DNS resolver is the best solution to deny the whole 
network facebook access. With Unbound this is simple:


# This will block facebook.com and all subdomains.
local-zone: "facebook.com" redirect
local-data: "facebook.com A 127.0.0.1"
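
[Editor's note: local-zone/local-data belong in the server: clause of
unbound.conf; a minimal sketch, with the listen address, network and
file location invented for illustration:]

# unbound.conf
server:
    interface: 192.168.1.1
    access-control: 192.168.1.0/24 allow
    # This will block facebook.com and all subdomains.
    local-zone: "facebook.com" redirect
    local-data: "facebook.com A 127.0.0.1"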

More savvy users could get around this by altering their DNS servers 
manually, which you can stop by blocking DNS traffic out of your network; 
this has the added bonus of cutting down bandwidth out of your network.

Exactly!

If they get really sneaky and try to put host entries in for facebook, 
you can do as you've been doing, blocking IPs, and maybe create a 
script that does an hourly lookup of all facebook IPs, updates your 
pf config, and reloads pf.
If it gets to this point, I'd say they should lose their network 
privileges. ;-) Next thing you know they will be using a proxy server to 
circumvent your IP block. There's always a way around.




Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Stefan Wollny
On Fri, 18 Oct 2013 18:02:55 -0500 (CDT),
Eric Johnson wrote:

> On Sat, 19 Oct 2013, Stefan Wollny wrote:
> 
> > Hi there,
> > 
> > having a personal dislike of Facebook (and the me-too systems alike)
> > for their impertinent sniffing for private data I tried on my
> > laptop to block facebook.com via the hosts file. Interestingly this
> > failed: Calling "http://www.facebook.com" always resulted in a
> > lookup for "httpS://www.facebook.com" and the respective site
> > showed up in the browser (tried firefox and xombrero).
> >
> > ...
> > 
> > Curious if there is a particular (Open)BSD solution or simply how
> > you 'guys and gals' would do it.
> > 
> > Thank you for sharing your thoughts.
> 
> One possibility off the top of my head would be to log all DNS
> requests to syslog and then use syslogc to get a live running stream
> of DNS requests from a syslog memory buffer.  Then whenever you see a
> DNS request for anything to do with facebook, add the ip address of
> the requestor to a pf table and block their web browsing.  After
> about three to five minutes, remove the ip address from the table.
> 
> If every time they try to access facebook, their web browser quits
> working for a few minutes they might get the message.
> 
> Eric
> 

Hi Eric,

sounds pretty nifty to me - this is something I might use at another
site next year. But it's probably a little oversized for my home network
(though a good learning exercise :-) ).

Anyway: Thank you for sharing!

Regards,
STEFAN


With kind regards,

STEFAN WOLLNY

Regulatory Reporting Consultancy
Tel.: +49 (0) 177 655 7875
Fax.: +49 (0) 3212 655 7875
Mail: ste...@wollny.de
GnuPG-Key ID: 0x9C26F1D0



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread mia

On 10/18/13 18:27, Stefan Wollny wrote:

Hi there,

having a personal dislike of Facebook (and the me-too systems alike)
for their impertinent sniffing for private data I tried on my laptop to
block facebook.com via the hosts file. Interestingly this failed: Calling
"http://www.facebook.com" always resulted in a lookup for
"httpS://www.facebook.com" and the respective site showed up in the
browser (tried firefox and xombrero).

Well: Besides accepting the fact that those facebook engineers did a
fine job circumventing the entries in /etc/hosts, I felt immediately
insecure: The reports on this company's attitude towards even
non-customers' privacy are legendary. Their respective track record
earns them the honorable title of "NSA's fittest supporter"...

Anyway: I think I finally managed to block all their IPs via PF and on
this laptop I now feel a little less 'observed'. [Yes, I know - this is
just today's snapshot of IPs!]

My question is on the squid-server I have running at home: What
would make more sense - blocking facebook.com via pf.conf likewise, or
are there reasons to use squid's ACL instead? Performance? Being
ultra-paranoid and implementing both (or even additionally the
hosts-file block?)? From my understanding squid should not be able to
block https-traffic as it is encrypted - or am I wrong here?

Curious if there is a particular (Open)BSD solution or simply how you
'guys and gals' would do it.

Thank you for sharing your thoughts.

Cheers,
STEFAN


If you're handling DHCP for all of the traffic for your site, why not 
just set up a dns server, point your dhcp clients to this DNS server and 
create an authoritative zone for facebook.com that points to somewhere 
other than facebook?


That's traditionally how I block our users from going to places other 
than where I wish them to.


More savvy users could get around this by altering their DNS servers 
manually, which you can stop by blocking DNS traffic out of your network; 
this has the added bonus of cutting down bandwidth out of your network.


If they get really sneaky and try to put host entries in for facebook, 
you can do as you've been doing, blocking IPs, and maybe create a script 
that does an hourly lookup of all facebook IPs, updates your pf config, 
and reloads pf.


Aaron



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Stefan Wollny
On Sat, 19 Oct 2013 01:02:58 +0200,
Marios Makassikis wrote:

Hi Marios!

[ ... ]
> >
> > Anyway: I think I finally managed to block all their IPs via PF and
> > on this laptop I now feel a little less 'observed'. [Yes, I know -
> > this is just today's snapshot of IPs!]
> >  
> 
> Did you block individual IPs or complete subnets?
I used "whois -h whois.radb.net '!gAS32934'" to collect the subnets
first and put those into /etc/facebook. My pf.conf has this:
~~ QUOTE ~
table <facebook> persist file "/etc/facebook"
block log quick on $ExtIF from <facebook> to any
block log quick on $ExtIF from any to <facebook>
 QUOTE END ~~~

Logging is just for some time, to investigate whether this makes sense at
all...

> Performing DNS
> resolution on facebook.com and fbcdn.net yields the 173.252.64.0/18
> subnet. Blocking it is one additional PF rule or just updating a
> table of already blocked subnets / IPs.
>   
> > My question is on the squid-server I have running at home: What
> > would make more sense - blocking facebook.com via pf.conf likewise,
> > or are there reasons to use squid's ACL instead? Performance? Being
> > ultra-paranoid and implementing both (or even additionally the
> > hosts-file block?)? From my understanding squid should not be able
> > to block https-traffic as it is encrypted - or am I wrong here?
> >
> > Curious if there is a particular (Open)BSD solution or simply how
> > you 'guys and gals' would do it.  
> 
> 
> Having squid running on your laptop just to block facebook is way
> overkill IMHO.  

No, no: The squid is running on a regular server at home, securing the
PCs and the laptop when I am around.
> 
> Rather than populating (polluting?) your hosts file, I think using
> adsuck[1] would be simpler and get you similar results, especially if
> you don't want to use an external service such as OpenDNS.
Actually I started with adsuck when I noticed that facebook manages to
circumvent entries in /etc/hosts. I might have done something wrong, but on
my laptop any lookup for facebook.com got redirected to 'https' and
those lines in /var/adsuck/hosts.small had no effect:
# [Facebook]
127.0.0.1  fbstatic-a.akamaihd.net
127.0.0.1  fbcdn-dragon-a.akamaihd.net
127.0.0.1  facebook.com
127.0.0.1  www.facebook.com
127.0.0.1  facebook.de
127.0.0.1  de-de.facebook.com

> 
> It is available as an OpenBSD package, and it's easily configured to
> block more than just facebook.
This is what I had expected.

> 
> Marios
> 
> 
> [1] https://opensource.conformal.com/wiki/adsuck
>   
Thanks a lot for your time to reply!

Regards,
STEFAN



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Stefan Wollny
On Fri, 18 Oct 2013 19:33:11 -0400,
mia wrote:
[ ... ]
> >
> If you're handling DHCP for all of the traffic for your site, why not 
> just set up a dns server, point your dhcp clients to this DNS server
> and create an authoritative zone for facebook.com that points to
> somewhere other than facebook?
> 
> That's traditionally how I block our users from going to places other
> than where I wish them to.
> 
> More savvy users could get around this by altering their DNS servers
> manually, which you can stop by blocking DNS traffic out of your
> network; this has the added bonus of cutting down bandwidth out of
> your network.
> 
> If they get really sneaky and try to put host entries in for
> facebook, you can do as you've been doing, blocking IPs, and maybe
> create a script that does an hourly lookup of all facebook IPs,
> updates your pf config, and reloads pf.
> 
> Aaron

Hi Aaron,

this might be another way to go. I haven't thought about this yet. The
squid-server has enough power to handle this as well (or I could
reactivate an old laptop).

There are at present only two other users left who are not experienced
enough to fiddle with the DNS (at least not yet ;-) ). And other family
members who show up occasionally get FB-access via WLAN on their
smartphones - my prime issue is the stealth connects to FB that I try to
prevent. If a guest just can't live without FB I'd rather pull another
cable to the router and have effectively a 'demilitarized zone' for
them than expose the rest of the family to the wild.

Anyway: Thank you for sharing your ideas!

Regards,
STEFAN



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Stefan Wollny
On Fri, 18 Oct 2013 19:21:44 -0400,
Brian McCafferty wrote:

[ ... ]
> If you use dhclient on your laptop, I think you need to make sure to
> specify "lookup file bind" (the search order) to have the hosts file
> checked before the DNS server, i.e. in resolv.conf.tail ("lookup bind
> file" is the default). Then you can add "127.0.0.1 facebook.com" to
> the hosts file.
> 

Hi Brian,

good point - I had resolv.conf.tail disabled when setting up adsuck on
the laptop. Will test this tomorrow.

Still the question is: As the squid-server at home is dedicated to being
"just a proxy" I am not sure if adsuck is the right tool on this
machine. Prior to trying my luck with adsuck on the laptop I had only
the entries for facebook in the hosts file - with no effect. This is
why I am about to either use pf.conf on the server as well, or a
squid ACL.

Thank you for joining the discussion.

Regards,
STEFAN

With kind regards,

STEFAN WOLLNY

Regulatory Reporting Consultancy
Tel.: +49 (0) 177 655 7875
Fax.: +49 (0) 3212 655 7875
Mail: ste...@wollny.de
GnuPG-Key ID: 0x9C26F1D0



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Brian McCafferty
On 10/18/13 18:27, Stefan Wollny wrote:
> Hi there,
> 
> having a personal dislike of Facebook (and the me-too systems alike)
> for their impertinent sniffing for private data I tried on my laptop to
> block facebook.com via the hosts file. Interestingly this failed: Calling
> "http://www.facebook.com" always resulted in a lookup for
> "httpS://www.facebook.com" and the respective site showed up in the
> browser (tried firefox and xombrero).
> 
> Well: Besides accepting the fact that those facebook engineers did a
> fine job circumventing the entries in /etc/hosts, I felt immediately
> insecure: The reports on this company's attitude towards even
> non-customers' privacy are legendary. Their respective track record
> earns them the honorable title of "NSA's fittest supporter"...
> 
> Anyway: I think I finally managed to block all their IPs via PF and on
> this laptop I now feel a little less 'observed'. [Yes, I know - this is
> just today's snapshot of IPs!]
> 
> My question is on the squid-server I have running at home: What
> would make more sense - blocking facebook.com via pf.conf likewise, or
> are there reasons to use squid's ACL instead? Performance? Being
> ultra-paranoid and implementing both (or even additionally the
> hosts-file block?)? From my understanding squid should not be able to
> block https-traffic as it is encrypted - or am I wrong here?
> 
> Curious if there is a particular (Open)BSD solution or simply how you
> 'guys and gals' would do it.
> 
> Thank you for sharing your thoughts.
> 
> Cheers,
> STEFAN
> 
> 
> 

If you use dhclient on your laptop, I think you need to make sure to
specify "lookup file bind" (the search order) to have the hosts file
checked before the DNS server, i.e. in resolv.conf.tail ("lookup bind
file" is the default). Then you can add "127.0.0.1 facebook.com" to
the hosts file.
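
[Editor's note: a sketch of the two pieces described above. dhclient
appends /etc/resolv.conf.tail to the resolv.conf it generates, and the
lookup keyword is documented in resolv.conf(5); the hosts entries are
illustrative:]

# /etc/resolv.conf.tail
lookup file bind

# /etc/hosts
127.0.0.1	facebook.com www.facebook.com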



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Stefan Wollny
Hi Andres,

yes - I have read about OpenDNS' services and that many out there are
really happy with them.

But I try to do my homework first before relying on someone
else: I _do_ have this OpenBSD-based squid-server - why not use it to
its full potential? Might not be a big deal traffic-wise, but it
adds up...

Anyway - thank you for sharing.

Regards,
STEFAN


On Fri, 18 Oct 2013 17:42:31 -0500,
Andres Genovez wrote:

> Regards,
> 
> The way it gets blocked properly (but not entirely, for a wise kid) is
> via CIDR, and blocking DNS via OpenDNS services.
> 
> 
> Greetings.
> 
> 
> 2013/10/18 Stefan Wollny 
> 
> > Hi there,
> >
> > having a personal dislike of Facebook (and the me-too systems alike)
> > for their impertinent sniffing for private data I tried on my
> > laptop to block facebook.com via the hosts file. Interestingly this
> > failed: Calling "http://www.facebook.com" always resulted in a
> > lookup for "httpS://www.facebook.com" and the respective site
> > showed up in the browser (tried firefox and xombrero).
> >
> > Well: Besides accepting the fact that those facebook engineers did a
> > fine job circumventing the entries in /etc/hosts, I felt immediately
> > insecure: The reports on this company's attitude towards even
> > non-customers' privacy are legendary. Their respective track record
> > earns them the honorable title of "NSA's fittest supporter"...
> >
> > Anyway: I think I finally managed to block all their IPs via PF and
> > on this laptop I now feel a little less 'observed'. [Yes, I know -
> > this is just today's snapshot of IPs!]
> >
> > My question is on the squid-server I have running at home: What
> > would make more sense - blocking facebook.com via pf.conf likewise,
> > or are there reasons to use squid's ACL instead? Performance? Being
> > ultra-paranoid and implementing both (or even additionally the
> > hosts-file block?)? From my understanding squid should not be able
> > to block https-traffic as it is encrypted - or am I wrong here?
> >
> > Curious if there is a particular (Open)BSD solution or simply how
> > you 'guys and gals' would do it.
> >
> > Thank you for sharing your thoughts.
> >
> > Cheers,
> > STEFAN
> >
> >
> 
> 
> --
> Atentamente
> 
> Andrés Genovez Tobar / DTIT
> Perfil profesional http://lnkd.in/gcdhJE
> 


With kind regards,

STEFAN WOLLNY

Regulatory Reporting Consultancy
Tel.: +49 (0) 177 655 7875
Fax.: +49 (0) 3212 655 7875
Mail: ste...@wollny.de
GnuPG-Key ID: 0x9C26F1D0



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Marios Makassikis
On 19 October 2013 00:27, Stefan Wollny  wrote:
>
> Hi there,
>
> having a personal dislike of Facebook (and the me-too systems alike)
> for their impertinent sniffing for private data I tried on my laptop to
> block facebook.com via the hosts file. Interestingly this failed: Calling
> "http://www.facebook.com" always resulted in a lookup for
> "httpS://www.facebook.com" and the respective site showed up in the
> browser (tried firefox and xombrero).
>
> Well: Besides accepting the fact that those facebook engineers did a
> fine job circumventing the entries in /etc/hosts, I felt immediately
> insecure: The reports on this company's attitude towards even
> non-customers' privacy are legendary. Their respective track record
> earns them the honorable title of "NSA's fittest supporter"...
>
> Anyway: I think I finally managed to block all their IPs via PF and on
> this laptop I now feel a little less 'observed'. [Yes, I know - this is
> just today's snapshot of IPs!]
>

Did you block individual IPs or complete subnets? Performing DNS resolution
on facebook.com and fbcdn.net yields the 173.252.64.0/18 subnet.
Blocking it is one additional PF rule, or just updating a table of
already blocked subnets / IPs.

> My question is on the squid-server I have running at home: What
> would make more sense - blocking facebook.com via pf.conf likewise, or
> are there reasons to use squid's ACL instead? Performance? Being
> ultra-paranoid and implementing both (or even additionally the
> hosts-file block?)? From my understanding squid should not be able to
> block https-traffic as it is encrypted - or am I wrong here?
>
> Curious if there is a particular (Open)BSD solution or simply how you
> 'guys and gals' would do it.


Having squid running on your laptop just to block facebook is way overkill IMHO.

Rather than populating (polluting?) your hosts file, I think using
adsuck[1] would be simpler and get you similar results, especially if
you don't want to use an external service such as OpenDNS.

It is available as an OpenBSD package, and it's easily configured to
block more than just facebook.

Marios


[1] https://opensource.conformal.com/wiki/adsuck


>
>
> Thank you for sharing your thoughts.
>
> Cheers,
> STEFAN



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Eric Johnson
On Sat, 19 Oct 2013, Stefan Wollny wrote:

> Hi there,
> 
> having a personal dislike of Facebook (and the me-too systems alike)
> for their impertinent sniffing for private data I tried on my laptop to
> block facebook.com via the hosts file. Interestingly this failed: Calling
> "http://www.facebook.com" always resulted in a lookup for
> "httpS://www.facebook.com" and the respective site showed up in the
> browser (tried firefox and xombrero).
>
> ...
> 
> Curious if there is a particular (Open)BSD solution or simply how you
> 'guys and gals' would do it.
> 
> Thank you for sharing your thoughts.

One possibility off the top of my head would be to log all DNS requests to 
syslog and then use syslogc to get a live running stream of DNS requests 
from a syslog memory buffer.  Then whenever you see a DNS request for 
anything to do with facebook, add the ip address of the requestor to a pf 
table and block their web browsing.  After about three to five minutes, 
remove the ip address from the table.
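
[Editor's note: a sketch of the add/remove cycle using a persist table and
pfctl alone; the table name and address are invented:]

# in pf.conf:
#   table <fb-offenders> persist
#   block quick proto tcp from <fb-offenders> to any port { 80 443 }

# when a facebook lookup is spotted, penalize the requestor:
pfctl -t fb-offenders -T add 192.0.2.23

# from cron every minute: drop entries older than 5 minutes
pfctl -t fb-offenders -T expire 300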

If, every time they try to access facebook, their web browser quits working 
for a few minutes, they might get the message.

Eric



Re: Blocking facebook.com: PF or squid?

2013-10-18 Thread Andres Genovez
Regards,

The way it gets blocked properly (but not entirely, for a wise kid) is via
CIDR, and blocking DNS via OpenDNS services.


Greetings.


2013/10/18 Stefan Wollny 

> Hi there,
>
> having a personal dislike of Facebook (and the me-too systems alike)
> for their impertinent sniffing for private data I tried on my laptop to
> block facebook.com via the hosts file. Interestingly this failed: Calling
> "http://www.facebook.com" always resulted in a lookup for
> "httpS://www.facebook.com" and the respective site showed up in the
> browser (tried firefox and xombrero).
>
> Well: Besides accepting the fact that those facebook engineers did a
> fine job circumventing the entries in /etc/hosts, I felt immediately
> insecure: The reports on this company's attitude towards even
> non-customers' privacy are legendary. Their respective track record
> earns them the honorable title of "NSA's fittest supporter"...
>
> Anyway: I think I finally managed to block all their IPs via PF and on
> this laptop I now feel a little less 'observed'. [Yes, I know - this is
> just today's snapshot of IPs!]
>
> My question is on the squid-server I have running at home: What
> would make more sense - blocking facebook.com via pf.conf likewise, or
> are there reasons to use squid's ACL instead? Performance? Being
> ultra-paranoid and implementing both (or even additionally the
> hosts-file block?)? From my understanding squid should not be able to
> block https-traffic as it is encrypted - or am I wrong here?
>
> Curious if there is a particular (Open)BSD solution or simply how you
> 'guys and gals' would do it.
>
> Thank you for sharing your thoughts.
>
> Cheers,
> STEFAN
>
>


--
Atentamente

Andrés Genovez Tobar / DTIT
Perfil profesional http://lnkd.in/gcdhJE



Blocking facebook.com: PF or squid?

2013-10-18 Thread Stefan Wollny
Hi there,

having a personal dislike of Facebook (and the me-too systems alike)
for their impertinent sniffing for private data I tried on my laptop to
block facebook.com via the hosts file. Interestingly this failed: Calling
"http://www.facebook.com" always resulted in a lookup for
"httpS://www.facebook.com" and the respective site showed up in the
browser (tried firefox and xombrero).

Well: Besides accepting the fact that those facebook engineers did a
fine job circumventing the entries in /etc/hosts, I felt immediately
insecure: The reports on this company's attitude towards even
non-customers' privacy are legendary. Their respective track record
earns them the honorable title of "NSA's fittest supporter"...

Anyway: I think I finally managed to block all their IPs via PF and on
this laptop I now feel a little less 'observed'. [Yes, I know - this is
just today's snapshot of IPs!]

My question is on the squid-server I have running at home: What
would make more sense - blocking facebook.com via pf.conf likewise, or
are there reasons to use squid's ACL instead? Performance? Being
ultra-paranoid and implementing both (or even additionally the
hosts-file block?)? From my understanding squid should not be able to
block https-traffic as it is encrypted - or am I wrong here?

Curious if there is a particular (Open)BSD solution or simply how you
'guys and gals' would do it.

Thank you for sharing your thoughts.

Cheers,
STEFAN



Re: dump(8) and permissions

2013-10-18 Thread Alexander Hall

On 10/11/13 15:38, Rodolfo Gouveia wrote:

On Fri, Oct 11, 2013 at 09:04:16AM -0400, Jiri B wrote:

Try `su' to your user on that system and try to `ls -lR' those dirs,
I suppose he won't be able to do that.

j.


Thanks Jiri.
Indeed he can't.

I've looked at this closer and I found out that on some machines dump
doesn't give any error even though the user 'backup' can't list the
contents of the folder:
  $ whoami
  backup
  $ ls -lhd /var/audit
  drwxrws---  2 root  wheel   512B Mar 13  2013 /var/audit
  $ ls -lhR /var/audit
  ls: audit: Permission denied

The difference I found between those machines is the partition layout.
Machine with no errors:
  $ mount
  /dev/sd0a on / type ffs (local)
  /dev/sd0g on /home type ffs (local, nodev, nosuid)
  /dev/sd0d on /tmp type ffs (local, nodev, nosuid)
  /dev/sd0f on /usr type ffs (local, nodev)
  /dev/sd0e on /var type ffs (local, nodev, nosuid)
Machine with errors:
  $ mount
  /dev/sd0a on / type ffs (local)

So the difference is that when '/var' is a real partition, dump doesn't
complain at all.
Does this make sense?


Yes, most likely.

If you dump a mount point, e.g. /var in the first machine, it will read 
from the device (/dev/rsd0e). The operator group normally has the read 
bits for that.


If you dump a non-mount point (e.g. /var in the second machine), it 
requires reading the file system itself.
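
[Editor's note: a quick way to check both halves of this; the output is
illustrative, but raw disk devices on OpenBSD are root:operator with
group read permission:]

$ ls -l /dev/rsd0e
crw-r-----  1 root  operator  ...  /dev/rsd0e   # operator can read the device
$ ls -lR /var/audit
ls: audit: Permission denied                    # plain file access still denied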


I'm quite positive this is what you're hitting here.

/Alexander



Re: new queueing subsystem

2013-10-18 Thread Andy Lemin
I think he did answer your question, if you read between the lines. A session 
cannot be 'pushed' to max! It needs to demand the bandwidth in the first place. 
Try reading this: http://trash.net/~kaber/hfsc/SIGCOM97.pdf

This, alongside /many/ other Internet pages, allowed us to fully implement and 
utilise hfsc, and frankly it is awesome.

It is admittedly a complex system but is very powerful and does have a few 
shortcomings as Henning implied.

At some point I'm planning to write an hfsc how-to for our guys in the company 
I work for, as there are a lot of 'rules' which need to be followed to write 
effective hfsc queues; I will post it here for others when I get to it.

All that said, I myself have one last question... What is the difference if any 
between an hfsc 'priority' and a 'prio' metric?

My understanding is that the hfsc priority has a lesser effect than prio. Hfsc 
'priority' has a range double that of 'prio', but seeing as VLAN TOS is mapped 
into prio, I make sure that my hfsc 'priority' values map to my 'prio' values, 
as I don't know any better.

I feel your pain though, as hfsc is complex; but it replaces cbq, and 'red' is 
dead (read up about ECN (explicit congestion notification)).

Good luck.. Andy

Sent from my iPhone

> On 18 Oct 2013, at 18:50, Boris Goldberg  wrote:
> 
> Hello Henning,
> 
> Friday, October 18, 2013, 5:37:23 AM, you wrote:
> 
>>>  I extensively use cbq and am very confused by the current queuing manual.
>>> It seems that the actual speed will be somewhere between "min" and "max"
>>> (and won't be equal to "bandwidth"), but how do I get an idea where?
> 
> HB> bandwidth is the target bandwidth, the actual assigned one is
> HB> somewhere between min and max indeed.
> 
>  You do realize that you haven't answered the question, don't you? Your
> previous email and the presentation help a bit, but not really.
>  Will the actual queue speed be pushed towards "max" or "bandwidth" (and
> how close) if other "siblings" are almost still?
>  Will the actual queue speed be pushed towards "min" or "bandwidth" (and
> how close) if other "siblings" are extremely busy?
>  Other tips to migrate extensive cbq queues (with borrowing) would be
> very helpful and appreciated.
> 
> -- 
> Best regards,
> Boris  mailto:bo...@twopoint.com



Re: new queueing subsystem

2013-10-18 Thread Boris Goldberg
Hello Henning,

Friday, October 18, 2013, 5:37:23 AM, you wrote:

>>   I extensively use cbq and am very confused by the current queuing manual.
>> It seems that the actual speed will be somewhere between "min" and "max"
>> (and won't be equal to "bandwidth"), but how do I get an idea where?

HB> bandwidth is the target bandwidth, the actual assigned one is
HB> somewhere between min and max indeed.

  You do realize that you haven't answered the question, don't you? Your
previous email and the presentation help a bit, but not really.
  Will the actual queue speed be pushed towards "max" or "bandwidth" (and
how close) if other "siblings" are almost still?
  Will the actual queue speed be pushed towards "min" or "bandwidth" (and
how close) if other "siblings" are extremely busy?
  Other tips to migrate extensive cbq queues (with borrowing) would be
very helpful and appreciated.

-- 
Best regards,
 Boris  mailto:bo...@twopoint.com



Re: Kernel TCP recv / send buffer auto scaling

2013-10-18 Thread Sebastian Reitenbach
On Friday, October 18, 2013 16:53 CEST, Frederic URBAN 
 wrote: 
 
> Hello guys,
> 
> Since 4.9, there is an auto-scaling tcp buffer size - good functionality -
> but I've a question. I'm using a pair of OpenBSD servers as squid
> proxies. Our internet bandwidth is 1Gb/s so we are able to download @ 90
> Mbytes/s if needed.
> 
> When I was using OpenBSD 4.8, I was tuning kernel parameters
> (net.inet.tcp.recv/sendbuffer) to allow faster downloads through the
> proxy, but with 5.1 and now 5.3 I'm not able to download faster than 3.5
> Mbytes/s. Any idea how I could achieve that with 5.3/5.4?

you may fiddle around with the SB_MAX value on both ends in
/cvs/src/sys/sys/socketvar.h
and recompile the kernel,

or you may want to try the patch I sent to tech@ on 25.12.2011 with the subject:
Re: raise max value for tcp autosizing buffer

this adds a sysctl that allows you to do that.

Note: use at your own risk

Sebastian


> 
> Fred !



Kernel TCP recv / send buffer auto scaling

2013-10-18 Thread Frederic URBAN

Hello guys,

Since 4.9, there is an auto-scaling tcp buffer size - good functionality -
but I've a question. I'm using a pair of OpenBSD servers as squid
proxies. Our internet bandwidth is 1Gb/s so we are able to download @ 90
Mbytes/s if needed.


When I was using OpenBSD 4.8, I was tuning kernel parameters
(net.inet.tcp.recv/sendbuffer) to allow faster downloads through the
proxy, but with 5.1 and now 5.3 I'm not able to download faster than 3.5
Mbytes/s. Any idea how I could achieve that with 5.3/5.4?


Fred !



Re: AX88179 usb gigabit ethernet

2013-10-18 Thread Jonathan Gray
There is a driver in development in -current that is not yet
enabled, axen(4).  It is not part of 5.4.

On Fri, Oct 18, 2013 at 10:29:27PM +0800, man Chan wrote:
> Hello,
> 
> I recently bought a pci usb gigabit ethernet card with the AX88179 chipset
> and updated the source to 5.4. After making the new 5.4 kernel, I found out
> that I still can't use the usb gigabit ethernet. Is there anyone using the
> usb gigabit ethernet under 5.4? Any idea how to solve the problem?
> 
> The system
> reports the usb gigabit ethernet as follows:
> 
> ugen0 at uhub1 port 2 "ASIX
> Elec. Corp. AX88179" rev 2.10/1.00 addr 2
> 
> Thanks.
> 
> clarence



AX88179 usb gigabit ethernet

2013-10-18 Thread man Chan
Hello,

I recently bought a pci usb gigabit ethernet card with the AX88179 chipset
and updated the source to 5.4.  After making the new 5.4 kernel, I found out
that I still can't use the usb gigabit ethernet.  Is there anyone using the
usb gigabit ethernet under 5.4?  Any idea how to solve the problem?

The system
reports the usb gigabit ethernet as follows:

ugen0 at uhub1 port 2 "ASIX
Elec. Corp. AX88179" rev 2.10/1.00 addr 2

Thanks.

clarence



BGP & CARP - suggestions?

2013-10-18 Thread Adam Thompson
I've got two OpenBSD boxes acting as my border router[s], talking BGP to 
a small number (~4) of peers.
At the moment, I've got them using carp(4) on every interface, and 
bgpd.conf has, for each neighbor{} stanza, a "depend on carpX" line.
This works, more or less, but failover is anything but instantaneous - 
at least one upstream loses my advertisements for a couple of minutes in 
a failover event.  Also, their default gateway points to a non-BGP 
router so they have a "back door" if bgp fails completely for some 
reason (e.g. a typo in bgpd.conf; not sure what else), so I lose outbound 
connectivity until bgpd establishes new sessions and pulls in an entire 
routing table.
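
[Editor's note: a sketch of the neighbor stanza described above, with the
address, AS numbers, description and interface invented:]

# /etc/bgpd.conf
AS 65000
neighbor 192.0.2.1 {
	remote-as 65001
	descr "upstream-1"
	depend on carp0
}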


I think I can solve the outbound loss of connectivity during failover 
simply by changing the default gateway to point at a BGP peer.


The loss of inbound would, at first glance, appear to be caused by my 
peer not having soft-reconfig enabled, but they say it is enabled for 
them, and it's supposed to be on by default in bgpd(8) on my side.


Any ideas/suggestions/recommendations?

For at least one peer, I can probably get them to peer with both routers 
simultaneously - but a) does this add much value?, and b) would it work 
at all if the "LAN" interface [so to speak] is currently not the CARP 
master?


--
-Adam Thompson
 athom...@athompso.net



Re: ntfs with big files

2013-10-18 Thread David Vasek

On Thu, 17 Oct 2013, David Vasek wrote:


On Fri, 11 Oct 2013, Joel Sing wrote:


On Thu, 10 Oct 2013, Manuel Giraud wrote:

Hi,

I have an ntfs partition with rather large (about 3GB) files on it. When
I copy these files to a ffs partition they are corrupted. When I try to
checksum them directly from the ntfs partition, the checksum is not
correct (compared to the same file on a fat32 partition copied with
Windows).

I tried this (with the same behaviour) on i386 5.3 release and on i386
last week's current. I'm willing to do some testing to fix this issue
but don't really know where to start.


See if you can isolate the smallest possible reproducible test case. If you
create a 3GB file with known content (e.g. the same byte repeated), does the
same issue occur? If so, how small do you need to go before the problem goes
away? Also, what operating system (and version) was used to write the files
to the NTFS volume?


Hello, I encountered the same issue. Anything over the 2 GB limit is wrong.
I mean, the first exactly 2 GB of the file are read correctly; following
that I get wrong data till the end of the file. It is reproducible with any
file over 2 GB in size so far. Smells like an int somewhere... I get the
same wrong data with any release since at least 5.0; I didn't test anything
older, but I bet it is the same.

The filesystem is a Windows XP NTFS system disk, 32-bit; the files were
copied there with explorer.exe.


Some additional notes and findings:

(1)
The data I receive after the first 2 GB is not part of the file; the data
is from another file (from the same directory, if that fact could be
important). The data is taken in an uninterrupted sequence, and the
starting offset of that sequence is way less than 2 GB in the other file
where the data belongs.


(2)
While reading past 2 GB in larger blocks gives me just wrong data, reading
in smaller blocks (2kB and less) gives me a kernel panic in a KASSERT
immediately when I read past the 2 GB limit. It is 100% reproducible with
any file larger than 2 GB so far.
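
[Editor's note: the "exactly 2 GB" boundary fits the int theory: 2 GB =
2147483648 bytes = 2^31, one past INT32_MAX (2147483647), so a byte offset
or count held in a signed 32-bit integer overflows at precisely the point
where the reads in the transcript below go wrong.]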


# mount -r /dev/wd0i /mnt

# ls -lo /mnt/DATA/ntfs_2gb_test.bin
-rwxr-xr-x  1 root  wheel  - 3054813184 Oct 17 22:11 /mnt/DATA/ntfs_2gb_test.bin

# cat /mnt/DATA//ntfs_2gb_test.bin > /dev/null

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=4k of=/dev/null
745804+0 records in
745804+0 records out
3054813184 bytes transferred in 108.518 secs (28150083 bytes/sec)

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=2k count=1m of=/dev/null
1048576+0 records in
1048576+0 records out
2147483648 bytes transferred in 78.783 secs (27258052 bytes/sec)

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=1k count=2m of=/dev/null
2097152+0 records in
2097152+0 records out
2147483648 bytes transferred in 81.210 secs (26443280 bytes/sec)

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=4k skip=512k of=/dev/null
221516+0 records in
221516+0 records out
907329536 bytes transferred in 32.314 secs (28077667 bytes/sec)

# dd if=/mnt/DATA/ntfs_2gb_test.bin bs=2k skip=1m of=/dev/null
panic: kernel diagnostic assertion "cl == 1 && tocopy <= ntfs_cntob(1)" failed: file 
"../../../../ntfs/ntfs_subr.c", line 1556
Stopped at  Debugger+0x4:   popl%ebp
RUN AT LEAST 'trace' AND 'ps' AND INCLUDE OUTPUT WHEN REPORTING THIS PANIC!
DO NOT EVEN BOTHER REPORTING THIS WITHOUT INCLUDING THAT INFORMATION!
ddb> trace
Debugger(d08fdcbc,f544fb88,d08dc500,f544fb88,200) at Debugger+0x4
panic(d08dc500,d085fc0e,d08dfe60,d08e00b0,614) at panic+0x5d
__assert(d085fc0e,d08e00b0,614,d08dfe60,8) at __assert+0x2e
ntfs_readntvattr_plain(d1a2d200,d1a36200,d1a5bc00,8800,0) at ntfs_readntvat
tr_plain+0x2e6
ntfs_readattr_plain(d1a2d200,d1a36200,80,0,8800) at ntfs_readattr_plain+0x1
41
ntfs_readattr(d1a2d200,d1a36200,80,0,8800) at ntfs_readattr+0x156
ntfs_read(f544fddc,d64e5140,d6522a60,f544fea0,0) at ntfs_read+0xa8
VOP_READ(d6522a60,f544fea0,0,d6599000,d64e5140) at VOP_READ+0x35
vn_read(d65290a8,d65290c4,f544fea0,d6599000,0) at vn_read+0xb5
dofilereadv(d65365d4,3,d65290a8,f544ff08,1) at dofilereadv+0x13a
sys_read(d65365d4,f544ff64,f544ff84,106,d653f100) at sys_read+0x89
syscall() at syscall+0x227
--- syscall (number 0) ---
0x2:
ddb> ps
   PID   PPID   PGRPUID  S   FLAGS  WAIT  COMMAND
*19967   9961  19967  0  7   0dd
  9961  1   9961  0  30x88  pause sh
14  0  0  0  30x100200  aiodoned  aiodoned
13  0  0  0  30x100200  syncerupdate
12  0  0  0  30x100200  cleaner   cleaner
11  0  0  0  30x100200  reaperreaper
10  0  0  0  30x100200  pgdaemon  pagedaemon
 9  0  0  0  30x100200  bored crypto
 8  0  0  0  30x100200  pftm  pfpurge
 7  0  0  0  30x100200  usbtskusbtask
 6  0  0  0  30x100200  usbatsk   usbatsk
 5  0  0  0  30x100200  acpi0 acpi0
 4  

Re: porter's handbook - pkg-readmes

2013-10-18 Thread Marc Espie
On Fri, Oct 18, 2013 at 11:28:44AM +, Stuart Henderson wrote:
> On 2013-10-18, Gabriel Guzman  wrote:
> > +Does my package need a readme?
> > +A package may require special instructions to run on OpenBSD, or 
> > +additional files may need to be downloaded before the port will work 
> > +properly, or your port may rely on additional packages to support 
> > +additional functionality.  If this is the case, and you are unable to 
> 
> new sentence, new line
> 
> > +provide those features via flavors, a readme may be warranted.
> 
> I wouldn't talk about flavours here, this encourages people to add
> flavours just for optional deps

Also, flavors description is supposed to be a part of DESCR, style-wise.



Re: Experiences with OpenBSD RAID5

2013-10-18 Thread Scott McEachern

On 10/18/13 07:31, Stuart Henderson wrote:

On 2013-10-18, Scott McEachern  wrote:

Circumstances change, and I might be able to redeploy those HDDs as a
RAID5 array.  This, at least in theory, would allow the 18TB total to be
realized as 15TB as RAID5, gaining me 6TB.

even if softraid would rebuild raid5, I'd worry about additional
disk failures before/during rebuild for a volume of this sort of size..
(especially given that rebuilding is not automatic with softraid).



Follow-up:

Thanks to all that replied publicly and privately, the information was 
most helpful.


RAID5 can't rebuild, so that's a show stopper right there.

However, now I understand why something I thought (at first) would be 
important has been left unwritten:  RAID5 has its own lengthy set of 
problems.  Like Stuart and others said, the potential for a secondary 
HDD failure causing a catastrophic failure of the entire volume is far 
greater than most people think.  This link was given to me off-list, and 
it's worth the 60 seconds it takes to read (it's short and to the point):


http://www.miracleas.com/BAARF/Why_RAID5_is_bad_news.pdf

My primary goal with RAID is data integrity, with total capacity taking 
a back seat.  As much as, in my case, 6TB seems like a rather large 
loss, the potential for RAID5 failure to gain that 6TB isn't worth it.  
Simply put, RAID1 (or even better, RAID10) is a superior course of 
action for data integrity.


Assuming the numbers provided by CERN in that PDF are anywhere near 
accurate, it seems to me that using RAID5 is not only counter to the 
reason for RAID in the first place, but even reckless.


Thanks again folks for the advice.  I'm sticking to RAID1.

--
Scott McEachern

https://www.blackstaff.ca

"Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, 
kidnappers, and child pornographers. Seems like you can scare any public into allowing 
the government to do anything with those four."  -- Bruce Schneier



Re: ntfs with big files

2013-10-18 Thread David Coppa
On Fri, Oct 18, 2013 at 1:32 PM, Paolo Aglialoro  wrote:
> Just a thought: now that fuse support is enabled what about ntfs-3g?

ntfs-3g is in ports (sysutils/ntfs-3g).
It's fuse support that, as of now, is not enabled.

Ciao,
David



Re: Experiences with OpenBSD RAID5

2013-10-18 Thread Stuart Henderson
On 2013-10-18, Scott McEachern  wrote:
> Circumstances change, and I might be able to redeploy those HDDs as a 
> RAID5 array.  This, at least in theory, would allow the 18TB total to be 
> realized as 15TB as RAID5, gaining me 6TB.

even if softraid would rebuild raid5, I'd worry about additional
disk failures before/during rebuild for a volume of this sort of size..
(especially given that rebuilding is not automatic with softraid).



Re: ntfs with big files

2013-10-18 Thread Paolo Aglialoro
Just a thought: now that fuse support is enabled what about ntfs-3g?

On 17/Oct/2013 23:36, "David Vasek" wrote:
>
> On Fri, 11 Oct 2013, Joel Sing wrote:
>
>> On Thu, 10 Oct 2013, Manuel Giraud wrote:
>>>
>>> Hi,
>>>
>>> I have an ntfs partition with rather large (about 3GB) files on it. When
>>> I copy these files to a ffs partition they are corrupted. When I try to
>>> checksum them directly from the ntfs partition, the checksum is not
>>> correct (compared to the same file on a fat32 partition copied with
>>> Windows).
>>>
>>> I tried this (with the same behaviour) on i386 5.3 release and on i386
>>> last week's current. I'm willing to do some testing to fix this issue
>>> but don't really know where to start.
>>
>>
>> See if you can isolate the smallest possible reproducible test case. If
>> you create a 3GB file with known content (e.g. the same byte repeated),
>> does the same issue occur? If so, how small do you need to go before the
>> problem goes away? Also, what operating system (and version) was used to
>> write the files to the NTFS volume?
>
>
> Hello, I encountered the same issue. Anything over the 2 GB limit is
> wrong. I mean, the first exactly 2 GB of the file are read correctly;
> following that I get wrong data till the end of the file. It is
> reproducible with any file over 2 GB in size so far. Smells like an int
> somewhere... I get the same wrong data with any release since at least
> 5.0; I didn't test anything older, but I bet it is the same.
>
> The filesystem is a Windows XP NTFS system disk, 32-bit; the files were
> copied there with explorer.exe.
>
> Regards,
> David



Re: porter's handbook - pkg-readmes

2013-10-18 Thread Stuart Henderson
On 2013-10-18, Gabriel Guzman  wrote:
> +Does my package need a readme?
> +A package may require special instructions to run on OpenBSD, or 
> +additional files may need to be downloaded before the port will work 
> +properly, or your port may rely on additional packages to support 
> +additional functionality.  If this is the case, and you are unable to 

new sentence, new line

> +provide those features via flavors, a readme may be warranted.

I wouldn't talk about flavours here, this encourages people to add
flavours just for optional deps

> +
> +Creating and installing a readme
> +You can create a README file in the pkg directory of your port with 
> +the relevant information.  Once your file is complete, add 

should tell people to base it on infrastructure/README.template

> +
> +@cwd ${LOCALBASE}/share/doc/pkg-readmes
> +${FULLPKGNAME}
> +
> +to your generated PLIST file.  This will allow the pkg tools to copy 
> +your README to /usr/local/share/doc/pkg-readmes/ when your port is 
> +installed, and notify the user to check there for additional 
> +information.

Adding this manually with @cwd is only for things where PREFIX has
been changed; normally "make plist" picks this up by itself.



Re: new queueing subsystem

2013-10-18 Thread Henning Brauer
* Boris Goldberg  [2013-10-17 15:59]:
>   You probably need to mention that the new queuing is using the hfsc model
> and what the hfsc model is.

I don't think anyone should have to care what algorithm is being used
under the hood.

>   I extensively use cbq and am very confused by the current queuing manual.
> It seems that the actual speed will be somewhere between "min" and "max"
> (and won't be equal to "bandwidth"), but how do I get an idea where?

bandwidth is the target bandwidth, the actual assigned one is
somewhere between min and max indeed.

>   Does the "set prio" affect this queuing or just creates some separate
> queues?

prio doesn't create queues AT ALL.

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS Services. Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: new queueing subsystem

2013-10-18 Thread Henning Brauer
* Johan Beisser  [2013-10-16 21:09]:
> Right. I guess if I want to define multiple queues for matching
> traffic, I need to either redo the filter rules to use tagging*, or
> simply do it per outbound bit of traffic.

let's make that outright clear: defining queues is for bandwidth
shaping only. there is no need (or sense) to do so for prio queueing.
prio queueing is ALWAYS there & on for a couple of releases now, and
some things like carp announcements are prioritized by default. there is
nothing to configure for prio queueing (why should there be), every
ifqueue, be it on an interface or elsewhere, e.g. ipintrq, has 8
sub-queues that are processed in order at dequeue time.

for the new bandwidth shaping subsystem, it is rather more powerful
than altq, unless you were one of the extremely rare ones that figured
out hfsc in altq. cbq is gone & dead, it doesn't make any sense any
more since it can entirely be expressed in hfsc.

Someone asked about borrowing: there is always borrowing, up to the bw
specified with "max". the newqueue commits included one to pf.conf.5,
and I think the QUEUEING section is pretty clear.
More background can be found in my presentation on the topic at
http://bulabula.org/papers/2012/eurobsdcon/
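
[Editor's note: a minimal sketch of the new queue syntax as described here
and in pf.conf(5); interface, rates and queue names are invented:]

queue main on em0 bandwidth 100M
queue ssh  parent main bandwidth 10M min 5M max 40M
queue bulk parent main bandwidth 90M default
match out on em0 proto tcp to port ssh set queue ssh

# bulk, having no max set, may borrow up to the parent's 100M while ssh
# is idle; ssh is never pushed past its 40M max.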

What is missing now is RED. Aside, and as said before, I'd like to
change hfsc itself so that we can queue in any queue and not just leaf
queues; that would allow for much more flexibility. That's a change to
the algorithm itself tho, and anything but trivial (kudos if someone
beats me to it).

ALTQ was super important at its time, when bandwidth shaping and the
like were rather new and there was a lot of research going on. That is
what ALTQ was written for, and it is a major reason for there being a
sound understanding of queueing effects now - and that's why altq has
pluggable schedulers (different queueing algorithms), so many buttons
and options: research. 

To get people an idea: the altq core alone is over 7000 lines of code,
and that's without the hooks into the rest of the kernel, without the
userland bits, and without documentation - that roughly doubles that
number. In contrast, the entire newqueue _diff_ (larger due to
context!): 
4423 newqueue.diff
to be fair, that's without prio queueing, but the entire prio queueing
stuff must be well under a thousand loc - prio queueing is rather
simple.

As said before, I plan to remove altq after the 5.5 release, so that
release will have both and people have time to migrate.
I won't do such a huge parallel-backwards-compat circus again, it has
been a nightmare.

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS Services. Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: npppd / pppoe server troubles

2013-10-18 Thread Gruel Bruno

On 18-10-2013 6:18, YASUOKA Masahiko wrote:

Hi,

On Wed, 16 Oct 2013 21:10:25 +0200
Gruel Bruno  wrote:

As I thought that it doesn't read my users file, I changed the
username & password but nothing else.


Yes, the log shows the session is terminated because the passwords are
mismatched.

I checked with the snapshots below, but I could not repeat the problem.

  OpenBSD 5.4-current (GENERIC) #77: Sun Oct 13 17:27:52 MDT 2013
  dera...@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC

  OpenBSD 5.4-current (GENERIC) #66: Sun Oct 13 15:54:12 MDT 2013
  dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC

Can you try again with the patch below?  I'd like to get a log for debugging.

Index: npppd/pap.c
===
RCS file: /cvs/openbsd/src/usr.sbin/npppd/npppd/pap.c,v
retrieving revision 1.7
diff -u -p -r1.7 pap.c
--- npppd/pap.c 18 Sep 2012 13:14:08 -  1.7
+++ npppd/pap.c 18 Oct 2013 04:06:27 -
@@ -341,7 +341,11 @@ pap_local_authenticate(pap *_this, const
pap_response(_this, 1, DEFAULT_SUCCESS_MESSAGE);
return;
}
-   }
+   pap_log(_this, LOG_INFO, "password mismatch %s<>%s",
+   password, password0);
+   } else
+   pap_log(_this, LOG_INFO, "could not get password for %s",
+   username);
pap_response(_this, 0, DEFAULT_FAILURE_MESSAGE);
 }


I'll try it tonight and give you the logs.

Thanks.

Bruno