Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Otto Moerbeek
On Tue, Feb 19, 2013 at 08:42:01AM +0100, Janne Johansson wrote:

> 2013/2/19 Keith :
> > Q. How do I make the default web folder /var/www/ capable of holding
> > millions of files (say 50GB worth of small 2kb-12kb files) so that I won't
> > get inode issues?
> 
> Since you probably aren't going to have 50G/2k number of files in a
> single dir, then you'd be wise to make several filesystems for the
> directories you have there, especially for the fsck reasons mentioned
> by others in this thread.
> Fsck'ing ten 5G filesystems with lots of inodes will be far more fun than
> one of 50G in size. And chances are quite big that not all of those 10

A 50G filesystem created with defaults has more than 6 million inodes
and on a system with a decent amount of memory checks pretty quickly.

If you run ffs2 with softdep, an optimization kicks in that makes
the number of *used* inodes the driving factor, instead of the total
number of inodes on the fs.
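
Softdep is enabled per mount; an fstab(5) line along these lines would
turn it on (the sd0k partition and the mount point are hypothetical):

    /dev/sd0k /var/www/data ffs rw,softdep,noatime 1 2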

> will have issues even on unclean shutdowns, so you would be able to
> skip over a few in such an event.

Likely all will be unclean and need to be checked.

Anyway, make sure the number of files per directory does not grow
without bound. Use a max of a couple of 1000 files per directory as a
rule of thumb.
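
To illustrate, a two-level fan-out keeps each directory small. A minimal
sh sketch, assuming the data directory and the .gz suffix (both
hypothetical here):

    # move each file into a subdirectory named after its first two
    # characters, e.g. foo123.nzb.gz -> fo/foo123.nzb.gz
    cd /var/www/data
    for f in *.gz; do
            d=$(echo "$f" | cut -c1-2)
            mkdir -p "$d" && mv "$f" "$d/"
    done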

-Otto



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread MJ
Which app are you running that is generating millions of tiny files in a single 
directory?  Regardless, in this case OpenBSD is not the right tool for the job. 
You need either FreeBSD or a Solaris variant to handle this problem because you 
need ZFS.


What limits does ZFS have?
---
The limitations of ZFS are designed to be so large that they will never be 
encountered in any practical operation. ZFS can store 16 Exabytes in each 
storage pool, file system, file, or file attribute. ZFS can store billions of 
names: files or directories in a directory, file systems in a file system, or 
snapshots of a file system. ZFS can store trillions of items: files in a file 
system, file systems, volumes, or snapshots in a pool.


I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it were then 
that would pretty much eliminate the need for my one and only FreeBSD box ;-)



On Feb 19, 2013, at 2:35 AM, Keith  wrote:

> Q. How do I make the default web folder /var/www/ capable of holding millions 
> of files (say 50GB worth of small 2kb-12kb files) so that I won't get inode 
> issues?
> 
> The problem is that my server has the default disk layout, as I didn't expect 
> to have millions of files (I thought they would be stored in the DB). When I 
> started the app it generated all the files and I got out-of-space warnings. I 
> tried moving the folder containing the files and making a symlink back, but 
> that didn't work because nginx is in a chroot.
> 
> The two options I think I have are:
> 
> 1. Reinstall the OS and make a dedicated /var/www partition, but I have no 
> idea how to increase the inode limit.
> 2. Make a new partition, format it, copy the files from the original 
> partition, swap them around and restart nginx. (Do I run newfs with some 
> option to make more inodes?)
> 
> Thanks
> Keith.



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Paolo Aglialoro
Or you could just use ZFS, XFS, whateverFS in a separate unix/linux box and
go NFS on it, simulating a true external storage appliance :)


On Tue, Feb 19, 2013 at 11:47 AM, MJ  wrote:

> Which app are you running that is generating millions of tiny files in a
> single directory?  Regardless, in this case OpenBSD is not the right tool
> for the job. You need either FreeBSD or a Solaris variant to handle this
> problem because you need ZFS.
>
>
> What limits does ZFS have?
> ---
> The limitations of ZFS are designed to be so large that they will never be
> encountered in any practical operation. ZFS can store 16 Exabytes in each
> storage pool, file system, file, or file attribute. ZFS can store billions
> of names: files or directories in a directory, file systems in a file
> system, or snapshots of a file system. ZFS can store trillions of items:
> files in a file system, file systems, volumes, or snapshots in a pool.
>
>
> I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it were
> then that would pretty much eliminate the need for my one and only FreeBSD
> box ;-)
>
>
>
> On Feb 19, 2013, at 2:35 AM, Keith  wrote:
>
> > Q. How do I make the default web folder /var/www/ capable of holding
> > millions of files (say 50GB worth of small 2kb-12kb files) so that I won't
> > get inode issues?
> >
> > The problem is that my server has the default disk layout, as I didn't
> > expect to have millions of files (I thought they would be stored in the
> > DB). When I started the app it generated all the files and I got
> > out-of-space warnings. I tried moving the folder containing the files and
> > making a symlink back, but that didn't work because nginx is in a chroot.
> >
> > The two options I think I have are:
> >
> > 1. Reinstall the OS and make a dedicated /var/www partition, but I have
> > no idea how to increase the inode limit.
> > 2. Make a new partition, format it, copy the files from the original
> > partition, swap them around and restart nginx. (Do I run newfs with
> > some option to make more inodes?)
> >
> > Thanks
> > Keith.



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Rafal Bisingier
Hi,

Or you could fix your application to not do stupid things (like
generating millions of files in a single directory) in the first
place... ;-)


On 2013-02-19 at 12:10 CET
Paolo Aglialoro  wrote:

>Or you could just use ZFS, XFS, whateverFS in a separate unix/linux box and
>go NFS on it, simulating a true external storage appliance :)
>
>
>On Tue, Feb 19, 2013 at 11:47 AM, MJ  wrote:
>
>> Which app are you running that is generating millions of tiny files in a
>> single directory?  Regardless, in this case OpenBSD is not the right tool
>> for the job. You need either FreeBSD or a Solaris variant to handle this
>> problem because you need ZFS.
>>
>>
>> What limits does ZFS have?
>> ---
>> The limitations of ZFS are designed to be so large that they will never be
>> encountered in any practical operation. ZFS can store 16 Exabytes in each
>> storage pool, file system, file, or file attribute. ZFS can store billions
>> of names: files or directories in a directory, file systems in a file
>> system, or snapshots of a file system. ZFS can store trillions of items:
>> files in a file system, file systems, volumes, or snapshots in a pool.
>>
>>
>> I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it were
>> then that would pretty much eliminate the need for my one and only FreeBSD
>> box ;-)
>>
>>
>>
>> On Feb 19, 2013, at 2:35 AM, Keith  wrote:
>>
>> > Q. How do I make the default web folder /var/www/ capable of holding
>> > millions of files (say 50GB worth of small 2kb-12kb files) so that I
>> > won't get inode issues?
>> >
>> > The problem is that my server has the default disk layout, as I didn't
>> > expect to have millions of files (I thought they would be stored in the
>> > DB). When I started the app it generated all the files and I got
>> > out-of-space warnings. I tried moving the folder containing the files
>> > and making a symlink back, but that didn't work because nginx is in a
>> > chroot.
>> >
>> > The two options I think I have are:
>> >
>> > 1. Reinstall the OS and make a dedicated /var/www partition, but I have
>> > no idea how to increase the inode limit.
>> > 2. Make a new partition, format it, copy the files from the original
>> > partition, swap them around and restart nginx. (Do I run newfs with
>> > some option to make more inodes?)
>> >
>> > Thanks
>> > Keith.
>




-- 
Greetings
Rafal Bisingier



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Keith

On 19/02/2013 10:47, MJ wrote:

> Which app are you running that is generating millions of tiny files in a
> single directory?  Regardless, in this case OpenBSD is not the right tool
> for the job. You need either FreeBSD or a Solaris variant to handle this
> problem because you need ZFS.
>
> What limits does ZFS have?
> ---
> The limitations of ZFS are designed to be so large that they will never be
> encountered in any practical operation. ZFS can store 16 Exabytes in each
> storage pool, file system, file, or file attribute. ZFS can store billions
> of names: files or directories in a directory, file systems in a file
> system, or snapshots of a file system. ZFS can store trillions of items:
> files in a file system, file systems, volumes, or snapshots in a pool.
>
> I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it were
> then that would pretty much eliminate the need for my one and only
> FreeBSD box ;-)
>
> On Feb 19, 2013, at 2:35 AM, Keith  wrote:
>
>> Q. How do I make the default web folder /var/www/ capable of holding
>> millions of files (say 50GB worth of small 2kb-12kb files) so that I
>> won't get inode issues?
>>
>> The problem is that my server has the default disk layout, as I didn't
>> expect to have millions of files (I thought they would be stored in the
>> DB). When I started the app it generated all the files and I got
>> out-of-space warnings. I tried moving the folder containing the files
>> and making a symlink back, but that didn't work because nginx is in a
>> chroot.
>>
>> The two options I think I have are:
>>
>> 1. Reinstall the OS and make a dedicated /var/www partition, but I have
>> no idea how to increase the inode limit.
>> 2. Make a new partition, format it, copy the files from the original
>> partition, swap them around and restart nginx. (Do I run newfs with
>> some option to make more inodes?)
>>
>> Thanks
>> Keith.

It's a usenet indexing application called Newznab. It consists of two 
parts: some PHP scripts that do the indexing and generate the pesky 
"nzb.gz" files, and then there's the web front end.


This is running on my home server / firewall, and I think it's almost 
working; I just need to get the partitions sorted out and it should be 
fine. I don't want to switch to FreeBSD for ZFS or introduce another 
machine for an NFS volume.


To be honest I didn't think indexing usenet would be such a big deal, 
but it's turning out to be quite a resource hog.


Keith



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Wayne Oliver
On 19 Feb 2013, at 1:40 PM, Rafal Bisingier wrote:

> Hi,
> 
> Or you could fix your application to not do stupid things (like
> generating millions of files in a single directory) in the first
> place... ;-)

+1

> 
> 
> On 2013-02-19 at 12:10 CET
> Paolo Aglialoro  wrote:
> 
>> Or you could just use ZFS, XFS, whateverFS in a separate unix/linux box and
>> go NFS on it, simulating a true external storage appliance :)
>> 
>> 
>> On Tue, Feb 19, 2013 at 11:47 AM, MJ  wrote:
>> 
>>> Which app are you running that is generating millions of tiny files in a
>>> single directory?  Regardless, in this case OpenBSD is not the right tool
>>> for the job. You need either FreeBSD or a Solaris variant to handle this
>>> problem because you need ZFS.
>>> 
>>> 
>>> What limits does ZFS have?
>>> ---
>>> The limitations of ZFS are designed to be so large that they will never be
>>> encountered in any practical operation. ZFS can store 16 Exabytes in each
>>> storage pool, file system, file, or file attribute. ZFS can store billions
>>> of names: files or directories in a directory, file systems in a file
>>> system, or snapshots of a file system. ZFS can store trillions of items:
>>> files in a file system, file systems, volumes, or snapshots in a pool.
>>> 
>>> 
>>> I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it were
>>> then that would pretty much eliminate the need for my one and only FreeBSD
>>> box ;-)
>>> 
>>> 
>>> 
>>> On Feb 19, 2013, at 2:35 AM, Keith  wrote:
>>> 
>>>> Q. How do I make the default web folder /var/www/ capable of holding
>>>> millions of files (say 50GB worth of small 2kb-12kb files) so that I
>>>> won't get inode issues?
>>>>
>>>> The problem is that my server has the default disk layout, as I didn't
>>>> expect to have millions of files (I thought they would be stored in the
>>>> DB). When I started the app it generated all the files and I got
>>>> out-of-space warnings. I tried moving the folder containing the files
>>>> and making a symlink back, but that didn't work because nginx is in a
>>>> chroot.
>>>>
>>>> The two options I think I have are:
>>>>
>>>> 1. Reinstall the OS and make a dedicated /var/www partition, but I have
>>>> no idea how to increase the inode limit.
>>>> 2. Make a new partition, format it, copy the files from the original
>>>> partition, swap them around and restart nginx. (Do I run newfs with
>>>> some option to make more inodes?)
>>>>
>>>> Thanks
>>>> Keith.
>> 
> 
> 
> 
> 
> -- 
> Greetings
> Rafal Bisingier



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Nick Holland
On 02/19/13 05:47, MJ wrote:
> Which app are you running that is generating millions of tiny files
> in a single directory?  Regardless, in this case OpenBSD is not the
> right tool for the job. You need either FreeBSD or a Solaris variant
> to handle this problem because you need ZFS.
> 
> 
> What limits does ZFS have?
> ---
> The limitations of ZFS are designed to be so large that they will
> never be encountered in any practical operation. ZFS can store 16
> Exabytes in each storage pool, file system, file, or file attribute.
> ZFS can store billions of names: files or directories in a directory,
> file systems in a file system, or snapshots of a file system. ZFS can
> store trillions of items: files in a file system, file systems,
> volumes, or snapshots in a pool.
> 
> 
> I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it
> were then that would pretty much eliminate the need for my one and
> only FreeBSD box ;-)

The usual stated reason is "license": it is completely unacceptable to
OpenBSD.

The other reason, usually not given, which I suspect would become obvious
were the license not an instant non-starter, is the nature of ZFS.  As it
is a major memory hog, it works well only on loaded 64-bit platforms.
Since most of our 64-bit platforms are older, and Alpha and SGI machines
with many gigabytes of memory are rare, you are probably talking about
amd64 and maybe some sparc64 systems.

Also...see the number of "ZFS Tuning Guides" out there.  How...1980s.
The OP here has a "special case" use, but virtually all ZFS uses involve
knob twisting and experimentation, which is about as anti-OpenBSD as you
can get.  Granted, there are a lot of people who love knob-twisting, but
that's not what OpenBSD is about.

I use ZFS, and have a few ZFS systems in production, and what it does is
pretty amazing, but mostly in the sense of the gigabytes of RAM it
consumes for basic operation (and unexplained file system wedging).
I've usually seen it used as a way to avoid good system design.  Yes,
huge file systems can be useful, but usually in papering over basic
design flaws.

Nick.



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Andres Perera
On Tue, Feb 19, 2013 at 8:11 AM, Nick Holland  wrote:

> I use ZFS, and have a few ZFS systems in production, and what it does is
> pretty amazing, but mostly in the sense of the gigabytes of RAM it
> consumes for basic operation (and unexplained file system wedging).
> I've usually seen it used as a way to avoid good system design.  Yes,
> huge file systems can be useful, but usually in papering over basic
> design flaws.

funnily enough, that "avoid[ing] good system design" is exactly what
makes it useful for desktop over server. i don't want to spend any
time figuring out how many gigs for /usr/{src,xenocara}. i also don't
want to partition /usr/ports only to find out later on that there's an
"object" or "tmp" sub-directory that i want on a different fs but i
can't because i've hit the 16 partition limit.

if i ever install an application for experimental reasons, because
it's not a production machine, i don't want to rethink everything to
fit inside the disklabel constraints either. "good system design"
doesn't apply because it's a case where, gasp, the admin couldn't
possibly plan ahead.



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Eric S Pulley
> On 02/19/13 05:47, MJ wrote:
>> Which app are you running that is generating millions of tiny files
>> in a single directory?  Regardless, in this case OpenBSD is not the
>> right tool for the job. You need either FreeBSD or a Solaris variant
>> to handle this problem because you need ZFS.
>>
>>
>> What limits does ZFS have?
>> ---
>> The limitations of ZFS are designed to be so large that they will
>> never be encountered in any practical operation. ZFS can store 16
>> Exabytes in each storage pool, file system, file, or file attribute.
>> ZFS can store billions of names: files or directories in a directory,
>> file systems in a file system, or snapshots of a file system. ZFS can
>> store trillions of items: files in a file system, file systems,
>> volumes, or snapshots in a pool.
>>
>>
>> I'm not sure why ZFS hasn't yet been ported to OpenBSD, but if it
>> were then that would pretty much eliminate the need for my one and
>> only FreeBSD box ;-)
>
> The usual stated reason is "license": it is completely unacceptable to
> OpenBSD.
>
> The other reason, usually not given, which I suspect would become obvious
> were the license not an instant non-starter, is the nature of ZFS.  As it
> is a major memory hog, it works well only on loaded 64-bit platforms.
> Since most of our 64-bit platforms are older, and Alpha and SGI machines
> with many gigabytes of memory are rare, you are probably talking about
> amd64 and maybe some sparc64 systems.
>
> Also...see the number of "ZFS Tuning Guides" out there.  How...1980s.
> The OP here has a "special case" use, but virtually all ZFS uses involve
> knob twisting and experimentation, which is about as anti-OpenBSD as you
> can get.  Granted, there are a lot of people who love knob-twisting, but
> that's not what OpenBSD is about.
>
> I use ZFS, and have a few ZFS systems in production, and what it does is
> pretty amazing, but mostly in the sense of the gigabytes of RAM it
> consumes for basic operation (and unexplained file system wedging).
> I've usually seen it used as a way to avoid good system design.  Yes,
> huge file systems can be useful, but usually in papering over basic
> design flaws.
>
> Nick.
>
>

I feel anyone expecting to run any of the recently hatched filesystems on
10+ year old hardware falls into the design flaw category you mention. As
for needing to turn knobs to get it to work properly, this is not necessary
if you use a modern 64-bit box. Most of the tuning guides are written for
the guys trying to use it on their old hardware, or trying to reach
"performance" numbers for whatever, usually misguided, reason. On a modern
amd64 box it pretty much just works.

As for a port to OpenBSD I'd love it, or a port of LVM, but the biggest
hurdle IMO is the same one that plagues so many other good potential
OpenBSD ports: getting someone competent and dedicated enough to do the
work.

I'm neither of those two things when it comes to porting, so I can only
blame myself that I'm using FreeBSD on my file server and desktop instead
of Open as I'd really like. However, I still have deep reservations about
trusting ZFS long term since Oracle closed it off to the community again.
I don't feel FreeBSD will be able to truly maintain the port over time. I
hope I'm wrong but we will see. So it may be for the best that Open
doesn't waste too much time on it.

-- 
ESP



Re: intel 2000 failure

2013-02-19 Thread Zoran Kolic
Hi Chris!

> did you upgrade X as well? Did you wipe out the old /usr/X11R6 directory so 
> as to guarantee you are not using any old drivers?

I did upgrade X also. Directory /usr/X11R6 shows a February 14th
date. I didn't delete it beforehand; that was not requested by
any tutorial on the OpenBSD site.
I'm puzzled, since people report success with this kind of integrated
hardware.
Best regards

 Zoran



Re: Constant attacks and ISP's are ignoring them

2013-02-19 Thread Chris Cappuccio
Richard Thornton [rich...@thornton.net] wrote:
> Linksys routers default to port forwarding NOT enabled, so check facts 
> before ranting.
> 

Your routers are impervious to penetration.



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Matthias Appel

On 19.02.2013 18:01, Eric S Pulley wrote:

[snip]


> I feel anyone expecting to run any of the recently hatched filesystems on
> 10+ year old hardware falls into the design flaw category you mention. As
> for needing to turn knobs to get it to work properly, this is not necessary
> if you use a modern 64-bit box. Most of the tuning guides are written for
> the guys trying to use it on their old hardware, or trying to reach
> "performance" numbers for whatever, usually misguided, reason. On a modern
> amd64 box it pretty much just works.


Maybe I don't see the big picture, but I assume that if ZFS is opt-in and
not the default FS, memory consumption would only hit those who *really*
run ZFS on their boxes.



> As for a port to OpenBSD I'd love it, or a port of LVM, but the biggest
> hurdle IMO is the same one that plagues so many other good potential
> OpenBSD ports: getting someone competent and dedicated enough to do the
> work.

I have to confess /me is neither competent nor dedicated, but I assume
ZFS support for OpenBSD has to be rewritten from scratch.


And speaking of ZFS, why not consider ext3/4, reiser, xfs, jfs, ntfs,
whatever-fs being ported to OpenBSD?


Don't get me wrong, I would *love* to see ZFS in OpenBSD...but done in 
an OpenBSD-worthy way!

> I'm neither of those two things when it comes to porting, so I can only
> blame myself that I'm using FreeBSD on my file server and desktop instead
> of Open as I'd really like. However, I still have deep reservations about
> trusting ZFS long term since Oracle closed it off to the community again.
> I don't feel FreeBSD will be able to truly maintain the port over time. I
> hope I'm wrong but we will see. So it may be for the best that Open
> doesn't waste too much time on it.




Yupp, I think that's (besides the CDDL part of ZFS) the major
turn-off in any kind of production environment.


At the moment I don't know how FreeBSD handles the ZFS development, but
maintaining a not-really-fully-ZFS alongside Oracle is a no-go, IMHO.
Maybe forking it and calling it whatever-name-you-want-FS would be
better (but that would violate the CDDL, as far as I can see).


If you want to have ZFS, you will have to bite the bullet, throw some
$$$ into Oracle's hive, and get a fully licensed ZFS along with Solaris.


If that's not an option, move along and choose something different.

So, long story short, I do not see any option to use ZFS on a free system.



Re: Constant attacks and ISP's are ignoring them

2013-02-19 Thread Matthias Appel

On 19.02.2013 18:34, Chris Cappuccio wrote:

> Richard Thornton [rich...@thornton.net] wrote:
>> Linksys routers default to port forwarding NOT enabled, so check facts
>> before ranting.
>
> Your routers are impervious to penetration.


I would not call those Linksys boxes _routers_ in the first place!



Re: Constant attacks and ISP's are ignoring them

2013-02-19 Thread Matthias Appel

On 19.02.2013 05:53, Chris Cappuccio wrote:

> Kevin Chadwick [ma1l1i...@yahoo.co.uk] wrote:
>>>> Every firewall/router product that I have purchased has been
>>>> compromised so far.
>>> I don't believe this at all.  Not one bit.
>> I could believe it but that doesn't mean that I do. 90% of the routers
>> on my street will be insecure and even using old sps, upnp or wep.
>>
>> Common, mass attacks are becoming more sophisticated every day.
>
> All of them. The cat-and-mouse game is continually tilting against the
> vast majority who only take the most basic security measures. So it's
> typically a big problem when new major vulnerabilities are found in
> consumer grade equipment.

If I buy a car and don't know how to operate it and cause harm, nobody
would blame the manufacturer.


But if John Doe buys a "firewall" (hey, it says so on the label on the 
box, so it HAS to be a "firewall") and gets exploited by a 
drive-by download, the "firewall" *has* to be bad.



> Here's a simple example from the past week:
>
> Someone just pointed out that most of the Linux UPNP routers out there
> listen to UPNP port forwarding requests FROM EXTERNAL SOURCES!
>
> So now everyone is releasing patches, and that's only IF the code on
> the router is still even maintained. And this new (and pretty fucking
> obvious) hole was just pointed out to the general public.
>
> To see that router vendors are mass producing junk that listens to
> a UPNP port forwarding request from the fucking INTERNET shows that
> anyone who doubts the security of their XYZ router is probably on
> to something.
>
> Yeah, you can parade the idea that "you should have disabled UPNP",
> and that is a smart choice. But very few UPNP routers will come with
> UPNP disabled. And the UPNP insecurity that is well known is at least
> supposed to have a basis in an already-compromised INSIDE host,
> not take port forwarding requests from the INTERNET.
>
> So if vast numbers of routers are listening to admin commands from
> 0.0.0.0/0, and you don't believe "at all" that "every router"
> this apparent troll has bought has been compromised, then you need
> to think more creatively. And this guy needs to disable UPnP, and
> maybe change his router admin password while he's at it. (And
> reflash the firmware, and reformat his computer, re-flash
> his DVD ROM, GPU, and so on.)
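
For what it's worth, an OpenBSD box in front of such gear can drop SSDP
(the UPnP discovery protocol, UDP port 1900) coming from outside with a
single pf.conf line; a sketch, assuming "egress" is the external
interface group:

    block in quick on egress inet proto udp from any to any port 1900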

Hey, we are talking about users having Adobe Reader and Java on their 
systems (most of them not up-to-date), and you want them to secure the 
BRAND-X plastic crap they bought for $9.99 at XYZ-mart?


If I don't know how to maintain my car (although I theoretically know 
how combustion engines work), I take it to someone who does!


But when it comes to computers, everybody thinks "My name is Karl, ich 
bin expert!"  (pun definitely intended).


So, why not throw some bucks at somebody who, at least to some extent, 
knows what he is doing?
Just go to the nearest university and look for some computer science 
students... chances are that you'll find somebody who knows what he is 
doing (and is willing to help you if you give him some bucks).


And back to OP... I would love to see *all* of the compromised gear and 
do a forensic analysis, just for the fun of it!
And I have seen some consumer grade equipment, and in recent times they 
*try* to secure their equipment (no WEP, randomized passphrases both for 
WPA and for admin accounts, no publicly accessible admin, and so on).
Yes, UPnP and those exploits of WPS (you definitely don't want to hear 
my opinion about this cumbersome piece of... well, you know it) exist, 
but if you have somebody (see above) who knows what he is doing, he'll 
fix it for you (JUST BY TURNING IT OFF!)


I tend to think OP was exploited from the inside, not by someone 
exploiting their "web sharing thingie".


Just think about it... what is more likely: exploiting a reasonably 
up-to-date Linux/VxWorks "router", or hitting a vulnerable 
java/adobe/flash/windows/IE/whatever hastily-cobbled-together client 
application?


so long,

Matthias



Re: bootable OpenBSD USB stick from windows?

2013-02-19 Thread Matthias Appel

On 13.02.2013 19:14, Hugo Osvaldo Barrera wrote:

> $20 may sound cheap to you, but that's not cheap in every part of the
> world, especially for a device you'll use only ONCE to install the OS.
> It's 2013, and buying floppies/optical drives isn't the best of advice.

What's wrong with PXE?

If $20 is too much for OP to spend, I would like to donate a USB 
slimline CD/DVD drive which I don't use anymore (working, of course!).

The only two conditions are:

* Snail-mailing the drive does not cost a fortune.
* If I am in his part of the world, or he is in my part of it, he has to 
pay me a pint of beer.

* Only valid for OP... don't come knocking for free DVD drives ;-)

I am not able to contribute very much to the OpenBSD community, but if 
this is what I can do to have another user using OpenBSD, I would be 
glad to do so!




Regards,

Matthias



Re: bootable OpenBSD USB stick from windows? [OT]

2013-02-19 Thread Matthias Appel
On 20.02.2013 02:45, sven falempin wrote:

>> If $20 is too much for OP to spend, I would like to donate a USB
>> slimline CD/DVD drive which I don't use anymore (working, of course!).
>>
>> The only two conditions are:
>>
>> * Snail-mailing the drive does not cost a fortune.
>> * If I am in his part of the world, or he is in my part of it, he
>> has to pay me a pint of beer.
>> * Only valid for OP... don't come knocking for free DVD drives ;-)
>>
>> I am not able to contribute very much to the OpenBSD community,
>> but if this is what I can do to have another user using OpenBSD, I
>> would be glad to do so!
>>
>> Regards,
>>
>> Matthias
>
> i am pretty sure usb key cost less than beer somewhere :-)

Challenge accepted:

http://www.e-tec.at/frame1/details.php?art=42888

http://www.josef.co.at/JosefNeu/Web/pdf/Getr%C3%A4nkeangebot.pdf
(Just look beneath "OFFENE BIERE")

And I am talking about *beer*, not yellow coloured water :-P


And yeah, OK... the 2nd condition of the deal is opt-out, of course (hence 
the low probability of meeting somewhere, anyway).



Cheers,


Matthias



Re: bootable OpenBSD USB stick from windows? [OT]

2013-02-19 Thread sven falempin
On Tue, Feb 19, 2013 at 9:31 PM, Matthias Appel wrote:

>  Am 20.02.2013 02:45, schrieb sven falempin:
>
> If $20 is too much for OP to spend, I would like to donate a USB
> slimline CD/DVD drive which I don't use anymore (working, of course!).
>
>> The only two conditions are:
>>
>> * Snail-mailing the drive does not cost a fortune.
>> * If I am in his part of the world, or he is in my part of it, he has to
>> pay me a pint of beer.
>> * Only valid for OP... don't come knocking for free DVD drives ;-)
>>
>> I am not able to contribute very much to the OpenBSD community, but if
>> this is what I can do to have another user using OpenBSD, I would be glad
>> to do so!
>>
>>
>>
>> Regards,
>>
>> Matthias
>>
>>
>  i am pretty sure usb key cost less than beer somewhere :-)
>
>
> Challenge accepted:
>
> http://www.e-tec.at/frame1/details.php?art=42888
>
> http://www.josef.co.at/JosefNeu/Web/pdf/Getr%C3%A4nkeangebot.pdf
> (Just look beneath "OFFENE BIERE")
>
> And I am talking about *beer* not yellow coloured water :-P
>
>
> And yeah, OK...2nd condition of the deal is opt-out, of course (hence the
> low possibility of meeting somewhere, anyway)
>
>
>
> Cheers,
>
>
> Matthias
>
>
>
4 hours to go, place your bets (shipping is a pain though):

http://www.ebay.com/itm/New-Design-Handgun-Shaped-8GB-USB-2-0-Flash-Memory-Pen-Drive-Stick-Thumb-/330876107090?pt=US_USB_Flash_Drives&hash=item4d09c0b952


-- 
-
() ascii ribbon campaign - against html e-mail
/\



httpd and php-mapi

2013-02-19 Thread Philippe Grégoire

Hi,

I am trying to check out the possibilities of Zarafa and installed
OpenBSD 5.2 on an empty machine. Sadly, I spent the last few hours
trying to figure out the cause of the following messages in
httpd's error log:

'[...] child pid $x exit signal Segmentation fault (11)'

I tried calling it with '-X' to make a ktrace, but then it works(!)
Also, I checked whether it might be caused by some library: if I
remove the mapi.ini file from the php-5.3 folders httpd is fine,
but Zarafa cries about php-mapi not being loaded.

In the end, I need php-mapi to use Zarafa but it crashes httpd.

  o Did someone else encounter that issue and get it resolved?
  o Is there some way I could ktrace httpd's children? (see the sketch
    below)
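
Something along these lines might work (a sketch, assuming the stock
chrooted Apache whose pid file lives in /var/www/logs):

    # attach to the parent httpd; -i makes future children inherit the trace
    ktrace -f /tmp/httpd.tr -i -p $(cat /var/www/logs/httpd.pid)
    # ...reproduce the segfault, then stop tracing and inspect the dump
    ktrace -C
    kdump -f /tmp/httpd.tr | less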

Any help would be appreciated, thanks.

P. Grégoire

P.S.
Is CoreDumpDirectory working? It does not seem to work here.



Re: Millions of files in /var/www & inode / out of space issue.

2013-02-19 Thread Jan Stary
> On Tue, Feb 19, 2013 at 00:35, Keith wrote:
> > Q. How do I make the default web folder /var/www/ capable of holding
> > millions of files (say 50GB worth of small 2kb-12kb files) so that I
> > won't get inode issues?

newfs defaults to -f 2k and -b 16k, which is fine if you
know in advance you will hold 2k-12k files. As for inodes,
the default of -i is to create an inode for every 4 frags,
that is, every 8192 bytes. So on a 50G filesystem this should
give you over 6.1 million inodes. What does df -hi say?
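
If that's not enough, newfs(8) takes -i (bytes per inode) to raise the
inode density; a rough sketch of your option 2, with a hypothetical sd0k
partition and data directory:

    newfs -i 4096 /dev/rsd0k        # one inode per 4k of space instead of 8k
    mount /dev/sd0k /mnt
    cp -Rp /var/www/data/. /mnt/    # copy the existing files over
    umount /mnt
    mount /dev/sd0k /var/www/data   # mount inside the chroot, no symlink needed
    df -hi                          # watch the iused/ifree columns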

But first of all, fix your crappy app to not do that.



Re: httpd and php-mapi

2013-02-19 Thread Antoine Jacoutot
On Tue, Feb 19, 2013 at 08:55:39PM -0500, Philippe Grégoire wrote:
> Hi,
> 
> I am trying to check out the possibilities of Zarafa and installed
> OpenBSD 5.2 on an empty machine. Sadly, I spent the last few hours
> trying to figure out the cause of the following messages in
> httpd's error log:
> 
> '[...] child pid $x exit signal Segmentation fault (11)'
> 
> I tried calling it with '-X' to make a ktrace, but then it works(!)
> Also, I checked whether it might be caused by some library: if I
> remove the mapi.ini file from the php-5.3 folders httpd is fine,
> but Zarafa cries about php-mapi not being loaded.
> 
> In the end, I need php-mapi to use Zarafa but it crashes httpd.
> 
>   o Did someone else encounter that issue and get it resolved?
>   o Is there some way I could ktrace httpd's children?
> 
> Any help would be appreciated, thanks.

$ sudo pkg_add -n zarafa-webaccess
<...>
Look in /usr/local/share/doc/pkg-readmes for extra documentation

Seems to me you didn't do what pkg_add(1) advised you to do.

-- 
Antoine