Re: DragonFlyBSD on Alix board

2011-06-04 Thread Michael Neumann

On 06/03/2011 02:15 PM, Stefano Marinelli wrote:

Hello everybody,
I've an Alix board and it's been running NetBSD for around one year. Never a 
problem. I'd like to try DragonFlyBSD on it because of HAMMER, since the board 
mainly works as a small backup server (using USB disks). It seems I have 
problems, anyway: I can install everything on the CF on my PC (via a USB 
adapter) and I can boot from it without problems. Using qemu, I could also 
check that the serial console is working when booting directly from the CF. 
But when putting the CF in the Alix, it just reaches the boot0 stage, waits 
some seconds for the OS choice, and then tries to go on (I see the cursor going 
to a new line) but stops there. It seems it can't find the boot1 stage. How could 
I solve/debug this? The CF is the same one I used for NetBSD, so I am sure it works 
perfectly.
I've also tried to install FreeBSD (using the same boot code as DragonFly BSD) and 
it works perfectly.


Have you set up the serial console correctly?

I have an Alix board myself and never succeeded in setting up DragonFly 
on it... but I didn't try for long.


Regards,

  Michael


Re: DragonflyBSD on Areca w/ HAMMER

2011-05-31 Thread Michael Neumann

On 05/31/2011 04:58 PM, Justin Sherrill wrote:

There's no way to expand/shrink Hammer volumes.  Another way to
approach this - and it's not necessarily better or worse - is to use
Hammer's mirroring capability to move data to a larger disk and then
start using that one, or otherwise shift it around.


Well, there is a possibility to add and remove volumes other than the 
root volume, but IMHO it's still considered experimental. In combination 
with LVM you could emulate expanding and shrinking of a HAMMER filesystem.
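For what it's worth, the add/remove operations themselves look roughly like this. This is a sketch only: the device names and mount point are hypothetical, and as said above the feature is experimental, so try it on a scratch filesystem first.

```shell
# Sketch only -- /mnt and the device names are examples, and
# volume-add/volume-del are still considered experimental.

mount_hammer /dev/ad1 /mnt       # the existing HAMMER filesystem
hammer volume-add /dev/ad2 /mnt  # grow: attach a second volume
hammer info /mnt                 # inspect size and volume count
hammer volume-del /dev/ad2 /mnt  # shrink: migrate data off and detach
```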

Regards,

  Michael


On Tue, May 31, 2011 at 1:07 AM, Dean Hamstead  wrote:

Hi Guys

I'm thinking about reloading my home file server with DragonFlyBSD, mainly so
I can take advantage of the rather awesome-looking HAMMER filesystem.

My home server has an Areca 1260 SAS RAID card, which is reported as being
supported by the arcmsr man page.

I'm looking for confirmation from other users that Areca cards work well, and
that the admin tools are supported from FreeBSD or otherwise implemented
independently.

Also I'd like to get some feedback on whether HAMMER is up to the task of being
belted really hard over sustained periods.

I'm also not able to find a conclusive answer as to whether HAMMER can be
expanded (as I'm relatively frequently adding more disks and expanding my
RAID array - currently I'm using the UFS tool growfs).

I have dabbled with ZFS on FreeBSD, but found it wanting. HAMMER seems much
more 'next generation' - rather than just collapsing some RAID features in
to the filesystem.

Dean
--
http://fragfest.com.au





27c3 anyone?

2010-12-27 Thread Michael Neumann
Hey,

Anyone of the Dragonfly people on 27c3 here in Berlin?

Regards,

  Michael



Re: SMP boot option?

2010-10-06 Thread Michael Neumann
Am Mittwoch, den 06.10.2010, 15:49 +0200 schrieb Matthias Schmidt:
> * Michael Neumann wrote:
> > 
> > For x86_64, it gets completely rid of compile time APIC_IO and
> > introduces a loader tunable hw.apic_io_enable. Right now it does not
> > compile non-SMP, but I will fix that later. Also i386 is not yet
> > supported, again, it's pretty easy and straightforward to bring in i386
> > support.
> 
> Wow!  I'd love to see this for i386 as well ;)

i386 is there and compiles for both SMP / non-SMP.

http://gitweb.dragonflybsd.org/~mneumann/dragonfly.git/commit/79b62055301f75e30e625e64f13564a1145fe853

I tried to make it as non-intrusive as possible.

Regards,

  Michael



Re: SMP boot option?

2010-10-06 Thread Michael Neumann
Am Dienstag, den 05.10.2010, 18:55 -0500 schrieb Tyler Mills:
> I don't know how many people do this, but usually my first step when
> building a Dragonfly BSD system is building an SMP kernel with
> APIC_IO.  I know there are some bugs for some people with regards to
> AHCI and other settings, but wouldn't it make sense to have an
> SMP/APIC_IO option for bootup? That way you can see if it works, if it
> fails boot another kernel.  Similar to how ACPI is treated.
> 
> Just bouncing this idea off, I might end up just making my own ISO for
> this for my use, and could upload if needed.

Please try out my repo:

http://gitweb.dragonflybsd.org/~mneumann/dragonfly.git/shortlog/refs/heads/apic_io

For x86_64, it gets completely rid of compile-time APIC_IO and
introduces a loader tunable, hw.apic_io_enable. Right now it does not
compile non-SMP, but I will fix that later. Also, i386 is not yet
supported; again, it's pretty easy and straightforward to bring in i386
support.
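As a sketch, enabling or disabling it would then be a one-line loader.conf entry instead of a kernel rebuild. The accepted values are an assumption here; check the commit for the exact semantics.

```shell
# /boot/loader.conf -- sketch; assumes the tunable takes 0/1
hw.apic_io_enable="1"   # set to "0" to boot without APIC_IO
```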

If we commit this to master, I'd like to see the distribution shipped
with an SMP kernel as well!

Regards,

  Michael



Re: HAMMER: WARNING: Missing inode for dirent

2010-09-30 Thread Michael Neumann
Am Donnerstag, den 30.09.2010, 14:48 +0300 schrieb Stathis Kamperis:
> 2010/9/30 Damian Lubosch :
> > Hello!
> >
> > Is that something I should be alarmed of, or is it just a "debug" warning 
> > message
> >
> > HAMMER: WARNING: Missing inode for dirent "pkgsrc"
> >obj_id = 0001040faf6f, asof=000106dda770, lo=0003
> > HAMMER: WARNING: Missing inode for dirent "pkgsrc"
> >obj_id = 0001040faf6f, asof=000106dda770, lo=0003
> > HAMMER: WARNING: Missing inode for dirent "pkgsrc"
> >obj_id = 0001040faf6f, asof=000106dda770, lo=0003
> > HAMMER: WARNING: Missing inode for dirent "pkgsrc"
> >obj_id = 0001040faf6f, asof=000106dda770, lo=0003
> > HAMMER: WARNING: Missing inode for dirent "ld-elf.so.2"
> >obj_id = 000105f776df, asof=000106dda770, lo=0003
> > HAMMER: WARNING: Missing inode for dirent "ld-elf.so.2"
> >
> > I deleted some files few days ago, and now my dmesg.today and .yesterday is 
> > full of those messages.
> >
> > I am using DFly 2.6.3 amd64 with a patched twa(4) driver for hardware raid.
> > [...]
> 
> See undo(1) under the 'DIAGNOSTICS' section:
> http://leaf.dragonflybsd.org/cgi/web-man?command=undo&section=ANY
> 
>Warning: fake transaction id 0x...  While locating past versions of the
>  file, undo came across a fake transaction id, which are automatically
>  generated by hammer(5) in case the file's directory entry and inode got
>  bisected in the past.

I know that this message can occur, but did I bisect the inode? I just
created a file, then wrote something to it, synced it to disk, and tried
to see the diff. I am also getting those "HAMMER: WARNING: Missing inode
for dirent" messages reported by Damian Lubosch. I am just a bit
concerned that something is going wrong, especially as those "fake
transaction id" messages appear pretty randomly.

Regards,

  Michael



Re: HAMMER: WARNING: Missing inode for dirent

2010-09-30 Thread Michael Neumann
Am Donnerstag, den 30.09.2010, 10:30 +0200 schrieb Damian Lubosch:
> Hello!
> 
> Is that something I should be alarmed of, or is it just a "debug" warning 
> message
> 
> HAMMER: WARNING: Missing inode for dirent "pkgsrc"
> obj_id = 0001040faf6f, asof=000106dda770, lo=0003
> HAMMER: WARNING: Missing inode for dirent "pkgsrc"
> obj_id = 0001040faf6f, asof=000106dda770, lo=0003
> HAMMER: WARNING: Missing inode for dirent "pkgsrc"

I noticed the same yesterday. See my bug report:
http://bugs.dragonflybsd.org/issue1857

Regards,

  Michael




Re: example of dfbsd deployment or product that based on dfbsd

2010-09-29 Thread Michael Neumann
Am Mittwoch, den 29.09.2010, 09:17 +0700 schrieb Iwan Budi Kusnanto:
> Justin C. Sherrill wrote:
> > On Tue, September 28, 2010 3:54 am, Iwan Budi Kusnanto wrote:
> >> Hi,
> >> I just have interest in DFBSD and have some questions.
> >>
> >> Can someone give me examples of some big/great DFBSD deployment or some
> >> product that based of DFBSD ?
> >>
> >> Is DFBSD proven to be rock solid in real world ?
> > 
> > I don't know if these are the scale you are looking for , but
> 
> I'm looking for  production server that used by some company for their 
> mission critical application.

We are running our email and web servers on DragonFly (right now in a
virtual machine under a huge Linux/KVM host). Well, email is somehow
mission critical for us :). Last night the Linux host suddenly went
down due to losing power. The DragonFly virtual machine survived the sudden
death of its host without the need for fsck or the like, thanks to
HAMMER (while the Linux host had to spend a few minutes in fsck :).

Regards,

  Michael



Re: DragonflyBSD under VMware ESX - someone use it?

2010-09-24 Thread Michael Neumann
Am 24.09.2010 19:20, schrieb Tomas Bodzar:
> Hi all,
>
> is there someone who is using DragonflyBSD under VMware ESX platform
> and what are his/her thoughts about it?
>   

Not on VMware, but I run DragonFly under KVM and it works like a charm!

Regards,

  Michael



Re: Why did you choose DragonFly?

2010-09-20 Thread Michael Neumann
Am 20.09.2010 21:33, schrieb Samuel J. Greear:
> This mail is intended for the infrequent responders and lurkers on the
> list just as much as the regular posters.
>
> What has drawn you to use the DragonFly BSD operating system and/or
> participate in its development by following this list? Technical
> features, methodologies, something about the community? I suspect the
> HAMMER filesystem to be the popular choice, but what other features
> affect or do you see affecting your day to day life as an
> administrator, developer, or [insert use case here], now or in the
> future?
>   

Well, IIRC, I took a look at DragonFly early on, at its beginning, when
I was studying the L4 microkernel. I even suggested basing DragonFly
on L4 at that time :). Since then, I have always followed its progress,
used it as a desktop for some time, and recently also on the server.

The ideas and goals of DragonFly are somehow more challenging than
those of other mainstream OSes, so it's very interesting from a technical
and educational standpoint in my case.

And the whole development is driven by people out of their own personal
interest. This is very nice! Furthermore, almost everyone is allowed to
contribute to the project and is even welcomed to do so. And I like the
pragmatic choices of the developers, for example using git instead of cvs,
just to mention one. I think in general DragonFly is much more pragmatic
than any other OS: if there is a good solution, it will be followed.

What I also like is the community, and especially our project leader (Matt),
who explains things very well, which is always worth reading.

Other than that, when administering systems, HAMMER makes life so much easier,
just because it makes backups so easy.
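For example, a backup with HAMMER can be as little as the following sketch. The paths are hypothetical, and the mirror target is assumed to be a slave PFS set up for mirroring.

```shell
# Sketch only -- /home is assumed to be a HAMMER PFS and
# /backup/home a slave PFS created for mirroring; adjust paths.

hammer snapshot /home /home/snapshots/before-upgrade  # cheap snapshot
hammer mirror-copy /home /backup/home                 # replicate the PFS
```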

Regards,

  Michael



Re: HEADS UP: BIND Removal. Short instructions for migration to pkgsrc-BIND

2010-04-13 Thread Michael Neumann
2010/4/13 Chris Turner 

> Justin C. Sherrill wrote:
>
>> Don't a number of Linux systems ship without those tools unless added via
>> a separate package?  I know, I know - "it's that way in Linux" isn't
>> necessarily a compelling reason.
>>
>
> AARRGGHH!
>
> yeaah, and in most "distros" so is the kernel,
> and so is the bourne shell, and so is awk, and so is grep
> 10: and so is ...
> 20: GOTO 10
>
> using the word 'linux' and 'system' can sometimes be an oxymoron.
>
> Unix:
>
> at origin:
>
> A self-contained 7" tape of the unix source tree,
> and a self-hosting build of that self-same source tree.


I don't know the reason why BIND was removed from the source tree,
but I guess Jan had valid reasons for doing so (I guess the reason
is to simplify maintenance). So why not just install the bind package
by default, and no one will notice it's no longer in the base?

Btw, has anyone used unbound [1] as an alternative to BIND?

Regards,

  Michael

[1]: http://www.unbound.net/


Re: How to use hammer volume-add and volume-del

2010-04-07 Thread Michael Neumann
2010/4/7 lhmwzy 

> # uname -a
> DragonFly . 2.6-RELEASE DragonFly v2.6.1-RELEASE #1: Sun Apr  4
> 19:50:41 PDT 2010
> r...@test28.backplane.com:/usr/obj/usr/src-misc/sys/GENERIC  i386
>
> # hammer volume-del /dev/da1 /usr
> hammer volume-del ioctl: Invalid argument
>

/usr is not a valid HAMMER filesystem! You must specify the path you
use in mount, not a PFS.

For example:

  mount_hammer /dev/ad1 /mnt
  hammer volume-add /dev/ad2 /mnt



> Once a disk has been added, it can't be removed.
>
> Another question: the same disk (/dev/da1 for example) can be added
> multiple times using "hammer volume-add /dev/da1 /usr". Is this normal?
>

What does hammer info /usr show? I am sure it is not added, but maybe no
error is shown.

Regards,

  Michael


Re: How to use hammer volume-add and volume-del

2010-04-06 Thread Michael Neumann
2010/4/6 lhmwzy 

> Could anyone write an article about how to use hammer volume-add and
> volume-del, step by step?
> I can't get it to work.
>

You used it correctly and discovered a bug. It's really as easy as:

  hammer volume-add /dev/ad1 /

You can use whatever device, partition, slice etc. you want instead of
/dev/ad1.

Note that if you extend your root filesystem, you'd also need to change
the vfs.root.mountfrom setting in /boot/loader.conf to include ad1:

Before volume-add:

  vfs.root.mountfrom="hammer:ad0s1d"

After volume-add:

  vfs.root.mountfrom="hammer:ad0s1d:ad1"

Always keep in mind that volume-add and volume-del are still experimental,
which means they need more testing!

Regards,

  Michael


Re: How to use hammer volume-add and volume-del

2010-04-06 Thread Michael Neumann
2010/4/5 lhmwzy 

> when I use
> hammer volume-add /dev/da1s0 /
> then use
> hammer volume-del /dev/da1s0 /
>
> the / size always grows.
>
> FilesystemSize   Used  Avail Capacity  Mounted on
> ROOT  157G   805M   156G 0%/
>
> FilesystemSize   Used  Avail Capacity  Mounted on
> ROOT  236G   805M   235G 0%/
>
> da1 is 80G.
>
> I use
>  "hammer volume-add /dev/da1s0 /
> hammer volume-del /dev/da1s0 /"
> twice.
>


My most recent commit [1] should fix the size issue.

Regards,

  Michael

[1]:
http://gitweb.dragonflybsd.org/dragonfly.git/commit/16b533e86d040ed2450b060c3e99d1239d9a4266


Re: How to use hammer volume-add and volume-del

2010-04-06 Thread Michael Neumann
2010/4/5 lhmwzy 

> Maybe I do something wrong?
> 1.add the disk to computer.
> 2.hammer volume-add /dev/da1s0 /
> 3.shutdown -r now
> 4.panic
>

Hi,

No you did not do anything wrong. I can reproduce it!

The following assertion fails in vfs_bio.c, vfs_vmio_release() line 1747:

KKASSERT (LIST_FIRST(&bp->b_dep) == NULL);

I wonder if this has something to do with my volume-add / volume-del code.
Could anyone more knowledgeable about vfs_bio.c take a look at it?

It seems not to panic if the filesystem you add a new volume to is not the
root filesystem.

Regards,

  Michael


Re: How to use hammer volume-add and volume-del

2010-04-06 Thread Michael Neumann
2010/4/6 lhmwzy 

> Could anyone write an article about how to use hammer volume-add and
> volume-del, step by step?
> I can't get it to work.
>

Let me check if something broke volume-add and volume-del. The last time I
tried, it worked. Give me a few days, as I just returned from vacation
yesterday.

Best regards,

  Michael


Re: SAS RAID controllers support

2010-02-23 Thread Michael Neumann
2010/2/23 Francois Tigeot 

> On Thu, Feb 04, 2010 at 06:20:51PM +0100, Francois Tigeot wrote:
> > > > > >
> > > > > > I'm curious about the state of hardware RAID controllers in
> DragonFly.
> >
> > I'm now pretty sure the only hardware RAID adapters which *could* be
> usable
> > are based on the LSI 1078 chipset
> >
> > At least 6 different cards are based on it:
> >
> > - LSI MegaRAID SAS 8704ELP / 8708ELP
> > - LSI MegaRAID SAS 8704EM2 / 8708EM2
> > - LSI MegaRAID SAS 8880EM2
> > - LSI MegaRAID SAS ELP
> >
> > I should be able to test a MegaRAID SAS ELP soon. It was not
> recognized last
> > year but Matt has since updated the mpt(4) driver.
>
> The MegaRAID SAS ELP is still not recognized.
>
> dmesg :
>
> pci3:  on pcib3
> pci3:  (vendor 0x1000, dev 0x0060) at device 0.0 irq 11
>
> pciconf -lv :
>
> non...@pci0:3:0:0:  class=0x010400 card=0x10061000 chip=0x00601000
> rev=0x04 hdr=0x00
>vendor = 'LSI Logic (Was: Symbios Logic, NCR)'
>device = 'SAS1078 PCI-X Fusion-MPT SAS'
>class  = mass storage
>subclass   = RAID
>
> So it seems DragonFly doesn't support any recent hardware RAID controller.
>

The Adaptec RAID (aac) controllers seem to be supported very well. For
example, the Adaptec RAID 5405 worked pretty well on a box I tested
DragonFly on.

Regards,

  Michael


Re: [PATCH] tmpfs-update 021010 (was: tmpfs work update 013010)

2010-02-13 Thread Michael Neumann
2010/2/13 Naoya Sugioka 

> Thank you for the warm words. It is my pleasure if you or the community
> like it.
>

Great work!


Regards,

  Michael


Re: Anyone tried an Atom 330 with Dragonfly

2010-01-30 Thread Michael Neumann
2010/1/30 Steve O'Hara-Smith 
>
>Question is has anyone run DFLY on either of these (or anything
> with the same chipsets) or am would I be breaking new ground ?


Hi Steve,

I had this ASUS EEE PC netbook running DragonFly for a while. It has a
single-core Atom processor but two logical (hyper-threading) processors, so
I could run SMP on it.

I was also thinking about buying an ION-based motherboard for my
file server, but I guess it still consumes about 30W, and my existing
~50W Athlon-based motherboard is of course *much* more powerful.


Regards,

  Michael


Re: replace nvi with traditional vi?

2009-12-14 Thread Michael Neumann
2009/12/14 Steve O'Hara-Smith 

> On Mon, 14 Dec 2009 16:48:43 +0100
> Michael Neumann  wrote:
>
> > 2009/12/14 Steve Shorter 
> >
> > > The above link says "multiple screens" are a fancy feature and
> > > not supported.  Having a vi that can do split screen is essential
> AFAIC.
> > >
> >
> > But our vi can't do ":sp" or am I missing something? That's why I am
> using
> > vim :)
>
>It's :E in nvi.
>

Ah, okay. Makes sense.

Regards,

  Michael


Re: replace nvi with traditional vi?

2009-12-14 Thread Michael Neumann
2009/12/14 Steve Shorter 

> On Mon, Dec 14, 2009 at 05:58:45PM +0300, Alexander Polakov wrote:
> >  I think we can safely replace nvi with traditional vi [1].
> >  vi supports UTF-8 and then we could use UTF-8 locale
> >  systemwide. nvi is old and unmaintained, but supports
> >  more configuration options, while vi is much simpler.
> >  Anyway, if you want a really powerful text editor it
> >  would be vim or emacs.
> >
> >  Do you use feature nvi provides? Would you suffer if it's
> >  replaced with a smaller vi?
> >
> >  [1] http://ex-vi.sourceforge.net/
>
> The above link says "multiple screens" are a fancy feature and
> not supported.  Having a vi that can do split screen is essential AFAIC.
>

But our vi can't do ":sp" or am I missing something? That's why I am using
vim :)

Regards,

  Michael


Re: HAMMER: recovering directory?

2009-12-14 Thread Michael Neumann
2009/12/14 

> So we know that we can recover files. What if a directory(lets say it
> contains 3000 files) is accidently deleted or the files are overwritten,
> but it doesnt exist in the last snapshot (ie. I created it today). How can
> we recover that?
>
> Thanks,
> Petr
>
>
# cd /
# mkdir test
# touch test/a test/b test/c
# hammer synctid /# flush filesystem
# rm -rf test
# undo test
  >>> test 000 0x00010f243340 14-Dec-2009 14:44:41

# cd test@@0x00010f243340
# ls
  a b c

Note that you don't have to call "hammer synctid" yourself, as the
filesystem is synced to disk every 30 seconds.

Regards,

  Michael


Re: Updating USB stack from FBSD 8.x and others

2009-12-12 Thread Michael Neumann
2009/12/12 

> Hi all,
>
> 1) I think we desperately need to bring our USB stack into reality. Is
> anyone working on bringing in the new FreeBSD USB code or maybe one from
> other BSDs? How difficult would it be? Lets dicuss.
>

Some time ago I said that I'd like to bring the HPS USB stack (now in
FreeBSD 8.0) to DragonFly, but I have no time and too little knowledge
about device drivers. I'd also be willing to spend a few hundred dollars
on the person who ports FreeBSD's USB stack.

Regards,

  Michael


Re: HAMMER in real life

2009-11-26 Thread Michael Neumann
Matthew Dillon schrieb:
> * I believe that FreeBSD was talking about adopting some of the LFS work,
>   or otherwise implementing log space for UFS.  I don't know what the
>   state of this is but I will say that it's tough to get something like
>   this to work right without a lot of actual plug-pulling tests.
> 
> Either OpenBSD or NetBSD I believe have a log structured extension to
> UFS which works.  Not sure which, sorry.

That's NetBSD! They had LFS in the past, which was never really ready for
production use. But with NetBSD 5.0 they got WAPBL (Write Ahead Physical
Block Logging), sponsored by Wasabi Systems.

Regards,

  Michael


Re: USB WLAN/USB Ethernet device

2009-10-21 Thread Michael Neumann
2009/10/21 Saifi Khan 

> On Wed, 21 Oct 2009, Michael Neumann wrote:
>
> > 2009/10/21 Saifi Khan 
> >
> > >
> > > i'm leaning towards USB-Ethernet and was wondering if there is a
> > > USB-Ethernet device that is known to work fine with DragonFly
> > > BSD 2.4.1 ?
> > >
> > >
> >
> > I have an USB WLAN device supported by the ural(4) driver which works
> fine.
> >
> > Regards,
> >   Michael
> >
>
> Hi Michael:
>
> Thank you for your reply.
>
> Can you share the USB WLAN device model details ?
>

It's a D-Link DWL-G122. But you have to be careful, because depending on
the revision of the device it may contain a different chipset! I can't
tell you right now what revision my device is, but I read here [1] (in
German) that "HW Ver C1 is a Ralink RT73", which seems not to be supported
by the ural(4) driver. So according to the manpage you should buy
revision B1.

[1]: http://weblog.christoph-probst.com/article.php/20070402153533359

Regards,

  Michael


Re: how to apply patches on a system that doesnot have functional network device ?

2009-10-21 Thread Michael Neumann
2009/10/21 Saifi Khan 

> Hi:
>
> Here is a situation that i'm facing on a Compaq C301TU laptop.
>
> The NIC card (Realtek)   does not work due to driver issue.
> The WLAN card (Broadcom) does not work due to driver issue.
>
> Currently, i review the possible patch visually on an identical
> laptop (running FreeBSD-8) and then type out the code on the
> other laptop in sys/dev/netif/rl/if_rl.c file.
>
> Is there a better way to apply patches on a system that does not
> have functional network devices ?
>
> i'm leaning towards USB-Ethernet and was wondering if there is a
> USB-Ethernet device that is known to work fine with DragonFly
> BSD 2.4.1 ?
>
> Please accept my apologies for this newbie query and look
> forward to suggestions from the more experienced folks on this
> matter.
>

I have a USB WLAN device, supported by the ural(4) driver, which works fine.

Regards,

  Michael


sil3114 supported

2009-10-20 Thread Michael Neumann
Hi,

Does anyone know if the Silicon Image 3114 chipset is supported by the
sili driver? According to the manual page it's probably not, but maybe the
chipset is very similar to other SiI chips?

Regards,

  Michael


Re: HEADS UP - devfs integration update. iscsi now alpha.

2009-08-11 Thread Michael Neumann

Matthew Dillon schrieb:
:Matt, do you think it's worth to even drive this one step further by 
:probing device slices for HAMMER (or other types of) filesystems, and

:create devfs entries like /dev/hammer/fsid or /dev/hammer/volname.volno?
:
:This would, in case of HAMMER, make devtab kind of superfluous, despite
:being very useful in general.
:
:Regards,
:
:   Michael

I think we have to be very careful when trying to identify drives
by the data stored on them.  Maintenance tasks such as someone, say,
dd'ing a disk image could blow up in our faces if we depend on it.

Generally I agree that too much magic should be avoided.
But I see one use case where it might be useful: expanding or
shrinking a HAMMER filesystem. Right now, all volumes
have to be specified. If, after an expansion, you forget to add
the new volume, you can easily render the box unbootable.

What I imagine would still require mounting a filesystem explicitly;
just the volumes would be determined automagically.
Regards,

 Michael



Re: HEADS UP - devfs integration update. iscsi now alpha.

2009-08-08 Thread Michael Neumann

Matthew Dillon schrieb:

DEVFS has gone through a bunch of debug and fix passes since the
initial integration and is now ready for even wider testing on master.

* Now works properly with X.

* Now works properly with mono and other applications.

* Now probes disklabels in GPT slices, and properly probes GPT slice 0.

* Misc namespace issues fixed on reprobe (there were problems with
  iscsi and VN).

* Auto-reprobe is now synchronous from the point of view of fdisk, gpt,
  and disklabel, or any program that is setting up slices and partitions.
  (there were races against the creation of the actual sub-devices in
  /dev before that are now fixed).

* mount-by-serialnumber is possible via /dev/serno/.  Example fstab:

  serno/L41JAB0G.s1d/   hammer rw   1   1


Matt, do you think it's worth driving this one step further by
probing device slices for HAMMER (or other types of) filesystems, and
creating devfs entries like /dev/hammer/fsid or /dev/hammer/volname.volno?

This would, in the case of HAMMER, make devtab kind of superfluous, despite
devtab being very useful in general.

Regards,

  Michael


Re: Call for help with firefox 3.5

2009-07-03 Thread Michael Neumann

Matthew Dillon schrieb:
:In videos there are sync problems (audio is about second behind the 
:picture) and it crashes often while seeking or if video just finishes 
:playing.

:
:My guess is that it's related OSS backend because nobody actually uses it 
:in the upstream (I don't think that any OS relevant today except FreeBSD 
:and DragonFly use OSS by default at all). There are several commits into 
:other backends to solve sync problems, but none into OSS code. Therefore 
:call for help - if you can and understand what OSS is (I don't ;), please 
:look at it.

:
:-- 
:Hasso Tepper


I'm kinda lost in this sea of source code.  Do you have any idea where
this OSS source is relative to the pkgsrc work dirs?


Matt, we have a target for pkgsrc-wip:

cd /usr
make pkgsrc-wip-checkout
#make pkgsrc-wip-update
cd /usr/pkgsrc/wip

Regards,

  Michael


Re: How the project of add redundancy to HAMMER fs going?

2009-05-27 Thread Michael Neumann
lhmwzy wrote:
> I remember there is a SoC project dealing with local redundancy in HAMMER,
> but it did not become a 2009 SoC project.
> Will Matt add this feature, and when?

> When will adding and deleting a disk to/from HAMMER be possible?

Some time ago I started to work on adding disks to HAMMER, but never
finished it. Deleting a disk is a lot harder (IMHO), as you have to
integrate it with the reblocker.

Regards,

  Michael


Re: DragonFly-2.2.1 installation problem

2009-05-07 Thread Michael Neumann

Archimedes Gaviola wrote:

Hi,

I've been booting a CD installer of 2.2.1 RELEASE but encountered a
problem on typing a character with my USB keyboard. At the login
prompt, I can't type any character like 'installer' or 'root' to
proceed the installation process. I'm pretty sure the USB keyboard is
functional. I've tested 'default boot' and 'disabled ACPI' options but
seems no effect. I'm using a 2-way quad-core Intel Xeon IBM x3650.


I think you need to set the following tunable within the bootloader:

hw.usb.hack_defer_exploration=0

http://gitweb.dragonflybsd.org/dragonfly.git/commit/eabfb7a3e6d72525e3b2e512609c68d7dfbdd030
http://gitweb.dragonflybsd.org/dragonfly.git/commit/5d0fb0e6a13d29aab69eab8a3726d07f7ec9606b
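A sketch of how to set it: either once at the boot loader prompt to test, or permanently in /boot/loader.conf.

```shell
# One-off test, typed at the loader "OK" prompt before booting:
#   set hw.usb.hack_defer_exploration=0
#   boot
#
# Permanent setting -- add this line to /boot/loader.conf:
hw.usb.hack_defer_exploration=0
```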

Regards,

  Michael


Re: How to configure DragonFly?

2009-04-21 Thread Michael Neumann

Colin Adams wrote:

Now that I've managed to install DragonFly (and have a working system
again) by avoiding using the installer, I need to configure my system.
The handbook just shows the menus from the installer. Is it possible
to just type a command that will launch that menu system? If not, what
commands do I need to run to perform these tasks?

P.S. I have no experience of BSD systems, but I've been running Linux
for 14 years, so I am familiar with this sort of thing. It's just that
there are just sufficient differences for me to feel lost.


Two things will make your life easier:

  * kbdmap: this will change your keyboard mapping.
    Once you have figured out your correct mapping, you can put a line
    into /etc/rc.conf like:

      keymap="german.iso" # this is mine, yours will differ :)

  * tzsetup: this will set up your time zone. You only have to call it
    once.

The file /etc/rc.conf is your friend. Take a look at
/etc/defaults/rc.conf, which lists all settings that you can enter into
rc.conf and shows which defaults are used. Also do "man rc.conf".


Mine looks like:

  # /etc/rc.conf
  hostname="mybox.localhost"

  # ale0 here is the network driver name. bge0, re0, rl0 are others, for
  # example (see dmesg to find out your driver name).
  ifconfig_ale0="DHCP"

  ifconfig_rum0="WPA DHCP" # wireless

  keymap="german.iso"
  moused_enable="YES"
  usbd_enable="YES"
  sshd_enable="YES"


Regards,

  Michael





Re: Dual-core Athlon

2009-04-21 Thread Michael Neumann

Neil Booth wrote:

Archimedes Gaviola wrote:-


On Mon, Apr 20, 2009 at 10:53 PM, Sascha Wildner  wrote:

Neil Booth schrieb:

I don't believe Dragonfly is using / enabling the second CPU
of a dual-core Athlon I bought recently. ?At least, top only
shows one cpu. ?Is there a way to enable it, or can Dragonfly
not do that yet?

Hmm, and this is with a kernel that has 'options SMP' and 'options APIC_IO'
set in the config? Our stock GENERIC is UP only.

Sascha

--
http://yoyodyne.ath.cx


Yes that's right, as based on my experience with 2-way Quad-Core Xeon
machines on DragonFly, you have to recompile your GENERIC kernel with
'options SMP' to be able to detect multiple core CPUs.


I see; no this is a stock install kernel.  Thanks.


The LiveDVD contains a prebuilt SMP kernel, IIRC, in case you are too
lazy to compile your own SMP kernel :)


Regards,

  Michael


Re: Installing DragonFly

2009-04-19 Thread Michael Neumann
On Sat, 18 Apr 2009 09:00:21 +0100
Colin Adams  wrote:

> 2009/4/18 Jordan Gordeev :
> > Colin Adams wrote:
> >>
> >> I don't know if it is the same problem (it certainly sounds
> >> similar).
> >>
> >> This is not a laptop though. Nor is it an old machine (less than 3
> >> years old).
> >>
> >> Anyway, I have booted DragonFly from the live CD and logged in as
> >> root.
> >>
> >> But what device name do I use (I only have one disk)? Everything I
> >> guessed at, it says "device not configured".
> >>
> >> 2009/4/17 Michael Neumann :
> >>
> >
> > Try ad0 or sd0.
> > You should look at dmesg(8) output and see what devices the kernel
> > has recognised (and what names they got).
> >
> 
> I had already tried ad0.
> 
> dmesg revealed that the disk hadn't been seen at all. Perhaps I
> plugged it in too late. Re-booting and re-plugging really early did
> the trick (it was ad0, which was where the live DVD installed
> DragonFly yesterday).
> 
> so fdisk -C ad0 says (slightly abbreviated):
> 
> cylinders=310101 heads=16 sectors/track=63 (1008 blks/cyl)
> 
> Media sector size is 512 bytes.
> Warning: BIOS sector numbering starts with sector 1
> Information from DOS bootblock is:
> The data for partition 1 is:
> sysid 165,(DragonFly/FreeBSD/NetBSD/386BSD)
> start 63, size 312581745 (152627 Meg), flag 80 (active)
>  beg: cyl 0/ head 1/ sector 1;
>  end: cyl 1023/ head 255/ sector 63
> partitions 2 3 and 4 
> 
> So where do I go from here?

Basically, follow those instructions below, replacing ad4 with ad0, and
"fdisk -B -I ad4" with "fdisk -B -I -C ad0". You simply have to by-pass
the installer, because it doesn't use the "-C" option in fdisk, which is
essential! 

http://www.ntecs.de/blog/articles/2008/07/30/dragonfly-on-hammer/

The instructions above are a bit outdated, but they should still work.
You can stop the instructions after "reboot".
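For orientation, the manual procedure boils down to something like the
sketch below. The partition letters and the newfs steps are placeholders
of mine, not taken from the post; use the exact values from the blog post:

  fdisk -B -I -C ad0               # initialize slice table and MBR; -C is the key option
  disklabel -B -r -w ad0s1 auto    # write a fresh disklabel plus bootblocks
  disklabel -e ad0s1               # then add your partitions in an editor
  newfs_hammer -L ROOT /dev/ad0s1d # assuming 'd' is your Hammer partition

After that, install world/kernel and set up /boot/loader.conf and
/etc/fstab as the post describes.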

Regards,

  Michael



Re: Installing DragonFly

2009-04-17 Thread Michael Neumann
Colin Adams wrote:

> I was able to install DragonFly on the disk all-right, but  the
> machine still won't boot if the drive is powered-on at boot time.

I remember that I had a similar problem about 2 years ago with my Bullman 
laptop. 

http://leaf.dragonflybsd.org/mailarchive/users/2007-02/msg00158.html

If yours is the same problem (can you confirm?) then "fdisk -C" will solve 
it. But as the installer does not provide an option to set the "-C" flag, 
you'd have to install DragonFly without the installer.

Regards,

  Michael




Re: KQEMU 1.4.0pre1 for QEMU 0.10.1

2009-04-14 Thread Michael Neumann
On Fri, 10 Apr 2009 23:16:37 -0700
Naoya Sugioka  wrote:

> Hi,
> 
> 
> I just got motivated to port the kqemu module, since QEMU has started
> working well recently, according to this mailing list.

Hi,

I'd really like to see a working kqemu on DragonFly...

If you compare kqemutest.messages.fly with kqemutest.messages.linux,
you'll notice some "kqemu_unlock_user_page failed" messages for
DragonFly near the end. They don't occur on Linux. Maybe this is
related to your performance problems?

Regards,

  Michael


Re: pkgsrc-HEAD DragonFly 2.3/i386 2009-04-08 05:12

2009-04-13 Thread Michael Neumann
On Mon, 13 Apr 2009 08:39:07 +0300
Hasso Tepper  wrote:

> pkgsrc bulk build report
> 
> 
> DragonFly 2.3/i386
> Compiler: gcc
> 
> Build start: 2009-04-08 05:12
> Build end:   2009-04-13 02:37
> 
> Full report:
> http://leaf.dragonflybsd.org/~hasso/pbulk-logs/20090408.0512/meta/report.html
> Machine readable version:
> http://leaf.dragonflybsd.org/~hasso/pbulk-logs/20090408.0512/meta/report.bz2
> 
> Total number of packages:   8441
>   Successfully built:   7591
>   Failed to build:   349
>   Depending on failed package:   176
>   Explicitly broken or masked:   264
>   Depending on masked package:61
> 
> Packages breaking the most other packages
> 
> Package   Breaks Maintainer
> -
> devel/libgweather 62 pkgsrc-us...@netbsd.org
> lang/mono 25 kef...@netbsd.org

I was able to compile Mono on my DragonFly system. It can probably be
fixed by adding "msgfmt" to USE_TOOLS in the Makefile (maybe intltool as
well).

Regards,

  Michael

diff --git a/Makefile b/Makefile
index d19a234..98985f5 100644
--- a/Makefile
+++ b/Makefile
@@ -18,7 +18,7 @@ CONFLICTS=pnet-[0-9]*
 MONO_VERSION=  2.4
 ALL_ENV+=  MONO_SHARED_DIR=${WRKDIR:Q}

-USE_TOOLS+=bison gmake gtar perl:run pkg-config bash:run
+USE_TOOLS+=bison gmake gtar perl:run pkg-config bash:run msgfmt
 USE_LIBTOOL=   yes
 USE_LANGUAGES+=c c++
 EXTRACT_USING= gtar




Re: Installing DragonFly

2009-04-08 Thread Michael Neumann
Sascha Wildner wrote:

> Colin Adams schrieb:
>> I'm trying to install from the DVD.
>> 
>> When i get to the login prompt, I type installer.
>> 
>> Now every screen I come to, I get, in addition to the formatted screens,
>> I get:
>> 
>> Login incorrect
>> login:
>> 
>> Password:/i386 (dfly-live) (ttyv1)
>> 
>> login:
>> 
>> 
>> It appears I need some kind of password to login as installer. I can't
>> see this in the handbook.
> 
> Yea it's a known bug which has been fixed some time ago.
> 
> Do the following:
> 
> 1) Boot the CD
> 2) Login as root
> 3) Edit /etc/ttys and remove the ttyv1 entry
> 4) kill -1 1
> 5) Logout and relogin as installer
> 
> Generally I wouldn't recommend to take the release ISO. 2.2 snapshot is
> better as it has important bug fixes:
> 
> http://chlamydia.fs.ei.tum.de/pub/DragonFly/snapshots/i386/LATEST-Release-2.2.iso.bz2

It would be nice to build and distribute snapshots of the USB-stick version 
as well. IMHO this is the easiest and most economical way to try out a 
development version of DragonFly.

Regards,

  Michael




Re: Installing from CD/DVD only?

2009-04-07 Thread Michael Neumann
Colin Adams wrote:

> How do I find out if the motherboard supports it? (I bought the
> machine in question in April 2003)
> 
> I think what I will probably do is buy a cheap USB DVD drive (I have
> other reasons to do this), install Linux on the machine, create a KVM
> guest for DragonFly, and attach the USB drive to the KVM guest.

Or just download the USB image, "burn" it onto an empty USB stick using
"dd" and try it out! All you need is a USB stick with around 512 MB 
capacity (they should cost just a few bucks). If you want to test whether 
your motherboard works with DragonFly, using KVM will not help, as it 
emulates parts of the hardware. 

Regards,

  Michael



Re: Building a bootable USB installer

2009-03-25 Thread Michael Neumann
John Leimon wrote:

> Friends,
> 
> Is there a way I can put a dragonfly ISO onto a USB drive and make it
> bootable? I would like to be able to install the OS from a thumb drive
> because my notebook doesn't have a CDROM drive. Any ideas?

I had the same problem some time ago, so I developed the USB stick version.
Just go to Download [1] and click on the "USB" link [2]. After unpacking 
with gzip you can put this image onto your USB stick using "dd".

You might want to load the "ehci" (USB 2.0) kernel module in the boot loader 
or try to load it from the loader command line (this will be much faster). I 
remember some trouble with it, so you'll have to try. 
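To spell that out: at the boot loader's "OK" prompt you would type
something like

  load ehci
  boot

or, if the system boots fine without it, load the module afterwards from
a root shell with "kldload ehci".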

Regards,

  Michael

[1]: http://www.dragonflybsd.org/download/
[2]: ftp://chlamydia.fs.ei.tum.de/pub/DragonFly/iso-images/dfly-img-2.2.0_REL.img.gz




Re: ASUS Eee compatibility

2009-03-23 Thread Michael Neumann
Sepherosa Ziehau wrote:

> On Mon, Mar 23, 2009 at 3:35 AM, Justin C. Sherrill
>  wrote:
>> On Sun, March 22, 2009 12:36 pm, Michael Neumann wrote:
>>> Am Freitag, 20. März 2009 16:02:48 schrieb Justin C. Sherrill:
>>>> On Fri, March 20, 2009 6:22 am, Michael Neumann wrote:
>>>> > I have this ASUS EEE PC 1000H lying next to me, and DragonFly works
>>>> > perfectly on it [1]. It boots fine from the USB stick edition.
>>>> Hardware
>>>> > is supported except the wireless (wired works with the ale driver). I
>>>> > haven't tested X because I didn't want to overwrite Windows :)
>>>>
>>>> What's the wireless chipset in those?
>>>
>>> It's an RT2860.
>>>
>>> I did some initial (minor) porting from OpenBSD:
>>>
>>> http://gitweb.dragonflybsd.org/~mneumann/dragonfly.git/shortlog/refs/heads/rt2860
>>>
>>>> And would it be worthwhile for people to band together and buy Sephe
>>>> one of these for support?
>>>
>>> If Sephe thinks he'd like one, then I think this is a great idea. Count
>>> me in with 100 EUR.
>>
>> Sephe - would this, or similar hardware, be helpful?
> 
> Sorry, I didn't follow this thread :)
> 
> I have the hardware (a PCI card, somewhere lying in my office).  The
> problem is I don't have enough time.

No problem! For me it would just be a "nice to have". It's probably easier 
to buy a supported mini-pci wireless card or use a USB wireless adapter. To 
me it seems as if wireless technology has faster development cycles than 
wired network cards, so it's a waste of time to support a chip that 
quickly becomes outdated. Maybe I'm just wrong :)

Btw, yesterday I tested the DragonFly LiveDVD on the ASUS EEE 1000H, and it 
works very well. SMP, X all fine!

Regards,

  Michael


> 
> Best Regards,
> sephe
> 




Re: ASUS Eee compatibility

2009-03-22 Thread Michael Neumann
Am Freitag, 20. März 2009 16:02:48 schrieb Justin C. Sherrill:
> On Fri, March 20, 2009 6:22 am, Michael Neumann wrote:
> > I have this ASUS EEE PC 1000H lying next to me, and DragonFly works
> > perfectly on it [1]. It boots fine from the USB stick edition. Hardware
> > is supported except the wireless (wired works with the ale driver). I
> > haven't tested X because I didn't want to overwrite Windows :)
>
> What's the wireless chipset in those?

It's an RT2860.

I did some initial (minor) porting from OpenBSD:

http://gitweb.dragonflybsd.org/~mneumann/dragonfly.git/shortlog/refs/heads/rt2860

> And would it be worthwhile for people to band together and buy Sephe one
> of these for support?

If Sephe thinks he'd like one, then I think this is a great idea. Count me in 
with 100 EUR.

Regards,

  Michael




Re: ASUS Eee compatibility

2009-03-20 Thread Michael Neumann
Am Thu, 19 Mar 2009 17:00:01 -0500
schrieb John Leimon :

> Hello,
> 
> Has anybody tested the lasted version of dragonfly with an ASUS Eee
> netbook? I would like to know if there are any hardware compatibility
> issues.

I have this ASUS EEE PC 1000H lying next to me, and DragonFly works
perfectly on it [1]. It boots fine from the USB stick edition. Hardware
is supported except the wireless (wired works with the ale driver). I
haven't tested X because I didn't want to overwrite Windows :)

I don't remember if I tried the SMP kernel (the Atom features Hyper
Threading). I think I did and it worked.

I can really recommend this little, cheap piece of hardware,
especially if you travel a lot.

Regards,

  Michael

[1] http://leaf.dragonflybsd.org/mailarchive/users/2008-12/msg00075.html



Re: OT - was Hammer or ZFS based backup, encryption

2009-02-22 Thread Michael Neumann
Am Sun, 22 Feb 2009 06:33:44 -0800
schrieb Jeremy Chadwick :

> On Sun, Feb 22, 2009 at 01:36:28PM +0100, Michael Neumann wrote:
> > Am Sat, 21 Feb 2009 19:17:11 -0800
> > schrieb Jeremy Chadwick :
> > 
> > > On Sun, Feb 22, 2009 at 11:59:57AM +1100, Dmitri Nikulin wrote:
> > > > On Sun, Feb 22, 2009 at 10:34 AM, Bill Hacker
> > > >  wrote:
> > > > > Hopefully more 'good stuff' will be ported out of Solaris
> > > > > before it hits the 'too costly vs the alternatives' wall and
> > > > > is orphaned.
> > > > 
> > > > Btrfs has been merged into mainline Linux now, and although it's
> > > > pretty far behind ZFS in completeness at the moment, it
> > > > represents a far greater degree of flexibility and power. In a
> > > > couple of years when it's stable and user friendly, high-end
> > > > storage solutions will move back to Linux, after having given
> > > > Sun a lot of contracts due specifically to ZFS.
> > > 
> > > The fact that btrfs offers grow/shrink capability puts it ahead
> > > of ZFS with regards to home users who desire a NAS.  I can't
> > > stress this point enough.  ZFS's lack of this capability limits
> > > its scope.  As it stands now, if you replace a disk with a larger
> > > one, you have to go through this extremely fun process to make
> > > use of the new space available:
> > > 
> > > - Offload all of your data somewhere (read: not "zfs export");
> > > rsync is usually what people end up using -- if you have multiple
> > > ZFS filesystems, this can take some time
> > > - zpool destroy
> > > - zpool create
> > > - zfs create
> > > 
> > > And if you add a new disk to the system, it's impossible to add
> > > that disk to the existing pool -- you can, of course, create an
> > > entirely new zpool which uses that disk, but that has nothing to
> > > do with the existing zpool.  So you get to do the above dance.
> > 
> > Hm, I thought that would work easily with ZFS, and at least in
> > theory I think that should work well with ZFS. Or what is wrong
> > with:
> > 
> >   zpool add tank /dev/ad8s1
> 
> This will only work how you expect if you're using a ZFS mirror.  With
> RAIDZ, it doesn't work -- you're forced to add the new disk into a new
> zpool.  This is one of the shortcomings of ZFS (and it is documented,
> but only lightly so).
> 
> > Okay "zpool remove" doesn't seem to work as expected, but it should
> > work well at least for RAID-1 (which probably no one uses for large
> > storage systems ;-). Maybe "zfs replace" works, if you replace an
> > old disk, with a larger disk, and split it into two partitions, the
> > one equally sized to the old, and the other containing the
> > remainder of the space. Then do:
> > 
> >   zfs replace tank old_device new_device_equally_sized
> >   zfs add tank new_device_remainder
> > 
> > But you probably know more about ZFS than me ;-)
> 
> In this case, yes (that I know more about ZFS than you :-) ).  What
> you're trying to do there won't work.
> 
> The "zfs" command manages filesystems (e.g. pieces under a zpool).
> You cannot do anything with devices (disks) with "zfs".  I think you
> mean "zpool", especially since the only "replace" command is "zpool
> replace".

Oops, yep, that was of course a typo of mine ;-)
 
> What you're trying to describe won't work, for the same reason I
> described above (with your "zpool add tank ad8s1" command).  You can
> split the disk into two pieces if you want, but it's not going to
> change the fact that you cannot *grow* a zpool.  You literally have to
> destroy it and recreate it for the pool to increase in size.

Ah okay, that's probably because the filesystem and the RAID layer are too
tightly coupled in ZFS. So if I understand correctly, you can't grow a
ZFS RAID-5 pool or anything similar to RAID-5.
Now a ZFS filesystem can probably only use blocks from one pool, so the
result is that you can't grow a ZFS filesystem living on a RAID-5-like
pool either. A bad example of coupling...

With Hammer the situation is different. You can let vinum
manage a RAID-5 pool (don't know if this is stable, but that's not my
point) and add the storage to a Hammer FS. If you need more space you
have two choices:

  1) Replace a disk with a larger one, splitting it into two subdisks
 (as I described in the last post).

  2) simply create a new RAID-5 pool (built using some new
 disks) and add it as well to the same filesystem. If you reblock
 everything to the new RAID-5 pool you could then remove the old
 RAID-5 pool completely.
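If Hammer gains the ability to add and remove volumes on a mounted
filesystem (work in that direction exists), option 2 might look roughly
like this. The command names and device paths below are hypothetical
placeholders, not an existing interface:

  hammer volume-add /dev/vinum/raid5b /data   # attach the new RAID-5 pool
  hammer reblock /data                        # migrate data off the old volume
  hammer volume-del /dev/vinum/raid5a /data   # finally drop the old pool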

Regards,

  Michael


Re: OT - was Hammer or ZFS based backup, encryption

2009-02-22 Thread Michael Neumann
Am Sat, 21 Feb 2009 19:17:11 -0800
schrieb Jeremy Chadwick :

> On Sun, Feb 22, 2009 at 11:59:57AM +1100, Dmitri Nikulin wrote:
> > On Sun, Feb 22, 2009 at 10:34 AM, Bill Hacker 
> > wrote:
> > > Hopefully more 'good stuff' will be ported out of Solaris before
> > > it hits the 'too costly vs the alternatives' wall and is orphaned.
> > 
> > Btrfs has been merged into mainline Linux now, and although it's
> > pretty far behind ZFS in completeness at the moment, it represents a
> > far greater degree of flexibility and power. In a couple of years
> > when it's stable and user friendly, high-end storage solutions will
> > move back to Linux, after having given Sun a lot of contracts due
> > specifically to ZFS.
> 
> The fact that btrfs offers grow/shrink capability puts it ahead of ZFS
> with regards to home users who desire a NAS.  I can't stress this
> point enough.  ZFS's lack of this capability limits its scope.  As it
> stands now, if you replace a disk with a larger one, you have to go
> through this extremely fun process to make use of the new space
> available:
> 
> - Offload all of your data somewhere (read: not "zfs export"); rsync
>   is usually what people end up using -- if you have multiple ZFS
>   filesystems, this can take some time
> - zpool destroy
> - zpool create
> - zfs create
> 
> And if you add a new disk to the system, it's impossible to add that
> disk to the existing pool -- you can, of course, create an entirely
> new zpool which uses that disk, but that has nothing to do with the
> existing zpool.  So you get to do the above dance.

Hm, I thought that would work easily with ZFS, and at least in theory I
think that should work well with ZFS. Or what is wrong with:

  zpool add tank /dev/ad8s1

Okay "zpool remove" doesn't seem to work as expected, but it should
work well at least for RAID-1 (which probably no one uses for large
storage systems ;-). Maybe "zfs replace" works, if you replace an old
disk with a larger one and split it into two partitions, one equally
sized to the old and the other containing the remainder of the space.
Then do:

  zfs replace tank old_device new_device_equally_sized
  zfs add tank new_device_remainder

But you probably know more about ZFS than me ;-)

As for Hammer, I worked on some patches that will allow it to expand a
Hammer FS while mounted. It's actually very easy to implement (~100
LoC). And the shrink case should be at least in theory pretty easy to
implement, thanks to reblocking. So with very little work, we can make
Hammer grow/shrink natively (maybe it's in the next release). 

Regards,

  Michael


Re: the 'why' of pseudofs

2009-02-17 Thread Michael Neumann
Am Wed, 18 Feb 2009 05:25:10 +0800
schrieb Bill Hacker :

> Folks,
> 
> Google was no help, and I have only the last 54,000 or so of the 
> DragonFlyBSD newsgroup messages to hand on on the PowerBook, wherein
> a message-body search on pfs, PFS, pseudofs turned up only about 240
> or so messages, or Mark One eyeball processing..
> 
> That now done, I find:
> 
> Several of these cover conception, gestation, birth, and education  - 
> the 'what' or 'how' of pseudofs / PFS, so to speak.
> 
> ONE of which lists the pro /con vs PFS_NOT. And that one not really 
> hard-edged.
> 
> NONE of which tell me with any degree of absolute-ish-ness, if you
> will..
> 
> ... that one cannot, or even 'should not' run a HAMMER fs *without*
> PFS mounts.
> 
> ... or nullfs mounts.
> 
> or even  without softlinks.  Perish the thought. Or the
> confusion...
> 
> At all.
> 
> EG: 'none of the above'.
> 
> Mind - I see the rationale - even necessity - for their use in more
> than a few circumstances.
> 
> But I cannot seem to find the prohibitions against their 'non-use'.
> 
> What do you suppose breaks if I do not apply these in an initial
> setup, but rather leave them until specific needs arise, such as
> volume expansion, export, or mirroring?
> 
> I have in mind small drive(s) for /, /usr, /var/, /tmp, /home
> - perhaps not even hammerfs, those. Nothing there that was ever
> overly hard to backup, restore, or JF replace. My mailstore, for
> example, has never lived in any of those. Nor web pages. Nor
> Databases.
> 
> It is on separate, much larger, drive(s) for /data, /mail, /web, /pub 
> and such - where 'mission critical' clients live and play.
> 
> UFS(1), / FFS(1)  - not UFS2/FFS2 has made for less hassle when
> hardware goes pear-shaped or OS migration is afoot.
> 
> Enter (BF)HAMMER
> 
> But what concept am I missing here? Nice-to-have? Or absolute
> necessity?

A PFS is the smallest unit of mirroring, and the unit to which you can
apply specific retention policies. For example, while you do not want
to retain much history for /tmp, you might want to do so for /home.
When it comes to mirroring, you clearly do not want to mirror changes to
a /tmp PFS, while you do want to mirror changes to a /home PFS. If
everything lay on a single huge filesystem "/", we could not decide what
to mirror and what not. That's the major design decision. 

You might ask: why not simply specify which directories to mirror
and which to leave out (without considering PFS)? The issue is that,
AFAIK, mirroring works on a very low level, where only inode
numbers are available, not full pathnames, so something like:

  tar -cvzf /tmp/backup.tgz --exclude=/tmp --exclude=/var/tmp

would not work, or would be slow.

Another issue is locality. Metadata from one PFS lies more close
together and as such is faster to iterate.
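To make this concrete: retention is configured on each PFS individually,
along these lines (the retention periods are only examples of mine):

  hammer pfs-master /home    # create a master PFS for /home
  hammer viconfig /home      # edit its cleanup config, e.g.:
      snapshots 1d 60d       #   keep daily snapshots for 60 days
      prune     1d 5m
      reblock   1d 5m

while a /tmp PFS would simply be configured to retain no snapshots at all.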

Regards,

  Michael



Re: off-box mirror-stream and friends

2009-02-16 Thread Michael Neumann
Am Mon, 16 Feb 2009 11:45:45 +0100
schrieb Michael Neumann :

> Am Sun, 15 Feb 2009 21:38:54 -0800 (PST)
> schrieb Matthew Dillon :
> 
> > :I have what appears to be a 'Catch 22', wherein:
> > :
> > :hammer mirror-stream /master @:/new_slave
> > :
> > :returns:
> > :
> > :PFS slave /new-slave does not exist.
> :Do you want to create a new slave PFS? (yes|no) No terminal for response
> :Aborting operation
> > :validate_mrec_header: short read
> > :
> :'No terminal for response'  .was ass u me ed to be a byproduct of coming
> :in off an Xfce4-terminal (Xorg & Xfce4 are quite happy on 2.3.0, BTW)
> :
> > :Dropped back out to the raw tty0 console and tried it from there.
> > :
> > :No joy.
> > 
> > Definitely a bug in the hammer utility, I'm not sure there is 
> > anything I can do about it though because the remote ssh
> > connection has no channel to accept a Y or N answer... stdin and
> > stdout are used for the protocol stream and I think stderr is output
> > only.
> > 
> > In anycase, I think what this means is that this feature
> > currently only works if the slave is local (non-ssh connection).
> > So you would be able to do it with  .
> 
> Hm, I remember that we implemented this feature (auto-creation of
> slaves) so that it can operate over ssh. And IIRC, it once worked 
> for me using ssh (I am not sure if this was a remote machine or not).
> Does this mean, it is broken?

Looking at the function getyn() posted by Bill Hacker, I eventually
understood what's wrong. Code - the universal language...

So, really, the best thing that we can do is to introduce a
--force-slave-pfs-creation switch and replace the getyn() call in
cmd_mirror.c by a simple if (ForceSlavePfsCreation...). I like this
more than the original approach (using ttys), as it is usable from
within scripts.

While we are working on this, we could replace all
interactivity in hammer utilities with optional command line switches.
So the strategy could be: first look if there is a switch specified, if
not, fall back to /dev/tty, if this fails, assume NO.

I do not like a command line switch "-f", which can mean anything
(including force). It's too easy to mix it up with "-f" meaning
"--file". Think about doing some admin work late after midnight, when
your fingers go faster than your brain; while a "rm -rf" on a
Hammer FS would give you a second chance, a mixed-up "hammer
mirror-copy -f" probably would not :). I would really like to see a
--force here (or a more specific switch).
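With such a switch, the failing case from this thread would become fully
scriptable, e.g. (the switch name is of course only my proposal, not an
existing option):

  hammer mirror-stream --force-slave-pfs-creation /master user@remote:/new_slave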

Regards,

  Michael


Re: off-box mirror-stream and friends

2009-02-16 Thread Michael Neumann
Am Sun, 15 Feb 2009 21:38:54 -0800 (PST)
schrieb Matthew Dillon :

> :I have what appears to be a 'Catch 22', wherein:
> :
> :hammer mirror-stream /master @:/new_slave
> :
> :returns:
> :
> :PFS slave /new-slave does not exist.
> :Do you want to create a new slave PFS? (yes|no) No terminal for response
> :Aborting operation
> :validate_mrec_header: short read
> :
> :'No terminal for response'  .was ass u me ed to be a byproduct of coming
> :in off an Xfce4-terminal (Xorg & Xfce4 are quite happy on 2.3.0, BTW)
> :
> :Dropped back out to the raw tty0 console and tried it from there.
> :
> :No joy.
> 
> Definitely a bug in the hammer utility, I'm not sure there is 
> anything I can do about it though because the remote ssh
> connection has no channel to accept a Y or N answer... stdin and
> stdout are used for the protocol stream and I think stderr is output
> only.
> 
> In anycase, I think what this means is that this feature currently
> only works if the slave is local (non-ssh connection).  So you
> would be able to do it with  .

Hm, I remember that we implemented this feature (auto-creation of
slaves) so that it can operate over ssh. And IIRC, it once worked 
for me using ssh (I am not sure if this was a remote machine or not).
Does this mean, it is broken?

> :Command *appear* to succeed if/as/when I *manually* create
> 'new_slave' :in advance with a matching shared_uuid. A local
> mirror-copy to it :suceeds, with new_slave showing the files mirrored.
> :
> :However, while the -vvv flag gives 5-sec updates, they all show a
> newer :starting point that pfs-status has for the target, and the
> contents of :the slave never change.
> 
> You must access the slave via its softlink to get the latest
> version synced from the master.  If you try to access the slave via a
> null-mount you will be accessing a snapshot of the slave, not the
> current state of the slave.  The null mount locks in the transaction
> id of the slave.
> 
> :By way of contrast, mirror-stream between on-box master and on-box
> slave :  - same command otherwise - works fine.  No chdir needed to
> see the :updates, just a 'View, Reload' in thunar and sputniks.
> 
> You are probably accessing it via the softlink, yes?  The gui is
> probably using an absolute path.  If you were to CD into a
> sub-directory (even through the softlink), you would be accessing a
> snapshot as-of when you did the CD, not the latest synced copy.
> 
> :Query: Can the loop that seeks a 'yes' be changed to a 5-second 
> :countdown-timer with a message such as:
> :
> :Creating  Hit Ctrl-c to abort
> :
> :.absent which it JFDI.
> :
> :Thanks,
> :
> :Bill Hacker
> 
> That won't work, the target over an ssh link has no tty channel.
> 
> Adding an option to create the slave automatically and passing it
> to the target hammer utility when it is run via the ssh, so it never
> has to ask at all, would work.  If someone would like to do that and
> submit a patch, I don't think it would take more then 20 minutes of
> programming.

You mean, something like a -f (force) option? Should be damn easy to 
implement. I can do that, once I sit in front of a real computer
(with DragonFly) again :)

Regards,

  Michael


Re: Installation on Yet Another Netbook

2009-01-15 Thread Michael Neumann

Am 15.01.2009 03:01, schrieb Simon 'corecode' Schubert:
> Christopher Rawnsley wrote:
>> So now to the present day. I have re-synced with the git repo and this
>> time I tried a 'make img release' and dd the resultant image to my USB
>> flash drive. I tried to boot the image but all I got back from it was
>> that there was no operating system ( one of the messages at the
>> beginning of the image when I viewed the data in a hex editor ).
>>
>> Any help would be greatly appreciated and thanks for being patient
>> with me :-)
>
> Try using packet mode on the image:
>
> boot0cfg -B -o packet /dev/sd0

Is it safe to add that to the nrelease Makefile (of course working on 
the vnode device instead of sd0)?


Regards,

  Michael


Re: Installation on Yet Another Netbook

2009-01-15 Thread Michael Neumann

Am 15.01.2009 03:27, schrieb Christopher Rawnsley:
> On 15 Jan 2009, at 02:01, Simon 'corecode' Schubert wrote:
>> Try using packet mode on the image:
>>
>> boot0cfg -B -o packet /dev/sd0
>
> That did it. It booted to login prompt. I couldn't login as 'installer',
> however. I'm going to try a manual install for now.

You'd need "make img installer release" for that.
                     ^^^^^^^^^

Regards,

  Michael


Re: ECC RAM

2009-01-13 Thread Michael Neumann

Am 13.01.2009 10:01, schrieb Simon 'corecode' Schubert:
> Michael Neumann wrote:
>> But then your RAM can be faulty as well. I got a 1-bit correction
>> message last time I did a full buildworld on this ASUS mainboard.
>> Without ECC RAM I wouldn't have noticed this error.
>
> I was really trying hard to find a uATX mainboard that supports ECC RAM,
> but failed. Does anybody know of such mainboards? Seems that ECC is
> still being placed for the server market.

uATX or miniATX :)

Generally don't look for Intel boards, none of them have ECC!

If you need a uATX board with ECC support, take a look at ASUS M2N-VM
for example.

I haven't found a miniATX board that officially supports ECC. My JetWay
AMD board runs with ECC chips, but it doesn't seem to use ECC.

Regards,

  Michael


Re: RAID 1 or Hammer

2009-01-13 Thread Michael Neumann

Am 13.01.2009 06:02, schrieb Dmitri Nikulin:
> On Tue, Jan 13, 2009 at 2:15 PM, Matthew Dillon
>   wrote:
>> I've seen uncaught data corruption on older machines, but not in the
>> last few years.  Ah, the days of IDE cabling problems, remembered
>> fondly (or not).  I've seen bad data get through TCP connections
>> uncaught!  Yes, it actually does happen, even more so now that OS's
>> are depending more and more on CRC checking done by the ethernet 
> device.

>
> I once came across a machine in which the IDE cable was plugged in
> wrong. It was plugged a whole row over, leaving the end two pins
> unconnected, and offsetting each pin that was connected. Somehow the
> machine worked fine and only under high load would it start to give
> DMA errors, drop down the DMA level (which any OS I've seen does
> automatically), and continue slower but stable. I found no evidence of
> data corruption and the machine worked at full speeds when the cable
> was moved.
>
> One of my own machines had been supplied with a bad ASUS-labelled IDE
> cable, which exhibited similar symptoms to the one above that was
> plugged in wrong. From both incidents I learned that IDE is pretty
> robust on modern chipsets.
>
> I knew I wasn't just paranoid about ethernet bugs. That's why I run
> mission-critical links over SSH or OpenVPN so that a proper checksum
> and replay protocol take care of it.
>
>> ZFS uses its integrity check data for a lot more then simple validation.
>> It passes the information down into the I/O layer and this allows the
>> I/O layer (aka the software-RAID layer) to determine which underlying
>> block is the correct one when presented with multiple choices. So, for
>> example, if data is mirrored the ZFS I/O layer can determine which of
>> the mirrored blocks is valid... A, B, both, or neither.
>
> Actually that's precisely what I like about ZFS, the self-healing with
> redundant copies. It means that as long as any corruption is healed
> before the redundant copy is similarly damaged, there will be
> basically no loss of data *or* redundancy. This is in stark contrast
> to a plain mirror in which one copy will become unusable and the data
> in question is no longer redundant, and possibly incorrect.
>
>> People have debunked Sun's tests as pertaining to a properly functioning
>> RAID system.  But ZFS also handles any Black Swan that shows up in the
>> entire I/O path.  A Black Swan is an unexpected condition.  For example,
>> an obscure software bug in the many layers of firmware that the data
>> passes through.  Software is so complex these days there are plenty of
>> ways the data can get lost or corrupted without necessarily causing
>> actual corruption at the physical layer.
>
> I confess that, lacking ZFS, I have a very paranoid strategy on my
> Linux machines for doing backups (of code workspaces, etc). I archive
> the code onto a tmpfs and checksum that, and from the tmpfs distribute
> the archive and checksum to local and remote archives. This avoids the
> unthinkably unlikely worst case where an archive can be written to
> disk, dropped from cache, corrupted, and read back wrong in time to be
> checksummed. The on-disk workspace itself has no such protection, but

But then your RAM can be faulty as well. I got a 1-bit correction
message last time I did a full buildworld on this ASUS mainboard.
Without ECC RAM I wouldn't have noticed this error.

> at least I can tell that each backup is as good as the workspace was
> when archived, which of course has to pass a complete recompile.

You could also use git for your code workspaces, which creates SHA1
checksums of every file, so that you can verify your backups all the
time.  There is even a very cool guy who uses git for doing backups of
his system [1].
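To illustrate the verification part: git recomputes the SHA1 of every
object on demand, so checking a backup repository is a one-liner. A
minimal self-contained sketch (the repository here is a throwaway
created on the spot, standing in for a real backup repo):

```shell
#!/bin/sh
set -e
# Create a tiny throwaway repository standing in for a backup repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "important data" > workspace.txt
git add workspace.txt
git -c user.name=backup -c user.email=backup@example.org \
    commit -q -m "backup"
# fsck walks every object and verifies its SHA1 checksum; it is silent
# and exits 0 when the repository is intact.
git fsck --full
echo "backup verified"
```

For an actual backup you would run only the "git fsck --full" step
against the stored repository.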

Regards,

  Michael

[1]: http://eigenclass.org/hiki/gibak-backup-system-introduction


Re: RAID 1 or Hammer

2009-01-12 Thread Michael Neumann

Am 12.01.2009 23:20, schrieb Simon 'corecode' Schubert:
> Konstantinos Pachnis wrote:
>>> I'm curious if RAID 1 (mirroring) really helps to protect data loss.
>>> Of course if a whole disk "dies", RAID 1 has the advantage that
>>> I have an identical copy. But what happens if only a sector of one disk
>>> contains bad data. How can the RAID controller decide which is the
>>> correct sector? Or would the disk detect such a case and return an
>>> error?

>>
>> When the controller will try to perform an I/O operation it will fail
>> on the faulty disk (the disk with the bad sector). As a result the
>> controller will be able to decide which is the correct sector.
>
> You're assuming fail-stop errors. If it is a sneaking bit error or
> something else, it won't notice.

But Hammer won't notice this either, right? IIRC, Hammer does CRCs only
on meta-data.

Regards,

  Michael


Re: ASUS Eee PC 1000H (age(4))

2008-12-31 Thread Michael Neumann

Am 31.12.2008 15:36, schrieb Sepherosa Ziehau:
> On Sun, Dec 28, 2008 at 5:24 PM, Sepherosa Ziehau wrote:

>> On Sun, Dec 28, 2008 at 9:11 AM, Christopher Rawnsley
>>   wrote:
>>> On 27 Dec 2008, at 19:12, nntp.dragonflybsd.org wrote:
>>>> Anyone knows if there is a plan to port the Attansic age(4) driver to
>>>> DragonFly (and the Ralink wireless)?
>> I will take care of them (both age(4) and ale(4)), though I can't
>> promise that it will be done before 2.2 release.  I could only assure
>> you that they could be finished within next several weeks.  Next
>> several weeks will be a good time frame to port some drivers; we are
>> near release, I have no plan to mess too much with critical code :P.
>
> ale(4) is committed.  You could either pull from git repo or wait for
> a suitable snapshot LiveCD.
> It is not in GENERIC yet; You will have to manually load if_ale.ko
> If it works, I will add it to GENERIC.

Will try next year :)

Thanks a lot!

Regards,

  Michael


Re: sata/ide usb adapter tested with dfbsd?

2008-12-29 Thread Michael Neumann

Am 28.12.2008 20:57, schrieb Ferruccio Zamuner:

Hi,

Have you successfully used a USB SATA/IDE adapter with DragonFlyBSD?
I'm looking for a working one.


I bought one last Saturday and it works perfectly with Dragonfly.
It even includes a power supply and supports SATA *and* IDE devices and
that for just around 20 EUR.

http://www.kmelektronik.de/main_site/prod_detail/detail.php?ArtNr=11595

Regards,

  Michael


Re: Hammer history question

2008-11-21 Thread Michael Neumann

Petr Janda schrieb:
So I accidentally deleted one of my worksheets. How do I find out the
modification history of the file and the necessary transaction id?


I've looked at "hammer history" but it doesn't seem to show me anything but
the last modification:


[EMAIL PROTECTED]:/home/petr/docs/callstream_files/2009# hammer history report4.ods

report4.ods 000829bb8510 clean {
00086128f420 13-Nov-2008 17:57:06
}

even though I modified the file at least 5 times this week and hammer
cleanup hasn't been run since last Sunday.


Don't know if that works for already deleted files, but at least for 
modified files, you should be able to view the changes made to that file 
by using "undo".


Regards,

  Michael


Re: What's type of HAMMER fs volume is?

2008-10-27 Thread Michael Neumann

lhmwzy schrieb:

At this point, HAMMER seems like RAID0.


In case a HAMMER filesystem consists of multiple disks, the capacity of
all disks is used. That has nothing to do with striping (RAID0). Space
can be allocated and used from each disk.


With RAID there are fixed rules for which disk a sector resides on (disk 0,
disk 1, disk 0 for striping with 2 disks, for example), and there is no
explicit information about which disk a given block resides on, while HAMMER
makes this explicit and, AFAIK, uses chunks of contiguous space.


Regards,

  Michael


Re: problem with DragonFly in Qemu

2008-10-10 Thread Michael Neumann

dark0s Optik wrote:

Ok, but I run Qemu on Slackware, and DragonFly on Qemu.

have you some suggestions?
  
If you can afford the money, try VMware. VirtualBox would be a good
alternative, but DragonFly won't boot here (due to some timer problems).
For Qemu I can only recommend reading the manual or some wikis. There is
a lot of information about Qemu and Linux.

Regards,

 Michael


regards,
saverio

2008/10/10 Michael Neumann <[EMAIL PROTECTED]>:
  

dark0s Optik wrote:


I've a Slackware GNU/Linux with Qemu 0.9.1.
I installed DragonFlyBSD 2.0.1 over Qemu. dragonFly has ed0 ethernet
interface, but it don't connect to internet.

#cat /etc/rc.conf
...
...

ifconfig_ed0="DHCP"

#ifconfig -a
lp0...

ed0 with IP address 10.0.2.15


How can I to do connect DragonFly (over Qemu) to internet?
  

This is not related to DragonFly but to Qemu in general.
IMHO the best way to connect a qemu instance to the internet is by
using NAT. I wrote a HOWTO a few days ago:

http://www.ntecs.de/blog/articles/2008/10/07/qemu-on-freebsd-7/

Regards,

 Michael




Re: problem with DragonFly in Qemu

2008-10-10 Thread Michael Neumann

dark0s Optik wrote:

I've a Slackware GNU/Linux with Qemu 0.9.1.
I installed DragonFlyBSD 2.0.1 over Qemu. dragonFly has ed0 ethernet
interface, but it don't connect to internet.

#cat /etc/rc.conf
...
...

ifconfig_ed0="DHCP"

#ifconfig -a
lp0...

ed0 with IP address 10.0.2.15


How can I to do connect DragonFly (over Qemu) to internet?


This is not related to DragonFly but to Qemu in general.
IMHO the best way to connect a qemu instance to the internet is by
using NAT. I wrote a HOWTO a few days ago:

http://www.ntecs.de/blog/articles/2008/10/07/qemu-on-freebsd-7/
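As a quick check before setting up NAT: the 10.0.2.15 address suggests qemu's built-in user-mode (slirp) networking is already active. Assuming qemu's usual slirp defaults (gateway 10.0.2.2, DNS proxy 10.0.2.3), outbound access from the guest might be made to work like this:

```shell
# Inside the DragonFly guest; assumes qemu user-mode networking defaults
ifconfig ed0 10.0.2.15 netmask 255.255.255.0
route add default 10.0.2.2                      # slirp's gateway address
echo 'nameserver 10.0.2.3' > /etc/resolv.conf   # slirp's DNS proxy
```

Inbound connections to the guest still need tap/NAT as described in the HOWTO above.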

Regards,

  Michael


Re: HAMMER mirroring feature question

2008-10-10 Thread Michael Neumann

Igor Pokrovsky wrote:

I was following instructions from HAMMER manual page to create a mirror
like this

# hammer pfs-master /home/pfs/master

Creating PFS #3 succeeded!
/home/pfs/master
sync-beg-tid=0x0001
sync-end-tid=0x000161a85910
shared-uuid=ac6bce37-96ab-11dd-8310-01055d75dad0
unique-uuid=ac6bce9e-96ab-11dd-8310-01055d75dad0
label=""
operating as a MASTER
snapshots dir for master defaults to /snapshots


# hammer pfs-slave /home/pfs/slave shared-uuid=ac6bce37-96ab-11dd-8310-01055d75dad0

Creating PFS #4 succeeded!
/home/pfs/slave
sync-beg-tid=0x0001
sync-end-tid=0x0001
shared-uuid=ac6bce37-96ab-11dd-8310-01055d75dad0
unique-uuid=d24e60f1-96ab-11dd-8310-01055d75dad0
slave
label=""
operating as a SLAVE
snapshots directory not set for slave


# mount_null /home/pfs/master /home/master
# mount_null /home/pfs/slave /home/slave

mount_null: /home/pfs/@@0x0001:4: No such file or
directory


# hammer mirror-copy /home/master /home/slave


Try this:

# hammer mirror-copy /home/pfs/master /home/pfs/slave

Regards,

  Michael


Re: USB keyboard

2008-09-10 Thread Michael Neumann

Stefan Johannesdal schrieb:

Sepherosa Ziehau wrote:

On Tue, Sep 9, 2008 at 8:59 PM, Stefan Johannesdal
<[EMAIL PROTECTED]> wrote:
 

Hi!

I have encountered a slightly annoying problem. When using my USB keyboard
with an SMP-enabled kernel it simply won't work, and I have to plug my old
PS/2 keyboard back in and reboot to get a working keyboard. With a UP kernel
the USB keyboard works as it should.

Any ideas?


This could be related to the following patch:

http://www.dragonflybsd.org/cvsweb/src/sys/bus/usb/usb.c?rev=1.49&content-type=text/x-cvsweb-markup

Please try to set the following tunable at boot-time:

  hw.usb.hack_defer_exploration=0
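To make a boot-time tunable persistent, the usual place on DragonFly (as on FreeBSD) is /boot/loader.conf, which the loader reads before the kernel starts:

```shell
# /boot/loader.conf
hw.usb.hack_defer_exploration="0"
```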

Regards,

  Michael


Re: Hammer talk

2008-08-13 Thread Michael Neumann

Sdävtaker wrote:

Hey,
I'm collecting data for the talk at the JRSL next week (about Hammer).
Agustin Nieto from UBA joined, and we are going to talk together.
Our plan for the talk is to show the features and some demo scripts of
how to use it, then move to a feature comparison against popular FSs
(ZFS, UFS, EXT3), and lastly go for some benchmark comparisons.
If someone has test scripts, benchmark scripts, graphics, or anything
you want to share and think is useful to talk about, please send it
to me and I will add the references and try to add it to the talk.
The talk will be 30-45 minutes long and is oriented to advanced
users and admins; we will not talk about implementation details, just
how to use it and why to use it. Anyway, Agustin and I are interested
in the implementation ;-)
Another thing: I was trying to measure I/O, memory and CPU for the
benchmarking. I can do a nice comparison against UFS (running in a
similar clean DFBSD installation), but I can't find a fair way to
compare against EXT3 or ZFS. Any ideas?
Thanks for any help and suggestions.
Damian


Hi,

Just two pointers.

http://www.ntecs.de/blog/articles/2008/07/30/dragonfly-on-hammer/
http://www.ntecs.de/blog/articles/2008/01/17/zfs-vs-hammerfs/

Regards,

  Michael


Re: hammer: big file changes very often

2008-08-09 Thread Michael Neumann

[EMAIL PROTECTED] wrote:

Hi,
I've just been thinking about this: what if I had a, let's say, > 1 GB
database file which changes at least every 30 seconds? If sync is run
every 30 seconds, I would effectively create 2880 historical copies of the
same 1 GB file every day. This would equal almost 3 TB of history every
day. I think it's safe to assume that most people don't have 3 TB of space.
How would you have to configure Hammer to only keep one copy a
week (without creating 3 TB of history every day), or not keep any copy of
this file at all?


"chflags nohistory" is your friend.
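A sketch of that approach (the paths are made up; as far as I recall, the nohistory flag set on a directory is inherited by files created under it, but check chflags(1) and HAMMER's docs on your system):

```shell
# Exempt the hot database file from HAMMER history retention
chflags nohistory /db/pgdata           # new files under it inherit the flag
chflags nohistory /db/pgdata/big.db    # or flag an existing file directly
ls -lo /db/pgdata/big.db               # -o displays the file flags
```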

Regards,

  Michael


Re: Site layout and design discussion

2008-08-04 Thread Michael Neumann

Justin C. Sherrill wrote:

So, since the release is past:

http://www.shiningsilence.com:81/

>


(I got this together just before the release, so no 2.0 stuff is on it.)

Questions I have for people:
- How does it look to you?


Very cool design!!!


- The front page looks plain.  Who wants to contribute art?

This work so far doesn't take into account how the actual information is
located on the site, so more questions:

- What pages do you find most useful on the DragonFly site?


I like those small short-cut links on the NetBSD homepage:

  * The Guide (okay, *I* don't find that too useful)
  * Manual Pages (not so useful as well IMHO)
  * Mailing lists and Archives (useful)
  * CVS repository (definitely useful)
  * Report or query a bug (yeah, nice)
  * Software packages

So definitely: Mailing Lists and Archives (that could include 
newsgroups), CVS repo and bug-tracker.


Regards,

  Michael


Re: Hammer: Transactional file updates

2008-08-01 Thread Michael Neumann

Daniel Taylor wrote:

--- On Fri, 1/8/08, Michael Neumann <[EMAIL PROTECTED]> wrote:


   fd = open(file);  // behaves like START TRANSACTION
   read(fd, ...);
   write(fd, ...);
   close(fd);// behaves like COMMIT


If you want a commit on close, fsync() the file just before you close() it.


That's not what I want. fsync only guarantees that the data is stored
permanently; that's not my problem. I want to ensure that a group of
write operations is performed all-or-nothing.


That would be fine except that it would give me a new inode
number, and
the inode number is right now the only way to associate
further data
with a file.


Why do you care if you get a new inode vs multiple versions of the same inode?


Hammer doesn't reuse inode numbers. So inode numbers could be used as
unique id's to refer to that file (like a ROWID or OID in a database).

Regards,

  Michael


Re: Hammer: Transactional file updates

2008-08-01 Thread Michael Neumann

Matthew Dillon wrote:

:Hi,
:
:So Hammer does not guarantee "transactional consistency" of data in case
:of a crash, only that of meta-data, right?
:
:Is there a method to guarantee the write to be transactional, so that
:I either have the previous "version" of the file or the version that I
:wrote? Like this:
:
:   fd = open(file);  // behaves like START TRANSACTION
:   read(fd, ...);
:   write(fd, ...);
:   close(fd);// behaves like COMMIT
:
:That would be incredibly cool (and very useful!) and could play well due
:to Hammer's historical nature.
:
:I know I could probably solve this issue by creating a new file,
:fsyncing it and then doing a "rm old_file; mv new_file old_file" (or
:something like that), but that would give me a new inode number, which
:I'd like to avoid.
:
:Regards,
:
:   Michael

Well, you will never see garbage in the data, but there is no
integrated API available for enclosing multiple operations in a
single transaction.


Note that I was looking for "enclosing multiple operations *to the same
file* in a single transaction".


If you do a sequence of write()'s and nothing else the blocks will be
committed to the media at the operating system's whim, meaning not
necessarily in order, so a crash would result in spotty updates of the
file's data.  You will never get garbage like you can with UFS, but it
is not an all-or-nothing situation either.


So each write() is all-or-nothing?


Can an arbitrary transactional API be implemented in HAMMER?  Yes, it
can.  This is how you do it:

* run 'hammer synctid' to get a snapshot transaction id, write the TID
  to a file somewhere.  fsync().

* issue the operations you want to be atomic

* run 'hammer synctid' to get a snapshot transaction id, write the TID
  to a file somewhere.  fsync().

If a crash occurs during the sequence you can do a selective rollback
to the recorded TID for the portion of the filesystem you modified.
It could be done now, in fact, by using something like:

# perform rollback
cpdup -V directory@@ directory
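Dillon's three steps might look roughly like this in practice (paths are hypothetical; `hammer synctid` and `cpdup -V` are taken from this thread, and sync(1) here is a crude stand-in for a real fsync(2) call on the TID file):

```shell
# 1. record the opening snapshot TID, durably
tid=$(hammer synctid /hammer)
echo "$tid" > /var/db/txn.tid; sync

# 2. perform the operations that should be atomic as a group
cp new-config.db /hammer/app/config.db

# 3. record the closing TID
hammer synctid /hammer >> /var/db/txn.tid; sync

# After a crash between steps 1 and 3, roll the subtree back to $tid:
#   cpdup -V "/hammer/app@@${tid}" /hammer/app
```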


Hm, I was thinking more in terms of per-file transactions, not
necessarily whole-filesystem transactions. I think what you describe
wouldn't be too efficient compared against an MVCC database. What I
need could be implemented using temporary files:

  * lock original file

  * create temporary file

  * write temporary file

  * fsync temporary file

  * rename original file to something else

  * rename temporary file to original file

  * unlock
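The steps above can be sketched in shell. mv(1) within one filesystem is an atomic rename, so readers see either the old or the new file, never a mix; a real implementation would fsync(2) the descriptor rather than call sync(1), and would take the lock mentioned above (the filename is made up):

```shell
# Atomically replace "data.txt" via a temporary file in the same directory
tmp=$(mktemp data.XXXXXX)
printf 'new content\n' > "$tmp"
sync                        # stand-in for fsync(2) on $tmp
mv "$tmp" data.txt          # atomic switch to the new version
cat data.txt
```

As the thread notes, the drawback of this pattern is exactly that the rename gives the file a new inode number.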

That would be fine except that it would give me a new inode number, and
the inode number is right now the only way to associate further data
with a file.

Could that style of transactional-writes (per file) be implemented in
Hammer?

Regards,

  Michael


Re: Hammer: Transactional file updates

2008-08-01 Thread Michael Neumann

Martin Schut wrote:
On Fri, 01 Aug 2008 17:36:13 +0200, Michael Neumann <[EMAIL PROTECTED]> 
wrote:



Jasse Jansson wrote:

 On Aug 1, 2008, at 1:09 PM, Michael Neumann wrote:


Hi,

So Hammer does not guarantee "transactional consistency" of data in case
of a crash, only that of meta-data, right?

Is there a method to guarantee the write to be transactional, so that
I either have the previous "version" of the file or the version that I
wrote? Like this:

  fd = open(file);  // behaves like START TRANSACTION
  read(fd, ...);
  write(fd, ...);
  close(fd);// behaves like COMMIT

That would be incredibly cool (and very useful!) and could play well due
to Hammer's historical nature.

 You are talking about COW (copy on write), right.
It slows things down, but it's cool.


Well, Hammer's history-retention IMHO is similar, just that it is less
fine-grained than COW.

I'm not sure how Hammer internally works, but I think that a single
write() can generate a new version of that file (at least if you wait 30
seconds). What I'd like to have is the ability to do multiple writes to
that file and once I reach the end of my write transaction switch the
file to use that new version. During that period of time, other readers
would see the old content. Basically like a transaction in a database,
but provided by Hammer so that I don't have to reimplement transactions
myself ;-).


Sounds cool, but what happens once an application crashes? I suppose
you will not commit. But then, a lot of log files would not be written at
all. Also, utilities like tail -f won't work either.


It shouldn't be default behaviour. One would have to specify:

  fd = open("file", O_TRANSACT);

If the application would crash, the file would be in the previous state.

Regards,

  Michael


Re: Hammer: Transactional file updates

2008-08-01 Thread Michael Neumann

Jasse Jansson wrote:


On Aug 1, 2008, at 1:09 PM, Michael Neumann wrote:


Hi,

So Hammer does not guarantee "transactional consistency" of data in case
of a crash, only that of meta-data, right?

Is there a method to guarantee the write to be transactional, so that
I either have the previous "version" of the file or the version that I
wrote? Like this:

  fd = open(file);  // behaves like START TRANSACTION
  read(fd, ...);
  write(fd, ...);
  close(fd);// behaves like COMMIT

That would be incredibly cool (and very useful!) and could play well due
to Hammer's historical nature.


You are talking about COW (copy on write), right.
It slows things down, but it's cool.


Well, Hammer's history-retention IMHO is similar, just that it is less
fine-grained than COW.

I'm not sure how Hammer internally works, but I think that a single
write() can generate a new version of that file (at least if you wait 30
seconds). What I'd like to have is the ability to do multiple writes to
that file and once I reach the end of my write transaction switch the
file to use that new version. During that period of time, other readers
would see the old content. Basically like a transaction in a database,
but provided by Hammer so that I don't have to reimplement transactions
myself ;-).

Regards,

  Michael


Hammer: Transactional file updates

2008-08-01 Thread Michael Neumann

Hi,

So Hammer does not guarantee "transactional consistency" of data in case
of a crash, only that of meta-data, right?

Is there a method to guarantee the write to be transactional, so that
I either have the previous "version" of the file or the version that I
wrote? Like this:

  fd = open(file);  // behaves like START TRANSACTION
  read(fd, ...);
  write(fd, ...);
  close(fd);// behaves like COMMIT

That would be incredibly cool (and very useful!) and could play well due
to Hammer's historical nature.

I know I could probably solve this issue by creating a new file,
fsyncing it and then doing a "rm old_file; mv new_file old_file" (or
something like that), but that would give me a new inode number, which
I'd like to avoid.

Regards,

  Michael


Hammer pfs permissions

2008-07-28 Thread Michael Neumann

Hi,

It doesn't seem to be possible to assign permissions
(like 1777 for /tmp) to pseudo-filesystems:

  hammer pfs-master /tmp
  chmod 1777 /tmp
  ls -la /tmp
  # still shows "lrwxr-xr-x" for /tmp

Regards,

  Michael


Hammer pruning and pfs

2008-07-28 Thread Michael Neumann

Hi,

it's unclear to me whether pruning works locally to a PFS or not.

Say I have a snapshots directory with links to PFS#1, e.g.

  /pfs1/snapshots
  snap1 -> /pfs1/@0x
  snap2 -> /pfs1/@0x

and I do a

  hammer prune /pfs1/snapshots

will it just prune PFS#1 according to the softlinks, or
will it prune the whole Hammer filesystem using the softlinks?

My understanding is that each PFS can be pruned separately (otherwise 
mirroring using per-mirror retention policies would not work).

But then, if one has multiple PFS, one has to maintain multiple
snapshots for each PFS, even if the transaction id of the snapshot
is global to the whole Hammer filesystem.

Regards,

  Michael


Re: cpdup will silently overwrite pfs

2008-07-28 Thread Michael Neumann

Michael Neumann schrieb:

Hi,

I just noticed that the following:

  hammer pfs-master /hammer
  cpdup /something /hammer

will not behave as I initially assumed.

It will not copy the contents of /adirectory into


s/adirectory/something/ ;-)


cpdup will silently overwrite pfs

2008-07-28 Thread Michael Neumann

Hi,

I just noticed that the following:

  hammer pfs-master /hammer
  cpdup /something /hammer

will not behave as I initially assumed.

It will not copy the contents of /adirectory into
the pfs /hammer. Instead, it will remove the PFS symlink /hammer
and create a directory /hammer.

So the following:

  hammer pfs-status /hammer

will show PFS#0 (the root PFS) instead of the newly created one.

So users should be warned when using cpdup together with PFS.
Maybe we can do something about it (e.g. introducing a warning into cpdup).

It's easy to recreate the original PFS if you know its number (#1 in my 
case):


  ln -s "@@0x:1" /hammer

Or maybe it's wise to let hammer pfs-master/slave do a "chflags noschg" 
by default on the symlink?


BTW, now that I moved my 40GB into a directory on the root PFS instead 
of PFS #1, can I simply "mv" it to PFS #1? I assume I can just do that.


2nd BTW: I have now a "/" PFS and a "/data" PFS which I'd like to mirror 
separately. I assume that when I mirror "/", it will not include the

"/data" PFS. Is that correct?

Nevertheless, hammer and cpdup are extremely practical and great tools, 
I'd never ever want to miss again.


Regards,

  Michael


Re: DFBSD from VM to RM

2008-07-23 Thread Michael Neumann

Sdävtaker wrote:

Hey,
I'm doing a setup of DFBSD 2 in a VM to show my boss that it can provide
all the services we need. If I succeed (I will), I will have to move it
to a real machine. Is there a way to just move from VM to RM? Has
someone tried it, or is the best thing to do everything with a script and
then rerun the whole process? I'm just wondering about the time the server
will be down with the second choice.


1) Make sure that DragonFly runs on the real machine. You can just do
   that by running the live cd installer.

2) You now have (at least) three choices:

  a) Copy the raw disk from VM to RM (using dd).

 This might not be desired when you have
 a smaller DragonFly image on the VM than you'll have
 on the RM.

  b) File-system level copy.

 You could do a remote cpdup from VM to RM. Have your VM
 running somewhere you can access it via SSH.

 This is basically how the live cd installer works. It just
 cpdups the files from the live cd to the hard disk.

 It should be pretty fast to install a basic DragonFly
 system using the live cd installer and then doing a
 remote cpdup from the VM (ca. 30 minutes).

  c) Using Hammer migration.

 You'd need a small boot partition for /boot. The rest is a big
 hammer partition. Do that in your VM. Then create
 pseudo-filesystems (hammer pfs-master) for /usr etc.
 And install into the hammer filesystem (there is currently no
 support by the installer for hammer, but it should be pretty
 straight forward to move the files later on from the ufs to hammer
 partition). Once you did that, you only have to repeat
 partitioning, newfs'ing and pfs-creation on the RM
 (using the live-cd). Then you should be able to mirror the whole
 installation (except /boot) from the VM (using hammer mirror-copy).
 Once you did that, you have to upgrade the pseudo-fs's of the RM
 to become masters (hammer pfs-upgrade) so that you can mount them.

 However, this approach is a bit more advanced than the others.
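Option (b) might look like the following. The hostnames and mount point are made up, and I believe cpdup accepts a remote [user@]host:path spec over ssh, but verify against cpdup(1) before relying on it:

```shell
# On the real machine, after installing a minimal system from the live CD
# and mounting the target filesystem on /mnt/target:
cpdup root@vm-host:/ /mnt/target
# Afterwards, adjust /mnt/target/etc/fstab for the real disk layout
# before rebooting into the copied system.
```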

Regards,

  Michael


Re: HAMMER encryption

2008-07-22 Thread Michael Neumann

G. Mirov wrote:

Are there any plans to add encryption to HAMMER?

Matt, could you provide a quick overview (for potential  HAMMER encryption
developers) of where, when and how you believe the encryption layer
can/should be added to HAMMER?


I'd love to see a much more general translation layer, which would also
include compression.

Regards,

  Michael


Re: keeping pkgsrc up to date

2008-07-21 Thread Michael Neumann

Johannes Hofmann wrote:

What do you use to keep your pkgsrc tree up to date?
Anonymous cvs does work but is pretty slow.
The cvsup mirrors seem to be rather busy.
I used the mercurial repo at
http://hg.scode.org/mirror/pkgsrc
for some time, but it seems to be down now.


Couldn't we set up a master pkgsrc repository using Hammer, and then use
Hammer's mirroring capability? Would be nice to experiment and see how
it performs. As most pkgsrc files are actually very small, the size
penalty should not matter that much.

Oh btw, can we do Hammer mirroring from slaves? Actually, I don't see a
reason why this shouldn't be possible (I'm just too lazy to try it out
myself right now).

Regards,

  Michael


Re: DragonFlyBSD 2.0 RELEASE SCHEDULE / livelocked limit engaged error

2008-07-13 Thread Michael Neumann

Vincent Stemen wrote:

On Fri, Jul 11, 2008 at 11:40:45AM -0700, Matthew Dillon wrote:

The release is scheduled for Sunday 20-July-2008!  We have about a week
left!

Now is the time for people to list their must-haves and would-likes
for the release!  Please use this thread.  I'll start it off:


There is a bug that has been lingering for some time that I would sure
like to see fixed.  We have not been able to run a serial ATA DVDRW
drive on any of our machines.  Any time it is connected, we get
continuous repeating errors like

intr 10 at 40001/4hz, livelocked limit engage!
intr 10 at 19810/2 hz livelock removed.

We have found a few past postings apparently about this issue that
suggested that the problem might be related to USB.  We have found no
other mention of the problem being related to SATA CDRW/DVDRW drives.
However, it only occurs on our machines when an optical SATA drive is
connected.  SATA *hard* drives work fine and USB works fine otherwise.  We
have tested on two different Intel machines, and an AMD64 machine, with
an add on PCI SiI 3512 SATA150 controller card.  We also tested with the
on board VIA 6420 SATA150 controller on the AMD64 machine.  The problem
exists in all cases.  We have also tested with 3 different SATA CD
drives.

The problem exists on dragonfly 1.10.1-RELEASE and we just tested today
with the latest snapshot ISO image from yesterday (07/12/2008) and it
still exists.


Could you try to install FreeBSD 7.0 (and maybe a -HEAD version as
well). Just to see whether it experiences the same problems. As
DragonFly inherits the (n)ata code from FreeBSD, it would be easier to
locate. But I fear it's unrelated to our nata code.

Regards,

  Michael


Re: Creating lots of files on Hammer

2008-07-10 Thread Michael Neumann

Matthew Dillon wrote:

:Hi,
:
:I wrote a script that generates 1 million files in one directory on the
:hammer filesystem. The first 100k are created very quickly, then it
:starts to get less predictive. It stops completely after creating 836k
:files. I can still ping the machine, but I can't ssh into it any more.
:It's a head-less system so I can't tell what is going on exactly.
:
:I'm using the attached C file like this:
:
:   cr 100 test
:
:Regards,
:
:   Michael

 Oooh, nice test.  Mine got to 389000 before it deadlocked in
 the buffer cache.

 I'll be able to get this fixed today.  It looks like a simple
 issue.


Commit 61/A fixes the problem. But now, after creating around 3 million
files and doing a "cpdup /usr/pkgsrc /hammer/pkgsrc", running "make
head-src-cvsup" turned out to be extremely slow (actually it was the
"cvs update" part). Then I did a "rm -rf
/hammer/dir-with-a-million-files" and hammer finally died :) It probably
core dumped :(

I can try to reproduce it tomorrow with a display connected to it.

Another thing I noticed is that when there is a lot of file system
activity, other user processes are slowed down a lot (maybe they are
just blocked on a condition). At least that's how it feels.

Regards,

  Michael


Creating lots of files on Hammer

2008-07-10 Thread Michael Neumann

Hi,

I wrote a script that generates 1 million files in one directory on the
hammer filesystem. The first 100k are created very quickly, then it
starts to get less predictive. It stops completely after creating 836k
files. I can still ping the machine, but I can't ssh into it any more.
It's a head-less system so I can't tell what is going on exactly.

I'm using the attached C file like this:

  cr 100 test

Regards,

  Michael
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
  char path[100];
  int i, n, fh;

  if (argc != 3) {
    fprintf(stderr, "Usage: %s n dir\n", argv[0]);
    return -1;
  }

  n = atoi(argv[1]);
  for (i = 0; i < n; i++) {
    snprintf(path, 100, "%s/%d", argv[2], i);
    /* O_CREAT requires a mode argument */
    fh = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fh == -1)
      return -2;
    close(fh);

    if (i % 1000 == 0)
      printf("%d\n", i);
  }
  return 0;
}


Re: Hammer on-the-move

2008-07-10 Thread Michael Neumann

Matthew Dillon wrote:

:Hi,
:
:I think it shouldn't be too hard to switch a hammer master into a slave
:and a slave to a master, isn't it?
:...
:Many years ago I really hoped that what I described above would work out
:well using the Coda File System. Then came hammer... :)
:
:Regards,
:
:   Michael

You don't have to convince me :-)  I want to be able to do that to
implement fail-over support.


Hehe.


I restricted those options a few days ago (its basically just two lines
of code in the HAMMER user utility) and I haven't decided whether
to unrestrict them for the release or not.  The main reason is to
prevent early adopters of HAMMER from shooting themselves in the foot
because we do *NOT* have multi-master support yet and trying to
cross-mirror two masters can blow up a filesystem faster then chocolate
melts on a hot day.   The mirroring occurs at the B-Tree record level
and has no concept of high level filesystem topology.


Trying to understand what could be a worst-case scenario. Would I just
get mixed versions of files, i.e. a highly inconsistent view of files,
while the contents of the files stay the same, or would I get corrupted
file content?

What if every B-tree record would contain an origin field, which
identifies where this record was first created. When mirroring, this
field would not be modified, so it would become easy to "undo" a
mirroring operation just by removing all records of that origin.


To do multi-master merging support the mirroring program needs to be
made a lot smarter.  The low level structures *CAN* already identify
which master made a modification (the low 4 bits of the transaction id
will identify which master made the change, as long as every master


Ah, that's basically the "origin" field I'm talking about ;-)


is given a different master id).  But the mirroring program does not
yet use that information to resolve conflicts between masters when
you want to merge their data sets into a single coherent whole.


Do you plan to implement that (resolving conflicts)? I thought that a
multi-master hammer would work using quorums, so that no conflicts could
occur as there is always a well-defined state.


We are right on cusp of being able to do this but I am awfully worried
that enabling the feature in the release, before all the support
is in place, will cause too many user-support headaches and foot-shooting.

Maybe what I should do is allow slaves to be upgraded to masters but
put all sorts of 'are you really really really sure' warnings in if
someone tries to downgrade a master into a slave.  Either way I will
not allow the mirroring program to mirror between two masters, so
you'd have to downgrade a master into a slave first.  With the proviso
that merging two master data sets where BOTH may have been independently
modified is strictly off limits, I could allow it for the release.


Yes, better be conservative here.

Regards,

  Michael


Hammer on-the-move

2008-07-10 Thread Michael Neumann

Hi,

I think it shouldn't be too hard to switch a Hammer master into a slave
and a slave into a master, should it?

The reason why I'd love to do that is the following:

At home, I'd like to access my files from the central file-server. This
is even much faster than doing the same via a slow laptop hard disk. The
central file-server is now the master (or the /home/mneumann pfs).

Now I want to travel around for a while and of course I don't want to go
without my home directory :)

So I mirror the central /home/mneumann to my local hard disk and switch
it (the central /home/mneumann) into a read-only slave. At the same
time, I switch the local /home/mneumann to a master and mount it
read/write. When I come back from my travel (if I come back ;-), I
mirror back the changes from local -> central and again switch the
master into a slave and vice versa.
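The swap described above might be scripted roughly like this. The PFS paths are hypothetical; pfs-upgrade appears later in this thread, I am assuming a matching pfs-downgrade subcommand and that hammer mirror-copy accepts a remote host:path over ssh, so check hammer(8) before trying it:

```shell
# Before travelling: copy server -> laptop, then swap roles
hammer mirror-copy server:/home/mneumann /pfs/mneumann
hammer pfs-downgrade /home/mneumann    # on the server: master -> slave
hammer pfs-upgrade /pfs/mneumann       # on the laptop: slave -> master

# After returning: mirror the changes back and swap roles again
hammer mirror-copy /pfs/mneumann server:/home/mneumann
```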

The main advantages are:

  * You always have a backup around. You shouldn't care too much
if your laptop gets stolen.

  * You can access your files quickly when at home (using RAID to
accelerate) over NFS/Samba

  * Synchronization should be very fast unlike maybe rsync etc.

Btw, what would happen if I'd accidentially mount the central
file-server which right now acts as a slave read/write, modify some
files, and then mirror from the master? That is, the situation when
there are (accidentially) two masters. Will it do much harm to the file
system?

Many years ago I really hoped that what I described above would work out
well using the Coda File System. Then came Hammer... :)

Regards,

  Michael


Portable vkernel (emulator)

2008-07-10 Thread Michael Neumann

Hi,

IMHO, Hammer is the killer-feature of DragonFly, too sad that I can't
use it on another system until it gets ported. I'd of course love to run
a native DragonFly on my laptop (I'm planning to do so soon), but there is
still some unsupported hardware etc.

So instead of porting Hammer to other systems, wouldn't it be easier to
write a vkernel emulator that can run on *BSD and/or Linux (or even
Windoze)? I mean, it's just a user process, isn't it? How much effort
would it be to implement such an emulator for say FreeBSD? Is there any
possibility to run everything as a user process (like qemu), so no
extensions to the operating system must be done (kind of intercepting
the syscalls of the vkernel)?

Even if it might be super inefficient, it would be super cool to be able
to run a vkernel, which runs hammer, which runs samba, which serves as
fileserver for windoze ;-)
(or FreeBSD -> vkernel -> hammer -> nfs -> FreeBSD :)

The main issue that I want to solve is to be able to use Hammer even
while travelling around. At home, I can access my files over NFS/Samba.
But when being on the move, I'd like to access my files via a Hammer
slave filesystem, but therefore I need DragonFly installed (okay can do
that with qemu already...).

Regards,

  Michael


Re: Wake on LAN

2008-07-09 Thread Michael Neumann

Dmitri Nikulin wrote:

On Wed, Jul 9, 2008 at 5:59 AM, Michael Neumann <[EMAIL PROTECTED]> wrote:

Well, my mainboard supports it, but by searching around on the web, a
lot of people have problems with getting it working (including me).


If the BIOS supports waking off PCI LAN cards you can pick one up for
the cost of a sandwich.


BIOS says it supports that. Do you have recommendations for a specific 
card? I think the Intel Gigabit Adapter would be a good choice!?



Does it work if you Suspend instead of Halting the machine? I don't
know if WOL is supposed to work from Suspend but if it did, it'd work
around your problem nicely.


I think I tried that as well with no success.

Thanks

  Michael


Re: Wake on LAN

2008-07-08 Thread Michael Neumann

Matthew Dillon wrote:

:Hi,
:
:Has anybody got wake on lan (WOL) working with DragonFly? I patched
:if_nfe to not disable WOL, but it still doesn't seem to work. I can
:watch the network leds blinking while my box is off, so it's receiving
:those magic packets and I can also power-on the box using the keyboard.
:But it just doesn't wake up.
:
:Is there more involved (I read about not disabling PCI when shutting
:down would help or something similar, but it was Linux-related), or is
:it just my cheap ASUS mainboard?
:
:Regards,
:
:   Michael

I don't know anyone who uses that stuff, so it could be that it just
doesn't work at all.  Usually you also have to tell the BIOS which
devices to WOL on.


It's just that I'd prefer not to run my home-server 24x7, to avoid
wasting too much energy (while still being able to access it remotely).
In an ideal world it wouldn't be necessary to do that, as the components
would save power when idle.

Well, my mainboard supports it, but by searching around on the web, a
lot of people have problems with getting it working (including me).

Regards,

  Michael


Wake on LAN

2008-07-08 Thread Michael Neumann

Hi,

Has anybody got wake on lan (WOL) working with DragonFly? I patched
if_nfe to not disable WOL, but it still doesn't seem to work. I can
watch the network leds blinking while my box is off, so it's receiving
those magic packets and I can also power-on the box using the keyboard.
But it just doesn't wake up.

Is there more involved (I read about not disabling PCI when shutting
down would help or something similar, but it was Linux-related), or is
it just my cheap ASUS mainboard?

Regards,

  Michael
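[Editor's note: for reference, the "magic packets" mentioned above have a well-known format: six 0xFF bytes followed by the target MAC address repeated 16 times, usually sent as a UDP broadcast to port 9. A minimal Python sketch for sending one from another machine; the MAC, broadcast address, and port below are illustrative, and this is not part of any DragonFly tooling:]

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6 x 0xFF followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (port 9 is customary)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The resulting packet is always 102 bytes. If the NIC's LEDs blink on receipt but the box stays off, the packet is likely arriving and the problem sits on the BIOS/NIC side, as discussed in this thread.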


Re: Hammer performance

2008-07-07 Thread Michael Neumann

Matthew Dillon wrote:

:
:Another strange thing occurs if I "dd" directly to the device:
:
:   dd if=/dev/zero of=/dev/ad4s1d count=20000
:   20000+0 records in
:   20000+0 records out
:   10240000 bytes transferred in 7.361097 secs (1391097 bytes/sec)
:
:Here I only get around 1.4 MB/sec. Shouldn't that be a much higher value?
:
:Regards,
:
:   Michael

If you do not specify a block size parameter to dd it will be doing
512 byte writes.  A typical block size is 32k, e.g.:

dd if=/dev/zero of=/blah bs=32k count=10240


Thanks, that does the trick. Now I get ~100 MB/sec throughput for
sequential reads/writes which I guess is full platter speed.

Regards,

  Michael
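[Editor's note: the effect described above — tiny writes being dominated by per-call overhead — can be illustrated outside of dd. A rough Python sketch, not a benchmark of HAMMER or of dd itself; the 8 MB total and the block sizes are arbitrary choices:]

```python
import os
import tempfile
import time

def timed_write(block_size: int, total: int = 8 << 20) -> float:
    """Write `total` bytes in `block_size` chunks; return elapsed seconds.

    Small blocks mean many more write(2) calls for the same amount of
    data, which is why dd's default 512-byte blocks look so slow."""
    buf = b"\0" * block_size
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(total // block_size):
            os.write(fd, buf)
        os.fsync(fd)
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
```

Comparing `timed_write(512)` against `timed_write(32 * 1024)` typically shows the small-block run taking noticeably longer for the same data.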


Re: Hammer performance

2008-07-07 Thread Michael Neumann

Michael Neumann wrote:

Hi,

Just compared writing a 1 GB large file with Hammer and got:

  dragnas# dd if=/dev/zero of=test count=2000000
  2000000+0 records in
  2000000+0 records out
  1024000000 bytes transferred in 26.880290 secs (38094827 bytes/sec)

When I do the same (on the same hard disk) with UFS, I get around twice
the write performance:

  dd if=/dev/zero of=test count=2000000
  2000000+0 records in
  2000000+0 records out
  1024000000 bytes transferred in 12.816585 secs (79896477 bytes/sec)


Another strange thing occurs if I "dd" directly to the device:

  dd if=/dev/zero of=/dev/ad4s1d count=20000
  20000+0 records in
  20000+0 records out
  10240000 bytes transferred in 7.361097 secs (1391097 bytes/sec)

Here I only get around 1.4 MB/sec. Shouldn't that be a much higher value?

Regards,

  Michael


Hammer: Low interactivity during high filesystem activity

2008-07-07 Thread Michael Neumann

Hi,

Just noticed while running blogbench on my hammer partition, that "top"
in another window took several seconds (around 10) to show up. The same
happened for "man". I have a dual core Athlon X2, and I don't see heavy
CPU load. As I am accessing the box over SSH (nfe0), could that be the
potential bottleneck?

Regards,

  Michael


Hammer performance

2008-07-07 Thread Michael Neumann

Hi,

Just compared writing a 1 GB large file with Hammer and got:

  dragnas# dd if=/dev/zero of=test count=2000000
  2000000+0 records in
  2000000+0 records out
  1024000000 bytes transferred in 26.880290 secs (38094827 bytes/sec)

When I do the same (on the same hard disk) with UFS, I get around twice
the write performance:

  dd if=/dev/zero of=test count=2000000
  2000000+0 records in
  2000000+0 records out
  1024000000 bytes transferred in 12.816585 secs (79896477 bytes/sec)

Is there a possibility to tune hammer in any way, for example by giving
it more memory?

Regards,

  Michael


Re: EHCI working?

2008-07-02 Thread Michael Neumann

Simon 'corecode' Schubert wrote:

Hey,

could it be that EHCI is not working correctly?  On my desktop I get irq 
3 interrupt livelocks when loading EHCI (actually it is on/off 
livelocking).  On my laptop it seems to load okay, but then transferring 
data to my new mp3 player is slow, basically around 1MB/sec.  In dmesg, 
cam writes something about "Down reving Protocol Version from 2 to 0?", 
but I don't know what that means.  Does anybody have a working EHCI setup?


Hm, last time I tried, it was pretty slow. Could you try booting with
tunable hw.usb.hack_defer_exploration set to 0 (see [1]). Does this make
any difference?

Regards,

  Michael

[1]:
http://www.dragonflybsd.org/cvsweb/src/sys/bus/usb/usb.c?rev=1.49&content-type=text/x-cvsweb-markup


Re: HAMMER lockup

2008-06-30 Thread Michael Neumann

Matthew Dillon wrote:

:found disconnected inode 00010411441e
:[diagnostic] cache_lock: blocked on 0xc1529aa8 "log.smbd"
:
:log.smbd is strangely on a UFS partition.
:
:I know this is hard to debug but posted here as maybe we can sort it out
:anyway. I am glad to provide the information you need and perform the
:necessary tests as there is no sensitive data on this rig.
:
:-- 
:Gergo Szakal MD <[EMAIL PROTECTED]>

:University Of Szeged, HU

Try it with all the recent commits.  If it is still locking up
break into the debugger and do a 'ps' to see what the processes
are all stuck on.

I've fixed a couple of issues, half of which were in the kernel
itself.  So far my test box running with hw.physmem="128m" is
still alive.


This sounds like Hammer will be very well suited for embedded products
like NAS boxes. Indeed Hammer would make a great product as a combined
backup/file-server appliance, using CIFS to serve Windows clients.

I am curious how much CPU such an appliance would ideally need, i.e. how
CPU-bound Hammer is, for example compared to UFS. Any recommendations?
For example would a low-power 1 GHZ single-core Sempron work out well or
is it better to use a Quad-core? I'm happy with any qualitative
answer...

Thanks in advance.

  Michael


Re: HAMMER recovery and other questions

2008-06-24 Thread Michael Neumann

Matthew Dillon wrote:

:4) Feature suggestion: I think for a little bit more comfortable
:operation, there should me a command that automatically creates a
:softlink. Like: hammer snap /path/to/softlink which does a synctid and
:creates the softlink in the desired path. That way one would not be
:forced to retrieve the transaction ID and create softlinks manually. Or
:have I missed something and you already have implemented this? :-)

It's a good idea.  Go ahead and add it to the hammer utility.
Maybe call it 'hammer snapshot <softlink> [<filesystem>]'
(where the filesystem need only be specified if the softlink 
directory is not in the desired filesystem).


Is there an easy way to determine the filesystem a path belongs to?

I'd suggest an extension:

  mkdir /hammer/softlinks
  hammer snapshot /hammer/softlinks  # => /hammer/softlinks/

  hammer snapshot /hammer/softlinks/soft1 # => name snapshot "soft1"

Regards,

  Michael
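[Editor's note: the naming rule proposed above can be sketched in a few lines. This is a hypothetical Python model of the decision only; the real hammer(8) utility is written in C and this is not its code:]

```python
import os

def snapshot_link_path(arg: str, tid: str, isdir=os.path.isdir) -> str:
    """Decide where the snapshot softlink would be created.

    If `arg` is an existing directory, name the link after the
    transaction ID inside it; otherwise treat `arg` itself as the
    desired softlink name (the "soft1" case above)."""
    if isdir(arg):
        return os.path.join(arg, tid)
    return arg
```

With this rule, `hammer snapshot /hammer/softlinks` would create a TID-named link inside the directory, while `hammer snapshot /hammer/softlinks/soft1` creates exactly `soft1`.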


Re: vkernel and testing IP Filter

2008-06-13 Thread Michael Neumann

Jeremy C. Reed wrote:
I am hoping I can use vkernel on leaf or pkgbox to do some IP Filter 
coding.


Can a vkernel be used to test packet filtering, networking and tcpdump?

Or maybe as an alternative, I could do my testing within bochs or gxemul 
environment and use kernel module(s) -- so I don't have to continually 
reboot my system for testing.


I am using qemu and it works well.

If your host machine is FreeBSD, you can NAT the qemu-internal network
to the internet using the following in your rc.conf:

  cloned_interfaces="tap0"
  ifconfig_tap0="inet 192.168.3.1 netmask 255.255.255.0 up"
  gateway_enable="YES"
  firewall_enable="YES"
  firewall_type="OPEN"
  natd_enable="YES"
  natd_interface="rum0"
  natd_flags="-same_ports"

Just replace "rum0" with the interface that is connected to the
internet. Then start qemu with:

  qemu -m 256 -localtime -boot c \
   -hda disk1.img \
   -net nic -net tap,ifname=tap0,script=no

You can run this as normal user. But notice that when you quit qemu you
have to set the inet address for tap0 again (ifconfig tap0 inet
192.168.3.1). Assign 192.168.3.2 to your qemu-DragonFly box and voila,
you can ssh into it from outside.

You could also use bridging, but this doesn't seem to work for wireless
interfaces.

Regards,

  Michael


Re: hammer prune explanation

2008-05-13 Thread Michael Neumann

Matthew Dillon wrote:
> :Yeah, I was thinking about wildcarding as well.
> :
> :But is it possible to implement it within cmd_prune.c, or do I have to
> :modify the ioctl kernel code? If done in cmd_prune.c, I somehow have to
> :iterate over all deleted files and call the prune command for it.
> :
> :I thought, it's easier to introduce a check in the kernel, whether the
> :file that should be pruned matches a given pattern. Doesn't sound very
> :hard to do, if it is easy to get the pathname for a given inode.
> :
> :Are you thinking about something like the archive flag?
>
> I think it is probably best to implement that level of sophistication
> in the utility rather than in the kernel.  The pruning ioctl code
> has no concept of files or directories... literally it has no concept.
> All it understands, really, are object id's (aka inode numbers) and
> records.
>
> The hammer utility on the other hand can actually scan the filesystem
> hierarchy.
>
> Locating wholely deleted files and directories is not hard to do.
> As-of queries can be used to access earlier versions of a directory.

Hm, how would that work if I want it to behave like the prune command?
I'd need to traverse a lot of filesystem trees just to determine which
files were deleted.

Imagine:

  compare /mnt with /[EMAIL PROTECTED] and prune deleted files.

  compare /[EMAIL PROTECTED] with /[EMAIL PROTECTED] ...

I wouldn't find files that were deleted in between 1-hour-ago and
2-hours-ago. To make it work, I'd need to compare the filesystem trees
of every possible timestamp.

It's probably easier, and more efficient, to have
separate filesystems.

> We might want to add some kernel support to make it more efficient,
> for example to make it possible for the hammer utility to have
> visibility into all deleted directory entries.  It could use that
> visbility to do as-of accesses and through that mechanic would thus
> have visibility into all deleted files and directories.

Does this mean, I'd see files like:

  /[EMAIL PROTECTED]
  /[EMAIL PROTECTED]/[EMAIL PROTECTED]
  /[EMAIL PROTECTED]

Regards,

  Michael


Re: hammer prune explanation

2008-05-10 Thread Michael Neumann

Matthew Dillon wrote:
> :Thanks a lot! Could this great explanation (or parts of it) go into the
> :man-page? I think it's very helpful, especially the visualization.
>
> I am going to write up a whole paper on HAMMER.  It's almost time for
> me to sit down and do it.
>
> :Is it possible to prune according to the filename? For example:
> :
> :   hammer prune /mnt/usr/obj from 2d everything
> :   hammer prune /mnt/usr/src from 1d to 10d every 1d
> :
> :Don't know if it is possible to implement... but would avoid the need
> :for separate filesystems.
> :
> :Regards,
> :
> :   Michael
>
> The filesystem supports pruning on an object-by-object basis, so
> it is possible to prune a single file.  The hammer utility does not
> currently have support for that, but it would not be difficult to
> add.  If you want a little side project, add it to the utility!
> The core code that selects the object id range (aka inode numbers)
> is in /usr/src/sbin/hammer/cmd_prune.c line 74ish.

Sounds good :)

> What I would like to do is have a more sophisticated pruning capability
> in general, such as based on wildcarding and/or an inherited chflag
> flag, or perhaps be able to specify a pruning category selector on
> a file by file basis.  I don't know what the best approach is.

Yeah, I was thinking about wildcarding as well.

But is it possible to implement it within cmd_prune.c, or do I have to
modify the ioctl kernel code? If done in cmd_prune.c, I somehow have to
iterate over all deleted files and call the prune command for it.

I thought, it's easier to introduce a check in the kernel, whether the
file that should be pruned matches a given pattern. Doesn't sound very
hard to do, if it is easy to get the pathname for a given inode.

Are you thinking about something like the archive flag?

> Right now any serious HAMMER user need to set up at least a daily
> cron job to prune and reblock the filesystem.  I'll add a '-t timeout'
> feature to the HAMMER utility to allow the operations to be
> set up in a cron job and keep the filesystem up to snuff over a long
> period of time.  So, e.g. you would have a nightly cron job that
> did this:
>
># spend up to 5 minutes pruning the filesystem and another
># 5 minutes reblocking it, then stop.
>hammer -t 300 prune /myfilesystem; hammer -t 300 reblock /myfilesystem

Does this seriously degrade the filesystem?

Regards,

  Michael


Re: hammer prune explanation

2008-05-10 Thread Michael Neumann

Matthew Dillon wrote:
> :Hi,
> :
> :I don't understand the usage of
> :
> :   hammer prune from xxx to yyy every zzz
> :
> :Could someone enlighten me, what the "from" and "to" exactly means?
> :
> :Does it mean that all deleted records with an age between xxx and yyy
> :are considered for pruning? Starting from "xxx", is one deleted
> :record kept every "zzz"?
> :
> :Regards,
> :
> :   Michael
>
> You got it.   Note that 'deletions' also mean overwrites and changes.
> For example, if you chmod a file HAMMER will remember the old modes
> as a deleted record.
>
> So here's an example:
>
>hammer prune /mnt from 1d to 30d every 1d
> [...]
>

Thanks a lot! Could this great explanation (or parts of it) go into the
man-page? I think it's very helpful, especially the visualization.

Is it possible to prune according to the filename? For example:

  hammer prune /mnt/usr/obj from 2d everything
  hammer prune /mnt/usr/src from 1d to 10d every 1d

Don't know if it is possible to implement... but would avoid the need
for separate filesystems.

Regards,

  Michael
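[Editor's note: the from/to/every window discussed in this thread can be modeled in a few lines. A simplified, hypothetical Python sketch of the retention rule only — ages are in seconds, and HAMMER's real pruning operates on B-Tree records, not on a list like this:]

```python
def retained(record_ages, from_s, to_s, every_s):
    """Model of `prune from F to T every E`: within the window
    [from_s, to_s], keep one deleted record per every_s-sized slot;
    records outside the window are left alone."""
    kept, seen = [], set()
    for age in sorted(record_ages):
        if age < from_s or age > to_s:
            kept.append(age)            # outside the prune window
            continue
        slot = age // every_s           # one representative per slot
        if slot not in seen:
            seen.add(slot)
            kept.append(age)
    return kept
```

For example, with `from 1h to 30d every 1h`, two deleted records 100 seconds apart inside the same hour collapse to one, while a record younger than an hour is untouched.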


hammer prune explanation

2008-05-10 Thread Michael Neumann

Hi,

I don't understand the usage of

  hammer prune from xxx to yyy every zzz

Could someone enlighten me, what the "from" and "to" exactly means?

Does it mean that all deleted records with an age between xxx and yyy
are considered for pruning? Starting from "xxx", is one deleted
record kept every "zzz"?

Regards,

  Michael


Re: Some questions

2008-04-29 Thread Michael Neumann

araratpp wrote:

Big THX for your fast answers!


2.0's biggest features are going to be the new filesystem,
called HAMMER


Will HAMMER be the standard file system for the root partition in v2.0?


No. It will still be alpha- or beta-quality.


The ultimate goal of the project is transparent machine clustering.

When DragonFlyBSD is optimized for transparent machine clustering,
can it then still be used as an operating system for a desktop system,
or is that no longer recommended?

And I have another question:
When will the results of the Google Summer of Code go into DragonFlyBSD?
In version 2.0 or later?


I don't think version 2.0 will be delayed because of a Summer of Code 
project.



2008/4/28, Sdävtaker <[EMAIL PROTECTED]>:

You can get a fair start in http://www.dragonflybsd.org/docs/goals.shtml
 The thing doing the bigger noise right now is the HammerFS.
 See ya around.
 Damian

I read this text but it's very hard to understand. :(


Do you speak German? :)


 Take a look here:
http://wiki.dragonflybsd.org/index.cgi/DragonFly_Technologies

 Most notable:

  * Process checkpointing
  * Journaling (HAMMER will replace it as far as I understand)
  * VKernel
  * Varsym

 Not to forget dntpd, dma and all the good pkgsrc stuff ;-)


That is what I was searching for :-)


But keep in mind that those features are not too useful for a desktop-user.

Regards,

  Michael


Re: Some questions

2008-04-28 Thread Michael Neumann

Matthew Dillon wrote:

:Hello!
:
:I have some questions about DragonflyBSD!
:
:1. What you have reach since the forking of FreeBSD?

That would be a pretty long list.  The kernel's core APIs have
been almost entirely rewritten, we have a really nice light
weight process abstraction, integration with pkgsrc, and many
other things.


Take a look here:
http://wiki.dragonflybsd.org/index.cgi/DragonFly_Technologies

Most notable:

 * Process checkpointing
 * Journaling (HAMMER will replace it as far as I understand)
 * VKernel
 * Varsym

Not to forget dntpd, dma and all the good pkgsrc stuff ;-)

And a lot is just not visible by the regular user...

Regards,

  Michael

