On 15/09/10 01:13, C Anthony Risinger wrote:
On Tue, Sep 14, 2010 at 2:27 PM, Nathan Wayde <disposa...@konnichi.com> wrote:
here's what I'd (and I imagine most others who know about sharing the cache)
use a local mirror for:

to be able to sync all the other systems from it, plain and simple. if my
systems don't have an internet connection or something like that, then i simply
get the packages from the master.
cache sharing doesn't and cannot solve that problem at all; that's a fact.

a shared cache won't solve that, sure... but there are better solutions:

) if you can get it from the master, then it sounds like you have a LAN
connection; tunnel a connection through the master...
) if you have a LAN, why can't some machines have access anyway?
) if you don't have a LAN, you are manually moving packages?  you
could do that without a local mirror
) if you have a LAN, but _cannot_ allow some machines access to the net,
then use a different method like a caching proxy

local mirror = quick/easy crutch to avoid better utilization of
local/peer resources
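
the shared-cache route above can be surprisingly small. here's a rough sketch (everything in it - the cache path, the port, the hostname in the comment - is illustrative, not a blessed setup): one machine serves its pacman package cache read-only over HTTP, and the other boxes put a Server line for it above the real mirrors in /etc/pacman.conf:

```python
# sketch: share one machine's pacman cache over HTTP on the LAN.
# peers would add a line like
#     Server = http://master.lan:8000
# above the regular mirrors in /etc/pacman.conf (hostname/port are
# illustrative); pacman falls through to the next mirror on a miss.
import functools
import http.server

CACHE_DIR = "/var/cache/pacman/pkg"  # pacman's default cache location

def serve_cache(directory=CACHE_DIR, port=8000):
    """Return an HTTP server that serves `directory` read-only on `port`."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    return http.server.ThreadingHTTPServer(("", port), handler)

# to run on the "master" box: serve_cache().serve_forever()
```

clients try the LAN box first and simply fall back to the normal mirrorlist for anything it doesn't have.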

i use a homebrew proxy/cache solution at home; it works fine.  one
machine pretends to be a repo, the others look to it for packages... easy.
i'm not using this exact version now, but i implemented this (rather
crappily) while first learning python:

"pacproxy (or something that vaguely resembles an apt-proxy clone)"
https://bbs.archlinux.org/viewtopic.php?id=87115
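
the guts of such a proxy are tiny. roughly something like this (a simplified sketch of the idea, not the actual pacproxy code - the mirror URL and cache path are made up, and the `fetch` parameter is just there so the sketch is easy to exercise offline):

```python
# sketch of a caching repo proxy: serve a package from the local cache
# if we have it, otherwise fetch it once from an upstream mirror, store
# it, and serve it.  UPSTREAM and CACHE are illustrative values.
import os
import urllib.request

UPSTREAM = "https://mirrors.kernel.org/archlinux"  # example mirror
CACHE = "/srv/pacproxy-cache"

def get_package(path, upstream=UPSTREAM, cache=CACHE,
                fetch=lambda url: urllib.request.urlopen(url).read()):
    """Return the bytes for repo path `path`, caching upstream fetches."""
    local = os.path.join(cache, path.lstrip("/"))
    if not os.path.exists(local):                  # cache miss: fetch once
        data = fetch(upstream + path)
        os.makedirs(os.path.dirname(local), exist_ok=True)
        with open(local, "wb") as f:
            f.write(data)
    with open(local, "rb") as f:                   # cache hit (or just filled)
        return f.read()
```

every machine behind the proxy hits the upstream mirror at most once per package, which is the whole point.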


from the sounds of it, all those solutions require an internet connection. my use-case is about installing on-demand what i want without an internet connection - the same reason i never clear my cache when i uninstall stuff. if i'm on the train working on a presentation or something and i need to make some graphic, i need to know that i will have the apps i need. this has saved me before, when the apps i had turned out to be inadequate for something that popped up while i had no internet connection. the fact that i synced everything to my desktop and then copied it onto my laptop meant that i wasn't syncing from the mirror twice.
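
the desktop-to-laptop copy can be done in a few lines (a sketch; the paths and function name are illustrative, not from any script in the thread): only packages the laptop's cache doesn't already have get copied, so nothing is ever fetched from the mirror twice:

```python
# sketch: copy packages from the desktop's pacman cache to the laptop's,
# skipping files the laptop already has, so the mirror is hit only once
# per package.  src/dst would be e.g. a mounted desktop cache and
# /var/cache/pacman/pkg on the laptop (illustrative paths).
import os
import shutil

def copy_missing(src, dst):
    """Copy files from `src` to `dst`, skipping ones already present."""
    os.makedirs(dst, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(src)):
        target = os.path.join(dst, name)
        if not os.path.exists(target):     # only copy what's missing
            shutil.copy2(os.path.join(src, name), target)
            copied.append(name)
    return copied
```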

now to the bandwidth issue. it's obviously bogus, because:

1) they assume everyone (or at least lots of people) is going to create a local mirror.
2) they assume that they're all going to sync from the same server.
3) they assume this extra bandwidth waste actually causes a problem for all
the mirrors - i.e. that there's only 1 mirror.

now, if my assumptions are wrong, thus leading to false conclusions, then
please correct me. but so far all I've heard is whining about local mirrors
causing problems for the mirrors, with nothing about what these problems
actually are. in the meantime the original wiki page was deemed bad without
much of a valid reason, and nothing was done to further educate us, the users.

i don't think it's even about whether or not it _is_ causing a
problem; it's more a preemptive move to discourage naive
implementations.  sure, if you have a heterogeneous environment of 200
machines, then a local mirror probably isn't too bad an idea... but it
still isn't needed, as faster/better/cheaper methods are available.

in my opinion, if you're not publicly seeding your mirror, then you
don't need it; otherwise you probably only want it due to an extreme case
of laziness.  sure, maybe mirror XYZ can handle constant syncs from
everyone looking at it... but really, do them a favor, and don't; it
might piss them off :-).


you do realize the average daily sync of the repo is only a few hundred megs, right? and that's mainly because of the large packages which come in occasionally, like kde, gnome, OOo, eclipse, etc.

and i don't see how removing the wiki page solves anything; it rather makes things worse IMHO. it was simply removed with a vague message pointing to a wiki page that doesn't do much better. iirc there was supposedly a warning at the top of the original page and no-one ever read it. this sounds to me like someone fancies themselves a mind-reader or something. on a more serious note, let's be honest and say that putting a warning at the top of a page with several subsections, when it mostly warns about something further down the page, is just idiotic.

You can probably tell that I'm annoyed by this, and the simple fact is that
the ARM sync script was based off the script on that wiki page. it's not the
same, as I changed a lot of options to cater to my own needs, but as has been
said the script was bad; no-one is telling us what was bad about it, and these
alternative methods are wholly inadequate at best.

yeah, i don't really know the politics here, and haven't even seen the
script.  in my own experience back in the day syncing ubuntu repos
(for easy installs at remote locations from a large USB key when client
requirements are unknown)... you likely flat out don't need it, and
there are _very_ few legitimate use cases for it (the parenthesized
use case above is about the best one i know).

all i'm suggesting is that just because you can, and it's easy, doesn't
mean you should.  but hey, i don't run a mirror, and extreme leeching
won't affect me, so ultimately i couldn't care less; if i did though, i
would monitor for this kind of crap... i mean, doesn't the official
arch mirror impose similar restrictions?  just do your part to not be
excessive.

does one check out the entire library on the possibility of reading 10 books?

C Anthony


well, the ARM is like an archive; it's not really a public mirror like the rest, it's a last-resort kind of thing. the idea is that it wants to cache every package (or as many as possible) that hits the repos. if my script is going to cause a problem then I'd very much like to know about it, but alas, no-one seems to know what these problems are.
