RE: open office base
> Any suggestions on reference material for open office base? I've read the > open office 3 guide and a couple of on line tutorials which cover the basics. > Nothing that tells me how to deal with the nonspecific often nonfatal error > messages that pop up or help me understand how things really work. Is > there enough functional overlap that books on non-OO data bases would be > useful?

This may be a silly question, but how different is OO Base from LibreOffice Base? Both apps suit my basic office needs, so I never really have to track anything down for docs. Perhaps the LibreOffice fork is less braindead in its error reporting?

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: SCP from STDIN: "-t" option undocumented?
> Having just now quickly RTFSC and done a few superficial experiments I > conclude that the -t option (mnemonic for "to"; there's also a secret "from" > flag -f) is not suitable for use by humans. It tells scp that it's in > "server" > mode and should expect to communicate with its counterpart using some > undocumented protocol that appears to mix commands and data in-band via > stdin. That's not the droid you're looking for... Use it anyways, no one has ever accused you of being a human -=] Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
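For the curious, the in-band protocol the quoted post describes looks roughly like this. Treat it strictly as a sketch: the host and filename are made up, and a real scp exchanges acknowledgement bytes that this one-liner simply ignores, so it's an illustration of the protocol rather than a usable tool.

# speak scp's "server" protocol by hand to push one file (illustration only)
SIZE=$(wc -c < file.txt)
{
  printf 'C0644 %s file.txt\n' "$SIZE"   # header: mode, size, name
  cat file.txt                           # the file payload
  printf '\0'                            # terminator after the data
} | ssh somehost scp -t /tmp/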
RE: RHEL cluster question
> Greetings, fellow Linux lovers; > > Ran into a little situation today where we need to cycle power/reboot a > bunch of nodes that are down and out, by telnet to the relevant > terminal server ports and the advanced management module. This > involves multiple consoles, windows, command line, GUI, the works, as > follows: > > > > Subject: RHEL cluster, 4.0 through 5.3. > > Issue: How to find IP addresses of terminal server ports which service > individual nodes which are down and out. (need to telnet to them for > troubleshooting/maintenance/rebooting) > > And: IP address and/or hostname of advanced management module which > runs on the clusters . > Some clusters have a "magic decoder ring" file that gives this > information; most don't. > > Any thoughts? Workaround so far has been via eyeballing racks of > blades and doing various arithmetic problems in our heads.

It sounds like you work on a bunch of clusters that are configured like crap by academics. You can keep most of that stuff organized with proper DNS naming or static DHCP. It takes a bit to set up, but you map DHCP leases to MAC addresses, and CNAME hostname.console.blah.com to the management module. Since you probably can't tell them to rip out all of their DNS/DHCP infrastructure, maybe something like netdisco.org would help. It uses CDP + SNMP to grab ARP tables and map out your network. You should be able to tell which blades are hooked to which switch ports, and from there figure out the management module.

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
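To make the DNS/DHCP suggestion concrete, this is roughly the shape of it, assuming ISC dhcpd and a plain BIND zone; every hostname, MAC, and address below is invented for the example.

# dhcpd.conf: pin each blade (and each terminal server port) to a fixed address by MAC
host blade12 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 10.10.1.112;
}

# zone file: give consoles and management modules predictable names
blade12-console   IN  CNAME  ts1-port12.mgmt.example.com.
chassis3-amm      IN  A      10.10.2.3

Once that exists, "telnet blade12-console" (plus whatever port your terminal server uses) replaces the rack-eyeballing arithmetic.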
Help with: openldap / active directory / sasl
Hey All, I'm trying to bind to an LDAP interface using SASL. The LDAP interface is running on an Active Directory server. Using a basic un/pw bind works:

ldapsearch -h somead.local -b "" -s base -x -D "myu...@myrhelm" -W

Outputs what I would expect, but

ldapsearch -h somead.local -b "" -s base -Y DIGEST-MD5 -D "myu...@myrhelm" -W

Outputs:

Enter LDAP Password:
SASL/DIGEST-MD5 authentication started
ldap_sasl_interactive_bind_s: Invalid credentials (49)
        additional info: 8009030C: LdapErr: DSID-0C09043E, comment: AcceptSecurityContext error, data 0, vece

I'm a bit stumped. I was under the impression that SASL/DIGEST-MD5 was its own authentication method, and that I didn't have to have a Kerberos ticket to make the call. It's common for Linux-to-AD LDAP connections to have Kerberos set up, but I don't think it's necessary. Googling around for an answer has been a study in futility. Anyone know the magic for doing SASL auth against an AD server? I know the server is set up for "reversible" passwords, so I don't think that's the issue. Why does LDAP+AD hate me? I'm a fun guy! I just wanna chat with it about some stuff...

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
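If it helps anyone else poking at this: OpenLDAP's client tools let you spell out the SASL authentication identity and realm separately, which is often where DIGEST-MD5 against AD trips up. A hedged example with placeholder names (and no promise that your AD is actually configured to accept it):

# -U is the SASL authcid (try the bare sAMAccountName rather than user@realm),
# -R is the realm; both values here are placeholders
ldapsearch -h somead.local -b "" -s base -Y DIGEST-MD5 -U myuser -R MYRHELM -W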
RE: [OT] ENERGY STAR Low Carbon IT Campaign
> From: gnhlug-discuss-boun...@mail.gnhlug.org [mailto:gnhlug-discuss- > boun...@mail.gnhlug.org] On Behalf Of Alan Johnson > Sent: Tuesday, July 20, 2010 2:07 PM > To: GNHLUG > Subject: [OT] ENERGY STAR Low Carbon IT Campaign > > http://www.energystar.gov/index.cfm?c=power_mgt.pr_power_mgt_low_carbon > > A good place to start thinking about energy in your organization. If > we geeks don't, who will?

Oblig: gas-powered alarm clock gets Energy Star approval. http://www.nytimes.com/2010/03/26/science/earth/26star.html

You can actually put a dollar figure on going "green". Each watt you don't use in a server is a watt you aren't cooling. Applying cold AC air to the front of most equipment (except Cisco) and sequestering your hot air is helpful. Most CRTs cost a couple hundred bucks of electricity a year to run non-stop. Pulling shades on bright sunny summer days can reduce your air conditioning bill. Relaxing dress codes in the summer can let you get away with a warmer office. I recall an article where Google sets their datacenters to 80 degrees. 50-dollar credits seem nice... ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: Recommendations...
> From: gnhlug-discuss-boun...@mail.gnhlug.org > [mailto:gnhlug-discuss-boun...@mail.gnhlug.org] On Behalf Of > Gerry Hull > Sent: Tuesday, June 15, 2010 1:22 PM > Folks, > > I just picked up an Lenovo X61 laptop the other day for a > very good price. This 3lb unit is a dual-core t7...@2.6ghz, > 4GB Ram and 100GB disk. > > I want to run Linux as the core operating system, and use > VMWare to load Windows for my Windows work. > > I was thinking of Ubuntu 10.04. My question is should I do > 32 or 64 bit? If I go 32-bit I will not be able to use all > the ram, and if I go 64-bit I may not have all the drivers. > > What are your thoughts/recommendations?

If you are going to be using this as a desktop and not a server, I will disagree with most and tell you to go 32-bit. I've never seen anyone demonstrate that the PAE kernels take that much of a performance hit. 64-bit Flash is going away, and 64-bit Java plugins for browsers are buggy for anything non-trivial. I've been using KVM for my virtualization needs. It's not as fast as VirtualBox and doesn't do some of the cool USB2 pass-through, but it's part of the kernel proper and has been more stable for me. If it were a server, and you were less likely to be trying to get binary blobs of crap to run, I'd say go 64, but for average home desktop use, 32-bit is still the path of least resistance.

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
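For anyone who hasn't tried KVM, getting a Windows guest running is roughly this much work. The binary name varies by distro (kvm on Ubuntu, qemu-kvm elsewhere), and the sizes and file names are just examples.

# create a disk image and boot the installer CD
qemu-img create -f qcow2 winxp.img 20G
qemu-kvm -m 2048 -smp 2 -hda winxp.img -cdrom winxp.iso -boot d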
RE: [NH LoCo] Running linux without Gnome/KDE
There's also ratpoison (http://www.nongnu.org/ratpoison/) and Awesome (http://www.actionshrimp.com/2009/02/installing-awesome-window-manager-on-ubuntu-intrepid/). Good luck customizing your desktop!

> -Original Message- > From: gnhlug-discuss-boun...@mail.gnhlug.org > [mailto:gnhlug-discuss-boun...@mail.gnhlug.org] On Behalf Of > Susan Cragin > Sent: Thursday, February 11, 2010 7:53 PM > To: Ubuntu New Hampshire LoCo Team > Cc: GNHLUG-discussion > Subject: Re: [NH LoCo] Running linux without Gnome/KDE > > >On Thu, Feb 11, 2010 at 1:40 PM, Susan Cragin > wrote: > >> Also tried fvwm by itself. Didn't like it as much as Fluxbox. > > > > FVWM is my preferred window manager/environment. It's > lightweight, > >powerful, and hyper-flexible. But "out of the box" it's not much to > >look at; the stock config is pretty minimal and ugly. If > you want to > >configure the computer to work *exactly* the way *you* want > it, FVWM is > >ideal, but all that power comes at a cost. You're basically > building > >the layout yourself. Whether or not that's what you want is > entirely > >your call. > > > > If you do want to go the FVWM route, I can answer questions. Post > >here or on the gnhlug-discuss list. I know we've got a couple other > >FVWM fans on the gnhlug-discuss list. > > > > To get an idea of what FVWM can do -- and optionally to get some > >canned configs you can copy-and-paste ideas from -- check out the > >screen shot gallery on the FVWM site: > > > >http://www.fvwm.org/screenshots/desktops/ > > > > It's telling that you can find a screen shot that looks > like pretty > >much every other window manager/desktop environment/OS every > created, > >all done using FVWM. :) > > > >-- Ben > > Wow. Those look great. I still have fvwm, will try to > configure it early next week, and take you up on asking about > it. Love the look, love fluxbox, too. > Too many treats. > Next week I'll try to get my ideal window manager configured, > and post screenshots, and ask questions from there. > --Susan > > > ___ > gnhlug-discuss mailing list > gnhlug-discuss@mail.gnhlug.org > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ > > ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
[SOLVED] RE: using iptables/tc to traffic shape
> if you don't mind a couple guesses: > > On 08/10/2009 07:10 PM, Flaherty, Patrick wrote: > > I can't seem to get this to work though. The dnat rule gets a single > > hit but the packet doesn't show up at the throttler:eth1. > > Do you have?: > net.ipv4.ip_forward = 1

Yes, this was already set in sysctl.

> > #accept all traffic on eth0, send it thru eth1, seems like *some* > > packets should show up on eth1 eh? > > iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT > > Does the packet exist in both -i eth0 and -o eth1 states if it's being forwarded or just one at a time? That is, perhaps -i eth0 would be enough. Obviously I don't understand the theory well enough.

Jumping jeebus on a pogo stick... Bill is an iptables expert. I don't totally understand why it works, but after removing the -o eth1 from the FORWARD chain it works right. I really should have put the destination in a different subnet when I was substituting IPs, so here's an updated working version.

#client:      192.168.100.10
#throttler:   192.168.100.50
#throttler:   192.168.100.51
#destination: 192.168.150.100

#turn on natting
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
#accept established connections from eth1 to eth0
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
#accept all traffic on eth0
iptables -A FORWARD -i eth0 -j ACCEPT
#DNAT https traffic arriving on eth0
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.150.100:443
#Now route traffic for 192.168.150.100 thru the interface we are going to use tc on
route add -host 192.168.150.100 gw 192.168.100.1 dev eth1
#lets add some latency to eth1 so the connection feels crappier.
tc qdisc add dev eth1 root netem delay 1000ms

#from client
#telnet 192.168.150.100 443
# Connected to SomeHost (192.168.150.100)
#Escape character is '^]'.
#
#Do a funny dance due to success

Thank you Bill! Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
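While we're at it, netem can fake more than a flat delay, and tbf will cap bandwidth, so the same eth1 can be made to look like all sorts of bad links. Rough examples, with the numbers pulled out of the air:

# jitter and packet loss instead of a constant delay
tc qdisc change dev eth1 root netem delay 200ms 50ms loss 1%
# or tear that down and throttle to roughly 1 mbit with a token bucket filter
tc qdisc del dev eth1 root
tc qdisc add dev eth1 root tbf rate 1mbit burst 32kbit latency 400ms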
using iptables/tc to traffic shape
I'm trying to be able to simulate slow/throttled/crappy internet for a client-server app. My plan was to have the client connect to eth0, use an iptables PREROUTING DNAT to the destination, and have a static route for the destination go thru eth1, where I could use tc on eth1 to simulate different network issues. My theory:

client:      192.168.100.10
throttler:   192.168.100.50
throttler:   192.168.100.51
destination: 192.168.100.100

packet leaves client on an ephemeral port for throttlerbox:443
packet arrives at throttlerbox:443
iptables nats packet to destinationbox:443
static route for destinationbox:443 sends packet to eth1
packet leaves box on eth1 on an ephemeral port for destinationbox:443
...and there's the return trip which I don't need to map out.

I can't seem to get this to work though. The DNAT rule gets a single hit but the packet doesn't show up at throttler:eth1. A little birdie said he's never been able to get it to work, thinking that the kernel was being efficient and ignoring routes for packets destined for a network on the interface they came in on. Anyone ever see this before? Know how to do this correctly? Yes, it would be easier if I just hung a hub off the throttler and had it act as a NAT box, but that would be inconvenient for the devs and the testers. Here are the rules I tried (I've done about 30 variations on the nat rules):

#turn on natting
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
#accept established connections from eth1 to eth0
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
#accept all traffic on eth0, send it thru eth1, seems like *some* packets should show up on eth1 eh?
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
#DNAT https traffic arriving on eth0
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.100.100:443

Beers at the next gnhlug for a solution? Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: melodrama at CentOS?
> >>> ... control of the domain name ... > >>> So, trust and brand value are at risk. > >> > >> ... massive organizational clusterfsck that would ensue ... > > > > You should be using a local mirror!!! Update one rsync script and > > you're done. > > The various files that control package management > explicitly mention centos.org as the master all over the > place. Take a look at the config files for YUM. (Debian > zealots, note that APT has the same > problem.) > > To say nothing of the fact that, if the project had to > fork, they'll almost certainly change the very name.

I was trying to be funny, but since you're being fuddy: it wouldn't be anything more than a semantic name change. There's no forking going on, there's no forking going on, there's no forking going on. The guy who registered the domain name, controls its DNS, and runs their paypal stuff was being uncooperative. According to the centos mailing list and front page that's no longer the case. In the worst-case scenario (which isn't even going to happen) they need to get a new domain name to use for a homepage (my suggestion is centos-hates-lance.org).

The only files that yum cares about are yum.conf (no mention of centos) and any repo files included via /etc/yum.repos.d (multiple centos references). It's true that you may have to run sed -i 's/mirrors.centos.org/mirrors.centos-ng/g' /etc/yum.repos.d/*, but that would be it. The yum plugins don't mention centos.org, the docs don't mention centos.org, nothing that I've ever seen is dependent on centos.org but the repo file (which I don't have to use because I point my servers at spacewalk). Even if someone were to jack the domain name, the build machines are the ones with the signing key for packages, the hijacker wouldn't have it, and the packages wouldn't verify on your machine.

> > If you have lots of centos boxes, try using spacewalk, works pretty > > well. > > When I was working at a consulting firm, a lot of our work > was maintaining Linux servers, one per customer. I never had > any luck adapting management tools to that sort of situation. > The tools all assume strong management from a single point, > which you don't have with a consulting engagement. Of > course, that was at least six years ago I last looked, so > things could well be different now.

Spacewalk is sorta like WSUS: machines check in and tell you what their patch level is. You can approve new patch levels and push them out to the machines. It's the upstream for Red Hat Network, but it is rather centralized.

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: melodrama at CentOS?
> On Thu, Jul 30, 2009 at 4:23 PM, Bill > McGonigle wrote: > > ... control of the domain name ... > > So, trust and brand value are at risk. > > That, and the massive organizational clusterfsck that would > ensue if they had to switch to a different domain name. > Everyone from the top down to a sysadmin with one box would > need to update their stuff. > That's what I care the most about, since I'm one of those sysadmins. > :-) You should be using a local mirror!!! Update one rsync script and you're done. If you have lots of centos boxes, try using spacewalk, works pretty well. ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
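For completeness, the "one rsync script" really is about this much work. The mirror host, module path, and local path below are assumptions; point it at whichever public mirror is close to you, serve the directory over HTTP, and aim your .repo baseurl lines at it.

#!/bin/bash
# nightly CentOS mirror pull
rsync -avz --delete rsync://mirror.example.com/CentOS/5/ /var/www/html/centos/5/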
RE: Lan + DMZ + LargeNumOfFiles = headaches AKA: plz halp and donate urbrain!!
> I've been soliciting solutions from everyone I can think of > on moving a > large number of files from inside our lan to a dmz on a > regular basis. > > I have a cluster of machine producing 20k small files (30kbytes or so) > inside our lan. After the files are created, they are pushed to a few > web servers in the DMZ using ftp. The push is done by the machine that > created the file. Ideally, the files make it out to the DMZ > in less than > 30 seconds but there have been some issues. > > FTP seems to fall down when scaling out to more than a web server or > two, many retries and transfer failures. It also adds to complexity to > the processing. What if one of the web servers is down? How > many time do > you retry? Should you notify the other hosts in the cluster? All that > logic needs to be in the pushing script, which becomes a bit ungainly. > There's also the issue with constantly opening up new ftp sessions, > which is a bit expensive. > > So I'm looking for a cleaner architecture. An ideal solution > would be an > NFS/CIFS share internal to the lan replicated readonly to an NFS/CIFS > share in the DMZ. The cluster can write to the nfs share, the web > servers can read from the nfs share. Everyone is happy. The > big sticking > point is being careful violating the security by multi homing the > storage. Many solutions require an open connection network on > many ports > between the two storage boxes, which would be an easy way in > to our lan. > > So far I'm poking at (and some downsides): > FUSE + (sshfs/ftpfs): High performance hit (60%ish from what > I've read) > ZFS + StorageTek: Great, another operating system train people on. > DRBD: requires full network connection between lan and dmz boxes. > dataplow sfs + das box: sales people will promise you the world. > Software SAN replicators of to many names to mention. > > This is such a common problem, I'm not sure why there isn't a nice > canned solution of two cheap pieces of hardware. Maybe I'm > just an idiot > and there is. Oh please please please tell me I'm an idiot. > > Anyone have any brilliant ideas?

I want to thank everyone for their input. The rsync idea was nice, but runs into a lot of expensive overhead. FUSE + transport layers looked alright but was expensive and had a performance impact. Almost all clustered filesystems required network access between nodes on top of access to the shared storage. No violating firewall rules unless we absolutely need to.

I had a discussion with Steven Soltis at a company called Dataplow. He has the distinction of being one of the Ph.D. students that worked on the original GFS. Not holding that against him, his company has a shared-filesystem product where read-only nodes do not need network access to read/write nodes, only access to the shared storage. It's pretty close to the ideal situation. Should keep vulnerabilities down to driver issues and fibre channel hacking.

Thanks again for the input. Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
Lan + DMZ + LargeNumOfFiles = headaches AKA: plz halp and donate ur brain!!
I've been soliciting solutions from everyone I can think of on moving a large number of files from inside our lan to a dmz on a regular basis.

I have a cluster of machines producing 20k small files (30kbytes or so) inside our lan. After the files are created, they are pushed to a few web servers in the DMZ using ftp. The push is done by the machine that created the file. Ideally, the files make it out to the DMZ in less than 30 seconds, but there have been some issues.

FTP seems to fall down when scaling out to more than a web server or two: many retries and transfer failures. It also adds complexity to the processing. What if one of the web servers is down? How many times do you retry? Should you notify the other hosts in the cluster? All that logic needs to be in the pushing script, which becomes a bit ungainly. There's also the issue with constantly opening up new ftp sessions, which is a bit expensive.

So I'm looking for a cleaner architecture. An ideal solution would be an NFS/CIFS share internal to the lan replicated read-only to an NFS/CIFS share in the DMZ. The cluster can write to the nfs share, the web servers can read from the nfs share. Everyone is happy. The big sticking point is being careful not to violate security by multi-homing the storage. Many solutions require an open network connection on many ports between the two storage boxes, which would be an easy way into our lan.

So far I'm poking at (and some downsides):
FUSE + (sshfs/ftpfs): High performance hit (60%ish from what I've read)
ZFS + StorageTek: Great, another operating system to train people on.
DRBD: requires full network connection between lan and dmz boxes.
dataplow sfs + das box: sales people will promise you the world.
Software SAN replicators of too many names to mention.

This is such a common problem, I'm not sure why there isn't a nice canned solution of two cheap pieces of hardware. Maybe I'm just an idiot and there is. Oh please please please tell me I'm an idiot. Anyone have any brilliant ideas?

Best, Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
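For reference, the rsync idea mentioned in the follow-up above usually looks something like this per push (the hosts, paths, and ssh key setup are all assumptions). The per-invocation ssh and directory-scanning cost is exactly the "expensive overhead" complained about, which is why it lost out here.

# push freshly created files to one DMZ web server, dropping the local copy on success
rsync -az --remove-source-files -e "ssh -i /etc/keys/pushkey" /data/outbound/ webuser@dmz-web1:/var/www/files/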
RE: DRBD
> On June 24, 2008, Mark Komarinski sent me the following: > > Anyway, DRBD is self-contained and is automatic (if the > remote system > > disappears, it'll automatically resync when it reappears). You can > > encrypt the stream, you can enforce how fast it syncs, and setup is > > pretty easy. > > Is this a Linux only project, or is it portable to other operating > systems? >

DRBD is Linux only (sorry FreeBSD). It's curious to me when I hear folks say it's been surpassed by GFS. They are different beasts. DRBD is for high-availability setups, GFS for clusters. DRBD is a block device, GFS is an actual filesystem. High availability is when a service can't go down; clusters are used for highly parallel tasks. It's nice when the service that can't go down is highly parallel and fits in a clustered setup, but that is rather rare. You can run MySQL on GFS if you use external locking, but I've read terrible things about performance, and having more than one database engine operating on the same data gives me the willies. For our app, low writes, high reads, a 2-node DRBD setup works great. If we needed to scale out, we'd use read-only replicated slaves from the HA head node.

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: [GNHLUG] MerriLUG Nashua, Thur 19 Jun, MySQL: The Whys, Whats, and Watch-outs
> I've made my slides available here: > > http://docs.google.com/Presentation?id=dcwc3b2p_63ck9zndhh I didn't go to the meeting, but I saw the slides mentioned DRBD, which I've been using lately to redundantify everything (nagios, mysql, cacti). I could do a presentation on it if anyone is interested. As a side note, is anyone doing mysql active/active clusters? The whole primarykey even/odd thing seemed really dirty to me. Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: Restrict Memory Usage of Win32 Process?
> My apologies in advance for the wildly off-topic post. > > With all this taking about ulimit and memory restrictions, I > would like > to know if anyone has found or knows of a utility to reproduce the > functionality of ulimit on the Win32 platform? We've set up some rules that trigger services to restart if they hit a certain number of pages. It's an alright workaround for long running programs with memory leaks. We use XYNTservice.exe, http://www.linux.com/articles/46491, to wrap some standard programs as services. All this is done under win2k3. If anyone knows of a ulimit type program for windows I'd be interested. Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: Solved: Sendmail question. Problem with yahoo.
> and as the French say "Eez beeg fat accomplishmente". Extra points for Steven... It took me until I ran it thru google's language tools to get the joke. ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: New distro question
> I realize this is / was / will be a religious argument, but I'm having > trouble with this distribution of Centos on my computer. I was > wondering if there was a distro more up to date and was suited for > scientific calculations. > ... > It doesn't have to be cool, although that is ok. It does have to be > functional and reasonably supportable. MIS is familiar with RH stuff, > if that matters.

Did you look at Scientific Linux? Not only does it wear a lab coat, but it has updated graphviz and R releases. It's a RHEL clone like CentOS, so if you are having trouble with CentOS already, I'd assume you will see the same thing with SL. https://www.scientificlinux.org/ http://en.wikipedia.org/wiki/Scientific_Linux

You should also look into Debian: it's Free, supportable, and stable, has a hypnotising swirly logo, and has over 3.5e3 packages available in its repositories (perhaps the scientific calculation software you are looking for is already part of the release).

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: linux hardware inventory program
I realize you already implemented another solution, but I've been poking at OCS NG and GLPI (which will map to OCS entries). I'm excited about it, other than not knowing how I'm going to keep 40 DMZ hosts regularly updated. Patrick

> -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of > Jeff Macdonald > Sent: Thursday, February 28, 2008 5:29 PM > To: Greater NH Linux User Group > Subject: linux hardware inventory program > > Hey, > > Anybody have a favorite program that can dump hardware inventory of a > system? I'd like CPU, RAM, Disk, Ethernet, and probably others. > http://lhinv.sourceforge.net/ is close, but is messing disk size info > on scsci disks. > > > -- > Jeff Macdonald > Ayer, MA > ___ > gnhlug-discuss mailing list > gnhlug-discuss@mail.gnhlug.org > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ > > ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: Keeping track of all this IT crap
> Do people know of any good software to keep track of all this IT > crap? Users, computers (with make, model, serial, CPU, RAM, etc.), > patch panels and their jacks, switches and their ports. Most > importantly, what is connected to what: User A has computer B plugged > into jack C which is patched into port D of switch E. Multiple times > 100 users, two buildings, and eight switches, and damn things are > confusing.

I've been using a combination of mediawiki (documentation), Groundwork Nagios (monitoring), and netdisco / cacti (analysis). I don't have a good answer for inventory yet. NetDisco discovers networks based on CDP and lists switch ports / MAC addresses / VLAN associations. I wrote a script to grab all the relationships in netdisco and add them to nagios. I've also added a graphviz plugin to mediawiki, which I used to diagram process flows that are clickable (because graphviz is magical and awesome).

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
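Since graphviz got a mention: even without the wiki plugin, turning "who plugs into what" notes into a diagram is a couple of lines of shell plus dot. The names below are invented.

# sketch: describe the connections, let dot draw them
cat > wiring.dot <<'EOF'
digraph wiring {
  rankdir=LR;
  "user A" -> "computer B" -> "jack C" -> "switch E port D";
}
EOF
dot -Tpng wiring.dot -o wiring.png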
Let's complain about package formats! (WAS: A plague of daemons and the Unix Philosophy)
> The simple answer is that I highly prefesr rpm over debian. > The access is far simpler. Full use of deb files implies > about 13 different packages be loaded just to do deb things. > I'm in a situation right now where I have to create .deb > files and, while I'm getting my job done, I can tell you > there is no book that you can buy to teach you all you need > to know about the hundreds of places where documentation > exists on how it all works together.

If you are compelled to create deb packages but don't desire to make them "properly", try checkinstall (http://asic-linux.com.mx/%7Eizto/checkinstall/). It reduces turning source into dpkgs to `checkinstall -D make install`. I've had to make debs before, and though I never had too much trouble making them right, checkinstall lets you skip a bunch of steps. You should read the readme; deb packages require you to make a few script directories in your source directory. You might be able to add gross dependency checking, or alerting that a package isn't installed, using a script in preinstall-pack.

> I'm living with it and I have a few things I know how to do, > but compared to RPM and the available docs for it, deb files > suck big green donkey dicks.

What a wonderfully non-inflammatory comment. Godwin be damned, did I mention rpm actively participated in the holocaust. -=]

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
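A slightly fuller checkinstall invocation, for anyone following along; the package name and version are obviously placeholders, and you still want to read the README bit about the script directories first.

./configure && make
sudo checkinstall -D --pkgname=myapp --pkgversion=1.0 make install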
OT: Sun Station4/Monitor/External CDROM free to a good home.
Since I'd rather pull out my toenails with pliers than work with Solaris userland, does anyone have interest in an old Sun Station4? Worked the last time I powered it on. From what I remember it has a Sun->BNC type monitor connection. The monitor is pretty big, 19/20" or so. External CD drive. If I remember right it was like 50 MHz with 16 megs of RAM. I'm hoping I don't need to actually hook it back up and power it on, but if you'd like more information or want me to double-check anything about it, contact me off list.

Best, Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: [OT] Robotics reasources
> Anyone know of good email lists / message forums / groups for > robotics hobbiests? I am always looking for more information on that.

I used to work at a robotics company. We contributed stuff to player/stage. Most of what our company did was navigation work, but if you trawl the player/stage mailing list you might find some good stuff. ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: [OT] xkcd
Wow, that's unreal. Guy needs to start project mayhem if he just mentioned it in a comic and it happened.

The rumblings: http://metatalk.metafilter.com/14853/XK-Cee-you-Dere
The result: http://www.flickr.com/groups/xkcdmeetup/pool/

> -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of Ben Scott > Sent: Thursday, October 11, 2007 10:59 AM > To: Greater NH Linux User Group > Subject: Re: [OT] xkcd > > On 10/11/07, Flaherty, Patrick <[EMAIL PROTECTED]> wrote: > > You missed the best one. > > http://xkcd.com/138/ > > I considered adding that, but the list was already long, > and listing all the good ones would be ridiculous. Besides, > the best one is: > > http://xkcd.com/240/ > > Why is that the best one? Because occasionally, wanting > something can make it real. > > (Explanation omitted to preserve the sense of wonder. You > can find the answer if you look.) > > -- Ben > ___ > gnhlug-discuss mailing list > gnhlug-discuss@mail.gnhlug.org > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ > > ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: [OT] xkcd
You missed the best one. http://xkcd.com/138/ > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of Ben Scott > Sent: Wednesday, October 10, 2007 9:37 PM > To: Greater NH Linux User Group > Subject: Re: [OT] xkcd > > On 10/10/07, Kent Johnson <[EMAIL PROTECTED]> wrote: > > If you know what a SQL injection attack is, you will love this: > > http://xkcd.com/327/ > > For those of you who hadn't already seen the above: xkcd is > an extremely excellent comic, and should be read by all geeks. > > http://xkcd.com/149/ > > http://xkcd.com/37/ > > http://xkcd.com/272/ > > http://xkcd.com/129/ > > http://xkcd.com/293/ > > http://xkcd.com/285/ > > http://xkcd.com/225/ > > http://xkcd.com/150/ > > ...etc... > > -- Ben > ___ > gnhlug-discuss mailing list > gnhlug-discuss@mail.gnhlug.org > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ > > ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: HA MySQL Setups
> What about multimaster replication?

Multi-master made me feel a bit icky: auto-increment offsets, plus the same log-shipping stuff others have had problems with. There are also other "implementations" of MMR, but they are just sets of scripts that mimic heartbeat. In the end, it's the same as normal master/slave replication, but now with additional moving pieces.

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
HA MySQL Setups
I'm planning to set up an HA MySQL cluster. The database serves as a backend to a set of webservers (HW load-balanced). The DB has light load, but when it breaks the site breaks, so I can't really get away with it as a single point of failure. So here were my options: http://dev.mysql.com/doc/refman/5.0/en/ha-overview.html

Replication - One master server accepts writes and, on write, ships its logs to the slave server(s). Async may not be a problem, but it seems silly there's no flag to wait for the slaves to report a write was successful.

DRBD - Write all data onto a shared network block device. Use heartbeat to determine which server should be running mysql, which lives on that shared block device. Use a crossover cable to prevent strange network issues.

Cluster - Needs at least four nodes. Far too many for this setup.

I think I've settled on the DRBD method, using a network block device and failing back and forth using heartbeat and a floating IP, though log shipping seems pretty straightforward. Does anyone have any positive or negative feedback on any of the methods?

Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
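For anyone curious what the DRBD + heartbeat route looks like in practice, the moving parts are roughly these. The resource name, device names, addresses, and init script name are assumptions, and this is old heartbeat-v1-style configuration, not a recipe.

# bring up the DRBD resource on both nodes, then promote the side with the good data
drbdadm create-md r0
drbdadm up r0
drbdadm -- --overwrite-data-of-peer primary r0

# /etc/ha.d/haresources: preferred node, floating IP, DRBD disk, filesystem, then mysql
db1 IPaddr::192.168.1.100/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext3 mysqld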
RE: Linux Stickers
I have a friend who does t-shirts/vinyl stickers/buttons out of Billerica. Pretty cheap. He's also pretty handy with Photoshop if you want the design cleaned up. Contact me off list if you want more information. Patrick

> -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of > Brian Chabot > Sent: Sunday, October 07, 2007 4:39 PM > To: [EMAIL PROTECTED] > Cc: Greater NH Linux User Group > Subject: Re: Linux Stickers > > > > Jon 'maddog' Hall wrote: > > In the spirit of Linux you could make your own > > > > I may end up hiring a print ship to do it for me. So far I > kind of like the graphic at > http://linux.wordpress.com/2006/01/30/linux-hardware-sites-for-newbie/ > and with some slight modifications I really like the outcome. > > -- > --- > | [EMAIL PROTECTED] http://www.hirebrian.net | > | IT/MIS Manager - 8 Yrs Experience - Contract or Permanent | > | Self-taught, Fast Learner, and Team Player | > |Ready to Start TODAY at Your Company.| > --- > > ___ > gnhlug-discuss mailing list > gnhlug-discuss@mail.gnhlug.org > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ > > ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
[ means test right? Maybe? (was: Re: GOTCHA in Ubuntu - broken shell)
> =>Another example. This one is [was] my "favorite"; I think it was > =>the first one to bork on me. The error message was [is] so obscure > =>that for the past year I've lived with band-aiding each "if" > =>statement one at a time, just to have something which works. > =>(One uses "if" statements much more frequently than slicing.) > =>In bash: > => $ if [ "${breakfast}" == "spamandeggs" ]; then > => > echo "yummy" > => > fi > => yummy > => $ > =>In dash: > => $ if [ "${breakfast}" == "spamandeggs" ]; then > => > echo "yummy" > => > fi > => [: 11: ==: unexpected operator > => $ > => > =>How ugly and unhelpful is that? > => > =>The band-aid, by the way, was that "=" instead of "==" works. > =>(And how ugly is THAT? And how do you explain that to a student?) > > FYI, the correct operator is = and == is an extension of bash. == should > not be used. >

I thought everything in those brackets was just an argument to test. The = or == shouldn't be dependent on your shell, but on your coreutils version. However, I tested it.

[EMAIL PROTECTED]:~/hack$ cat test.dash
#!/bin/dash
t="X"
if [ "$t" == "X" ]; then
echo $t;
else
echo "Not X"
fi
[EMAIL PROTECTED]:~/hack$ dash test.dash
[: 9: ==: unexpected operator
Not X
[EMAIL PROTECTED]:~/hack$ bash test.dash
X

...and now my world is turned upside down. Anyone know why this works that way? ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
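For what it's worth, [ is normally a shell builtin rather than /usr/bin/[, so the behavior tracks the shell and not coreutils: bash's builtin test accepts == as an extension, while dash sticks to the POSIX =. A version of the same check that behaves identically under both:

#!/bin/sh
t="X"
if [ "$t" = "X" ]; then
  echo "$t"
else
  echo "Not X"
fi

# or sidestep test(1) operators entirely
case "$t" in
  X) echo "$t" ;;
  *) echo "Not X" ;;
esac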
RE: XEN - unbind/bind a PCI device
Speaking with little Xen experience. My first inkling would be that the device is locked by the VM it's currently bound to. Did you try rmmoding the sound modules from the VM? Does that VM still have access to the device? How about unbinding another device (try something like a usb drive I guess?). Then I'd see what happens when you give a bogus bus ID in your echo statement. Does it hang or give you an error? After that I'd get the standard debug output (dmesg, lspci -v, etc.) from both the host and the guests, start posting. When all of those fail, revert back to old plug and play troubleshooting (shaking rattles, flicking light switches, lighting incense, smearing cafes blood on the pc in question). G'Luck. > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of > Tech Writer > Sent: Tuesday, September 25, 2007 12:01 PM > To: Thomas Charron > Cc: gnhlug-discuss@mail.gnhlug.org > Subject: Re: XEN - unbind/bind a PCI device > > Using Xen is very cool when it goes smoothly, and very > frustrating when you run into a glitch. There are a few > "glitches" that I've run into (I've tested it on both RHEL5 > and SLES10.) The most annoying (from SLES) is that when you > point to a "kit" to install your VM from, it often gets > confused when trying to get to the next device (2nd CD.) The > easiest workaround for this is to just use a local DVD iso image file. > > But... I can't figure out why this echo command just hangs... > Ignoring the fact that I'm doing this for Xen, does anyone > have any ideas why an echo command like this would lock? The > idea is that the sound card is bound to domain 0 (the > physical machine's kernel) and to unbind it (so the virtual > machine can take it) you need to write the PCI slot number to > the unbind file. This example is all over the web, so it > must work. I just can't figure out why it hangs for me. > > echo -n ":00:0b.0" > "/sys/bus/pci/drivers/ENS1371/unbind" > > Peg > > - Original Message - > From: "Thomas Charron" <[EMAIL PROTECTED]> > To: "Tech Writer" <[EMAIL PROTECTED]> > Cc: > Sent: Tuesday, September 25, 2007 10:51 AM > Subject: Re: XEN - unbind/bind a PCI device > > > > On 9/25/07, Tech Writer <[EMAIL PROTECTED]> wrote: > >> I'm trying to test some examples in a Xen course. All has > gone well so > >> far, > >> but my very last example is to unbind a PCI device (in > this case, the > >> sound > >> card) from its driver, and bind it to the PCI backend so > that it can be > >> used > >> by one of the virtual machines. > > > > QQ > > > > You just made me REALLY want to look at Xen again. :-D > > > > -- > > -- Thomas > > ___ > > gnhlug-discuss mailing list > > gnhlug-discuss@mail.gnhlug.org > > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ > > ___ > gnhlug-discuss mailing list > gnhlug-discuss@mail.gnhlug.org > http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ > > ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
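To put some commands behind those guesses (the module name and slot here are assumptions for an ES1371-ish card; adjust to whatever lspci actually reports):

lspci -v | grep -i -A2 audio          # confirm the slot and which driver claims it
lsof /dev/snd/* /dev/dsp 2>/dev/null  # is anything in dom0 still holding the device?
rmmod snd_ens1371                     # free the driver first; if this works, the unbind may not even be needed
echo -n "0000:00:0b.0" > /sys/bus/pci/drivers/ENS1371/unbind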
Data Recovery Recomendations?
Can anyone recommend a good data recovery firm? I hear techfusion on NPR, but perhaps someone else has had good/bad experiences. I don't know if it was a hardware or software issue, all I know is it wasn't one of my servers (Dances a little). Patrick ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: [GNHLUG] MerriLUG/Nashua / Thr 16 Aug / Todd Underwood on ZFS - TheLast Word in File Systems
Any time table for the slides getting out to the general group? Patrick -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ben Scott Sent: Thursday, August 09, 2007 6:52 PM To: GNHLUG Announcements Subject: [GNHLUG] MerriLUG/Nashua / Thr 16 Aug / Todd Underwood on ZFS - TheLast Word in File Systems Who : Todd Underwood, Renesys What: ZFS -- The Last Word in File Systems Where: Martha's Exchange Date: Thur 16 Aug 2007 - *NEXT* Thursday Time: 6:00 PM for dinner, 7:30 PM for meeting proper This month's MerriLUG (Nashua) meeting will host Todd Underwood speaking on ZFS. He will present a survey of what's out there, what their needs are, and how ZFS helps meets those needs, and cover ZFS features and some technical details. Please RSVP if you plan on being there for dinner -- see "Attendance" below (last). === About ZFS === ZFS is an advanced file storage system, featuring such buzzwords as: A pooled storage model (no fixed partitions), transactional semantics, copy-on-write (so on-disk state is always valid), error checking and correction of data, background consistency checking, instantaneous snapshots and clones, fast native backup and restore, built-in compression, and ease-of-use. And it's Open Source. How can you beat that? http://www.opensolaris.org/os/community/zfs/whatis/ === About the presenter === Todd Underwood is Vice President of Operations and Professional Services at Renesys (http://www.renesys.com/). Renesys is in the business of collecting, analyzing and archiving data about what's happening on the Internet. That demands fast and reliable storage for of tens of terabytes of stored data. http://www.renesys.com/about/management.shtml#a-todd === About the group === MerriLUG is the Merrimack Valley Linux User Group, and is a chapter of GNHLUG, the Greater NH Linux User Group. Heather Brodeur is the LUG coordinator, with essential assistance from Jim Kuzdrall. MerriLUG meets the third Thursday of every month. You can find out more about MerriLUG and GNHLUG at the http://www.gnhlug.org/ website. Meetings are open to all, and are held at Marth'a Exchange in Nashua, NH. We meet downstairs for dinner starting at around 6:00 PM, and move upstairs for the meeting proper at 7:30 PM. (Feel free to skip either part.) The meeting proper ends around 9ish, but it's not uncommon to find hangers-on there until 10 or later. === Attending === If you plan on being there for dinner, please RSVP to me directly (not to the list). This helps us ensure we have seating for you! Jim Kuzdrall is taking a break from LUG-wrangling this month, so I'm filling in for him (or rather, attempting to). Driving directions can be found at: http://wiki.gnhlug.org/twiki2/bin/view/Www/PlaceMarthasExchange Hope to see you there! -- Ben ___ gnhlug-announce mailing list [EMAIL PROTECTED] http://mail.gnhlug.org/mailman/listinfo/gnhlug-announce/ ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: [GNHLUG] DLSLUG: Tomorrow - Usable Web Applications with Rails andAJAX
Is this presentation going to be records or released at all? I can't make the trip, but I'm interested. Patrick -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Bill McGonigle Sent: Wednesday, August 01, 2007 4:24 PM To: [EMAIL PROTECTED] Subject: [GNHLUG] DLSLUG: Tomorrow - Usable Web Applications with Rails andAJAX [please RSVP as we have a real live refreshment sponsor this month!] *** Dartmouth-Lake Sunapee Linux User Group http://dlslug.org/ a chapter of GNHLUG - http://gnhlug.org *** The next regular monthly meeting of the DLSLUG will be held: Thursday, August 2nd, 7-9PM at: Dartmouth College, Carson Hall, Room L01 All are welcome, free of charge. Agenda 7:00 Sign-in, networking 7:15 Introductory remarks 7:20 Usable Web Applications with Rails and AJAX presented by William Henderson-Frost Will will present Greenout!, a new web application that's focused on usability and developed on the Ruby on Rails platform using AJAX techniques, the Prototype library, and plenty of custom code. He'll describe the process of developing a web application with Ruby on Rails, the challenges of writing an AJAX application, and some of the tips and techniques he's developed along the way. Will is a Senior at Dartmouth College, majoring in Computer Science, and a Hanover native. He enjoys good programming languages, like Ruby. 8:50 Roundtable Exchange - where the attendees can make announcements or ask a linux question of the group. Please see the website for links to directions. If any area companies are interested in sponsoring refreshments, please let me know. Please RSVP so we can give a theoretical refreshment sponsor a headcount. - MAILING LISTS There are two primary mailman lists set up for DLSLUG, an Announce list and a Discuss list. Please sign up for the Announce list (moderated, low-volume) to stay apprised of the group's activities and the Discuss list (unmoderated) for group discussion. Links to the mailing lists are on the webpage. Please pass this announcement along to anyone else who may be interested. - Bill McGonigle, Owner Work: 603.448.4440 BFC Computing, LLC Home: 603.448.1668 [EMAIL PROTECTED] Cell: 603.252.2606 http://www.bfccomputing.com/Page: 603.442.1833 Blog: http://blog.bfccomputing.com/ VCard: http://bfccomputing.com/vcard/bill.vcf ___ gnhlug-announce mailing list [EMAIL PROTECTED] http://mail.gnhlug.org/mailman/listinfo/gnhlug-announce/ ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: A question about rsync
I second backuppc. Really really good software. I've never used rsnapshot (looks like a web-interfaceless backuppc), but when I used backuppc, its only real failing was backup to tape. I could never get archive mode running correctly. I ended up tarring backuppc's file repository to tape. If the archive tree was larger than a tape I would have had issues retrieving single files, but all in all it worked pretty well.

The other issue was locked files over smb. Outlook data files (.pst?) wouldn't back up correctly because they were opened in some odd mode. I heard a bunch of hacks to make the backup work, from wmi calls to software that clones the file somehow. Maybe newer versions of outlook do something a bit easier to back up.

Best Patrick

-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Tyson Sawyer Sent: Monday, July 23, 2007 4:14 PM To: gnhlug-discuss@mail.gnhlug.org Subject: Re: A question about rsync On 7/23/07, Cole Tuininga <[EMAIL PROTECTED]> wrote: > On another note, if you're using rsync to make backups, cannot more > highly recommend using rsnapshot (http://www.rsnapshot.org/) Huh! I'm using: http://backuppc.sourceforge.net/ And I highly recommend it. I just glanced at rsnapshot and after a very quick glance it looks like the same thing. Anyone have experience with or knowledge of both these packages? I'd be interested to know the differences. I have been very happy with Backuppc, esp. since I had already started to write some scripts to do the same thing. ;-) Cheers! Ty -- Tyson D Sawyer A strong conviction that something must be done is the parent of many bad measures. - Daniel Webster ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
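The tar-to-tape part, for reference, is nothing fancier than something like this; the tape device and pool path are assumptions, and as noted above, single-file restores get painful once the archive spans more than one tape.

# dump the backuppc pool to the first tape drive
tar -cvf /dev/st0 /var/lib/backuppc
# and pull it back later
tar -xvf /dev/st0 -C /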
RE: MonadLUG Notes, 12-July-2007
> Charlie volunteered to do a future presentation on digitizing phonograph recordings Is this via IRENE (or IRENE like) software? ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: [KILLTHREAD] namelessness (thread gone = happy Ben)
Hey Hey You, I think you need to introduce me to your dealer. Sincerly, Clean Cut in CubeVille [sorry ben, I couldn't help it] -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED] Sent: Wednesday, July 11, 2007 11:07 AM To: gnhlug-discuss@mail.gnhlug.org Subject: Re: [KILLTHREAD] namelessness (thread gone = happy Ben) > From: "Michael ODonnell" <[EMAIL PROTECTED]> > Date: Wed, 11 Jul 2007 10:21:24 -0400 > I am assuming that this whole discussion is a meta-meta-hyper- > super-extremely-triple-indirect-wink-wink-nudge-nudge method of *not* > acknowledging what anonymity is *really* about: the freedom to behave > like a fscking a**hole^H^H^H^H^H^H^H^H^H^H^H exercise free expression > in public without fear of reprisal... Well, no. It's about creating an expanded sense of self which encompasses the whole universe, across time. Many people, when I tell them I'm nameless, interperet this to mean that I wish to remain anonymous. This is a popular misconception. > Date: Mon, 9 Jul 2007 18:33:20 -0400 > From: "Ben Scott" <[EMAIL PROTECTED]> > Subject: [OT] noise (was: Petition against OOXML) > So what do we call you? "The account formerly known as Dave Montenegro"? And...then, "Ben Scott" <[EMAIL PROTECTED]> wrote: > Could we take the deconstructionism off-list, please? You asked what to call me, didn't you? ;) Your request is fair enough, though. Namelessness is only meta-meta-hyper-super-extremely- triple-indirect-wink-wink-nudge-nudge-ly Linux-related. KILLTHREAD granted. ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: Recovering file attributes from snapshot
I've seen examples where people used getfacl and setfacl to back up their ACLs (hint: these programs work on non-ACL files as well). Google says http://www.debian-administration.org/articles/476 was the article I read. The comments are where the important bits are. Patrick

-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Michael ODonnell Sent: Tuesday, July 03, 2007 12:03 PM To: gnhlug-discuss@mail.gnhlug.org Subject: Recovering file attributes from snapshot I'm in a situation where the ownerships/permissions in a particular filesystem hierachy get changed by circumstances beyond my control (*cough*Perforce*cough*) and I need to force them back before I can use that hierachy. I'm prepared to script a solution if necessary but I hate reinventing the wheel so I wonder if this tool already exists. It'd basically allow me to snapshot the desired attributes (ie. just certain metadata items, not the contents) of an arbitrary set of files/directories/symlinks/etc and then later allow me to restablish those metadata by referring to the snapshot, but leaving the data unchanged. ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
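A minimal version of the getfacl/setfacl trick mentioned in that article (the path is a placeholder; run the restore as root so ownerships come back too):

cd /path/to/hierarchy
getfacl -R . > /tmp/perms.facl      # snapshot owners, groups, modes (and ACLs if any)
# ... Perforce does its thing ...
setfacl --restore=/tmp/perms.facl   # put the metadata back; file contents are untouched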
RE: Recommended PCI gigabit ethernet card? OT: PC Gigabit Through putQuestion
Somebody broke out the slide rule -=] patrick -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Michael ODonnell Sent: Thursday, June 14, 2007 4:18 PM To: gnhlug-discuss@mail.gnhlug.org Subject: Re: Recommended PCI gigabit ethernet card? OT: PC Gigabit Through putQuestion PCI-32 theoretical maximum throughput would be: (((33 million cycles) * 32 bits) / 8 = 132 million bytes ) per second ...but since that's unattainable for more than a dozen ticks or so I'm guessing that 2/3 of that (88 million) is a more reasonable maximum. Meanwhile, I (think I) have heard that the rule-of-thumb for Enet overhead is something like: bitrate / 12 = bytes-per-second ...so for GigE we'd get: ~1,000,000,000 bits per second / 12 = ~83,333,333 ...which is in the same ballpark as that PCI guesstimate. ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: Recommended PCI gigabit ethernet card? OT: PC Gigabit Through put Question
I'm not the best with these bit/byte problems so I might be wrong, but: A PCI bus can pass 1056 Mbit a second (32 bits at 33 MHz). TCP/IP overhead is somewhere around 20% (1056 * .8 = 844.8 Mbit). What can you reasonably expect a PCI gigabit card to give you for throughput? PCI buses are generally shared (save high-end server boards), right? On top of that, if hdparm says timed disk writes are around 40 MB/s, what could you see for sustained download speeds? Maybe a static cached webpage could saturate a gig connection, but a sustained 5 gig HTTP download couldn't, right? Anyone have real-world answers for that stuff?

Patrick

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Tom Buskey Sent: Thursday, June 14, 2007 12:36 PM To: gnhlug-discuss@mail.gnhlug.org Subject: Recommended PCI gigabit ethernet card? I have a cheap gigabit nic ($20) in my system and suspect it is slowing down throughput so I'd like to upgrade it. I did the google linux thing. Half were error reports, half were from < 2004, half were sales "reviews", etc (yeah, that > 100%). The Linux HOWTOs are 2004 and earlier so there's barely a mention of gigabit networking. It needs to be PCI I'm running Fedora with Fedora kernels and don't want to compile drivers. What do people use, see as fast/compatible? ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
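Rather than arguing from the spec sheet, both ceilings are easy to measure; something like this gives real-world numbers (the server hostname is assumed, and iperf/hdparm need to be installed):

# raw network throughput, no disk involved
iperf -s                       # on the server
iperf -c gigabit-server -t 30  # on the client, 30-second test
# local disk ceiling, for comparison with that ~40 MB/s hdparm figure
hdparm -t /dev/sda
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1000 conv=fdatasync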
RE: Hey, How 'bout them Red-Hatters!
oh yeah, and he pointed me towards http://www.desktoplinux.com/news/NS8809240318.html -Original Message- From: Flaherty, Patrick Sent: Wednesday, May 23, 2007 12:15 PM To: '[EMAIL PROTECTED]'; Greater NH Linux User Group Subject: RE: Hey, How 'bout them Red-Hatters! I asked my buddy that works for RedHat. Cheaper licensing, 2 year life cycle desktop os. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ted Roche Sent: Wednesday, May 23, 2007 11:43 AM To: Greater NH Linux User Group Subject: Hey, How 'bout them Red-Hatters! Has anyone got a clue what Red Hat is promoting with the "Global Desktop?" Just when I thought we were going to see Fedora Core/Extras united, a clear unified enterprise message out of Red Hat, they zig again. "Today Red Hat is announcing the upcoming availability of Red Hat Global Desktop. Global Desktop breaks through the price and performance barriers that have prevented many people from realizing the full benefits of state-of-the-art information technology. Red Hat and community members around the world recognized the need for a better solution to serve their local government and small business customers. This required removing the limitations that traditional desktop solutions imposed. In response, Red Hat developed the Global Desktop, which delivers a modern-user experience with an enterprise-class suite of productivity applications. Red Hat collaborated closely with Intel to enable the design, support and distribution of Global Desktop to be as close as possible to the customer. In addition, Red Hat and Intel are taking advantage of Global Desktop's high performance and minimal hardware requirements to support a wide range of Intel's current and future desktop platforms, including the Classmate, Affordable, Community and Low-Cost PC lines." http://www.redhat.com/about/news/prarchive/2007/global_desktop.html and http://www.press.redhat.com/2007/05/14/red-hat-global-desktop/ -- Ted Roche Ted Roche & Associates, LLC http://www.tedroche.com ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/ ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: Hey, How 'bout them Red-Hatters!
I asked my buddy who works for Red Hat: cheaper licensing, 2-year life-cycle desktop OS. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ted Roche Sent: Wednesday, May 23, 2007 11:43 AM To: Greater NH Linux User Group Subject: Hey, How 'bout them Red-Hatters! Has anyone got a clue what Red Hat is promoting with the "Global Desktop?" Just when I thought we were going to see Fedora Core/Extras united, a clear unified enterprise message out of Red Hat, they zig again. "Today Red Hat is announcing the upcoming availability of Red Hat Global Desktop. Global Desktop breaks through the price and performance barriers that have prevented many people from realizing the full benefits of state-of-the-art information technology. Red Hat and community members around the world recognized the need for a better solution to serve their local government and small business customers. This required removing the limitations that traditional desktop solutions imposed. In response, Red Hat developed the Global Desktop, which delivers a modern user experience with an enterprise-class suite of productivity applications. Red Hat collaborated closely with Intel to enable the design, support and distribution of Global Desktop to be as close as possible to the customer. In addition, Red Hat and Intel are taking advantage of Global Desktop's high performance and minimal hardware requirements to support a wide range of Intel's current and future desktop platforms, including the Classmate, Affordable, Community and Low-Cost PC lines." http://www.redhat.com/about/news/prarchive/2007/global_desktop.html and http://www.press.redhat.com/2007/05/14/red-hat-global-desktop/ -- Ted Roche Ted Roche & Associates, LLC http://www.tedroche.com ___ gnhlug-discuss mailing list gnhlug-discuss@mail.gnhlug.org http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
RE: Package management
Expand your environment. Segment your apps onto independent servers/instances to avoid issues like library/application conflicts; ssh and scp with keys are fine tools to move data around and start processes. If that means buying more machines or using virtualization, so be it, but it's easier (IMHO) to deal with 12 server instances running different OS versions than one server with 12 hacked-together services on it. Shoehorning a bunch of stuff into one host is a fun exercise, but in the end it is usually a waste of time that breaks or can't be updated, and when the hardware goes, so do *ALL* of your services.

*Bonus points* Virtualization lets you move resource-hogging hosts to their own hardware easily at a later date. There is a free VMware Server (kinda slow on I/O) and a bunch of prebuilt OS images for running various applications. The images are compatible with the expensive VMware product, which runs about fifteen hundred bucks a processor but gives you pretty good performance (still not native speeds). You could also use something like Xen or KVM. A lot of people push for paravirtualization for speed reasons, but I look for ease of manageability and flexibility, which seems to be what you want as well, and paravirtualization seems to need a pretty homogeneous environment.

Oh, and I like virtualization for troubleshooting apps. Boot WinXP SP1 for some testing, then try WinXP SP2. Boot up the "Dev" mail server and try a new program. On shutdown, revert to the clean install. It's pretty nifty stuff that you can get for free if you can deal with the performance degradation. If it works well enough, you can pay for better performance/support, or tar the filesystem over to a new machine later on.

However, if you must shoehorn: stick with a distro that has no licensing restrictions and does rolling releases. Debian comes to mind, but there are most likely others. The licensing freedom removes the monetary issues with doing chroots (see below) or multiple virtual instances. Rolling distros have more granular package releases, which gives you greater flexibility when you cross-install, and they tend to be a bit better at naming libraries and apps to avoid conflicts. In Debian you can install packages from a backports repository or a newer release by changing a config file and running apt with some extra arguments (see the sketch below); that *should* pull in all the dependencies as well. CentOS's yum has the same feature, and there are extra repositories like DAG and CentOSPlus that serve situations like this. You can also help yourself out by making a local mirror and controlling what versions are released to which machines and when.

In the past I've handled similar problems by using chroots of different OS releases to separate application environments. There can be maintenance issues and it's not the cleanest way to do things, but it worked. I wrote some scripts to copy things into the right places and handle the pre- and post-processing. You can run into library trouble doing that (glibc comes to mind for some reason), but I've never had an issue. For certain apps (MediaWiki, GroundWork) there are entire environments wrapped up in one package, with all of their dependencies, for you to download and install.

The other road you can take is paying for support, either from a vendor (e.g. SUSE, Red Hat) or from an independent contractor, to handle making packages for these platforms for you.
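Coming back to the Debian backports route mentioned above, here is a minimal sketch of what the config change and the extra apt arguments look like; the repository line follows the format backports.org documented for etch (check their current instructions), and the package name "foo" is purely illustrative:

  # /etc/apt/sources.list -- add a backports line alongside the normal etch entries
  deb http://www.backports.org/debian etch-backports main contrib non-free

  # then pull the newer package explicitly from that release
  apt-get update
  apt-get -t etch-backports install foo   # -t (--target-release) pulls matching dependencies along too

If you only want a handful of packages tracked from the newer release, pin priorities in /etc/apt/preferences give you finer control over what apt is allowed to upgrade.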
And I'm sure a lot of developers wouldn't mind an extra couple of bucks to backport their app to an older OS if you contacted them. If you have custom packages made or make them yourself, I'd encourage you to distribute them back to their maintainer or post them on a website. If you are looking for a new release of Joe's Sweet Shiny Widget Parser for RHEL3, chances are someone else is too. Next time maybe they will do the heavy lifting, or the maintainer will add it to the list of targets. Pick your poison, but I'd aim for separation and segmentation. Best Patrick -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Charlie Farinella Sent: Tuesday, May 15, 2007 12:12 PM To: gnhlug-discuss@mail.gnhlug.org Subject: Re: Package management On Tuesday 15 May 2007, Neil Joseph Schelly wrote: > On Tuesday 15 May 2007 11:34, Paul Lussier wrote: > > Bah! > > > > $ apt-get source foo=<version number> > > $ cd foo-<version number> > > $ dpkg-buildpackage -rfakeroot > > $ cd .. && dpkg -i foo-<version number> > > > > This will compile the source for whatever version of the Debian > > package foo you need against your installed library base, create a > > package for it, then install it. > > Bah back at you. That was my option 3. > > > 3. Third, I'll try a backport myself. This really depends on the > > package in question and what library requirements it will have. I > > download the source package from a newer OS version (in your case, > > the next release of CentOS) and I try to compile it for the local > > system, so I can a