. It knows exactly which files changed since the last time you backed
them up, without having to scan everything, even if you manually try to
fake the datestamps etc. Finding that information is more or less
instant, making backups easy.
the previous last disc to the then-current time.
I use my own software for making incremental multi-volume backups, based
on file timestamps (mtime and ctime), inode numbers, and content checksums.
http://scdbackup.webframe.org/main_eng.html
http://scdbackup.webframe.org/examples.html#incremental
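The change-detection idea above (timestamps plus content checksums, so even faked datestamps are eventually caught) can be sketched with standard GNU tools. This is a toy illustration on throwaway directories, not the scdbackup implementation:

```sh
# Sketch: list files whose mtime OR ctime is newer than a stamp file left by
# the previous run, then checksum them so a later run can spot content
# changes hiding behind faked datestamps.
set -eu
work=$(mktemp -d); src="$work/src"; state="$work/state"
mkdir -p "$src" "$state"

echo old > "$src/old.txt"
touch "$state/last-run"            # pretend a backup just finished
sleep 1
echo new > "$src/new.txt"          # created after the stamp

# -newer compares mtime, -cnewer compares ctime (inode change time)
find "$src" -type f \( -newer "$state/last-run" -o -cnewer "$state/last-run" \) \
    > "$state/changed.list"

# record checksums of the changed files for later verification
xargs -d '\n' -r md5sum < "$state/changed.list" > "$state/checksums.md5"

cat "$state/changed.list"          # lists only .../new.txt
touch "$state/last-run"            # stamp for the next run
```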
On 1/22/24 20:30, Charles Curley wrote:
On Mon, 22 Jan 2024 18:27:51 -0800
David Christensen wrote:
On 1/22/24 19:44, gene heskett wrote:
On 1/22/24 21:28, David Christensen wrote:
debian-user:
I have a SOHO file server with ~1 TB of data. I would like to archive the
data by burning it to a series of optical discs organized by time (e.g.
mtime). I expect to periodically burn additional discs in the future,
each covering a span of time from the previous last disc to the
On Mon 18 Apr 2022 at 16:06:48 (-0400), Default User wrote:
> BTW, I think I have narrowed the previous restore problem down to what I
> believe is a "buggy" early UEFI implementation on my computer (circa 2014).
> Irrelevant now; I have re-installed with BIOS (not UEFI) booting and MBR
> (not
On Tue 19 Apr 2022 at 07:19:58 (+0200), DdB wrote:
> So i came up with the idea to create a sort of inventory using a sparse
> copy of empty files only (using mkdir, truncate + touch). The space
> requirements are affordable (like 2.3M for an inventory representing
> 3.5T of data). The effect
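The inventory idea quoted above can be sketched in a few lines: mirror the tree as directories plus same-size sparse files carrying the original timestamps. The layout below is invented demo data, and the plain read loops would need -print0 handling for unusual filenames:

```sh
set -eu
work=$(mktemp -d); src="$work/data"; inv="$work/inventory"
mkdir -p "$src/sub"
head -c 1048576 /dev/zero > "$src/sub/big.bin"   # 1 MiB sample file

(cd "$src" && find . -type d) | while read -r d; do
    mkdir -p "$inv/$d"
done
(cd "$src" && find . -type f) | while read -r f; do
    truncate -s "$(stat -c %s "$src/$f")" "$inv/$f"  # sparse, same apparent size
    touch -r "$src/$f" "$inv/$f"                     # copy the timestamps
done

stat -c %s "$inv/sub/big.bin"    # same apparent size as the original
du -k "$inv/sub/big.bin"         # but (almost) no blocks actually allocated
```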
Hello,
Am 11.04.2022 um 04:58 schrieb Default User:
> So . . . what IS the correct way to make "backups of backups"?
>
I don't know that for sure, but at first glance, I don't understand the
complexity of your setup either. It seems to be quite elaborate, which is
certainly su
On 11/4/22 10:58, Default User wrote:
So . . . what IS the correct way to make "backups of backups"?
Sorry to take so long to respond. I am traveling and have only short
periods that I can spend on non-pressing matters.
To answer your question: the method that gets you the
On 4/18/22 13:06, Default User wrote:
Finally, fun fact:
Many years ago, at a local Linux user group meeting, Sun Microsystems put
on a demonstration of their ZFS filesystem. To prove how robust it was,
they pulled the power cord out of the wall socket on a running desktop
computer. Then they
> >> #!/bin/sh
> >> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> >> /media/default/MSD1/ /media/default/MSD2/
> >>
> >>
> >> Use a version control system for system administration. Create a
> >> project for every machine.
a version control system for system administration. Create a
project for every machine. Check in system configuration files,
scripts, partition table backups, encryption header backups, RAID header
backups, etc.. Maintain a plain text log file with notes of what you
did (e.g. console sessions), when
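The version-control suggestion above can be started with plain Git; the repository path, identity, and file set here are placeholders for illustration:

```sh
# Per-machine sysadmin repository: configs, scripts, and a plain-text log.
set -eu
repo=$(mktemp -d)                  # stand-in for e.g. /root/sysadmin/<hostname>
cd "$repo"
git init -q
git config user.email "root@example.invalid"   # placeholder identity
git config user.name  "sysadmin"

mkdir -p etc
cp /etc/hostname etc/ 2>/dev/null || echo demo > etc/hostname
echo "$(date -u +%F): initial import of configs" >> log.txt

git add -A
git commit -qm "initial import"
git log --oneline                  # one commit so far
```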
> >>
> >> No problem, I say. I will just use Timeshift to restore from its backup
> of
> >> a few hours earlier.
> >>
> >> But that did not work, even after deleting the extra directory, and
> trying
> >> restores from multiple Timeshift backups.
>
Anyway, I never could fix the problem. But I did take it as an opportunity
to "start over". I put in a new(er) SSD, and did a fresh install,
- one 4-Tb external usb hard drive to use as a backup device, labeled MSD1.
- another identical usb hard drive, labeled MSD2, to use as a copy of the
backups on MSD1.
- the computer and all storage devices are formatted ext4, not encrypted.
- two old Clonezilla disk images from when I installed Debian 11 last year
(probably irrelevant).
- Timeshift to daily
On 4/10/22 19:58, Default User wrote:
Hello!
My setup:
- single home x86-64 computer running Debian 11 Stable, up to date.
- one 4-Tb external usb hard drive to use as a backup device, labeled MSD1.
- another identical usb hard drive, labeled MSD2, to use as a copy of the
backups on MSD1
On Sun, Apr 10, 2022 at 11:13 PM David wrote:
> On Mon, 11 Apr 2022 at 12:59, Default User
> wrote:
>
On Mon, 11 Apr 2022 at 12:59, Default User wrote:
> Then I try to use rsync to make an identical copy of backup device MSD1 on an
> absolutely identical 4-Tb external usb hard drive,
> labeled MSD2, using this command:
>
> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
>
mick crane wrote:
> On 2020-10-13 00:46, Dan Ritter wrote:
> > mick crane wrote:
> > >
>
> This looks like good advice, thanks Dan and all.
> One thing I wonder about if I reboot and change boot order to start windows
> is if I might create some confusion on the network as pfsense PC does DHCP
On 2020-10-13 00:46, Dan Ritter wrote:
mick crane wrote:
might I ask a favour for information on accepted wisdom for this stuff
?
I being a home user have pfsense on old lenovo between ISP router and
switch
to PCs
another old buster lenovo doing email
another Buster PC I do bits of
ftp or
rsync-over-ssh, that would be much better. Or you can plug an
external USB disk into the Windows machine and ask it to store
the backups there directly.
-dsr-
I am a long time user of LuckyBackup, and am very satisfied. While
experimenting with the Clear Linux OS system, I have been looking for a
backup solution; LuckyBackup is not readily available there.
Clear OS provides KopiaUI ... reading the Kopia webpage and YouTube
tutorial, the KopiaUI app seems to be
On Thu, Aug 20, 2020 at 11:33:34AM +1200, Ben Caradoc-Davies wrote:
On 20/08/2020 10:08, David Christensen wrote:
On 2020-08-13 01:31, David Christensen wrote:
Without knowing anything about your resources, needs, expectations,
"consistent backup plan", etc., and given the choices ext2, ext3, or
ext4 for an external USB drive presumably to store backup
On 2020-08-13 01:31, David Christensen wrote:
On 8/12/20 5:14 PM, rhkra...@gmail.com wrote:
I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive. I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and
On Vi, 14 aug 20, 10:31:51, David Wright wrote:
>
> I'm dubious whether I shall ever start using these filesystems.
> I create multiple backups on ext4 filesystems on LUKS, and keep
> MD5 digests of their contents. Would that qualify as your
> "additional tools"
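The digest idea above can be sketched with md5sum alone; the backup tree here is a throwaway demo directory:

```sh
set -eu
backup=$(mktemp -d)                    # stand-in for a mounted backup volume
echo "important data" > "$backup/file.txt"
manifest="$backup.md5"

# record a manifest of every file's MD5, with paths relative to the tree
(cd "$backup" && find . -type f -exec md5sum {} +) > "$manifest"

# later: verify the whole tree against the manifest (exit 0 means intact)
(cd "$backup" && md5sum -c --quiet "$manifest")

echo "flipped bit" > "$backup/file.txt"          # simulate bitrot
if (cd "$backup" && md5sum -c --quiet "$manifest" >/dev/null 2>&1); then
    echo "unchanged"
else
    echo "corruption detected"
fi
```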
nt (and don't want to learn) either of them at this point -- I don't see
> > much need for a backup filesystem.)
>
> As has been stated already, both btrfs and ZFS have built-in bitrot
> protections that are very useful for backups and archives. To achieve
> the same level of protection
up
(d) unmount
When you discover your media is corrupt/broken, you restart with a
new medium.
If you need any redundancy, you keep several backups in parallel
(which you keep physically separate, so your house burning down
doesn't catch all of them at once).
Adjust accordingly for over-the
On Thu, Aug 13, 2020 at 09:32:13PM +, ghe2001 wrote:
Two for sure and put them in a RAID1 -- formatted ext4. And watch that
mdstat.
And a third or fourth to see if you can get ZFS going.
For playing around with tech, sure: for part of a mundane, reliable
backup strategy for the OP, and as
FS or BTRFS for my "system" filesystems, but don't see
> any
> point (and don't want to learn) either of them at this point -- I don't see
> much need for a backup filesystem.)
As has been stated already, both btrfs and ZFS have built-in bitrot
protections that are very useful
On 2020-08-13 01:31, David Christensen wrote:
> Migrating to ZFS was non-trivial, and I am still wrestling with
> disaster preparedness.
I should have qualified that -- when I used ZFS only as a volume manager
and file system, it was not much harder than md and ext4. You could put
a GPT
On 8/13/20 13:52, rhkra...@gmail.com wrote:
> On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
>> Debian ZFS root (and boot) is not *that* hard; see the instructions at
>>
> >> https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
>>
On Thursday, August 13, 2020 04:09:46 PM David Christensen wrote:
> On 2020-08-13 12:52, rhkra...@gmail.com wrote:
> > On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
> >> I would recommend installing from buster-backports to get the current
> >> openzfs release which includes
‐‐‐ Original Message ‐‐‐
On Thursday, August 13, 2020 2:50 PM, Dan Ritter wrote:
> D. R. Evans wrote:
>
> > Greg Wooledge wrote on 8/13/20 2:29 PM:
> >
> > > The simplest answer would be to use ext4.
> >
> > I concur, given the OP's use
D. R. Evans wrote:
> Greg Wooledge wrote on 8/13/20 2:29 PM:
>
> >
> > The simplest answer would be to use ext4.
> >
>
> I concur, given the OP's use case. And I speak as someone who raves about ZFS
> at every reasonable opportunity :-)
Also concur. But by all means buy a spare drive and
Greg Wooledge wrote on 8/13/20 2:29 PM:
>
> The simplest answer would be to use ext4.
>
I concur, given the OP's use case. And I speak as someone who raves about ZFS
at every reasonable opportunity :-)
Doc
--
Web: http://enginehousebooks.com/drevans
On Thu, Aug 13, 2020 at 01:09:46PM -0700, David Christensen wrote:
> On 2020-08-13 12:52, rhkra...@gmail.com wrote:
> > * Most of my backup will be done from a Wheezy system -- can I install
> > ZFS
> > on Wheezy?
>
> I do not see any ZFS packages for Wheezy:
>
> The simplest answer would
On 2020-08-13 12:52, rhkra...@gmail.com wrote:
On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
Debian ZFS root (and boot) is not *that* hard; see the instructions at
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
They
On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
> Debian ZFS root (and boot) is not *that* hard; see the instructions at
>
> https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
>
> They certainly are not harder than installing early
any
> good reason to use anything beyond ext2?
>
I've been using an external USB drive for backups for years (more specifically,
a regular HDD in a USB enclosure), it works reasonably well. I use ext4.
ext2 is more prone to lose stuff and become corrupted if your PC shuts down
suddenly a
with ZFS on root;
> STFW for details.) There is a 'contrib' ZFS kernel package available
> that can be installed on a working Debian system. This makes it
> possible to use ZFS for most everything except boot and root. ZFS is
> mature and reliable. I use ZFS for FreeBSD system disks, file
On Wed, Aug 12, 2020 at 09:15:21PM -0600, Charles Curley wrote:
> On Wed, 12 Aug 2020 20:14:03 -0400
> rhkra...@gmail.com wrote:
>
> > I'm getting closer to setting up a consistent backup plan, backing up
> > to an external USB drive. I'm wondering about a reasonable
> > filesystem to use, I
On Thu, Aug 13, 2020 at 12:55:35PM +1200, Ben Caradoc-Davies wrote:
> On 13/08/2020 12:14, rhkra...@gmail.com wrote:
> >I'm getting closer to setting up a consistent backup plan, backing up to an
> >external USB drive. I'm wondering about a reasonable filesystem to use, I
> >think I want to stay
s, file server
live data, backups, archives, and images. Migrating to ZFS was
non-trivial, and I am still wrestling with disaster preparedness.
David
I'm wondering if there is any good reason to use anything beyond ext2?
I use my external USB drives for off-site backup, so I use ext4 on top
of an encrypted partition.
http://charlescurley.com/blog/index.html
Start with
http://charlescurley.com/blog/posts/2019/Nov/02/backups-on-linux/ and
work your way for
On 8/12/2020 7:14 PM, rhkra...@gmail.com wrote:
I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive. I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use
backups are pigz-compressed tar archives, encrypted with gpg
symmetric encryption, with a "pigz -0" outer wrapper to add a 32-bit
checksum wrapper for convenient verification with "gzip -tv" or similar
without requiring decryption. Archives are written to both external
local
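The layered wrapping described above can be demonstrated without pigz or gpg: here an inner compressed stream stands in for the encrypted archive, and gzip -1 stands in for the store-only "pigz -0" outer layer (plain gzip has no -0). The point being illustrated is that gzip -t checks the outer CRC without any decryption:

```sh
set -eu
work=$(mktemp -d); cd "$work"
mkdir data && echo "payload" > data/file.txt

tar -cf - data | gzip -9 > inner.blob        # stand-in for "tar | pigz | gpg"
gzip -1 < inner.blob > backup.wrapped.gz     # the outer integrity wrapper

gzip -t backup.wrapped.gz && echo "wrapper OK"   # CRC check, no decryption
gunzip -c backup.wrapped.gz | cmp - inner.blob   # unwrapping restores the blob
```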
On 13/8/20 10:14 am, rhkra...@gmail.com wrote:
I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive. I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use anything beyond ext2?
(Some day I'll try ZFS
dump
https://axkibe.github.io/lsyncd/manual/config/layer2/
https://packages.debian.org/stretch/lsyncd
I use this for backups of my filesystem; I would be surprised if it
were not usable for at least part of operations 1 through 4.
@+
--
Benoit
On 23 March 2018 at 16:49
On 23/03/18 at 20:23, Eric Degenetais wrote:
ED> On 23 March 2018 21:18, wrote:
ED>
ED> > I found something even simpler!
ED> > With the doc sent by Timoté Brusson, I came across a "ready-made"
ED> > thing; is it viable? (
VANDENDAELEN
Web: www.vandendaelen.com
-Original Message-
From: Pierre L. <pet...@miosweb.mooo.com>
Sent: Friday, 23 March 2018 20:32
To: debian-user-french@lists.debian.org
Subject: Re: Automatic backups of a database to a storage device
On 23/03/2018 at 16:49, vandendaelencle
what a cron was and how this gizmo works! :D
NB: Yes, it is indeed Raspbian, my bad.
Clément VANDENDAELEN
Web: www.vandendaelen.com
-Original Message-
From: Pierre L. <pet...@miosweb.mooo.com>
Sent: Friday, 23 March 2018 20:32
To: debian-user-french@lists.debian.org
Obj
On 23/03/2018 at 16:49, vandendaelenclem...@gmail.com wrote:
>
> Hello everyone,
>
> I have a little Raspberry on which I have had fun building a "local"
> web application. To be prepared for the unexpected, I would like to
> make an automatic backup of the databases
>
Thanks for your replies, I will try that!
Have a good evening!
Clément VANDENDAELEN
Web: www.vandendaelen.com
-Original Message-
From: "Raphaël" POITEVIN <raphael.poite...@gmail.com>
Sent: Friday, 23 March 2018 17:31
To: debian-user-french@lists.debian.org
Obj
Hello,
writes:
> I would like to make an automatic backup of the databases stored
> on it to a USB key (for example).
A script that runs rsync, called from cron.
I made something along these lines:
#!/bin/bash
# Data backup
#
Hello everyone,
I have a little Raspberry on which I have had fun building a "local" web
application. To be prepared for the unexpected, I would like to make an
automatic backup of the databases stored on it to a USB key (for
example).
It goes without saying that the RPi
> referencing the saying “make things as simple as possible but not more
> simple”).
>
> I have been testing it with toy cases to have at least some experience
> with it before using it for my real backups.
>
> Using a Git checkout of the latest release I get this warning: “Using a
On Sun, 20 Aug 2017 20:04:57 -0500
Mario Castelán Castro wrote:
> On 2017-08-19 23:07 -0400 Celejar wrote:
> >There's Borg, which apparently has good deduplication. I've just
> >started using it, but it's a very sophisticated and quite popular piece
>
kup program should do, which is to be a
> repository on some other storage medium besides the day to day operating
> cache, of the data you will need to recover and restore normal
> operations should your main drive become unusable with no signs of ill
> health until its falls over.
That's not the onl
with toy cases to have at least some experience
with it before using it for my real backups.
Using a Git checkout of the latest release I get this warning: “Using a
pure-python msgpack! This will result in lower performance.”. Yet I have
the Debian package “python3-msgpack“. Do you know w
ool for my case. I do not need any highly sophisticated
tools. As I noted in the first message, I only want to backup a personal
computer to an USB drive.
Since I must manually connect the USB drive to make the backups, there is
no point in automatizing it with cron. Network backups are irreleva
be.
Amanda is a very well done collection of programs.
It very efficiently does incremental backups to several types of media
-- Gene goes to disk, I go to tape (takes forever, but there are
several little boxes containing backups that are nowhere near a
failure point).
It backs up in tar (or dum
On Saturday 19 August 2017 23:07:01 Celejar wrote:
> On Thu, 17 Aug 2017 11:47:34 -0500
>
> Mario Castelán Castro <marioxcc...@yandex.com> wrote:
> > Hello.
> >
> > Currently I use rsync to make the backups of my personal data,
> > including some manu
On Thu, 17 Aug 2017 11:47:34 -0500
Mario Castelán Castro <marioxcc...@yandex.com> wrote:
> Hello.
>
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more s
On 2017-08-18 23:53 +0100 Liam O'Toole wrote:
>I use duplicity for exactly this scenario. See the wiki page[1] to get
>started.
>
>1: https://wiki.debian.org/Duplicity
Judging from a quick glance at that project's homepage in GNU Savannah,
this seems indeed to be the
On 2017-08-17, Mario Castelán Castro <marioxcc...@yandex.com> wrote:
> Hello.
>
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more safe from the scenario
On Thu, Aug 17, 2017 at 07:33:53PM -0500, Mario Castelán Castro wrote:
> On 17/08/17 15:51, to...@tuxteam.de wrote:
[...]
> > [...] And yes, there's a wiki entry encouraging "in-line" quoting [1].
>
> Ah, I see. I rarely check the Debian Wiki
On 17/08/17 15:51, to...@tuxteam.de wrote:
> On Thu, Aug 17, 2017 at 03:24:35PM -0500, Mario Castelán Castro wrote:
> [...]
>
> But in general, folks here tend to be tolerant. And yes, there's a
> wiki entry encouraging "in-line" quoting [1].
Ah, I see. I rarely check the Debian Wiki because it
On 17/08/17 13:31, Nicolas George wrote:
> [[elided]]
>
> No, it is the other way around: we rsync the data to a directory stored
> on a btrfs filesystem, and then we make a snapshot of that directory.
> With btrfs's CoW, only the parts of the files that have changed use
> space.
Thanks for the
On Thu, Aug 17, 2017 at 03:24:35PM -0500, Mario Castelán Castro wrote:
> On 17/08/17 13:31, Nicolas George wrote:
[...]
> > Please remember not to top-post.
>
> Both bottom posting and top posting each have their own disadvantages.
The general
Thanks for your answer.
Let me know if I understood your approach correctly. You have a
directory in a btrfs filesystem that is the target of your backups. When
you make a backup, you take a btrfs snapshot of this directory and
*then* use rsync. Is this correct?
Regards.
On 17/08/17 12:50
On 17/08/17 12:10, Fungi4All wrote:
> [[elided]]
> Stay with rsync
Why? Isn't there a more efficient alternative?
Hello.
Currently I use rsync to make the backups of my personal data, including
some manually selected important files of system configuration. I keep
old backups to be more safe from the scenario where I have deleted
something important, I make a backup, and I only notice the deletion
afterwards
On decadi 30 thermidor, an CCXXV, Mario Castelán Castro wrote:
> Let me know if I understood your approach correctly. You have a
> directory in a btrfs filesystem that is the target of your backups. When
> you make a backup, you take a btrfs snapshot of this directory and
> *the
On decadi 30 thermidor, an CCXXV, Mario Castelán Castro wrote:
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more safe from the scenario where I have deleted
> From: marioxcc...@yandex.com
> To: debian-user <debian-user@lists.debian.org>
>
> Hello.
>
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more
: Some Debian package upgrades are corrupting rsync "quick check"
backups
Resent-Date: Sat, 28 Jan 2017 13:17:06 +0000 (UTC)
Resent-From: debian-secur...@lists.debian.org
Date: Sun, 29 Jan 2017 02:11:41 +1300
From: Adam Warner <li...@consulting.net.nz>
To
Hi, Celejar
On 09/09/16 18:18, Celejar wrote:
My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
hardware can match or
Hi, deloptes.
On 09/09/16 19:06, deloptes wrote:
>> Still, 20-24 Mbps is more than 10 Mbps I was seeing with rsync. There
>> could be a bottleneck somewhere?
> In my case it was the IO on the disk - I couldn't do more than 12Mbps even
> on wired connection, because I have encrypted disk ... it
On Sat, 10 Sep 2016 10:53:20 -0400
rhkra...@gmail.com wrote:
> On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
> > On Saturday 10 September 2016 10:26:15 rhkra...@gmail.com wrote:
> > > On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > > > It's in megabytes per
On 09/10/2016 07:23 PM, Celejar wrote:
> FTR: there seem to be more typos / here. The actual figure should be
> 11034157.6344 bits/second.
Yes, let's whip those typos out of this dead horse some more:
On 09/09/2016 08:36 PM, David Christensen wrote:
> Benchmarking using WiFi (48 Mb/s):
>
>
On Fri, 9 Sep 2016 20:43:44 -0700
David Christensen wrote:
> On 09/09/2016 12:43 PM, Daniel Bareiro wrote:
> > On 09/08/16 22:57, David Christensen wrote:
> >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
> >> transfers are slow (~50 Mbps). Wired
On Fri, 9 Sep 2016 20:36:39 -0700
David Christensen wrote:
> On 09/09/2016 11:51 AM, Celejar wrote:
> > On Tue, 9 Aug 2016 18:57:02 -0700
> > David Christensen wrote:
> >
> > ...
> >
> >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet.
On Saturday 10 September 2016 10:53:20 rhkra...@gmail.com wrote:
> On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
> > On Saturday 10 September 2016 10:26:15 rhkra...@gmail.com wrote:
> > > On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > > > It's in megabytes per
On 09/10/2016 07:53 AM, rhkra...@gmail.com wrote:
> On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
>> You make an assumption many folks do, but there's a start bit and a stop
>> bit so the math is more like 1000/10=100 Mb/s.
>
>
> Well, 1000/8 is still 125 ;-) but I wouldn't have
On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
> On Saturday 10 September 2016 10:26:15 rhkra...@gmail.com wrote:
> > On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > > It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> > > bandwidth of a gigabit
On Saturday 10 September 2016 10:26:15 rhkra...@gmail.com wrote:
> On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> > bandwidth of a gigabit ethernet NIC.
>
> Sorry, I tend to pick at nits, but, for the record,
On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> bandwidth of a gigabit ethernet NIC.
Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125 MB/s. It
doesn't (really) change your conclusions.
regards,
On Sat, Sep 10, 2016 at 01:22:45AM -0400, Neal P. Murphy wrote:
> On Fri, 9 Sep 2016 23:14:30 -0500
> David Wright wrote:
>
> Good eye! I was going to say it's not possible to get 110Mb/s over 802.11g;
> 40-50 is closer to the best I get. And 193Mb/s over 100Mb/s
On 09/09/2016 09:14 PM, David Wright wrote:
> On Fri 09 Sep 2016 at 20:36:39 (-0700), David Christensen wrote:
>> So, 1048576900 bytes * 8 bits / byte / 76.024 seconds
> ↑
>
> What's this 9?
A typographical error.
104857600 bytes * 8 bits/byte / 76.024 seconds
= 11034158
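The computation above can be checked mechanically; awk is used here only because it is universally available:

```sh
# Redoing the arithmetic above (no typo this time):
awk 'BEGIN { printf "%.4f bits/s\n", 104857600 * 8 / 76.024 }'
# prints 11034157.6344 bits/s
```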
On Fri, 9 Sep 2016 23:14:30 -0500
David Wright wrote:
> On Fri 09 Sep 2016 at 20:36:39 (-0700), David Christensen wrote:
> > On 09/09/2016 11:51 AM, Celejar wrote:
> > > On Tue, 9 Aug 2016 18:57:02 -0700
> > > David Christensen wrote:
> > >
On Fri 09 Sep 2016 at 20:36:39 (-0700), David Christensen wrote:
> On 09/09/2016 11:51 AM, Celejar wrote:
> > On Tue, 9 Aug 2016 18:57:02 -0700
> > David Christensen wrote:
> >
> > ...
> >
> >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
> >>