Re: file descriptor VS file handle

2023-07-12 Thread Thomas Schmitt
Hi,

cor...@free.fr wrote:
> In linux systems, are file descriptor and file handle meaning the same
> stuff?

In the programming language C on Linux (more generally: on POSIX systems),
a "file descriptor" is an integer number handed out by system calls like
open(2), pipe(2), socket(2), and others. It can be used by calls like
read(2), write(2), recv(2), send(2) to get or to deliver data.
(Execute e.g.
   man 2 open
to get the manual page of the desired system call.)

I am not aware that "file handle" has such a well defined meaning in
POSIX. To my understanding it just means a programming object that can be
used for similar activities as a file descriptor (and possibly for more
file related activities).

Maybe you mean "file pointer", which usually refers to the return value
"FILE *" of fopen(3) and can be used by calls like fread(3), fscanf(3),
fwrite(3), or fprintf(3). This is implemented on top of a file descriptor.
The call fileno(3) can obtain the file descriptor of a file pointer, so
that the system calls from manual chapter 2 above can be used.
The call fdopen(3) can create a file pointer from a file descriptor.

That there are two sets of file access interfaces must have historical
reasons. They were already present in the mid 1980s. But obviously the file
descriptor is the more fundamental one.
A possible motivation to have FILE * is given in
  
https://www.gnu.org/software/libc/manual/html_node/Streams-and-File-Descriptors.html


Have a nice day :)

Thomas



Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread jeremy ardley



On 13/7/23 11:15, Charles Curley wrote:

> I'm not sure that this is correct. I have several SSDs around here, all
> several years old, all with swap partitions and all in daily use. None
> has failed me yet.



Most modern SBC images for Debian and Armbian don't have a swap 
partition. It's not usually necessary and it provides a vector for wear 
on the device.


If you don't have a swap partition, you avoid the extra writes swap would
generate. Every write reduces the remaining life of the drive, and not
just within the swap partition, since the drives wear-level across the
whole device irrespective of any format used.


In the same vein, it's really a bad idea to run video surveillance on an
SSD, as overwriting the complete SSD every couple of weeks will trash it
in no time. There are probably SSDs that claim to handle this, but the
standard now is carefully designed spinning drives optimised for
surveillance.


In my personal experience, I ran a 500GB WD NVMe drive in my workstation
without a swap partition and no surveillance. After 3 years it was worn
down 30%. Not a drama, as I was swapping it out anyway, but surprising for
just a workstation.



Jeremy




Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread jeremy ardley



On 13/7/23 10:15, Dan Ritter wrote:

> M.2 is an interface format, a micro card edge. M.2 has a set of
> key cutouts that specify what exact interfaces are allowed to
> connect. It can be used to connect PCIe, SATA, or USB devices.
>
> There are enough possibilities that it's best to reference the
> wikipedia article: https://en.wikipedia.org/wiki/M.2
>
> M.2 drives can either be SATA SSDs or NVMe SSDs. SATA is exactly
> the same electrical interface as you are used to on 2.5 and 3.5
> inch disks, with the same 6Gb/s maximum rate. If you have a
> spare SATA port, no point in using up the valuable motherboard
> M.2 port.
>
> NVMe (non-volatile memory express) is a command protocol running
> directly on PCIe and can run at full PCIe 3, 4 or 5 speed, whatever's
> supported by the best intersection of the motherboard and the
> drive.



What you have said is correct, but in the real world most terms are
used loosely, especially "SSD".


As examples, the Samsung "860 EVO SATA III M.2 SSD 1TB" is SATA, while
the "SSD 980 PCIe 3.0 NVMe M.2" is a PCIe device, yet both are called
M.2 SSDs.


Unless it says PCIe in the description, it's not going to be your
super-speed storage device.


With M.2 there are more than two options for payload, depending in part
on the key positions.


Key A: This key supports modules that provide wireless connectivity such 
as Wi-Fi and Bluetooth.


Key B: This key is used for modules that support PCIe (Peripheral 
Component Interconnect Express) and SATA (Serial ATA) interfaces, 
typically used for solid-state drives (SSDs) and other storage devices.


Key E: This key is specifically designed for modules providing wireless 
connectivity, including Wi-Fi, Bluetooth, and cellular modules.


Key M: This key supports modules that use the PCIe interface, typically 
used for high-speed storage devices such as SSDs.





Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread jeremy ardley



On 13/7/23 10:49, Carl Fink wrote:


> Really? I have never owned a computer where I couldn't replace the SSD.



Low-end laptops and notebooks come with the storage soldered to the board,
usually eMMC.




Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread Charles Curley
On Thu, 13 Jul 2023 08:45:27 +0800
jeremy ardley  wrote:

> [I]t's preferable to not put a swap partition on them for wear
> reasons.

I'm not sure that this is correct. I have several SSDs around here, all
several years old, all with swap partitions and all in daily use. None
has failed me yet.

Are there any recent experiments or other studies that back this up?

I was working with people who worked with what is now called flash
memory in the early 1980s, when it came with capacities measured in
bits (256 bits, e.g.) and lifetimes were measured in hundreds of
writes. And they were not byte addressable; the driver had to read,
modify and write the whole kazoo back again. They were working on
drivers for flash memory for the AIM-65. Clearly minimizing writes was a
good idea in those days, but also clearly they've gotten much better.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: file server

2023-07-12 Thread David Christensen

On 7/12/23 02:44, lina wrote:

Dear all,

My computer only has 2 TB data storage capacity,

I want to have 100 TB capacity to store/analyze data.

I am thinking of adding 5 hard drives, each is 18TB, and then merge
them into one volume? or get a file server? What is the best option
for me, and what is the budget?

Thanks so much for your advice, best, lina



On 7/12/23 04:48, lina wrote:

Currently I do not have a plan to keep the data, once the data
finished analyzing, I can just remove it.



On 7/12/23 06:00, lina wrote:

I need to extract the data for downstream analysis. after that, these
data can be removed.


It is hard to provide recommendations without knowing your computer, 
your network, your analysis, your quality metrics, or your budget.



I use ZFS.  Given an x86_64/amd64 computer with Debian, sufficient HDD 
bays, and sufficient HBA ports, yes, you could install 5 @ 18 TB HDD's 
and merge them into one 90 TB ZFS pool.  If your computer has 5 bays and
ports, this will be your lowest-cost solution; but it is unlikely to be
your "best" solution.



ZFS likes memory; the more the better.  (I use ECC memory.)  For 90 TB, 
I would consider filling all memory slots with the fastest and largest 
modules that are supported.



ZFS allows SSD's to be added as read cache devices and/or write cache
devices.  Done correctly, either or both can improve performance at a
fraction of the cost of all-SSD storage.



If your analysis can make use of concurrent I/O, more drives of smaller 
size each will improve performance.  One or more external chassis may be 
desirable:


 6 @ 15 TB
 9 @ 10 TB
10 @  9 TB
15 @  6 TB
18 @  5 TB
30 @  3 TB
45 @  2 TB
90 @  1 TB


And, smaller drives make RAID more feasible.  E.g. 20 @ 6 TB arranged as 
5 raidz1 virtual devices (vdev) of 4 drives each would provide 90 TB of 
storage, support 5 concurrent I/O operations, and tolerate 1 drive 
failure per vdev at an incremental cost of +33%.  Whereas 10 @ 18 TB 
drives arranged as 5 mirror vdev's of 2 drives each would provide 90 TB 
of storage, support 5 concurrent I/O operations, and tolerate 1 drive 
failure per vdev at an incremental cost of +100%.  But, the latter will 
resilver faster when you replace a failed drive (or a spare activates).



If your analysis can be partitioned across multiple threads and the 
threads have independent memory and I/O patterns, putting the data onto 
a file server (or NAS) would allow multiple computers to work together 
and do the analysis in less time.  You will want a fast connection 
between the analysis computers and the storage server (e.g. 10+ Gbps 
Ethernet).  (Alternatively, a storage area network; SAN.)



David



Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread Carl Fink

On 7/12/23 22:23, Default User wrote:

> Now you tell me . . .
>
> In February, I transferred an existing Debian 11 setup to a new 64-bit
> x86 computer with an nvme ssd. I have a 1 GB swap partition.
>
> What to do to avoid wearing the ssd out?
>
> Very important these days, since you can't just open up a computer and
> swap out the ssd.  Now, a worn out ssd, or any other part really, means
> a whole new computer.  So much for reducing waste . . .


Really? I have never owned a computer where I couldn't replace the SSD.

-Carl Fink



Re: file descriptor VS file handle

2023-07-12 Thread Dan Ritter
cor...@free.fr wrote: 
> In linux systems, are file descriptor and file handle meaning the same
> stuff?


Almost.

A file handle is a variable that can hold a file descriptor. You
might reuse the file handle to hold a different file descriptor
later.

-dsr-



Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread Dan Ritter
jeremy ardley wrote: 
> 
> On 13/7/23 08:31, mick.crane wrote:
> > I was wondering what these Nvme M2 things are and if can plug into
> > motherboard or need an adaptor, are they like a RAM disk or something.
> > mick
> 
> 
> Depending on your motherboard you can plug them in directly. With an older
> motherboard you need a PCIe adaptor card that supports multiple NVMe
> devices.
> 
> Be careful however, there are at least two flavours of NVMe drive. A PCIe
> flavour which is very fast, and a SSD flavour which isn't. You want the PCIe
> flavour if possible.

Sorry, have to correct you here.

M.2 is an interface format, a micro card edge. M.2 has a set of
key cutouts that specify what exact interfaces are allowed to
connect. It can be used to connect PCIe, SATA, or USB devices.

There are enough possibilities that it's best to reference the
wikipedia article: https://en.wikipedia.org/wiki/M.2

M.2 drives can either be SATA SSDs or NVMe SSDs. SATA is exactly
the same electrical interface as you are used to on 2.5 and 3.5
inch disks, with the same 6Gb/s maximum rate. If you have a
spare SATA port, no point in using up the valuable motherboard
M.2 port.

NVMe (non-volatile memory express) is a command protocol running
directly on PCIe and can run at full PCIe 3, 4 or 5 speed, whatever's
supported by the best intersection of the motherboard and the
drive.

-dsr- 



Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread Default User
On Thu, 2023-07-13 at 08:45 +0800, jeremy ardley wrote:
> 
> On 13/7/23 08:31, mick.crane wrote:
> > I was wondering what these Nvme M2 things are and if can plug into 
> > motherboard or need an adaptor, are they like a RAM disk or
> > something.
> > mick 
> 
> 
> Depending on your motherboard you can plug them in directly. With an
> older motherboard you need a PCIe adaptor card that supports multiple
> NVMe devices.
> 
> Be careful however, there are at least two flavours of NVMe drive. A
> PCIe flavour which is very fast, and a SSD flavour which isn't. You
> want the PCIe flavour if possible.
> 
> In either case they are detected as ordinary HDD drives and you need
> do 
> nothing out of the ordinary other than it's preferable to not put a
> swap 
> partition on them for wear reasons.
> 
> 
> Jeremy
> 



Now you tell me . . . 

In February, I transferred an existing Debian 11 setup to a new 64-bit
x86 computer with an nvme ssd. I have a 1 GB swap partition.

What to do to avoid wearing the ssd out?

Very important these days, since you can't just open up a computer and
swap out the ssd.  Now, a worn out ssd, or any other part really, means
a whole new computer.  So much for reducing waste . . . 



Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread jeremy ardley



On 13/7/23 08:31, mick.crane wrote:
> I was wondering what these Nvme M2 things are and if they can plug into
> the motherboard or need an adaptor; are they like a RAM disk or something?
> mick



Depending on your motherboard you can plug them in directly. With an
older motherboard you need a PCIe adaptor card that supports multiple
NVMe devices.


Be careful however, there are at least two flavours of NVMe drive. A
PCIe flavour which is very fast, and a SSD flavour which isn't. You want
the PCIe flavour if possible.


In either case they are detected as ordinary HDD drives, and you need do
nothing out of the ordinary, other than that it's preferable not to put a
swap partition on them, for wear reasons.



Jeremy



file descriptor VS file handle

2023-07-12 Thread coreyh

Hello,

In linux systems, are file descriptor and file handle meaning the same 
stuff?


Thanks.



Re: Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread mick.crane

On 2023-07-12 17:14, gene heskett wrote:

On 7/12/23 10:28, Jeffrey Walton wrote:



It seems like it should be a new thread, but I want to make sure I am
not missing something obvious.

Jeff
.


My bad Jeff, get out the wet noodles & give me 30 lashes, I neglected
to update the subject. But it is related to migrating, so this s/b
better.


The OP said:
"I am thinking of changing my storage from two 1TB hard drives in a software
RAID 1 configuration to two M.2 Nvme 1 TB SSDs."

I was wondering what these Nvme M2 things are and if they can plug into
the motherboard or need an adaptor; are they like a RAM disk or something?

mick



Re: file server

2023-07-12 Thread tomas
On Wed, Jul 12, 2023 at 11:00:30AM -0400, Jeffrey Walton wrote:
> On Wed, Jul 12, 2023 at 6:07 AM jeremy ardley  wrote:
> >
> > On 12/7/23 17:44, lina wrote:
> > > My computer only has 2 TB data storage capacity,
> > >
> > > I want to have 100 TB capacity to store/analyze data.
> >
> > On this scale it's almost certainly easier and cheaper to use a cloud
> > provider who can provide a good CPU and a large attached storage.
> >
> > I use AWS as a provider for this type of application, but you can find
> > many other providers at various price points.
> 
> One of my favorite tee-shirts:
> https://www.amazon.com/s?k=there+is+no+cloud+tee+shirt

There is /some/ irony in Amazon selling this, isn't there? Buying from
that outfit is one of the things I hope I never have to do.

I'd prefer the original, anyway :-)

  https://fsfe.org/order/2016/tshirt-nocloud-black-front-large.jpg

Cheers
-- 
t




Re: latest upgrade to systemd 252.12-1 error about invalid attributes /var/log/journal and slow sshd connections

2023-07-12 Thread Gareth Evans


> On 12 Jul 2023, at 15:12, David Mehler  wrote:
> 
> Hello,
> 
> I'm running Debian 12 on a vps. I just upgraded it and am now
> apparently running the latest systemd version 252.12-1. I saw an error
> about invalid attributes on /var/log/journal then it said ignoring.
> I've seen others with this error but only in reference as far as I can
> tell to the btrfs filesystem which I'm not using. I've got a single
> drive running ext4. I'm also seeing very slow like over a minute
> connection times between when I authenticate via sshd and I get a
> terminal prompt which is also since this upgrade. The initial server
> connection goes as normal, it gets my public key then a good long
> delay and then I finally get my terminal prompt.
> 
> Any comments on either of these appreciated.

Hi Dave,

Can you specify the journal error messages?

This suggests ssh login delay may be a DNS issue

https://superuser.com/questions/166359/why-is-my-ssh-login-slow

Does

ssh -vvv ...

(at client) shed any light?
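
If the delay does turn out to be reverse-DNS lookups (a common cause of
exactly this symptom, though only a guess without the -vvv output), the
usual server-side mitigation is:

```
# /etc/ssh/sshd_config on the VPS -- skip the reverse-DNS lookup of clients
UseDNS no
```

followed by restarting sshd. GSSAPIAuthentication is the other setting
frequently implicated in the superuser thread above.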

Thanks,
Gareth

> Thanks.
> Dave.
> 


Re: file server

2023-07-12 Thread debian-user
lina  wrote:
> I need to extract the data for downstream analysis. after that, these
> data can be removed.

Do you need all the data present at the same time to extract it?
Obviously you won't need as much storage if you can analyse/extract it
in sections.

> On Wed, Jul 12, 2023 at 2:24 PM Jerome BENOIT
>  wrote:
> 
> > Hello,
> >
> > On 12/07/2023 13:48, lina wrote:  
> > > Currently I do not have a plan to keep the data, once the data
> > > finished  
> > analyzing, I can just remove it.
> >
> > Do you want a `scratch space` [1] ?
> > What can be also interesting to know is whether you can compress
> > your data.
> >
> > Best wishes,
> > Jerome
> >
> > [1] https://en.wikipedia.org/wiki/Scratch_space
> >  
> > >
> > > On Wed, Jul 12, 2023 at 12:26 PM Dan Ritter  wrote:
> > >
> > > lina wrote:  
> > >  > Dear all,
> > >  >
> > >  > My computer only has 2 TB data storage capacity,
> > >  >
> > >  > I want to have 100 TB capacity to store/analyze data.
> > >  >
> > >  > I am thinking of adding 5 hard drives, each is 18TB, and
> > >  > then  
> > merge them  
> > >  > into one volume?
> > >  > or get a file server?
> > >  > What is the best option for me, and what is the budget?  
> > >
> > > Two questions:
> > >
> > > - do you have a backup plan in mind, or will you accept the
> > > loss of all data?
> > >
> > > - how fast does new data come in, and how long do you need to
> > >keep it around?
> > >
> > > -dsr-
> > >  
> >  



Re: file server

2023-07-12 Thread Andrew M.A. Cater
On Wed, Jul 12, 2023 at 01:48:28PM +0200, lina wrote:
> Currently I do not have a plan to keep the data, once the data finished
> analyzing, I can just remove it.
> 
> On Wed, Jul 12, 2023 at 12:26 PM Dan Ritter  wrote:
> 
> > lina wrote:
> > > Dear all,
> > >
> > > My computer only has 2 TB data storage capacity,
> > >
> > > I want to have 100 TB capacity to store/analyze data.
> > >

*Maybe* AWS storage might be cheaper if you're going to throw the data
away eventually - but the data transfer fees for getting 100TB in there
might be considerable.

Something like a second hand HP tower server - say a Z440 - and 7 x 18TB drives
(to allow for failure - LVM is lovely when it works, but not when a drive
fails - LVM over RAID 10 is much better) is going to be expensive in terms
of disks but reliable and reusable.

Andy


> > > I am thinking of adding 5 hard drives, each is 18TB, and then merge them
> > > into one volume?
> > > or get a file server?
> > > What is the best option for me, and what is the budget?
> >
> > Two questions:
> >
> > - do you have a backup plan in mind, or will you accept the loss
> >   of all data?
> >
> > - how fast does new data come in, and how long do you need to
> >   keep it around?
> >
> > -dsr-
> >



Migrating system from u-sd to nvme memory on arm64's?

2023-07-12 Thread gene heskett

On 7/12/23 10:28, Jeffrey Walton wrote:

On Wed, Jul 12, 2023 at 6:40 AM gene heskett  wrote:

[ ...]
One of the things apparently missing in today's support for the arm64
boards such as the bananapi-m5, is the lack of support for the nvme
memory on some of these devices. I have quite a few of them, all booting
and running from 64G micro-sd's.  Yet these all have, soldered to the
board, several gigs of nvme memory, more than enough to contain a full
desktop install with all the toys, but totally unused.

So my question is, when do we get support for using it? It is reported
to be several times faster than the u-sd's it's running from now. u-sd's
touted to do 100mhz/second, but generally can only do less than 20
megs/second in actual practice. So what plans are in place to use this
memory on the arms, that we users can look fwd to?


I don't follow. What does this have to do with raid and migrating?

It seems like it should be a new thread, but I want to make sure I am
not missing something obvious.

Jeff
.


My bad Jeff, get out the wet noodles & give me 30 lashes, I neglected to 
update the subject. But it is related to migrating, so this s/b better.


Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: file server

2023-07-12 Thread Dan Ritter
lina wrote: 
> Currently I do not have a plan to keep the data, once the data finished
> analyzing, I can just remove it.


OK.

If your computer has sufficient physical space and SATA
interfaces, you can install 5 x 18TB or 5 x 20TB or however many
disks that you like, and use mdadm in linear or stripe mode, or
ZFS in stripe mode, or just mount them all as individual
volumes.

There is no data protection in any of these modes.

If you do not have enough SATA interfaces but have a PCIe slot available,
you can use a SAS controller with, most usefully, an LSI 3008 interface
chip to plug in an external box. The useful variants of that card include
the 9300-4i4e and 9300-8e.

That card would connect to a storage enclosure which would hold
the drives and a power supply. They are typically available to
hold 4, 6, 8, and 12 3.5" disks.

Linux has built-in support for the 9300 series cards. If the
vendor offers you a choice of R mode or IT mode firmware, you
want the IT mode firmware -- it is more flexible, and easier to
use with mdadm and ZFS and such. The R mode implements a RAID
system directly.

-dsr-



Re: file server

2023-07-12 Thread Jeffrey Walton
On Wed, Jul 12, 2023 at 6:07 AM jeremy ardley  wrote:
>
> On 12/7/23 17:44, lina wrote:
> > My computer only has 2 TB data storage capacity,
> >
> > I want to have 100 TB capacity to store/analyze data.
>
> On this scale it's almost certainly easier and cheaper to use a cloud
> provider who can provide a good CPU and a large attached storage.
>
> I use AWS as a provider for this type of application, but you can find
> many other providers at various price points.

One of my favorite tee-shirts:
https://www.amazon.com/s?k=there+is+no+cloud+tee+shirt

On a more serious note, the cloud is fine as long as you don't mind
someone else fingering your data.[1,2,3,4,...]

[1] https://www.protocol.com/bulletins/amazon-data-investigation
[2] https://threatpost.com/amazon-fires-employee-customer-data/160610/
[3] 
https://www.theverge.com/2019/1/10/18177305/ring-employees-unencrypted-customer-video-amazon
[4] 
https://www.geekwire.com/2020/amazon-fires-employees-leaking-customer-data-days-disclosing-ring-dismissals/

Jeff



Re: Migrating from hard drives to SSDs

2023-07-12 Thread Jeffrey Walton
On Wed, Jul 12, 2023 at 6:40 AM gene heskett  wrote:
> [ ...]
> One of the things apparently missing in today's support for the arm64
> boards such as the bananapi-m5, is the lack of support for the nvme
> memory on some of these devices. I have quite a few of them, all booting
> and running from 64G micro-sd's.  Yet these all have, soldered to the
> board, several gigs of nvme memory, more than enough to contain a full
> desktop install with all the toys, but totally unused.
>
> So my question is, when do we get support for using it? It is reported
> to be several times faster than the u-sd's it's running from now. u-sd's
> touted to do 100mhz/second, but generally can only do less than 20
> megs/second in actual practice. So what plans are in place to use this
> memory on the arms, that we users can look fwd to?

I don't follow. What does this have to do with raid and migrating?

It seems like it should be a new thread, but I want to make sure I am
not missing something obvious.

Jeff



latest upgrade to systemd 252.12-1 error about invalid attributes /var/log/journal and slow sshd connections

2023-07-12 Thread David Mehler
Hello,

I'm running Debian 12 on a vps. I just upgraded it and am now
apparently running the latest systemd version 252.12-1. I saw an error
about invalid attributes on /var/log/journal then it said ignoring.
I've seen others with this error but only in reference as far as I can
tell to the btrfs filesystem which I'm not using. I've got a single
drive running ext4. I'm also seeing very slow connection times, like over
a minute, between when I authenticate via sshd and when I get a terminal
prompt; this has also been the case since this upgrade. The initial server
connection goes as normal, it gets my public key then a good long
delay and then I finally get my terminal prompt.

Any comments on either of these appreciated.
Thanks.
Dave.



xrdp and KDE Plasma desktop

2023-07-12 Thread Petric Frank
Hello,

I'm not sure where to look for this problem. Asking here because Debian
Bookworm is used.

Installed Debian with the Plasma desktop, then installed xrdp and
tigervnc-standalone-server to allow RDP connections.

If i connect to this machine using xfreerdp the desktop is correctly shown.
But immediately a password request popup window is displayed containing this
(freely translated from german):

- cut 
Title: Authentication required
Action: Allow control of network connections
Identity: org.freedesktop.NetworkManager.network-control
...
- cut 

If I look at the nmcli general permissions for the id I get:

  org.freedesktop.NetworkManager.network-control  auth

If I log in locally I get:

  org.freedesktop.NetworkManager.network-control  yes


It seems that something goes wrong - but what, and how do I fix it?
I use the same userid both times.

kind regards
  Petric





Re: file server

2023-07-12 Thread lina
I need to extract the data for downstream analysis. After that, these data
can be removed.

On Wed, Jul 12, 2023 at 2:24 PM Jerome BENOIT 
wrote:

> Hello,
>
> On 12/07/2023 13:48, lina wrote:
> > Currently I do not have a plan to keep the data, once the data finished
> analyzing, I can just remove it.
>
> Do you want a `scratch space` [1] ?
> What can be also interesting to know is whether you can compress your data.
>
> Best wishes,
> Jerome
>
> [1] https://en.wikipedia.org/wiki/Scratch_space
>
> >
> > > On Wed, Jul 12, 2023 at 12:26 PM Dan Ritter  wrote:
> >
> > lina wrote:
> >  > Dear all,
> >  >
> >  > My computer only has 2 TB data storage capacity,
> >  >
> >  > I want to have 100 TB capacity to store/analyze data.
> >  >
> >  > I am thinking of adding 5 hard drives, each is 18TB, and then
> merge them
> >  > into one volume?
> >  > or get a file server?
> >  > What is the best option for me, and what is the budget?
> >
> > Two questions:
> >
> > - do you have a backup plan in mind, or will you accept the loss
> >of all data?
> >
> > - how fast does new data come in, and how long do you need to
> >keep it around?
> >
> > -dsr-
> >
>


Re: file server

2023-07-12 Thread Jerome BENOIT

Hello,

On 12/07/2023 13:48, lina wrote:

Currently I do not have a plan to keep the data, once the data finished 
analyzing, I can just remove it.


Do you want a `scratch space` [1] ?
What can be also interesting to know is whether you can compress your data.

Best wishes,
Jerome

[1] https://en.wikipedia.org/wiki/Scratch_space



On Wed, Jul 12, 2023 at 12:26 PM Dan Ritter  wrote:

lina wrote:
 > Dear all,
 >
 > My computer only has 2 TB data storage capacity,
 >
 > I want to have 100 TB capacity to store/analyze data.
 >
 > I am thinking of adding 5 hard drives, each is 18TB, and then merge them
 > into one volume?
 > or get a file server?
 > What is the best option for me, and what is the budget?

Two questions:

- do you have a backup plan in mind, or will you accept the loss
   of all data?

- how fast does new data come in, and how long do you need to
   keep it around?

-dsr-





Re: Firefox on wrong desktop

2023-07-12 Thread paulf
On Wed, 12 Jul 2023 08:17:47 +0200
didier gaumet  wrote:

> On 12/07/2023 at 04:53, Manphiz wrote:
> >  writes:
> > 
> >> Folks:
> >>
> >> This is Bookworm, XFCE4. Using claws-mail, when I click on a web
> >> link, it opens a tab for that URL on Firefox. As expected. I run
> >> Firefox on desktop 1 and claws-mail on desktop 2. However, when
> >> claws-mail launches a tab in firefox, it moves firefox to desktop
> >> 2, on top of claws-mail.
> >>
> >> Obscure, I know. Any solutions?
> >>
> >> Paul
> > 
> > This seems to be an issue with XFce4's window manager.  The same
> > happened to me with thunderbird + Chrome.  It doesn't happen on
> > Gnome. Probably worth filing a bug against xfwm4.
> > 
> > --
> > Manphiz
> 
> Hello,
> 
> This is not a bug.
> Xfce has a different default choice to Gnome, that's all.
> 
> It seems (not tested, I haven't used Xfce for years) that you can
> adjust this behavior to your preferences:
> https://docs.xfce.org/xfce/xfwm4/wmtweaks#focus
> 
> the "Do nothing" option would probably do the trick
> 
> 

Thank you, Didier. That fixes it.

Paul

-- 
Paul M. Foster
Personal Blog: http://noferblatz.com
Company Site: http://quillandmouse.com
Software Projects: https://gitlab.com/paulmfoster



Clock Icon

2023-07-12 Thread Mick Ab
I have a clock icon on my desktop PC which opens on reboot and then remains
on the top right of the screen all the time.

However, for some reason it has moved (not sure how this happened) and is
now in the middle of the screen under my browser windows (so not visible).

I would like to move the clock, but can't seem to move it with my mouse -
none of the buttons when pressed and held down move it and right clicking
doesn't do anything either.

Can anyone help, please?


Re: file server

2023-07-12 Thread gene heskett

On 7/12/23 06:52, Dan Purgert wrote:

On Jul 12, 2023, gene heskett wrote:

On 7/12/23 06:01, Stanislav Vlasov wrote:

Wed, 12 Jul 2023 at 14:45, lina :
[...]
2 (see https://en.wikipedia.org/wiki/RAIDhttps://en.wikipedia.org/wiki/RAID)

Unfortunately, I'm getting wikipedia's fancy 403 at that link?


Seems it was doubled-up.  The correct link is simply

   


Got it, thanks.

Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: file server

2023-07-12 Thread lina
Currently I do not have a plan to keep the data; once the data is finished
analyzing, I can just remove it.

On Wed, Jul 12, 2023 at 12:26 PM Dan Ritter  wrote:

> lina wrote:
> > Dear all,
> >
> > My computer only has 2 TB data storage capacity,
> >
> > I want to have 100 TB capacity to store/analyze data.
> >
> > I am thinking of adding 5 hard drives, each is 18TB, and then merge them
> > into one volume?
> > or get a file server?
> > What is the best option for me, and what is the budget?
>
> Two questions:
>
> - do you have a backup plan in mind, or will you accept the loss
>   of all data?
>
> - how fast does new data come in, and how long do you need to
>   keep it around?
>
> -dsr-
>


Re: file server

2023-07-12 Thread Dan Purgert
On Jul 12, 2023, gene heskett wrote:
> On 7/12/23 06:01, Stanislav Vlasov wrote:
> > Wed, 12 Jul 2023 at 14:45, lina :
> > [...]
> > 2 (see https://en.wikipedia.org/wiki/RAIDhttps://en.wikipedia.org/wiki/RAID)
> Unfortunately, I'm getting wikipedia's fancy 403 at that link?

Seems it was doubled-up.  The correct link is simply 

  

-- 
|_|O|_|
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1  E067 6D65 70E5 4CE7 2860




Re: file server

2023-07-12 Thread Carl Fink



On 7/12/23 06:49, gene heskett wrote:

On 7/12/23 06:01, Stanislav Vlasov wrote:

ср, 12 июл. 2023 г. в 14:45, lina :

I want to have 100 TB capacity to store/analyze data.
I am thinking of adding 5 hard drives, each is 18TB,


Some primitive calcs: 5*18T = 90T, not 100T. Maybe you need 6 hdd?


and then merge them into one volume?


If your hardware supports 6 hard drives (5*18T + your 2T), you can use
lvm for merging 5 of them to one volume, or create raid0 by mdadm.
Some risks with plain disk merging - if one of your drives die, entire
volume dies.
It may be mitigated by use raid5 with 1 additional drive or raid6 with
2 (see 
https://en.wikipedia.org/wiki/RAIDhttps://en.wikipedia.org/wiki/RAID)


--
Stanislav


Unfortunately, I'm getting wikipedia's fancy 403 at that link?

Cheers, Gene Heskett.



It's a double-paste. The real URL is https://en.wikipedia.org/wiki/RAID

-Carl Fink



Re: file server

2023-07-12 Thread gene heskett

On 7/12/23 06:01, Stanislav Vlasov wrote:

ср, 12 июл. 2023 г. в 14:45, lina :

I want to have 100 TB capacity to store/analyze data.
I am thinking of adding 5 hard drives, each is 18TB,


Some primitive calcs: 5*18T = 90T, not 100T. Maybe you need 6 hdd?


and then merge them into one volume?


If your hardware supports 6 hard drives (5*18T + your 2T), you can use
lvm for merging 5 of them to one volume, or create raid0 by mdadm.
Some risks with plain disk merging - if one of your drives die, entire
volume dies.
It may be mitigated by use raid5 with 1 additional drive or raid6 with
2 (see https://en.wikipedia.org/wiki/RAIDhttps://en.wikipedia.org/wiki/RAID)

--
Stanislav


Unfortunately, I'm getting wikipedia's fancy 403 at that link?

Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: file server

2023-07-12 Thread Dan Ritter
lina wrote: 
> Dear all,
> 
> My computer only has 2 TB data storage capacity,
> 
> I want to have 100 TB capacity to store/analyze data.
> 
> I am thinking of adding 5 hard drives, each is 18TB, and then merge them
> into one volume?
> or get a file server?
> What is the best option for me, and what is the budget?

Two questions:

- do you have a backup plan in mind, or will you accept the loss
  of all data?

- how fast does new data come in, and how long do you need to
  keep it around?

-dsr-



Re: Migrating from hard drives to SSDs

2023-07-12 Thread gene heskett

On 7/11/23 21:39, David Christensen wrote:

On 7/11/23 13:18, Mick Ab wrote:

I am thinking of changing my storage from two 1 TB hard drives in a
software RAID 1 configuration to two M.2 NVMe 1 TB SSDs. The two SSDs
would be put into a software RAID 1 configuration. Currently each hard
drive contains both the operating system and user data.

What steps would you recommend to achieve the above result, and would
those steps be the quickest way?

One of the M.2 slots can operate at PCIe 4.0 and PCIe 3.0, while the
other slot can only operate at PCIe 3.0. If they are to be in a RAID 1
array, I guess that both slots should be operated at PCIe 3.0 speed.



I would backup the system configuration files and data, power down, 
remove the HDD's, install the NVMe drives, boot Debian installation 
media, do a fresh install, restore/ merge the system configuration 
files, and restore the data.



The above should be the most reliable approach and produce a "known 
good" Debian system instance.



AIUI Linux md RAID can deal with block device speed differences.
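For what it's worth, assembling the two NVMe devices into a software
RAID 1 might look roughly like the sketch below. The device names
(/dev/nvme0n1p2, /dev/nvme1n1p2), the array name /dev/md0, and the
single-array layout are assumptions for illustration, not anything
stated in the thread; mdadm --create destroys existing data on the
named devices, so adapt before running.

```shell
# Create a two-device RAID 1 array from partitions on each NVMe drive.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1p2 /dev/nvme1n1p2

# Put a filesystem on the new array.
mkfs.ext4 /dev/md0

# Record the array so it assembles at boot, then rebuild the initramfs.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

As noted above, md does not require the member devices to run at the
same link speed; the array simply goes as fast as its slowest member.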


Taking a step back, you might want to re-think using two 1 TB devices in 
RAID1 for everything -- boot, swap, root, and data.  I put boot, swap, 
and root on a single 2.5" SATA SSD and keep the entire instance small 
enough to fit onto a "16 GB" device (you might want to target "32 GB", 
"64 GB", etc., if you install a lot of software).  I then put 2.5" SATA 
trayless bays in all of my computers.  This makes it easy to move OS 
instances to other machines (subject to BIOS/UEFI compatibility), to 
clone images to additional devices (USB flash drives, HDD's, SD cards), 
and to take and store images on a regular basis for disaster 
preparedness/ recovery.  I would then wipe the 1 TB HDD's and build a 
ZFS pool using the HDD's as a mirror.  A surplus of memory will help ZFS 
performance.  For further ZFS improvements, add small/ fast/ high 
endurance NVMe devices as ZFS cache and/or log devices.
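A minimal sketch of that ZFS setup follows. The pool name "tank" and
all device paths are hypothetical, and the cache/log lines assume spare
NVMe partitions are available; zpool create overwrites the named
devices.

```shell
# Mirror the two freed 1 TB HDDs into a pool named "tank".
zpool create tank mirror /dev/sdb /dev/sdc

# Optionally add a fast NVMe partition as read cache (L2ARC)
# and another as a separate intent log (SLOG).
zpool add tank cache /dev/nvme0n1p3
zpool add tank log /dev/nvme0n1p4

# Verify the layout.
zpool status tank
```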



David


One of the things apparently missing in today's support for arm64 
boards such as the bananapi-m5 is support for the nvme memory on some 
of these devices. I have quite a few of them, all booting and running 
from 64G micro-sd's.  Yet these all have, soldered to the board, 
several gigs of nvme memory, more than enough to contain a full 
desktop install with all the toys, but totally unused.


So my question is, when do we get support for using it? It is reported 
to be several times faster than the u-sd's it's running from now. The 
u-sd's are touted to do 100 MB/second, but generally do less than 20 
MB/second in actual practice. So what plans are in place to use this 
memory on the arms, that we users can look forward to?


Thank you.



Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: file server

2023-07-12 Thread Cindy Sue Causey
On 7/12/23, Stanislav Vlasov  wrote:
> ср, 12 июл. 2023 г. в 14:45, lina :
>> I want to have 100 TB capacity to store/analyze data.
>> I am thinking of adding 5 hard drives, each is 18TB,
>
> Some primitive calcs: 5*18T = 90T, not 100T. Maybe you need 6 hdd?
>
>> and then merge them into one volume?
>
> If your hardware supports 6 hard drives (5*18T + your 2T), you can use
> lvm for merging 5 of them to one volume, or create raid0 by mdadm.
> Some risks with plain disk merging - if one of your drives die, entire
> volume dies.
> It may be mitigated by use raid5 with 1 additional drive or raid6 with
> 2 (see
> https://en.wikipedia.org/wiki/RAIDhttps://en.wikipedia.org/wiki/RAID)


Almost typed something similar, just without as much technical
knowledge to back it up. I like 5 separate, too. If one 18TB dies, the
other 4 can keep on clicking depending on how one's system is set up.

Cindy :)
-- 
Talking Rock, Pickens County, Georgia, USA
* runs with birdseed *



Re: file server

2023-07-12 Thread jeremy ardley



On 12/7/23 17:44, lina wrote:

My computer only has 2 TB data storage capacity,

I want to have 100 TB capacity to store/analyze data.



On this scale it's almost certainly easier and cheaper to use a cloud 
provider who can provide a good CPU and a large attached storage.


I use AWS as a provider for this type of application, but you can find 
many other providers at various price points.



Jeremy



Re: file server

2023-07-12 Thread Stanislav Vlasov
ср, 12 июл. 2023 г. в 14:45, lina :
> I want to have 100 TB capacity to store/analyze data.
> I am thinking of adding 5 hard drives, each is 18TB,

Some primitive calcs: 5*18T = 90T, not 100T. Maybe you need 6 hdd?

> and then merge them into one volume?

If your hardware supports 6 hard drives (5*18T + your 2T), you can use
lvm for merging 5 of them to one volume, or create raid0 by mdadm.
Some risks with plain disk merging - if one of your drives die, entire
volume dies.
It may be mitigated by use raid5 with 1 additional drive or raid6 with
2 (see https://en.wikipedia.org/wiki/RAIDhttps://en.wikipedia.org/wiki/RAID)
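For reference, the capacity arithmetic in this thread can be checked
with a quick sketch (raw drive sizes only; real formatted capacity will
be somewhat lower):

```python
def usable(n, s, level):
    """Usable capacity for n drives of size s (TB) at a given RAID level."""
    if level == 0:        # striping, no redundancy
        return n * s
    if level == 5:        # one drive's worth of parity
        return (n - 1) * s
    if level == 6:        # two drives' worth of parity
        return (n - 2) * s
    raise ValueError(f"unsupported RAID level: {level}")

print(usable(5, 18, 0))   # 5 x 18 TB striped  -> 90 TB, short of 100 TB
print(usable(6, 18, 5))   # 6 drives in raid5  -> 90 TB usable
print(usable(7, 18, 6))   # 7 drives in raid6  -> 90 TB usable
```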

--
Stanislav



file server

2023-07-12 Thread lina
Dear all,

My computer only has 2 TB data storage capacity,

I want to have 100 TB capacity to store/analyze data.

I am thinking of adding 5 hard drives, each is 18TB, and then merge them
into one volume?
or get a file server?
What is the best option for me, and what is the budget?

Thanks so much for your advice, best, lina